Mathematical Foundations

 1. Define vector space. Give an example.

A vector space is a set V on which two operations + and · are defined, called vector addition and scalar multiplication.

The operation + (vector addition) must satisfy the following conditions:

Closure: If u and v are any vectors in V, then the sum   u + v   belongs to V.

(1) Commutative law: For all vectors u and v in V,     u + v = v + u

(2) Associative law: For all vectors u, v, w in V,     u + (v + w) = (u + v) + w

(3) Additive identity: The set V contains an additive identity element, denoted by 0, such that for any vector v in V,     0 + v = v   and   v + 0 = v.

(4) Additive inverses: For each vector v in V, the equations     v + x = 0   and   x + v = 0     have a solution x in V, called an additive inverse of v, and denoted by - v.

The operation · (scalar multiplication) is defined between real numbers (or scalars) and vectors, and must satisfy the following conditions:

Closure: If v is any vector in V, and c is any real number, then the product   c · v   belongs to V.

(5) Distributive law: For all real numbers c and all vectors u, v in V,     c · (u + v) = c · u + c · v

(6) Distributive law: For all real numbers c, d and all vectors v in V,     (c+d) · v = c · v + d · v

(7) Associative law: For all real numbers c,d and all vectors v in V,     c · (d · v) = (cd) · v

(8) Unitary law: For all vectors v in V,     1 · v = v
 
The simplest example of a vector space is the trivial one, {0}, which contains only the zero vector (required by axiom (3) above). Both vector addition and scalar multiplication are trivial. A basis for this vector space is the empty set, so {0} is the 0-dimensional vector space over its field of scalars F.
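As a quick illustration (not part of the formal definition), the following Python sketch spot-checks these axioms numerically for R² with the usual componentwise operations; the particular vectors u, v, w and scalars c, d are arbitrary choices made only for the check.

import numpy as np

# Spot-check the vector space axioms for R^2 under componentwise
# addition and scalar multiplication.
u, v, w = np.array([1.0, 2.0]), np.array([-3.0, 0.5]), np.array([4.0, -1.0])
c, d = 2.0, -0.75

assert np.allclose(u + v, v + u)                    # (1) commutativity
assert np.allclose(u + (v + w), (u + v) + w)        # (2) associativity
zero = np.zeros(2)
assert np.allclose(v + zero, v)                     # (3) additive identity
assert np.allclose(v + (-v), zero)                  # (4) additive inverse
assert np.allclose(c * (u + v), c * u + c * v)      # (5) distributivity over vectors
assert np.allclose((c + d) * v, c * v + d * v)      # (6) distributivity over scalars
assert np.allclose(c * (d * v), (c * d) * v)        # (7) scalar associativity
assert np.allclose(1.0 * v, v)                      # (8) unit law
print("All sampled axioms hold for these choices in R^2.")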
 

2. State and prove Bayes Theorem 

Bayes’ Theorem describes the probability of occurrence of an event in relation to a given condition; it is essentially a statement about conditional probability.

Bayes Theorem Statement

Let E1, E2,…, En be a set of events associated with a sample space S, where all the events E1, E2,…, En have nonzero probability of occurrence and they form a partition of S. Let A be any event associated with S, then according to Bayes theorem,

P(Ei | A) = P(Ei) P(A | Ei) / Σ_{k=1}^{n} P(Ek) P(A | Ek)

for each i = 1, 2, …, n.

Bayes Theorem Proof

According to the conditional probability formula,

P(Ei | A) = P(Ei ∩ A) / P(A)  ⋯⋯⋯⋯ (1)

Using the multiplication rule of probability,
P(Ei ∩ A) = P(Ei) P(A | Ei)  ⋯⋯⋯⋯ (2)

Using the theorem of total probability,
P(A) = Σ_{k=1}^{n} P(Ek) P(A | Ek)  ⋯⋯⋯⋯ (3)

Substituting equations (2) and (3) into equation (1), we get

P(Ei | A) = P(Ei) P(A | Ei) / Σ_{k=1}^{n} P(Ek) P(A | Ek)

Note:

The following terminologies are also used when the Bayes theorem is applied:

Hypotheses: The events E1, E2,…, En are called the hypotheses.

Prior Probability: The probability P(Ei) is called the prior probability of hypothesis Ei.

Posterior Probability: The probability P(Ei|A) is called the posterior probability of hypothesis Ei.
Bayes’ theorem is also called the formula for the probability of “causes”. Since the Ei‘s form a partition of the sample space S, one and only one of the events Ei occurs (i.e. one of the events Ei must occur, and only one can occur). Hence, the above formula gives us the probability of a particular Ei (i.e. a “cause”), given that the event A has occurred.

Bayes Theorem Formula

If A and B are two events, then the formula for Bayes theorem is given by:

P(A|B) = P(A∩B)/P(B)

where P(A|B) is the conditional probability of event A occurring given that event B has already occurred,

P(A ∩ B) is the probability of both event A and event B occurring, and

P(B) is the probability of event B
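To make the statement concrete, here is a small Python sketch that applies Bayes' theorem to a hypothetical two-machine defect problem; the priors and likelihoods below are invented purely for illustration.

# Hypothetical example: machines E1 and E2 produce 60% and 40% of all items,
# with defect rates 2% and 5%. Given a defective item (event A), find the
# posterior probability that each machine produced it.
priors = {"E1": 0.6, "E2": 0.4}            # P(Ei)
likelihoods = {"E1": 0.02, "E2": 0.05}     # P(A | Ei)

p_a = sum(priors[e] * likelihoods[e] for e in priors)               # total probability P(A)
posteriors = {e: priors[e] * likelihoods[e] / p_a for e in priors}  # Bayes' theorem

print(p_a)          # P(A) = 0.6*0.02 + 0.4*0.05 = 0.032
print(posteriors)   # P(E1|A) = 0.375, P(E2|A) = 0.625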

3. State and prove Chapman- Kolmogorov Equations

Definition: The n-step transition probability Pij(n) is the probability that a process currently in state i will be in state j after n additional transitions.

Proof: By conditioning on the state occupied at the intermediate time t, and using the Markov property together with the law of total probability, we obtain

Pij(t+s) = P(X(t+s) = j | X(0) = i)
         = Σ_k P(X(t) = k | X(0) = i) · P(X(t+s) = j | X(t) = k)
         = Σ_k Pik(t) Pkj(s),    for all states i, j and all t, s > 0.

These equations are known as the Chapman-Kolmogorov equations. The equations may be written in matrix terms as 

P(t+s)=P(t)·P(s) 
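In discrete time the same identity reads P(n+m) = P(n)·P(m) for the n-step transition matrices; the Python sketch below checks this numerically for an arbitrary example transition matrix.

import numpy as np

# Example one-step transition matrix of a 3-state Markov chain (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

n, m = 2, 3
P_n = np.linalg.matrix_power(P, n)        # n-step transition probabilities
P_m = np.linalg.matrix_power(P, m)        # m-step transition probabilities
P_nm = np.linalg.matrix_power(P, n + m)   # (n+m)-step transition probabilities

# Chapman-Kolmogorov: P(n+m) = P(n) · P(m)
assert np.allclose(P_nm, P_n @ P_m)
print("Chapman-Kolmogorov equations verified for this matrix.")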

4. Derive the Birth and Death Process

The birth–death process (or birth-and-death process) is a special case of a continuous-time Markov process in which the state transitions are of only two types: "births", which increase the state variable by one, and "deaths", which decrease the state by one.

Birth–death processes have many applications in demography, queueing theory, performance engineering, epidemiology, biology and other areas.  

When a birth occurs, the process goes from state n to state n + 1. When a death occurs, the process goes from state n to state n − 1. The process is specified by its birth rates {λn} and death rates {μn}.
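As a sketch of where the derivation leads (a standard result, stated here with example rates invented for illustration): in steady state the balance equations give pn = p0 · (λ0 λ1 ⋯ λn−1) / (μ1 μ2 ⋯ μn), which the following Python computes for a finite chain.

import numpy as np

def birth_death_steady_state(birth, death):
    """Steady-state distribution of a finite birth-death chain.

    birth[n] is the birth rate λ_n in state n (n = 0..N-1);
    death[n] is the death rate μ_{n+1} in state n+1.
    Uses the product formula p_n = p_0 * prod(λ_i / μ_{i+1}).
    """
    N = len(birth)                      # states 0..N
    weights = [1.0]
    for n in range(N):
        weights.append(weights[-1] * birth[n] / death[n])
    weights = np.array(weights)
    return weights / weights.sum()      # normalize so the probabilities sum to 1

# Hypothetical example: constant birth rate 1.0, constant death rate 2.0, states 0..5.
p = birth_death_steady_state(birth=[1.0] * 5, death=[2.0] * 5)
print(p)   # geometric-like decay, with the heaviest weight on state 0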

5. State Pollaczek–Khinchine formula

The Pollaczek–Khinchine formula states a relationship between the queue length and service time distribution Laplace transforms for an M/G/1 queue (where jobs arrive according to a Poisson process and have general service time distribution). The term is also used to refer to the relationships between the mean queue length and mean waiting/service time in such a model.[1] 
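In its mean-value form (stated here as a sketch, with ρ = λE[S] and arbitrary example numbers), the formula gives the mean number in the system of an M/G/1 queue as L = ρ + (ρ² + λ² Var(S)) / (2(1 − ρ)); the Python below evaluates it.

def pollaczek_khinchine_L(lam, mean_s, var_s):
    """Mean number in system for an M/G/1 queue (Pollaczek-Khinchine mean formula)."""
    rho = lam * mean_s
    if rho >= 1.0:
        raise ValueError("Queue is unstable: rho must be < 1")
    return rho + (rho**2 + lam**2 * var_s) / (2.0 * (1.0 - rho))

# Example: lambda = 0.5 jobs/min, exponential service with mean 1 min (variance 1).
# For exponential service the result matches the M/M/1 value rho/(1-rho) = 1.0.
print(pollaczek_khinchine_L(lam=0.5, mean_s=1.0, var_s=1.0))   # -> 1.0
# Deterministic service with the same mean (variance 0) gives a shorter queue.
print(pollaczek_khinchine_L(lam=0.5, mean_s=1.0, var_s=0.0))   # -> 0.75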

6. Define Markov Chain

A stochastic process has the Markov property if the conditional probability distribution of future states of the process (conditional on both past and present values) depends only upon the present state; that is, given the present, the future does not depend on the past. A process with this property is said to be Markovian or a Markov process. The most famous Markov process is a Markov chain. Brownian motion is another well-known Markov process.

Markov property refers to the memoryless property of a stochastic process. That is, (the probability of) future actions are not dependent upon the steps that led up to the present state.

A Markov chain is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event.

Markov chains have many applications as statistical models of real-world processes, such as studying cruise control systems in motor vehicles, queues or lines of customers arriving at an airport, currency exchange rates and animal population dynamics.
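As a concrete illustration (a hypothetical two-state weather chain, not taken from the notes above), the Python sketch below simulates a Markov chain and estimates the long-run fraction of time spent in each state.

import numpy as np

rng = np.random.default_rng(0)

states = ["sunny", "rainy"]
# Hypothetical one-step transition matrix: row i gives P(next state | current state i).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

n_steps = 100_000
current = 0
counts = np.zeros(2)
for _ in range(n_steps):
    # Only the current state matters for the next step (Markov property).
    current = rng.choice(2, p=P[current])
    counts[current] += 1

print(dict(zip(states, counts / n_steps)))   # close to the stationary distribution (5/6, 1/6)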


 

Ergodicity

A state i is said to be ergodic if it is aperiodic and positive recurrent.

Queueing Theory

Queuing theory (or queueing theory) refers to the mathematical study of the formation, function, and congestion of waiting lines, or queues.

Queues are among the most frequently encountered problems in everyday life.

For example, queue at a cafeteria, library, bank, etc. Common to all of these cases are the arrivals of objects requiring service and the attendant delays when the service mechanism is busy. 

Waiting lines cannot be eliminated completely, but suitable techniques can be used to reduce the waiting time of an object in the system. A long waiting line may result in loss of customers to an organization. 

Waiting time can be reduced by providing additional service facilities, but it may result in an increase in the idle time of the service mechanism. 

To illustrate, let’s take two examples. When looking at the queuing situation at a bank, the customers are people seeking to deposit or withdraw money, and the servers are the bank tellers. When looking at the queuing situation of a printer, the customers are the requests that have been sent to the printer, and the server is the printer.

[Figure: queue situation diagram]

 

Queuing theory uses the Kendall notation to classify the different types of queuing systems, or nodes. 

Queuing nodes are classified using the notation A/S/c/K/N/D where:

  • A is the arrival process
  • S is the mathematical distribution of the service time
  • c is the number of servers
  • K is the capacity of the queue, omitted if unlimited
  • N is the number of possible customers, omitted if unlimited
  • D is the queuing discipline, assumed first-in-first-out if omitted

The general form of a queuing model is written as

(a/b/c): (d/e)

where,

  • a is the arrival process or inter arrival time
  • b is the mathematical distribution of the service time
  • c is the number of servers
  • d is the capacity of the queue, omitted if unlimited
  • e is the queuing discipline, assumed first-in-first-out if omitted

For example, think of an ATM.

It can serve one customer at a time, in first-in-first-out order, with a randomly distributed arrival process and service time distribution, unlimited queue capacity, and an unlimited number of possible customers.
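The classification can also be recorded programmatically; the small Python sketch below is just one possible representation of Kendall notation (the class and field names are invented for illustration).

from dataclasses import dataclass
import math

@dataclass
class KendallNode:
    """A queueing node described in A/S/c/K/N/D Kendall notation."""
    arrival: str                  # A: arrival process
    service: str                  # S: service time distribution
    servers: int                  # c: number of servers
    capacity: float = math.inf    # K: queue capacity (infinite if omitted)
    population: float = math.inf  # N: calling population (infinite if omitted)
    discipline: str = "FIFO"      # D: queueing discipline

    def label(self):
        return f"{self.arrival}/{self.service}/{self.servers}"

# The ATM example: Markovian arrivals and service, one server, unlimited capacity, FIFO.
atm = KendallNode(arrival="M", service="M", servers=1)
print(atm.label())   # -> M/M/1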

 Different types of Queuing Model

[(M/M/1):(∞/FIFO)]

[(M/M/s):(∞/FIFO)]

[(M/M/1):(k/FIFO)]

[(M/M/s):(k/FIFO)]


Operating characteristics of a Queuing System

Queue length (Lq)

System length (Ls)

Waiting time in queue (Wq)

Waiting time in system (Ws)

M/M/1 Model (Single Server Poisson Queue)

Queuing theory would describe this system as an M/M/1 queue (“M” here stands for Markovian, i.e., the arrival and service processes are memoryless).

This model is based on the following assumptions:

  1. The arrivals follow a Poisson distribution, with mean arrival rate λ.
  2. The service time has an exponential distribution, with average service rate μ.
  3. Arrivals come from an infinite population.
  4. Customers are served on a First-in, First-out basis (FIFO).
  5. There is only a single server. 

[Figure: single server queuing system example]

System of Steady-state Equations

Here the question arises whether the service facility can meet the customer demand. This depends on the values of λ and μ.

If λ ≥ μ, i.e., if the arrival rate is greater than or equal to the service rate, the waiting line would grow without limit. Therefore, for the system to reach a steady state, it is necessary that λ < μ.

 
  • Traffic intensity ρ = λ / μ. This is the fraction of time the service station is busy (equivalently, the probability that the server is busy).

  • Probability that the system is idle or there are no customers in the system

P0 = 1 – ρ.

  • From this, the probability of having exactly one customer in the system is P1 = ρ P0.

  • The probability of having exactly n customers in the system is

Pn = ρ^n P0 = (λ / μ)^n P0

Expected number of customers in the queue:

Lq = ρ² / (1 − ρ) = λ² / (μ(μ − λ))
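A small Python helper (a sketch, not part of the original notes) collects these standard M/M/1 steady-state measures for given λ and μ; the example rates are arbitrary.

def mm1_measures(lam, mu):
    """Steady-state operating characteristics of an M/M/1 queue."""
    if lam >= mu:
        raise ValueError("Need lam < mu for a steady state")
    rho = lam / mu
    return {
        "rho": rho,                       # traffic intensity / P(server busy)
        "P0": 1 - rho,                    # probability the system is empty
        "Ls": rho / (1 - rho),            # mean number in system
        "Lq": rho**2 / (1 - rho),         # mean number waiting in queue
        "Ws": 1 / (mu - lam),             # mean time in system
        "Wq": lam / (mu * (mu - lam)),    # mean waiting time in queue
    }

print(mm1_measures(lam=2.0, mu=3.0))   # e.g. rho = 0.67, Ls = 2, Ws = 1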

Little’s Law connects the average number of customers in a queuing system, the average time spent in the system, and the average arrival rate into the system, without knowing any other features of the queue. The formula is quite simple and is written as follows:

L = λW

or transformed to solve for the other two variables, so that:

λ = L / W   and   W = L / λ

A quick numerical check of this relationship follows the variable list below.

Where:

  • L is the average number of customers in the system
  • λ (lambda) is the average arrival rate into the system
  • W is the average amount of time spent in the system
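Using the same arbitrary rates λ = 2 and μ = 3 as in the M/M/1 sketch above, Little's Law can be verified against the M/M/1 formulas:

# Check Little's Law L = lambda * W against the M/M/1 formulas.
lam, mu = 2.0, 3.0

Ls = lam / (mu - lam)              # mean number in system
Ws = 1 / (mu - lam)                # mean time in system
assert abs(Ls - lam * Ws) < 1e-12  # L = lambda * W

Lq = lam**2 / (mu * (mu - lam))    # mean number in queue
Wq = lam / (mu * (mu - lam))       # mean waiting time in queue
assert abs(Lq - lam * Wq) < 1e-12  # Little's Law also holds for the queue alone

print("Little's Law holds:", Ls, "=", lam, "*", Ws)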

A line in a virtual waiting room

At Queue-it, we show visitors their wait time in the online queue using a calculation based on Little’s Law, adding in factors to account for no-shows and re-entries:

 Where:

  • L is the number of users ahead in line
  • λ (lambda) is rate of redirects to the website or app, in users per minute
  • N is the no-show ratio
  • R is the re-entry rate to the queue
  • W is the estimated wait time, in minutes

Example 1: Arrivals at a telephone booth are considered to be Poisson distributed with an average time of 12 minutes between one arrival and the next. The length of phone call is assumed to be distributed exponentially, with mean 4 minutes.

(i) What is the probability that a person arriving at the booth will have to wait?
(ii) The telephone department will install a second booth when convinced that an arrival would expect to wait at least 3 minutes for a phone call. By how much should the flow of arrivals increase in order to justify a second booth?
(iii) What is the average length of the queue that forms from time to time?
(iv) What is the probability that it will take him more than 10 minutes altogether to wait for the phone and complete his call?

(v) Estimate the fraction of time the phone will be in use.

(vi) Find the average number of persons in the system.

Solution:  

Given λ = 1/12 ≈ 0.083 person per minute,
μ = 1/4 = 0.25 person per minute.

(i) Probability that a person arriving at the booth will have to wait,
P (w > 0) = 1 – P0
= 1 – (1 - λ / μ) = λ / μ
= 0.33

(ii) The installation of a second booth will be justified if the expected waiting time in the queue is at least 3 minutes, i.e. Wq ≥ 3.

For the M/M/1 model, Wq = λ′ / (μ(μ − λ′)), where λ′ is the increased arrival rate. Setting Wq = 3 with μ = 0.25 and solving gives

λ′ = 3μ² / (1 + 3μ) ≈ 0.107 person per minute.

Hence the flow of arrivals must increase by about 0.107 − 0.083 ≈ 0.024 person per minute to justify a second booth.

(iii) The average length of the queue that forms from time to time (i.e. the average length of non-empty queues) is given by

μ / (μ − λ) = 0.25 / (0.25 − 0.083) ≈ 1.5 customers

(iv) The probability of spending more than 10 minutes altogether in the system (waiting plus call) is given by

P(W > 10) = e^−(μ − λ)t
= e^−(0.25 − 0.083)(10)
≈ 0.19

(v) The fraction of time the phone is in use is the traffic intensity,

ρ = λ / μ ≈ 0.33

(vi) The average number of persons in the system is

Ls = ρ / (1 − ρ) = 0.33 / (1 − 0.33) ≈ 0.5 customers
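The arithmetic above can be reproduced with a short Python check, using the standard M/M/1 formulas (a sketch, assuming λ = 1/12 and μ = 1/4 per minute):

import math

lam = 1 / 12   # arrival rate, persons per minute
mu = 1 / 4     # service rate, persons per minute
rho = lam / mu

print("P(wait) =", rho)                                        # (i)   ~0.33
print("Avg non-empty queue length =", mu / (mu - lam))         # (iii) ~1.5
print("P(time in system > 10) =", math.exp(-(mu - lam) * 10))  # (iv)  ~0.19
print("Fraction of time phone busy =", rho)                    # (v)   ~0.33
print("Avg number in system Ls =", rho / (1 - rho))            # (vi)  ~0.5

# (ii) arrival rate lam2 at which Wq = lam2 / (mu * (mu - lam2)) equals 3 minutes
lam2 = 3 * mu**2 / (1 + 3 * mu)
print("Required arrival rate =", lam2, "increase =", lam2 - lam)   # ~0.107, ~0.024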
