The Long Run Behavior of Markov Chains

Mark A. Pinsky, Samuel Karlin, in An Introduction to Stochastic Modeling (Fourth Edition), 2011

4.1.1 Doubly Stochastic Matrices

A transition probability matrix is called doubly stochastic if the columns sum to one as well as the rows. Formally, P = ‖P_ij‖ is doubly stochastic if

$$P_{ij} \ge 0 \quad \text{and} \quad \sum_{k} P_{ik} = \sum_{k} P_{kj} = 1 \quad \text{for all } i, j.$$

Consider a doubly stochastic transition probability matrix on the N states 0, 1, …, N − 1. If the matrix is regular, then the unique limiting distribution is the uniform distribution π = (1/N, …, 1/N). Because there is only one solution to π_j = ∑_k π_k P_kj and ∑_k π_k = 1 when P is regular, we need only check that π = (1/N, …, 1/N) is a solution when P is doubly stochastic in order to establish the claim. Using the doubly stochastic property ∑_j P_jk = 1, we verify that

$$\sum_{j} \frac{1}{N} P_{jk} = \frac{1}{N} \sum_{j} P_{jk} = \frac{1}{N}.$$

As an example, let Y_n be the sum of n independent rolls of a fair die and consider the problem of determining with what probability Y_n is a multiple of 7 in the long run. Let X_n be the remainder when Y_n is divided by 7. Then, X_n is a Markov chain on the states 0, 1, …, 6 with transition probability matrix (each die roll of 1 through 6 moves state i to one of the six other residues modulo 7, each with probability 1/6)

$$P = \begin{bmatrix} 0 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 \\ 1/6 & 0 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 \\ 1/6 & 1/6 & 0 & 1/6 & 1/6 & 1/6 & 1/6 \\ 1/6 & 1/6 & 1/6 & 0 & 1/6 & 1/6 & 1/6 \\ 1/6 & 1/6 & 1/6 & 1/6 & 0 & 1/6 & 1/6 \\ 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 0 & 1/6 \\ 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 0 \end{bmatrix}$$

The matrix is doubly stochastic, and it is regular (P² has only strictly positive entries); hence the limiting distribution is π = (1/7, …, 1/7). Furthermore, Y_n is a multiple of 7 if and only if X_n = 0. Thus, the limiting probability that Y_n is a multiple of 7 is 1/7.
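To make the example concrete, here is a minimal R sketch (ours, not from the book) that builds this chain, confirms it is doubly stochastic and regular, and powers it up to watch every row approach the uniform distribution:

# Chain for X_n = (sum of n die rolls) mod 7, states 0..6.
P <- matrix(0, nrow = 7, ncol = 7)
for (i in 0:6) {
  for (roll in 1:6) {
    j <- (i + roll) %% 7
    P[i + 1, j + 1] <- P[i + 1, j + 1] + 1/6
  }
}
all(abs(rowSums(P) - 1) < 1e-12)    # rows sum to one
all(abs(colSums(P) - 1) < 1e-12)    # columns sum to one: doubly stochastic
all(P %*% P > 0)                    # P^2 strictly positive, so P is regular
Pn <- diag(7)
for (step in 1:50) Pn <- Pn %*% P   # P^50
round(Pn[1, ], 4)                   # each entry is about 1/7 = 0.1429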

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123814166000046

Discrete-Time Markov Chains

Oliver C. Ibe, in Markov Processes for Stochastic Modeling (Second Edition), 2013

4.12 Problems

4.1

Consider the following transition probability matrix:

$$P = \begin{bmatrix} 0.6 & 0.2 & 0.2 \\ 0.3 & 0.4 & 0.3 \\ 0.0 & 0.3 & 0.7 \end{bmatrix}$$

a.

Give the state-transition diagram.

b.

Given that the process is currently in state 1, what is the probability that it will be in state 2 at the end of the third transition?

c.

Given that the process is currently in state 1, what is the probability that the first time it enters state 3 is at the fourth transition?
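For readers who want to check their answers numerically, a brief R sketch (ours, not part of the problem set) for parts (b) and (c):

P <- matrix(c(0.6, 0.2, 0.2,
              0.3, 0.4, 0.3,
              0.0, 0.3, 0.7), nrow = 3, byrow = TRUE)
# (b) Three-step transition probability from state 1 to state 2: (P^3)[1, 2]
P3 <- P %*% P %*% P
P3[1, 2]
# (c) First passage into state 3 at the fourth transition: make state 3
# absorbing, then take the absorption mass gained between steps 3 and 4.
Pa <- P; Pa[3, ] <- c(0, 0, 1)
hit_by <- function(n) { M <- diag(3); for (k in 1:n) M <- M %*% Pa; M[1, 3] }
hit_by(4) - hit_by(3)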

4.2

Consider the following social mobility problem. Studies indicate that people in a society can be classified as belonging to the upper class (state 1), middle class (state 2), and lower class (state 3). Membership in any class is inherited in the following probabilistic manner. Given that a person is raised in an upper-class family, he will have an upper-class family with probability 0.45, a middle-class family with probability 0.48, and a lower-class family with probability 0.07. Given that a person is raised in a middle-class family, he will have an upper-class family with probability 0.05, a middle-class family with probability 0.70, and a lower-class family with probability 0.25. Finally, given that a person is raised in a lower-class family, he will have an upper-class family with probability 0.01, a middle-class family with probability 0.50, and a lower-class family with probability 0.49. Determine the following:

a.

The state-transition diagram of the process.

b.

The transition probability matrix of the process.

c.

The limiting-state probabilities. Interpret what they mean to the layperson.

4.3

A taxi driver conducts his business in three different towns 1, 2, and 3. On any given day, when he is in town 1, the probability that the next passenger he picks up is going to a place in town 1 is 0.3, the probability that the next passenger he picks up is going to town 2 is 0.2, and the probability that the next passenger he picks up is going to town 3 is 0.5. When he is in town 2, the probability that the next passenger he picks up is going to town 1 is 0.1, the probability that the next passenger he picks up is going to town 2 is 0.8, and the probability that the next passenger he picks up is going to town 3 is 0.1. When he is in town 3, the probability that the next passenger he picks up is going to town 1 is 0.4, the probability that the next passenger he picks up is going to town 2 is 0.4, and the probability that the next passenger he picks up is going to town 3 is 0.2.

a.

Determine the state-transition diagram for the process.

b.

Give the transition probability matrix for the process.

c.

What are the limiting-state probabilities?

d.

Given that the taxi driver is currently in town 2 and is waiting to pick up his first customer for the day, what is the probability that the first time he picks up a passenger to town 2 is when he picks up his third passenger for the day?

e.

Given that he is currently in town 2, what is the probability that his third passenger from now will be going to town 1?

4.4

The New England fall weather can be classified as sunny, cloudy, or rainy. A student conducted a detailed study of the weather conditions and came up with the following conclusion: Given that it is sunny on any given day, then on the following day it will be sunny again with probability 0.5, cloudy with probability 0.3, and rainy with probability 0.2. Given that it is cloudy on any given day, then on the following day it will be sunny with probability 0.4, cloudy again with probability 0.3, and rainy with probability 0.3. Finally, given that it is rainy on any given day, then on the following day it will be sunny with probability 0.2, cloudy with probability 0.5, and rainy again with probability 0.3.

a.

Give the state-transition diagram of New England fall weather with the state "sunny" as state 1, the state "cloudy" as state 2, and the state "rainy" as state 3.

b.

Using the same convention as in part (a), give the transition probability matrix of New England fall weather.

c.

Given that it is sunny today, what is the probability that it will be sunny four days from now?

d.

Determine the limiting-state probabilities of the weather.

4.5

Consider the following transition probability matrix:

$$P = \begin{bmatrix} 0.3 & 0.2 & 0.5 \\ 0.1 & 0.8 & 0.1 \\ 0.4 & 0.4 & 0.2 \end{bmatrix}$$

a.

What is $P^n$?

b.

Obtain $\phi_{13}(5)$, the mean occupancy time of state 3 up to 5 transitions, given that the process started from state 1.

4.6

Consider the following transition probability matrix:

$$P = \begin{bmatrix} 0.5 & 0.25 & 0.25 \\ 0.3 & 0.3 & 0.4 \\ 0.25 & 0.5 & 0.25 \end{bmatrix}$$

a.

Calculate $p_{13}(3)$, $p_{22}(2)$, and $p_{32}(4)$.

b.

Calculate $p_{32}(\infty)$.

4.7

Consider the following transition probability matrix:

$$P = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0.75 & 0 & 0.25 & 0 \\ 0 & 0.25 & 0 & 0.75 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

a.

Put the matrix in the canonical form $P = \begin{bmatrix} I & 0 \\ R & Q \end{bmatrix}$.

b.

Calculate the expected absorption times $\mu_2$ and $\mu_3$.

4.8

Consider the following transition probability matrix:

$$P = \begin{bmatrix} 0.5 & 0.25 & 0.25 \\ 0.3 & 0.3 & 0.4 \\ 0.25 & 0.5 & 0.25 \end{bmatrix}$$

a.

Calculate $f_{13}(4)$, the probability of first passage from state 1 to state 3 in 4 transitions.

b.

Calculate the mean sojourn time in state 2.

4.9

Let $\{X_n\}$ be a Markov chain with the state space $\{1, 2, 3\}$ and transition probability matrix

$$P = \begin{bmatrix} 0 & 0.4 & 0.6 \\ 0.25 & 0.75 & 0 \\ 0.4 & 0 & 0.6 \end{bmatrix}$$

Let the initial distribution be $p(0) = [p_1(0), p_2(0), p_3(0)] = [0.4, 0.2, 0.4]$. Calculate the following probabilities:

a.

$P[X_1 = 2, X_2 = 2, X_3 = 1 \mid X_0 = 1]$

b.

$P[X_1 = 2, X_2 = 2, X_3 = 1]$

c.

$P[X_1 = 2, X_4 = 2, X_6 = 2]$

4.10

On a given day Mark is cheerful, so-so, or glum. Given that he is cheerful on a given day, he will be cheerful again the next day with probability 0.6, so-so with probability 0.2, and glum with probability 0.2. Given that he is so-so on a given day, he will be cheerful the next day with probability 0.3, so-so again with probability 0.5, and glum with probability 0.2. Given that he is glum on a given day, he will be so-so the next day with probability 0.5, and glum again with probability 0.5. Let state 1 denote the cheerful state, state 2 denote the so-so state, and state 3 denote the glum state. Let $X_n$ denote Mark's mood on the nth day; then $\{X_n, n = 0, 1, 2, \ldots\}$ is a three-state Markov chain.

a.

Draw the state-transition diagram of the process.

b.

Give the state-transition probability matrix.

c.

Given that Mark was so-so on Monday, what is the probability that he will be cheerful on Wednesday and Friday and glum on Sunday?

d.

In the long run, what proportion of time is Mark in each of his three moods?

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780124077959000049

Markov Chains

Mark A. Pinsky, Samuel Karlin, in An Introduction to Stochastic Modeling (Fourth Edition), 2011

Exercises

3.7.1

Consider the Markov chain whose transition probability matrix is given by

$$P = \begin{array}{c|cccc} & 0 & 1 & 2 & 3 \\ \hline 0 & 1 & 0 & 0 & 0 \\ 1 & 0.1 & 0.2 & 0.5 & 0.2 \\ 2 & 0.1 & 0.2 & 0.6 & 0.1 \\ 3 & 0 & 0 & 0 & 1 \end{array}.$$

The transition probability matrix corresponding to the nonabsorbing states is

$$Q = \begin{array}{c|cc} & 1 & 2 \\ \hline 1 & 0.2 & 0.5 \\ 2 & 0.2 & 0.6 \end{array}.$$

Calculate the matrix inverse to I − Q, and from this determine

(a)

the probability of absorption into state 0 starting from state 1;

(b)

the mean time spent in each of states 1 and 2 prior to absorption.

3.7.2

Consider the random walk Markov chain whose transition probability matrix is given by

$$P = \begin{array}{c|cccc} & 0 & 1 & 2 & 3 \\ \hline 0 & 1 & 0 & 0 & 0 \\ 1 & 0.3 & 0 & 0.7 & 0 \\ 2 & 0 & 0.3 & 0 & 0.7 \\ 3 & 0 & 0 & 0 & 1 \end{array}.$$

The transition probability matrix corresponding to the nonabsorbing states is

$$Q = \begin{array}{c|cc} & 1 & 2 \\ \hline 1 & 0 & 0.7 \\ 2 & 0.3 & 0 \end{array}.$$

Calculate the matrix inverse to I − Q, and from this determine

(a)

the probability of absorption into state 0 starting from state 1;

(b)

the mean time spent in each of states 1 and 2 prior to absorption.
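A short R sketch (ours, not part of the exercises) of the fundamental-matrix computation these exercises call for, shown for the Q and corresponding R of Exercise 3.7.1:

Q <- matrix(c(0.2, 0.5,
              0.2, 0.6), nrow = 2, byrow = TRUE)   # nonabsorbing states 1, 2
W <- solve(diag(2) - Q)   # W = (I - Q)^{-1}, the fundamental matrix
W                         # W[i, j] = mean visits to state j before absorption
rowSums(W)                # mean time to absorption from states 1 and 2
R <- matrix(c(0.1, 0.2,
              0.1, 0.1), nrow = 2, byrow = TRUE)   # columns: absorbing states 0, 3
W %*% R                   # absorption probabilities; entry [1, 1] is
                          # P(absorbed in state 0 | start in state 1)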

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123814166000034

Markov Processes

Scott L. Miller, Donald Childers, in Probability and Random Processes (Second Edition), 2012

9.2 Calculating Transition and State Probabilities in Markov Chains

The state transition probability matrix of a Markov chain gives the probabilities of transitioning from one state to another in a single time unit. It will be useful to extend this concept to longer time intervals.

Definition 9.3: The n-step transition probability for a Markov chain is

(9.4) $P_{i,j}(n) = \Pr(X_{k+n} = j \mid X_k = i).$

Also, define an n-step transition probability matrix P(n) whose elements are the n-step transition probabilities in Equation (9.4).

Given the one-step transition probabilities, it is straightforward to calculate higher-order transition probabilities using the following result.

Theorem 9.1 (Chapman–Kolmogorov Equation):

(9.5) $P_{i,j}(n) = \sum_{k} p_{i,k}(m)\, p_{k,j}(n-m), \quad \text{for any } m = 0, 1, 2, \ldots, n.$

Proof: First, condition on the event that in the process of transitioning from state i to state j, the Markov chain passes through state k at some intermediate point in time. Then using the principle of total probability,

(9.6) $\Pr(X_{l+n} = j \mid X_l = i) = \sum_{k} \Pr(X_{l+n} = j \mid X_l = i, X_{l+m} = k)\, \Pr(X_{l+m} = k \mid X_l = i).$

Using the Markov property, the expression reduces to the desired form:

(9.7) $\Pr(X_{l+n} = j \mid X_l = i) = \sum_{k} \Pr(X_{l+n} = j \mid X_{l+m} = k)\, \Pr(X_{l+m} = k \mid X_l = i).$

This result can be written in a more compact form using transition probability matrices. It is easily seen that the Chapman–Kolmogorov equations can be written in terms of the n-step transition probability matrices as

(9.8) $P(n) = P(m)\, P(n-m).$

Then, starting with the fact that P(1) = P, it follows that P(2) = P(1)P(1) = P², and using induction, it is established that

(9.9) $P(n) = P^n.$

Hence, we can find the n-step transition probability matrix through matrix multiplication. If n is large, it may be more convenient to compute $P^n$ via eigendecomposition. In many cases, the matrix P can be expanded as $P = U \Lambda U^{-1}$, where Λ is the diagonal matrix of eigenvalues and U is the matrix whose columns are the corresponding eigenvectors. Then,

(9.10) $P^n = U \Lambda^n U^{-1}.$
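A minimal R sketch (ours; the two-state matrix is an arbitrary illustration, not one from this chapter) comparing Equation (9.10) against brute-force matrix powering:

P <- matrix(c(0.7, 0.3,
              0.4, 0.6), nrow = 2, byrow = TRUE)
e <- eigen(P)                        # eigenvalues and (right) eigenvectors
U <- e$vectors
n <- 8
Pn_eig <- U %*% diag(e$values^n) %*% solve(U)   # Equation (9.10)
Pn_pow <- diag(2); for (k in 1:n) Pn_pow <- Pn_pow %*% P
max(abs(Pn_eig - Pn_pow))            # ~0: the two methods agree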

Another quantity of interest is the probability distribution of the Markov chain at some time instant k. If the initial probability distribution of the Markov chain is known, then the distribution at some later point in time can easily be found. Let $\pi_j(k) = \Pr(X_k = j)$ and let π(k) be the row vector whose jth element is $\pi_j(k)$. Then

(9.11) $\pi_j(k) = \Pr(X_k = j) = \sum_{i} \Pr(X_k = j \mid X_0 = i)\, \Pr(X_0 = i) = \sum_{i} p_{i,j}(k)\, \pi_i(0),$

or in vector form,

(9.12) $\pi(k) = \pi(0)\, P^k.$

Example 9.7 (continuation of Example 9.2)

Recall in Example 9.2, the child who purchased kid's meals at his favorite restaurant in order to collect a set of four superhero action figures. Initially, before any meals are purchased, the child has no action figures, and so the initial probability distribution is π(0) = (1, 0, 0, 0, 0). Repeated application of Equation (9.12) with the probability transition matrix given in Example 9.2 results in

$\pi(1) = (0, 1, 0, 0, 0),$
$\pi(2) = (0, 1/4, 3/4, 0, 0),$
$\pi(3) = (0, 1/16, 9/16, 3/8, 0),$
$\pi(4) = (0, 1/64, 21/64, 9/16, 3/32),$
$\pi(5) = (0, 1/256, 45/256, 75/128, 15/64),$
$\pi(6) = (0, 1/1024, 93/1024, 135/256, 195/512),$

and so on. It is to be expected that if the child buys enough meals, he will eventually complete the collection (i.e., get to state 4) with probability approaching unity. This can easily be verified analytically by calculating the limiting form of $P^k$ as k → ∞. Recall that for this example, P is a triangular matrix and hence its eigenvalues are simply the diagonal entries. Hence, the diagonal matrix of eigenvalues is

$$\Lambda = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 1/4 & 0 & 0 & 0 \\ 0 & 0 & 1/2 & 0 & 0 \\ 0 & 0 & 0 & 3/4 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}.$$

It should be clear that $\lim_{k\to\infty} \Lambda^k$ is a matrix with all zero entries except the one in the lower right corner, which is equal to 1. Using MATLAB (or some other math package) to calculate the corresponding matrix of eigenvectors, it is found that

$$\lim_{k\to\infty} P^k = U \Big( \lim_{k\to\infty} \Lambda^k \Big) U^{-1} = \begin{bmatrix} 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}.$$

Then, using the initial distribution π(0) = (1, 0, 0, 0, 0), the state distribution as k → ∞ works out to be

$$\pi = \lim_{k\to\infty} \pi(k) = \lim_{k\to\infty} \pi(0) P^k = [\,0 \;\; 0 \;\; 0 \;\; 0 \;\; 1\,].$$
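The transition matrix of Example 9.2 is not reproduced in this excerpt, but it can be read off from the π(k) values above: from state i the next meal yields a new figure with probability (4 − i)/4. A short R sketch (ours) reproducing the computation:

# States 0..4 = number of distinct action figures collected.
P <- matrix(0, 5, 5)
for (i in 0:3) {
  P[i + 1, i + 1] <- i / 4         # duplicate figure: stay in state i
  P[i + 1, i + 2] <- (4 - i) / 4   # new figure: advance to state i + 1
}
P[5, 5] <- 1                       # complete collection is absorbing
pi_k <- c(1, 0, 0, 0, 0)           # pi(0)
for (k in 1:6) {
  pi_k <- pi_k %*% P               # Equation (9.12)
  cat("pi(", k, ") =", round(pi_k, 4), "\n")
}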

In Example 9.6, it was seen that as k → ∞, the k-step transition probability matrix approached a matrix whose rows were all identical. In that case, the limiting product $\lim_{k\to\infty} \pi(0)P^k$ is the same regardless of the initial distribution π(0). Such a Markov chain is said to have a unique steady-state distribution, π. It should be emphasized that not all Markov chains have a steady-state distribution. For example, the Poisson counting process of Example 9.1 clearly does not, since any counting process is a monotonic nondecreasing function of time and, therefore, it is expected that the distribution should skew toward larger values as time progresses.

This concept of a steady-state distribution can be viewed from the perspective of stationarity. Suppose at time k, the process has some distribution, π(k). The distribution at the next time instant is then π(k + 1) = π(k)P. If π(k) = π(k + 1), then the process has reached a point where the distribution is stationary (independent of time). This stationary distribution, π, must satisfy the relationship

(9.13) π = π P .

In other words, π (if it exists) is the left eigenvector of the transition probability matrix P corresponding to the eigenvalue λ = 1. The next example shows that this eigenvector is not always unique.
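In R, this left eigenvector can be extracted with eigen() applied to the transpose of P; the three-state chain below is an arbitrary illustration (ours), not one from the text:

P <- matrix(c(0.5, 0.3, 0.2,
              0.4, 0.3, 0.3,
              0.2, 0.5, 0.3), nrow = 3, byrow = TRUE)
e <- eigen(t(P))               # left eigenvectors of P = right eigenvectors of t(P)
v <- Re(e$vectors[, which.min(abs(e$values - 1))])
pi_ss <- v / sum(v)            # normalize to a probability vector
pi_ss
max(abs(pi_ss %*% P - pi_ss))  # ~0: Equation (9.13) is satisfied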

Example 9.8 (The Gambler's Ruin revisited)

Suppose a certain gambler has $5 and plays against another player (the house). The gambler decides that he will play until he either doubles his money or loses it all. Suppose the house has designed this game of chance so that the gambler will win with probability p = 0.45 and the house will win with probability q = 0.55. Let X_k be the amount of money the gambler has after playing the game k times. The transition probability matrix for this Markov chain is

$$P = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.55 & 0 & 0.45 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0.55 & 0 & 0.45 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0.55 & 0 & 0.45 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0.55 & 0 & 0.45 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0.55 & 0 & 0.45 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0.55 & 0 & 0.45 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0.55 & 0 & 0.45 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.55 & 0 & 0.45 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.55 & 0 & 0.45 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$

This matrix has (two) repeated eigenvalues of λ = 1, and the corresponding left eigenvectors are [1 0 0 0 0 0 0 0 0 0 0] and [0 0 0 0 0 0 0 0 0 0 1]. Note that any linear combination of these will also be an eigenvector. Therefore, any vector of the form [p 0 0 0 0 0 0 0 0 0 1 − p] is a left eigenvector of P, and hence there is no unique stationary distribution for this Markov chain. For this example, the limiting form of the state distribution of the Markov chain depends on the initial distribution. The limiting form of $P^k$ can easily be found to be

$$\lim_{k\to\infty} P^k = \begin{bmatrix} 1 & 0 & \cdots & 0 & 0 \\ 0.9655 & 0 & \cdots & 0 & 0.0345 \\ 0.9233 & 0 & \cdots & 0 & 0.0767 \\ 0.8717 & 0 & \cdots & 0 & 0.1283 \\ 0.8087 & 0 & \cdots & 0 & 0.1913 \\ 0.7317 & 0 & \cdots & 0 & 0.2683 \\ 0.6376 & 0 & \cdots & 0 & 0.3624 \\ 0.5225 & 0 & \cdots & 0 & 0.4775 \\ 0.3819 & 0 & \cdots & 0 & 0.6181 \\ 0.2101 & 0 & \cdots & 0 & 0.7899 \\ 0 & 0 & \cdots & 0 & 1 \end{bmatrix}$$

(the middle nine columns are identically zero in the limit).

Using the initial distribution π(0) = [0 0 0 0 0 1 0 0 0 0 0] (that is, the gambler starts off in state 5), it is seen that the limiting distribution is $\lim_{k\to\infty} \pi(k) = [\,0.7317\;\;0\;\;0\;\;0\;\;0\;\;0\;\;0\;\;0\;\;0\;\;0\;\;0.2683\,]$. So, when the gambler starts with $5, he has about a 73% chance of losing all of his money and about a 27% chance of doubling his money.
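The limiting numbers above are easy to confirm by brute-force powering of the transition matrix; a sketch (ours) in R:

p <- 0.45; q <- 0.55
P <- matrix(0, 11, 11)             # states 0..10 dollars
P[1, 1] <- 1; P[11, 11] <- 1       # ruin and doubled-money states are absorbing
for (i in 2:10) { P[i, i - 1] <- q; P[i, i + 1] <- p }
Pk <- diag(11)
for (k in 1:5000) Pk <- Pk %*% P   # P^k for large k
round(Pk[6, c(1, 11)], 4)          # from $5: (0.7317, 0.2683), as in the text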

As seen in Example 9.8, with some Markov chains, the limiting form of $P^k$ (as k → ∞) does not necessarily converge to a matrix whose rows are all identical. In that case, the limiting form of the state distribution will depend on the starting distribution. In the case of the gambler's ruin problem, we probably could have guessed this behavior. If the gambler had started with very little money, we would expect him to end up in the state of ruin with very high probability; whereas, if the gambler were very wealthy and the house had very little money, we would expect a much greater chance of the gambler eventually breaking the house. Accordingly, our intuition tells us that the probability distribution of the gambler's ultimate state should depend on the starting state.

In general, there are several different manners in which a Markov chain's state distribution can behave as k → ∞. In some cases, $\lim_{k\to\infty} \pi(k)$ does not exist. Such would be the case when the process tends to oscillate between two or more states. A second possibility, as in Example 9.8, is that $\lim_{k\to\infty} \pi(k)$ does in fact converge to a fixed distribution, but the form of this limiting distribution depends on the starting distribution. The last case is when $\lim_{k\to\infty} \pi(k) = \pi$. That is, the state distribution converges to some fixed distribution, π, and the form of π is independent of the starting distribution. Here, the transition probability matrix, P, will have a single (nonrepeated) eigenvalue at λ = 1, and the corresponding eigenvector (properly normalized) will be the steady-state distribution, π. Furthermore, the limiting form of $P^k$ will be one whose rows are all identical and equal to the steady-state distribution, π. In the next section, we look at some conditions that must be satisfied for the Markov chain to reach a unique steady-state distribution.

Example 9.9

In this example, we provide the MATLAB code to simulate the distribution of the queue length of the taxi stand described in Example 9.4. For this example, we take the number of arrivals per time unit, X, to be a Poisson random variable whose PMF is

$$P_X(k) = \frac{\lambda^k e^{-\lambda}}{k!}.$$

Recall that in the taxi stand, one customer is served per time unit (assuming there is at least one customer in the queue waiting to be served). The following code can be used to estimate and plot the PMF of the queue length. The average queue length was also calculated to be 3.36 customers for an arrival rate of λ = 0.85 customers per time unit. Figure 9.3 shows a histogram of the PMF of the queue length for the same arrival rate.
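The MATLAB listing itself is not reproduced in this excerpt. A minimal R sketch of the same simulation (ours), assuming the queue recursion X_{k+1} = max(X_k − 1, 0) + A_k with Poisson arrivals A_k:

set.seed(1)
lambda <- 0.85                     # average arrivals per time unit
n_steps <- 1e5
a <- rpois(n_steps, lambda)        # Poisson arrivals in each time unit
x <- numeric(n_steps)              # queue length; starts empty
for (k in 1:(n_steps - 1)) {
  x[k + 1] <- max(x[k] - 1, 0) + a[k]   # serve one customer, add arrivals
}
mean(x)                            # compare with the 3.36 reported above
barplot(table(x) / n_steps, xlab = "Queue length", ylab = "Relative frequency")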

Figure 9.3. Histogram of the queue length for the taxi stand of Example 9.4 assuming a Poisson arrival process with an average arrival rate of 0.85 arrivals per time unit.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123869814500126

Reconstructing the Phylogeny

Grady Weyenberg, Ruriko Yoshida, in Algebraic and Discrete Mathematical Methods for Modern Biology, 2015

12.2.1.2 Jukes-Cantor 1969

The simplest model of DNA evolution is the JC (or JC69) model. In addition to the Markov process assumptions, it also assumes that there is only a single transition rate that governs all types of substitution [38]. This assumption implies a rate matrix of the form

$$Q = \begin{bmatrix} -1 & 1/3 & 1/3 & 1/3 \\ 1/3 & -1 & 1/3 & 1/3 \\ 1/3 & 1/3 & -1 & 1/3 \\ 1/3 & 1/3 & 1/3 & -1 \end{bmatrix}.$$

We index the rows and columns of matrices for nucleotide models in the following order: adenine, guanine, cytosine, and thymine/uracil. In this example it does not matter, but in subsequent models this becomes important.

Below, we implement a function that calculates the transition probability matrix function P(d) and use it to estimate the stationary distribution for the JC model. The characteristic timescale of the system (i.e., the parameter of the time t in the continuous-time Markov chain) is 1, and the probability matrix has converged quite well at a distance d = 100. (Beware: Attempting to evaluate the function at Inf, the R symbol representing infinity, appears to cause R to enter an infinite loop.)

library(expm)
DNA.alphabet <- c("a", "g", "c", "t")
Q <- matrix(1/3, nrow = 4, ncol = 4)  # create 4x4 matrix and fill with 1/3
diag(Q) <- -1                         # set diagonal to -1
colnames(Q) <- rownames(Q) <- DNA.alphabet
P <- function(d) expm(d * Q)          # implement P(d)
P(100)  # characteristic timescale is 1, so 100 is "close" to infinity

##      a    g    c    t
## a 0.25 0.25 0.25 0.25
## g 0.25 0.25 0.25 0.25
## c 0.25 0.25 0.25 0.25
## t 0.25 0.25 0.25 0.25

Thus, we find that the stationary distribution for the JC model is uniform on all possible character states, π = (1/4, 1/4, 1/4, 1/4). Under this model, because the Markov chain is time reversible, we can take the initial probability of this model to be π = (1/4, 1/4, 1/4, 1/4).

Another interesting and useful thing happens if a Markov process has a stationary distribution π and the following relationship with the rate matrix Q is true:

(12.1) $\pi_i Q_{ij} = \pi_j Q_{ji}, \quad \forall\, i, j$

This is known as the detailed balance equation, and when it holds the process is reversible. If a process is reversible, it means that once it has converged to the equilibrium distribution, the "arrow of time" disappears: there is no way to determine if a character's process X(d) is indexed in the proper direction, or if the time index has been reversed.

Reversibility turns out to be a desirable property if we want to study molecular evolution. Consider a most-recent common ancestor (MRCA) with two daughter lineages. If the processes describing sequence evolution are reversible, then we do not need to consider the two lineages separately. Reversibility means that we can treat the daughters as endpoints of one long Markov chain that goes "up" one lineage to the MRCA and back "down" the other lineage. If the model were not reversible, then this would not be a valid simplification. We would need to model each lineage separately, in the correct orientation from ancestor to descendant. It turns out that JC, along with all the other models we will discuss, is reversible.

Exercise 12.4

Show that the JC rate matrix satisfies the detailed balance condition.

Recall that $P_{ij}(d)$ represents the probability of ending in state j when starting in state i, if the distance between the sequences is d. Because the model is reversible, we can use this matrix to find the likelihood of observing a particular pair of characters in a sequence alignment, assuming the sequences are separated by distance d. If the sites in the alignment are independent, then the likelihood of the entire alignment is the product of the individual site likelihoods. That is, if the character pair i, j occurs in the alignment $n_{ij}$ times, then the likelihood of the entire alignment is given by

$$L(d) = \prod_{i,j} P_{ij}(d)^{\,n_{ij}}.$$

For a variety of reasons, both theoretical and relating to numerical stability, it is more common to work with the log-likelihood

(12.2) $\ell(d) = \log L(d) = \sum_{i,j} n_{ij} \log P_{ij}(d)$

Readers familiar with statistical methods will most likely anticipate the next step: we are now in a position to use a sequence alignment to estimate the evolutionary distance separating the two sequences. We will do this by attempting to find the distance d that maximizes the likelihood, given the observed alignment data.

First, we implement a function that counts the occurrences of the various substitutions between two sequences, and another that calculates the log-likelihood, given d and the substitution counts.

nMatrix <- function(alignment, i, j) {
  alignment <- alignment[c(i, j), ]
  x <- apply(alignment, 2:1, function(y) as.DNAbin(DNA.alphabet) %in% y)
  dimnames(x) <- list(DNA.alphabet, NULL, labels(alignment))
  tcrossprod(x[, , 1], x[, , 2])
}

JC.lnL <- function(d, nm) sum(nm * log(P(d)))

Applying the nMatrix function to the hemoglobin data, the substitution count matrix for the HBA1 and HBA2 sequences is obtained.

A plot of the JC log-likelihood for the HBA1 and HBA2 alignment is shown in Figure 12.5.

Figure 12.5. Log-likelihood function for the JC model and the sequences HBA1 and HBA2.

N12 <- nMatrix(hemo.aligned, "HBA1", "HBA2")
### substitution pair count for HBA1 & HBA2
N12

##      a   g   c  t
## a   95   2   1  0
## g    2 150   0  3
## c    2   3 207  4
## t    0   2   6 98

We can now employ numerical optimization to find the maximum likelihood (ML) estimate for the distance separating the two sequences.

optimize(JC.lnL, 0:1, nm = N12, maximum = TRUE)

## $maximum
## [1] 0.04478
##
## $objective
## [1] -130.3

We estimate that each character site in the alignment has experienced about 0.045 substitutions, total, along both lineages leading from the common ancestral hemoglobin gene to the modern-day HBA1 and HBA2.

In fact, it is fairly easy to obtain a closed-form solution for the JC problem. This solution is presented in many other books (e.g., [4]), and we will not discuss it at length here, except to note the final equation for calculating the JC distance between two sequences:

(12.3) $d = -\dfrac{3}{4} \ln\!\left(1 - \dfrac{4}{3}\,p\right).$

Here, p is the proportion of sites in the alignment that have different characters.

The dist.dna function from the ape package computes the JC distance using the closed-form solution and agrees with our numerical solution.

dist.dna(hemo.aligned[c("HBA1", "HBA2"), ], model = "JC")
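As a quick consistency check (ours, not from the text), Eq. (12.3) can also be evaluated directly from the substitution counts in N12:

p.hat <- (sum(N12) - sum(diag(N12))) / sum(N12)  # proportion of differing sites
-3/4 * log(1 - 4/3 * p.hat)                      # about 0.0448, matching optimize()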

Exercise 12.5

The matrix exponential of the JC rate matrix is given by the formula

$$P_{ij}(d) = \begin{cases} 0.25 + 0.75\, e^{-4d/3} & \text{if } i = j, \\ 0.25 - 0.25\, e^{-4d/3} & \text{if } i \neq j. \end{cases}$$

Prove that Eq. (12.3) solves the likelihood optimization problem for the JC model.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128012130000125

Special Random Processes

Oliver C. Ibe, in Fundamentals of Applied Probability and Random Processes (Second Edition), 2014

Section 12.7 Discrete-Time Markov Chains

12.36

Determine the missing elements denoted by x in the following transition probability matrix:

$$P = \begin{bmatrix} x & 1/3 & 1/3 & 1/3 \\ 1/10 & x & 1/5 & 2/5 \\ x & x & x & 1 \\ 3/5 & 2/5 & x & x \end{bmatrix}$$

12.37

Draw the state-transition diagram for the Markov chain with the following transition probability matrix.

$$P = \begin{bmatrix} 1/2 & 0 & 0 & 1/2 \\ 1/2 & 1/2 & 0 & 0 \\ 1/4 & 0 & 1/2 & 1/4 \\ 0 & 1/2 & 1/4 & 1/4 \end{bmatrix}$$

12.38

Consider a Markov chain with the state-transition diagram shown in Figure 12.21.

a.

Give the transition probability matrix.

b.

Identify the recurrent states.

c.

Identify the transient states.

Figure 12.21. Figure for Problem 12.38

12.39

Consider the Markov chain with the state-transition diagram shown in Figure 12.22.

a.

List the transient states, the recurrent states, and the periodic states.

b.

Identify the members of each chain of recurrent states.

c.

Give the transition probability matrix of the process.

d.

Given that the process starts in state 1, either determine the numerical value of the probability that the process is in state 8 after an infinitely large number of transitions or explain why this quantity does not exist.

Figure 12.22. Figure for Problem 12.39

12.40

Consider the three-state Markov chain shown in Figure 12.23:

a.

Identify the transient states, the recurrent states, the periodic states, and the members of each chain of recurrent states.

b.

Either determine the limiting-state probabilities or explain why they do not exist.

c.

Given that the process is currently in state 1, determine P[A], the probability that it will be in state 3 at least once during the next two transitions.

Figure 12.23. Figure for Problem 12.40

12.41

Consider the Markov chain shown in Figure 12.24:

a.

Which states are transient?

b.

Which states are periodic?

c.

Does state 3 have a limiting-state probability? If so, determine this probability.

d.

Assuming that the process begins in state 4, determine the z-transform of the PMF of K, where K is the number of trials up to and including the trial in which the process enters state 2 for the second time.

Figure 12.24. Figure for Problem 12.41

12.42

Find the limiting-country probabilities associated with the following transition probability matrix:

$$P = \begin{bmatrix} 0.4 & 0.3 & 0.3 \\ 0.3 & 0.4 & 0.3 \\ 0.3 & 0.3 & 0.4 \end{bmatrix}$$

12.43

Consider the following transition probability matrix:

$$P = \begin{bmatrix} 0.6 & 0.2 & 0.2 \\ 0.3 & 0.4 & 0.3 \\ 0.0 & 0.3 & 0.7 \end{bmatrix}$$

a.

Give the state-transition diagram.

b.

Given that the process is currently in state 1, what is the probability that it will be in state 2 at the end of the third transition?

c.

Given that the process is currently in state 1, what is the probability that the first time it enters state 3 is at the fourth transition?

12.44

Consider the following social mobility problem. Studies indicate that people in a society can be classified as belonging to the upper class (state 1), middle class (state 2), and lower class (state 3). Membership in any class is inherited in the following probabilistic manner. Given that a person is raised in an upper-class family, he will have an upper-class family with probability 0.45, a middle-class family with probability 0.48, and a lower-class family with probability 0.07. Given that a person is raised in a middle-class family, he will have an upper-class family with probability 0.05, a middle-class family with probability 0.70, and a lower-class family with probability 0.25. Finally, given that a person is raised in a lower-class family, he will have an upper-class family with probability 0.01, a middle-class family with probability 0.50, and a lower-class family with probability 0.49. Determine the following:

a.

The state-transition diagram of the process

b.

The transition probability matrix of the process

c.

The limiting-state probabilities. Interpret what they mean to the layperson.

12.45

A taxi driver conducts his business in three different towns 1, 2, and 3. On any given day, when he is in town 1, the probability that the next passenger he picks up is going to a place in town 1 is 0.3, the probability that the next passenger he picks up is going to town 2 is 0.2, and the probability that the next passenger he picks up is going to town 3 is 0.5. When he is in town 2, the probability that the next passenger he picks up is going to town 1 is 0.1, the probability that the next passenger he picks up is going to town 2 is 0.8, and the probability that the next passenger he picks up is going to town 3 is 0.1. When he is in town 3, the probability that the next passenger he picks up is going to town 1 is 0.4, the probability that the next passenger he picks up is going to town 2 is 0.4, and the probability that the next passenger he picks up is going to town 3 is 0.2.

a.

Determine the state-transition diagram for the process.

b.

Give the transition probability matrix for the process.

c.

What are the limiting-state probabilities?

d.

Given that the taxi driver is currently in town 2 and is waiting to pick up his first customer for the day, what is the probability that the first time he picks up a passenger to town 2 is when he picks up his third passenger for the day?

e.

Given that he is currently in town 2, what is the probability that his third passenger from now will be going to town 1?

12.46

New England fall weather can be classified as sunny, cloudy, or rainy. A student conducted a detailed study of the weather pattern and came up with the following conclusion: Given that it is sunny on any given day, then on the following day it will be sunny again with probability 0.5, cloudy with probability 0.3, and rainy with probability 0.2. Given that it is cloudy on any given day, then on the following day it will be sunny with probability 0.4, cloudy again with probability 0.3, and rainy with probability 0.3. Finally, given that it is rainy on any given day, then on the following day it will be sunny with probability 0.2, cloudy with probability 0.5, and rainy again with probability 0.3.

a.

Give the state-transition diagram of New England fall weather with the state "sunny" as state 1, the state "cloudy" as state 2, and the state "rainy" as state 3.

b.

Using the same convention as in part (a), give the transition probability matrix of New England fall weather.

c.

Given that it is sunny today, what is the probability that it will be sunny four days from now?

d.

Determine the limiting-state probabilities of the weather.

12.47

A student went to a gambling casino with $3. He wins $1 at each round with probability p and loses $1 with probability 1 − p. Being a very cautious player, the student has decided to stop playing when he doubles his original $3 (i.e., when he has a total of $6) or when he loses all his money.

a.

Give the state-transition diagram of the process.

b.

What is the probability that he stops after being ruined (i.e., he lost all his money)?

c.

What is the probability that he stops after he has doubled his original amount?

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128008522000122

Markov Chains

Sheldon M. Ross, in Introduction to Probability Models (12th Edition), 2019

4.4.1 Limiting Probabilities

In Example 4.8 we considered a two-state Markov chain with transition probability matrix

$$P = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}$$

and showed that

$$P^{(4)} = \begin{pmatrix} 0.5749 & 0.4251 \\ 0.5668 & 0.4332 \end{pmatrix}$$

From this it follows that $P^{(8)} = P^{(4)} P^{(4)}$ is given (to three significant places) by

$$P^{(8)} = \begin{pmatrix} 0.571 & 0.429 \\ 0.571 & 0.429 \end{pmatrix}$$
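A quick numerical check (ours) of these matrix powers in R:

P <- matrix(c(0.7, 0.3,
              0.4, 0.6), nrow = 2, byrow = TRUE)
P4 <- P %*% P %*% P %*% P
round(P4, 4)          # (0.5749, 0.4251; 0.5668, 0.4332), as above
round(P4 %*% P4, 3)   # P^(8): both rows approach (4/7, 3/7) = (0.571, 0.429)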

Note that the matrix $P^{(8)}$ is almost identical to the matrix $P^{(4)}$, and that each of the rows of $P^{(8)}$ has almost identical values. Indeed, it seems that $P_{ij}^n$ is converging to some value as $n \to \infty$, with this value not depending on i. Moreover, in Example 4.22 we showed that the long-run proportions for this chain are $\pi_0 = 4/7 \approx 0.571$, $\pi_1 = 3/7 \approx 0.429$, thus making it appear that these long-run proportions may also be limiting probabilities. Although this is indeed the case for the preceding chain, it is not always true that the long-run proportions are also limiting probabilities. To see why not, consider a two-state Markov chain having

$$P_{0,1} = P_{1,0} = 1$$

Because this Markov chain continually alternates between states 0 and 1, the long-run proportions of time it spends in these states are

$$\pi_0 = \pi_1 = 1/2$$

However,

$$P_{0,0}^n = \begin{cases} 1, & \text{if } n \text{ is even} \\ 0, & \text{if } n \text{ is odd} \end{cases}$$

and so $P_{0,0}^n$ does not have a limiting value as n goes to infinity. In general, a chain that can only return to a state in a multiple of d > 1 steps (where d = 2 in the preceding example) is said to be periodic and does not have limiting probabilities. However, for an irreducible chain that is not periodic, and such chains are called aperiodic, the limiting probabilities will always exist and will not depend on the initial state. Moreover, the limiting probability that the chain will be in state j will equal $\pi_j$, the long-run proportion of time the chain is in state j. That the limiting probabilities, when they exist, will equal the long-run proportions can be seen by letting

$$\alpha_j = \lim_{n\to\infty} P(X_n = j)$$

and using that

$$P(X_{n+1} = j) = \sum_{i=0}^{\infty} P(X_{n+1} = j \mid X_n = i)\, P(X_n = i) = \sum_{i=0}^{\infty} P_{ij}\, P(X_n = i)$$

and

$$1 = \sum_{i=0}^{\infty} P(X_n = i)$$

Letting $n \to \infty$ in the preceding two equations yields, upon assuming that we can bring the limit inside the summation, that

$$\alpha_j = \sum_{i=0}^{\infty} \alpha_i P_{ij}, \qquad 1 = \sum_{i=0}^{\infty} \alpha_i$$

Hence, $\{\alpha_j, j \ge 0\}$ satisfies the equations for which $\{\pi_j, j \ge 0\}$ is the unique solution, showing that $\alpha_j = \pi_j$, $j \ge 0$.

An irreducible, positive recurrent, aperiodic Markov chain is said to be ergodic.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128143469000093

Stochastic Processes

J. MEDHI, in Stochastic Models in Queueing Theory (Second Edition), 2003

1.6.2 Equivalence of the two limiting forms

Let $\{Y_n, n \ge 0\}$ be an irreducible and aperiodic chain with finite state space S and TPM P. Then from the ergodic theorem (Theorem 1.1) we get that

$$\lim_{n\to\infty} p_{ij}(n) = v_j, \quad i, j \in S$$

exists and is independent of i, and $V = \{v_1, v_2, \ldots\}$ is the invariant distribution given by

(1.6.5) $VP = V, \qquad V\mathbf{e} = 1.$

The Markov process $\{X(t) = Y_{N(t)}, t \ge 0\}$ is also aperiodic and irreducible and has the same state space S. From the ergodic theorem (Theorem 1.4), we get that

$$\lim_{t\to\infty} p_{ij}(t) = u_j$$

exists and is independent of i. Further, $U = \{u_1, u_2, \ldots\}$ is a probability vector and U is given as the solution of

(1.6.6) $UQ = 0, \quad U\mathbf{e} = 1; \qquad UQ = 0 \iff U[\lambda(P - I)] = 0 \iff UP = U.$

Thus, from (1.6.5) and (1.6.6), we get

$$U \equiv V.$$

In other words,

(1.6.7) $\lim_{t\to\infty} p_{ij}(t) = \lim_{n\to\infty} p_{ij}(n), \quad i, j \in S.$
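A small numerical illustration (ours, with an arbitrary three-state chain, not one from the text) of this equivalence: build Q = λ(P − I) from a TPM P, solve UQ = 0 and VP = V, and compare.

P <- matrix(c(0.5, 0.3, 0.2,
              0.2, 0.6, 0.2,
              0.3, 0.3, 0.4), nrow = 3, byrow = TRUE)
lambda <- 2
Q <- lambda * (P - diag(3))        # generator of the uniformized process
eP <- eigen(t(P))                  # V solves V P = V, V e = 1
V <- Re(eP$vectors[, which.min(abs(eP$values - 1))]); V <- V / sum(V)
eQ <- eigen(t(Q))                  # U solves U Q = 0, U e = 1
U <- Re(eQ$vectors[, which.min(abs(eQ$values))]); U <- U / sum(U)
max(abs(U - V))                    # ~0: the two limiting distributions agree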

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780124874626500011

Renewal Phenomena

Mark A. Pinsky, Samuel Karlin, in An Introduction to Stochastic Modeling (Fourth Edition), 2011

Exercises

7.2.1

Let $\{X_n; n = 0, 1, \ldots\}$ be a two-state Markov chain with the transition probability matrix

$$P = \begin{array}{c|cc} & 0 & 1 \\ \hline 0 & 1-a & a \\ 1 & b & 1-b \end{array}.$$

State 0 represents an operating state of some system, while state 1 represents a repair state. We assume that the process begins in state $X_0 = 0$; then the successive returns to state 0 from the repair state form a renewal process. Determine the mean duration of one of these renewal intervals.

7.2.2

A certain type of component has two states: 0 = OFF and 1 = OPERATING. In state 0, the process remains there a random length of time, which is exponentially distributed with parameter α, and then moves to state 1. The time in state 1 is exponentially distributed with parameter β, after which the process returns to state 0.

The system has two of these components, A and B, with distinct parameters:

Component | Operating Failure Rate | Repair Rate
A         | β_A                    | α_A
B         | β_B                    | α_B

In order for the system to operate, at least one of components A and B must be operating (a parallel system). Assume that the component stochastic processes are independent of one another. Consider the successive instants that the system enters the failed state from an operating state. Use the memoryless property of the exponential distribution to argue that these instants form a renewal process.

7.2.3

Calculate the mean number of renewals $M(n) = E[N(n)]$ for the renewal process having interoccurrence distribution

$$p_1 = 0.4, \quad p_2 = 0.1, \quad p_3 = 0.3, \quad p_4 = 0.2$$

for n = 1, 2, …, 10. Also calculate $u_n = M(n) - M(n-1)$.
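A brief R sketch (ours, not part of the exercise) of the standard discrete renewal recursion for this computation, $u_n = \sum_k p_k u_{n-k}$ with $u_0 = 1$ and $M(n) = u_1 + \cdots + u_n$:

p <- c(0.4, 0.1, 0.3, 0.2)         # p_1, ..., p_4
u <- numeric(11); u[1] <- 1        # u[n + 1] stores u_n; u_0 = 1
for (n in 1:10) {
  ks <- 1:min(n, 4)
  u[n + 1] <- sum(p[ks] * u[n - ks + 1])
}
M <- cumsum(u[-1])                 # M(1), ..., M(10)
round(M, 4)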

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123814166000071

Markov Chains

Sheldon M. Ross, in Introduction to Probability Models (Tenth Edition), 2010

Proposition 4.6

Consider an irreducible Markov chain with transition probabilities Pij . If we can find positive numbers π i , i ≥ 0, summing to one, and a transition probability matrix Q = [Qij ] such that

(4.29) $\pi_i P_{ij} = \pi_j Q_{ji}$

then the $Q_{ij}$ are the transition probabilities of the reversed chain and the $\pi_i$ are the stationary probabilities for both the original and reversed chains.

The importance of the preceding proposition is that, by thinking backward, we can sometimes guess at the nature of the reversed chain and then use the set of Equations (4.29) to obtain both the stationary probabilities and the $Q_{ij}$.
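As a small illustration (ours, with an arbitrary irreducible chain, not one from the text), Equation (4.29) can be turned around to compute the reversed chain once the $\pi_i$ are known, via $Q_{ji} = \pi_i P_{ij} / \pi_j$:

P <- matrix(c(0.2, 0.5, 0.3,
              0.6, 0.1, 0.3,
              0.4, 0.4, 0.2), nrow = 3, byrow = TRUE)
e <- eigen(t(P))                   # stationary distribution of P
pv <- Re(e$vectors[, which.min(abs(e$values - 1))]); pv <- pv / sum(pv)
Q <- t(pv * P) / pv                # Q[j, i] = pv[i] * P[i, j] / pv[j]
rowSums(Q)                         # each row sums to one
max(abs(pv %*% Q - pv))            # pi is stationary for the reversed chain too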

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123756862000091