### Markov Chains and Random Walks

The fundamental model for studying the behavior of random walks is the Markov chain. A Markov chain is a discrete-time stochastic process satisfying the Markov property, also known as memorylessness: the future behavior of the process depends only on its current state, not on the states visited before.

The classical random walk fits this model exactly: at each step, the walk picks its next state randomly and independently of the past.

Definition: A Markov chain M is a discrete-time stochastic process defined over a set of states S in terms of a matrix P of transition probabilities.

The set of states S is usually a finite set representing all possible states of the given system. The transition probability matrix P has dimension N×N, where N is the cardinality of S (the number of states). Each entry P_ij of the transition matrix is the probability that the system, currently in state i, moves to state j at the next transition. If X_t denotes the state of the Markov chain at time t, the sequence {X_t} specifies the history of the chain's evolution.
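As a concrete illustration (a hypothetical example, not taken from the text above), consider a symmetric random walk on three states arranged in a line. The transition matrix P below encodes it, and sampling the next state from the current row gives one step of the chain:

```python
import random

# Hypothetical example: a random walk on states {0, 1, 2} arranged in a line.
# P[i][j] is the probability P_ij of moving from state i to state j.
P = [
    [0.5, 0.5, 0.0],  # from state 0: stay or step right
    [0.5, 0.0, 0.5],  # from state 1: step left or right
    [0.0, 0.5, 0.5],  # from state 2: step left or stay
]

# Each row must sum to 1, since some transition always occurs.
assert all(abs(sum(row) - 1.0) < 1e-9 for row in P)

def step(state):
    """Sample the next state X_{t+1} given only the current state X_t."""
    return random.choices(range(len(P)), weights=P[state])[0]

# Simulate a short history {X_t} of the chain, starting at state 1.
state = 1
history = [state]
for _ in range(10):
    state = step(state)
    history.append(state)
```

Note that `step` looks only at the current state, never at `history`; that is exactly the memorylessness property in code.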

Definition: q(t) = (q_1(t), q_2(t), ..., q_N(t)) is the state probability vector: the vector whose i-th component is the probability that the chain is in state i at time t.

The behavior of a Markov chain is completely determined by its initial distribution q(0) and its transition matrix P.

The probability vector at an arbitrary time satisfies q(t+1) = q(t)·P, and hence q(t) = q(0)·P^t.
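The update q(t+1) = q(t)·P can be sketched directly in code. The example below (my own illustration, using a hypothetical symmetric 3-state chain) evolves the distribution from a deterministic start q(0) = (1, 0, 0); because this particular P is doubly stochastic, irreducible, and aperiodic, q(t) converges to the uniform distribution:

```python
# Hypothetical sketch: evolving q(t) = q(0) * P^t for a 3-state chain.
P = [
    [0.5, 0.5, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 0.5, 0.5],
]

def step_distribution(q, P):
    """One application of q(t+1) = q(t) * P (row vector times matrix)."""
    n = len(P)
    return [sum(q[i] * P[i][j] for i in range(n)) for j in range(n)]

# Start deterministically in state 0: q(0) = (1, 0, 0).
q = [1.0, 0.0, 0.0]
for _ in range(50):
    q = step_distribution(q, P)

# q still sums to 1, and each entry is now close to 1/3, the uniform
# stationary distribution of this doubly stochastic chain.
```

Repeated left-multiplication by P is the same computation as q(0)·P^t, just done one step at a time.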

We are interested in how this representation can give us information about the evolution of a random walk. Many theorems from the theory of Markov chains (most notably, the Fundamental Theorem of Markov Chains) provide tools for determining running-time bounds in the analysis of Markov-chain-type systems (randomized algorithms, evolutionary algorithms, etc.).

