
ErgodicProbabilities


Function Description

Returns the ergodic (in effect, the long-run) probabilities of a system characterised by a Markov chain being in each of its states. The Markov chain is characterised by a matrix defining the probability of transitioning from one state (first index) to another (second index).
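
The indexing convention can be illustrated with a small, hypothetical two-state example (this matrix is not from the Nematrian documentation; it is purely illustrative). Each row corresponds to a "from" state and must sum to 1:

```python
import numpy as np

# Hypothetical 2-state chain: P[i, j] = probability of moving
# from state i to state j in one step.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Every row must sum to 1: from each state, the chain must go somewhere.
assert np.allclose(P.sum(axis=1), 1.0)
```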


Determining the ergodic properties of a Markov chain is in general a complicated task, because a Markov chain may not have ergodic properties at all (for example, if it contains two or more disjoint sub-chains, then the system remains forever within whichever sub-chain contains its starting state). If a Markov chain does have ergodic probabilities, these can be found by iterating the Markov chain for long enough; the long-run probabilities of being in each state are then the ergodic probabilities.
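
The "iterate for long enough" idea can be sketched as repeated application of the transition matrix to a starting distribution. This is an illustrative sketch, not the Nematrian implementation; the example matrix is hypothetical:

```python
import numpy as np

def iterate_chain(P, p0, n_steps):
    """Propagate a starting distribution p0 through n_steps transitions.

    With P[i, j] = Pr(next state = j | current state = i), one step maps
    a row vector of state probabilities p to p @ P.
    """
    p = np.asarray(p0, dtype=float)
    for _ in range(n_steps):
        p = p @ P
    return p

# Hypothetical example: starting with certainty in state 0, the
# distribution settles down to the chain's ergodic probabilities.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
long_run = iterate_chain(P, [1.0, 0.0], 1000)  # approximately [5/6, 1/6]
```

For this particular matrix the ergodic probabilities solve pi = pi @ P, giving approximately (5/6, 1/6), which the iteration recovers.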


The algorithm used by the Nematrian website selects two randomly chosen sets of starting probabilities over the states, projects each forward up to IterationLimit times, and returns an error if the two sets have not by then converged to the same apparent state probabilities, with convergence measured against the Tolerance parameter. So that the algorithm produces the same answer each time it is run with the same inputs, it uses a random number seed solely to select these two initial starting probabilities.
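
The two-start convergence test described above might be sketched as follows. The parameter names echo the IterationLimit and Tolerance inputs mentioned in the text, but this is an illustrative reimplementation under assumed conventions, not the Nematrian code itself:

```python
import numpy as np

def ergodic_probabilities(P, iteration_limit=10_000, tolerance=1e-10, seed=0):
    """Sketch of the two-start convergence algorithm described above.

    Two randomly chosen starting distributions (fixed by the seed, so the
    result is reproducible) are projected forward together; if they agree
    to within the tolerance before the iteration limit is reached, the
    common distribution is returned, otherwise an error is raised.
    """
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    rng = np.random.default_rng(seed)
    # Two random starting probability vectors, each normalised to sum to 1.
    p1 = rng.random(n)
    p1 /= p1.sum()
    p2 = rng.random(n)
    p2 /= p2.sum()
    for _ in range(iteration_limit):
        p1 = p1 @ P
        p2 = p2 @ P
        if np.max(np.abs(p1 - p2)) < tolerance:
            return p1
    raise RuntimeError("No convergence: the chain may not be ergodic")

# Usage with a hypothetical two-state matrix:
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = ergodic_probabilities(P)  # approximately [5/6, 1/6]
```

If the chain is not ergodic (e.g. it has disjoint sub-chains), the two starting distributions generally settle on different limits and the error path is taken, which mirrors the behaviour described above.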





Links to:

- Interactively run function
- Interactive instructions
- Example calculation
- Output type / Parameter details
- Illustrative spreadsheet
- Other Markov processes functions
- Computation units used


Note: If you use any Nematrian web service either programmatically or interactively then you will be deemed to have agreed to the Nematrian website License Agreement

