marchie 0.3
MarChie: a Compact Open Source Tool for Analyzing Discrete Markov Chains
Contents:
Intro
Quick Start
Acknowledgements
License
Intro
A Markov chain is a model of a system with N states which assumes that the transition from one state to another is independent of the history of transitions and is fully determined by the probability of that transition (the Markov assumption).
At the initial moment of time (step 0), the probabilities of the states are given by the initial state distribution vector.
At each subsequent time step k, the system moves from a state ξk=i to a state ξk+1=j with probability P(ξk+1=j|ξk=i)≡pij, and the history of transitions does not change this probability (the Markov assumption):
P(ξk+1=j|ξ0=i0, ξ1=i1, ..., ξk−1=ik−1, ξk=i) = P(ξk+1=j|ξk=i) ≡ pij
Thus, a discrete Markov chain is described by two components:
Initial probability distribution vector π=(π0, π1, ..., πn−1), where n is the number of states in the system and πi is the probability that the system starts in state ξ0=i.
Transition probability matrix P=(pij), where pij is the probability of moving to state ξk+1=j from state ξk=i in one step.
The initial state distribution vector, as well as each row of the transition matrix, is stochastic (its entries sum to 1): the transition events from a given state are mutually exclusive, and the system must make exactly one transition at each step, so the probabilities of all transitions from that state must add up to 1.
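These stochasticity constraints are easy to verify with NumPy before constructing a chain (a standalone sketch, not part of MarChie's API):

```python
import numpy as np

# Example transition matrix: each row must sum to 1.
P = np.array([
    [1.0, 0.0, 0.0],
    [0.8, 0.2, 0.0],
    [0.3, 0.5, 0.2],
])
# Initial distribution: must also sum to 1.
pi = np.array([0.3, 0.2, 0.5])

assert np.allclose(P.sum(axis=1), 1.0), "rows of P must be stochastic"
assert np.isclose(pi.sum(), 1.0), "initial distribution must be stochastic"
```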
From the Markov assumption it follows that:
∀n≥1, ∀ik: P(ξ0=i0, ξ1=i1, ..., ξn=in) = πi0 pi0i1 pi1i2 ... pin−1in
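This factorization can be checked numerically: the probability of a concrete path is the initial probability of its first state times the transition probabilities along it, and the state distribution after k steps is π·Pᵏ. A minimal NumPy sketch (illustrative only, not MarChie's API):

```python
import numpy as np

P = np.array([
    [1.0, 0.0, 0.0],
    [0.8, 0.2, 0.0],
    [0.3, 0.5, 0.2],
])
pi = np.array([0.3, 0.2, 0.5])

# Probability of the path 2 -> 1 -> 0: pi_2 * p_21 * p_10
path = [2, 1, 0]
prob = pi[path[0]]
for i, j in zip(path, path[1:]):
    prob *= P[i, j]
# prob = 0.5 * 0.5 * 0.8 = 0.2

# State distribution after k steps: pi @ P^k (still a stochastic vector)
k = 3
distr_k = pi @ np.linalg.matrix_power(P, k)
```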
Given a transition probability matrix and an (optional) initial state probability distribution,
MarChie does the following:
computes adjacency matrix;
computes reachability matrix (using adjacency matrix);
computes transposed reachability matrix (using reachability);
computes communication matrix (using reachability and transposed reachability matrices);
computes communication matrix complement (using communication);
computes classification matrix (using reachability and communication matrix complement);
computes classification matrix extension (using classification matrix);
computes equivalency classes matrix (using communication matrix and classification matrix extension);
defines essential and inessential states;
defines equivalency classes and their cyclic subclasses;
builds the structure of the chain, from which all its states, classes and subclasses are easily accessible;
defines properties of the chain;
classifies the chain (based on the properties);
defines end behavior of the chain (based on the classification).
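The first few steps of this pipeline can be sketched directly in NumPy: the adjacency matrix marks which one-step transitions have nonzero probability, reachability follows from a transitive closure of adjacency (here via Warshall's algorithm), and the communication matrix marks mutually reachable states. This is an illustration of the underlying computations, not MarChie's internal code:

```python
import numpy as np

P = np.array([
    [1.0, 0.0, 0.0],
    [0.8, 0.2, 0.0],
    [0.3, 0.5, 0.2],
])

# Adjacency: 1 where a one-step transition is possible
adjacency = (P > 0).astype(int)

# Reachability via Warshall's transitive closure
# (each state is treated as reachable from itself via the diagonal)
n = P.shape[0]
reach = adjacency.astype(bool) | np.eye(n, dtype=bool)
for m in range(n):
    # i reaches j if i reaches m and m reaches j
    reach = reach | (reach[:, m:m + 1] & reach[m:m + 1, :])
reach = reach.astype(int)

# Communication: states i and j communicate iff each reaches the other
communication = reach & reach.T
```

For the Quick Start matrix below this yields a diagonal communication matrix: state 0 communicates only with itself (it is absorbing), while states 1 and 2 cannot be re-entered once left.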
Quick Start
MarChie is a pip-installable package. You can install it directly from PyPI:
pip install marchie
The main object intended for direct use is the class marchie.marchie.MarChie. The class requires only a transition probability matrix and, optionally, an initial state probability distribution vector as arguments; if you provide no initial distribution vector, one will be generated.
>>> import numpy as np
>>> from marchie import MarChie
>>> trans_mat = np.array([
...     [1.0, 0.0, 0.0],
...     [0.8, 0.2, 0.0],
...     [0.3, 0.5, 0.2]
... ])
>>> init_distr = np.array(
...     [0.3, 0.2, 0.5]
... )
>>> marchie = MarChie(
...     init_distr=init_distr,
...     trans_mat=trans_mat
... )
>>> marchie
Output:
Monoergodic Absorbing Markov Chain
|
|___Essential States
|
|___Acyclic Equivalency Class 0 : states 0
|
|___Inessential States: states 1, 2
[+reducible][-polyergodic][+regular][+absorbing][+strong_convergence]
All of this information is accessible via class / instance variables, which are documented in detail in API_reference.html. Also, make sure to take a look at the demo notebook for relevant examples.
Acknowledgements
I would like to kindly thank Elena Ilyushina (MSU, Faculty of Mechanics and Mathematics) for the course "Markov Chains and their linguistic applications" (2021). Even though the course was far above my educational stage at the time, I was kindly accepted into it and, with Elena Ilyushina's help, was able to understand the basic concepts of Markov chain theory thoroughly and solidly, learn to apply the theory to real-life tasks, and find my programming skills useful for efficient Markov chain analysis.
License
Copyright 2023 Max Schmaltz: @maxschmaltz
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.