pyrandwalk 1.1
:walking: Python Library for Random Walks
Table of contents
Overview
Installation
Usage
Contribution
References
Authors
Changelog
License
Overview
Pyrandwalk is an educational tool for simulating random walks, calculating the probability of given state sequences, and more. A random walk is a discrete-time, discrete-value Markov chain, a model widely used in stochastic processes.
Installation
Source code
Download Version 1.1 or Latest Source
Run pip install -r requirements.txt or pip3 install -r requirements.txt (needs root access)
Run python3 setup.py install or python setup.py install (needs root access)
PyPI
Check Python Packaging User Guide
Run pip install pyrandwalk or pip3 install pyrandwalk (needs root access)
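Either way, you can verify the installation by importing the package:
>>> from pyrandwalk import *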
Usage
>>> from pyrandwalk import *
>>> import numpy as np
>>> states = [0, 1, 2, 3, 4]
>>> trans = np.array([[1, 0, 0, 0, 0],
... [0.25, 0, 0.75, 0, 0],
... [0, 0.25, 0, 0.75, 0],
... [0, 0, 0.25, 0, 0.75],
... [0, 0, 0, 1, 0]])
>>> rw = RandomWalk(states, trans)
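Each row of the transition matrix must sum to 1, since it is a probability distribution over next states. A quick sanity check with plain NumPy (not a library feature):
>>> trans.sum(axis=1)
array([1., 1., 1., 1., 1.])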
We are simulating random walks on the Markov chain defined above, whose transition graph has these probabilities as edge weights.
Probability of A Sequence
Imagine you want to calculate the probability of a walk that starts in state 2, moves to state 1, and then gets stuck in state 0.
What's the probability of this walk sequence?
>>> rw.prob_sec([2, 1, 0])
0.0125
The initial probability distribution is uniform by default, but you can change it by passing the optional argument initial_dist:
>>> rw.prob_sec([2, 1, 0], initial_dist=[0, 0, 1, 0, 0])
0.0625
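To see where these numbers come from: the walk 2 → 1 → 0 has probability trans[2][1] * trans[1][0] = 0.25 * 0.25 = 0.0625. Starting surely in state 2 (the second call) gives exactly that, while the default uniform initial distribution multiplies it by 1/5, giving 0.2 * 0.0625 = 0.0125.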
Run a random walk
You can run a random walk on the given Markov chain and see the result:
>>> states, probs = rw.run()
>>> states
[4, 3, 4, 3, 4, 3, 4, 3, 2, 3, 4]
>>> probs
[0.2, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.25, 0.75, 0.75]
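Here states is the sequence of visited states and probs[k] is the probability of the kth move; the first entry (0.2) is the probability of drawing the initial state from the uniform initial distribution.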
By default the random walk takes 10 steps, but you can change that by passing the optional argument ntimes:
>>> states, probs = rw.run(ntimes=20)
>>> states
[3, 4, 3, 4, 3, 2, 1, 2, 3, 4, 3, 4, 3, 4, 3, 4, 3, 4, 3, 2, 3]
>>> probs
[0.2, 0.75, 1.0, 0.75, 1.0, 0.25, 0.25, 0.75, 0.75, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.25, 0.75]
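Because run() samples a new walk each time, your output will differ. As a rough illustration (plain Python on top of the library, not a pyrandwalk feature), you can repeat the walk many times and tally the final states:
>>> from collections import Counter
>>> finals = [rw.run(ntimes=50)[0][-1] for _ in range(1000)]
>>> Counter(finals)  # the share of walks ending in the absorbing state 0 grows with ntimes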
If you want to see what's going on during the simulation, set the show flag:
>>> states, probs = rw.run(ntimes=30, show=True)
1 --> 2 (p = 0.750)
2 --> 3 (p = 0.750)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 2 (p = 0.250)
2 --> 1 (p = 0.250)
1 --> 2 (p = 0.750)
2 --> 3 (p = 0.750)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 2 (p = 0.250)
2 --> 3 (p = 0.750)
>>> states
[1, 2, 3, 4, 3, 4, 3, 4, 3, 4, 3, 4, 3, 4, 3, 2, 1, 2, 3, 4, 3, 4, 3, 4, 3, 4, 3, 4, 3, 2, 3]
>>> probs
[0.2, 0.75, 0.75, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.25, 0.25, 0.75, 0.75, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.25, 0.75]
Final Probability Distribution
You can easily find the final probability distribution of your random walk:
>>> rw.final_dist()
array([1., 0., 0., 0., 0.])
This implies that, as time goes on, the walk will be in state 0 with probability 1.
Is it irreducible?
You can check whether your Markov chain is irreducible (i.e., whether every state can be reached from every other state):
>>> rw.is_irreducible()
False
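Here the chain is reducible because state 0 is absorbing: it returns to itself with probability 1, so no other state can ever be reached from it.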
nth transition matrix
If you want the probability of moving from state i to state j in exactly n steps, you can compute the nth power of the transition matrix:
>>> rw.trans_power(2)
array([[1. , 0. , 0. , 0. , 0. ],
[0.25 , 0.1875, 0. , 0.5625, 0. ],
[0.0625, 0. , 0.375 , 0. , 0.5625],
[0. , 0.0625, 0. , 0.9375, 0. ],
[0. , 0. , 0.25 , 0. , 0.75 ]])
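As n grows, every row of the nth transition matrix converges to the final distribution computed above. A quick NumPy cross-check (not a library feature):
>>> np.allclose(rw.trans_power(5000), rw.final_dist())
True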
Graph edges
You can get the graph edges as a list of (from, to, probability) tuples, one per edge:
>>> rw.get_edges()
[(0, 0, 1.0), (1, 0, 0.25), (1, 2, 0.75), (2, 1, 0.25), (2, 3, 0.75), (3, 2, 0.25), (3, 4, 0.75), (4, 3, 1.0)]
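Since each state's outgoing probabilities must sum to 1, you can sanity-check the edge list with a few lines of plain Python:
>>> from collections import defaultdict
>>> out_prob = defaultdict(float)
>>> for src, dst, prob in rw.get_edges():
...     out_prob[src] += prob
...
>>> all(abs(p - 1.0) < 1e-9 for p in out_prob.values())
True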
Graph
Making a networkx graph object from your random walk process is also taken care of by this library:
>>> rw_graph = rw.get_graph()
Colors of Nodes [will be removed]
networkx currently cannot draw graphs with self-loops, so until that feature is added we color ordinary states blue and states with a self-loop red.
>>> rw.get_colormap()
['red', 'blue', 'blue', 'blue', 'blue']
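You can then draw the graph with networkx and matplotlib, using the colormap for the node colors (a minimal sketch; assumes matplotlib is installed):
>>> import networkx as nx
>>> import matplotlib.pyplot as plt
>>> nx.draw(rw_graph, with_labels=True, node_color=rw.get_colormap())
>>> plt.show()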
Type of Classes
To find out which classes are recurrent and which are transient, use the method below; it also returns the reduced transition matrix for each set.
>>> rw_class_types = rw.get_typeof_classes()
>>> rw_class_types['recurrent']
([0], array([[1.]]))
>>> rw_class_types['transient'][0]
[1, 2, 3, 4]
>>> rw_class_types['transient'][1]
array([[0. , 0.75, 0. , 0. ],
[0.25, 0. , 0.75, 0. ],
[0. , 0.25, 0. , 0.75],
[0. , 0. , 1. , 0. ]])
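Note that the transient matrix above is simply the original transition matrix restricted to the transient states [1, 2, 3, 4]; its first row sums to 0.75 rather than 1 because the missing probability leaks into the recurrent class {0}.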
The Best Policy Problems
You can easily set up and solve best-policy (optimal stopping) problems for your random walk:
>>> states = [0, 1, 2]
>>> trans = np.array([[1, 0, 0], [1/2, 0, 1/2], [0, 1, 0]])
>>> rw = RandomWalk(states, trans, payoff=[0, 1, 4], cost=[1, 0, 2], discount=0.5)
>>> rw.best_policy()
{'continue': [], 'stop': [0, 1, 2]}
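Under the standard optimal-stopping recursion v(i) = max(payoff[i], -cost[i] + discount * sum_j trans[i][j] * v(j)) (see Lawler, reference 1; presumably what best_policy solves), you can verify by value iteration that stopping is optimal in every state here:
>>> payoff = np.array([0.0, 1.0, 4.0])
>>> cost = np.array([1.0, 0.0, 2.0])
>>> v = payoff.copy()
>>> for _ in range(100):
...     v = np.maximum(payoff, -cost + 0.5 * trans @ v)
...
>>> v  # v equals the payoff vector, so stopping immediately is best everywhere
array([0., 1., 4.])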
References
1- Lawler, Gregory F. Introduction to stochastic processes. Chapman and Hall/CRC, 2018.
2- Markusfeng
Icon made by Becris from www.flaticon.com