[Submitted via IRC]
Many of you will know about Markov chains. Named after Andrey Markov, [they] are mathematical systems that hop from one "state" (a situation or set of values) to another. For example, if you made a Markov chain model of a baby's behavior, you might include "playing," "eating," "sleeping," and "crying" as states, which together with other behaviors could form a "state space": a list of all possible states. On top of the state space, a Markov chain tells you the probability of hopping, or "transitioning," from one state to any other state, e.g., the chance that a baby currently playing will fall asleep in the next five minutes without crying first.
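A Markov chain is easy to simulate once you write down the transition probabilities. Here is a minimal sketch of the baby example in Python; the probabilities are made up for illustration (each row of the table must sum to 1):

```python
import random

# Hypothetical transition probabilities for the baby example.
# transitions[a][b] = probability of hopping from state a to state b.
transitions = {
    "playing":  {"playing": 0.5, "eating": 0.2, "sleeping": 0.2, "crying": 0.1},
    "eating":   {"playing": 0.3, "eating": 0.3, "sleeping": 0.3, "crying": 0.1},
    "sleeping": {"playing": 0.2, "eating": 0.3, "sleeping": 0.4, "crying": 0.1},
    "crying":   {"playing": 0.1, "eating": 0.4, "sleeping": 0.2, "crying": 0.3},
}

def step(state):
    """Hop to the next state, weighted by the transition probabilities."""
    nexts = transitions[state]
    return random.choices(list(nexts), weights=list(nexts.values()))[0]

# Simulate a short random walk through the state space.
state = "playing"
walk = [state]
for _ in range(5):
    state = step(state)
    walk.append(state)
print(walk)
```

Running it prints a plausible afternoon, e.g. playing, then eating, then sleeping; every run differs because each hop is random.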
Victor Powell and Lewis Lehe have produced a 'visual explanation' of Markov chains, showing how they are used in a variety of disciplines; they are useful to computer scientists, engineers, and many others. As they point out:
In the hands of meteorologists, ecologists, computer scientists, financial engineers and other people who need to model big phenomena, Markov chains can get to be quite large and powerful.
If you've not seen Markov chains in use before, or perhaps your knowledge is just a little rusty, then take a look at the link and see if they can be of any use to you.
(Score: 2, Interesting) by Anonymous Coward on Monday March 02 2015, @09:51AM
A variation on Markov chains is the hidden Markov model (HMM). The principle is very similar, except that you don't know the state of your system: it's a black box from which you can only observe outputs, and from those outputs you can deduce the most likely sequence of inner states that caused them.
To continue with the baby example, suppose the states are "happy", "sad", "hungry", and "sick", and the observations are "smiling", "neutral", and "crying".
You cannot directly communicate with the baby (it doesn't speak), so you cannot know its current state. But you can watch and listen for cries and smiles.
So, from a sequence of "crying", "crying", "neutral", "crying", "smiling", knowing the transition probabilities (for instance, the probability of transitioning from "happy" to "hungry", just as for Markov chains) and the observation probabilities (for instance, the probability of the baby crying if sick), you can compute that the most likely sequence of states is "hungry", "hungry", "hungry", "hungry", "happy" (aka food time).
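The standard way to recover that most-likely state sequence is the Viterbi algorithm. Below is a self-contained sketch of it for the baby example; the start, transition, and emission probabilities are invented for illustration (they happen to be chosen so that the decoder reproduces the "hungry ... happy" reading above):

```python
# States and observations from the example; all probabilities are made up.
states = ["happy", "sad", "hungry", "sick"]

start = {"happy": 0.4, "sad": 0.2, "hungry": 0.3, "sick": 0.1}

# trans[a][b] = probability of moving from hidden state a to b.
trans = {
    "happy":  {"happy": 0.6, "sad": 0.1, "hungry": 0.2, "sick": 0.1},
    "sad":    {"happy": 0.2, "sad": 0.5, "hungry": 0.2, "sick": 0.1},
    "hungry": {"happy": 0.4, "sad": 0.1, "hungry": 0.4, "sick": 0.1},
    "sick":   {"happy": 0.1, "sad": 0.1, "hungry": 0.1, "sick": 0.7},
}

# emit[s][o] = probability of observing o while in hidden state s.
emit = {
    "happy":  {"smiling": 0.7,  "neutral": 0.2,  "crying": 0.1},
    "sad":    {"smiling": 0.1,  "neutral": 0.5,  "crying": 0.4},
    "hungry": {"smiling": 0.1,  "neutral": 0.3,  "crying": 0.6},
    "sick":   {"smiling": 0.05, "neutral": 0.25, "crying": 0.7},
}

def viterbi(obs):
    """Return the most likely hidden-state sequence for the observations."""
    # v[s] = probability of the best path so far ending in state s.
    v = {s: start[s] * emit[s][obs[0]] for s in states}
    backptrs = []  # backptrs[t][s] = best predecessor of s at step t+1
    for o in obs[1:]:
        prev, v, ptr = v, {}, {}
        for s in states:
            best = max(states, key=lambda p: prev[p] * trans[p][s])
            ptr[s] = best
            v[s] = prev[best] * trans[best][s] * emit[s][o]
        backptrs.append(ptr)
    # Backtrack from the most likely final state.
    state = max(v, key=v.get)
    path = [state]
    for ptr in reversed(backptrs):
        state = ptr[state]
        path.append(state)
    return path[::-1]

path = viterbi(["crying", "crying", "neutral", "crying", "smiling"])
print(path)  # ['hungry', 'hungry', 'hungry', 'hungry', 'happy']
```

Note that the decoder considers whole sequences, not single observations: "sick" actually scores higher than "hungry" on any one cry, but the cheap transition into "happy" after the smile makes the hungry-then-fed path the overall winner.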