
posted by LaminatorX on Sunday March 01 2015, @06:11PM
from the Eat-Pray-Love dept.

[Submitted via IRC]

Many of you will know about Markov chains. Named after Andrey Markov, [they] are mathematical systems that hop from one "state" (a situation or set of values) to another. For example, if you made a Markov chain model of a baby's behavior, you might include "playing," "eating," "sleeping," and "crying" as states, which together with other behaviors could form a 'state space': a list of all possible states. In addition, on top of the state space, a Markov chain tells you the probability of hopping, or "transitioning," from one state to any other state, e.g., the chance that a baby currently playing will fall asleep in the next five minutes without crying first.
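
As a concrete illustration, a chain like that baby model can be simulated in a few lines of Python; the transition probabilities below are invented purely for illustration:

    import random

    # Transition probabilities for the baby states above;
    # the numbers are invented purely for illustration.
    transitions = {
        "playing":  {"playing": 0.5, "eating": 0.2, "sleeping": 0.2, "crying": 0.1},
        "eating":   {"playing": 0.3, "eating": 0.3, "sleeping": 0.3, "crying": 0.1},
        "sleeping": {"playing": 0.2, "eating": 0.2, "sleeping": 0.5, "crying": 0.1},
        "crying":   {"playing": 0.1, "eating": 0.3, "sleeping": 0.1, "crying": 0.5},
    }

    def step(state):
        """Hop to the next state according to the transition probabilities."""
        options = list(transitions[state])
        weights = list(transitions[state].values())
        return random.choices(options, weights)[0]

    state = "playing"
    for _ in range(5):
        state = step(state)
        print(state)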

Victor Powell and Lewis Lehe have produced a 'visual explanation' of Markov chains, showing how they are used in a variety of disciplines; they are useful to computer scientists, engineers, and many others. As they point out:

In the hands of meteorologists, ecologists, computer scientists, financial engineers and other people who need to model big phenomena, Markov chains can get to be quite large and powerful.

If you've not seen Markov chains in use before, or perhaps your knowledge is just a little rusty, then take a look at the link and see if they can be of any use to you.

  • (Score: 2) by aristarchus on Sunday March 01 2015, @06:53PM

    by aristarchus (2645) on Sunday March 01 2015, @06:53PM (#151608) Journal

    I was expecting astronomy, http://en.wikipedia.org/wiki/Markarian's_Chain [wikipedia.org].

  • (Score: 0) by Anonymous Coward on Sunday March 01 2015, @07:05PM

    by Anonymous Coward on Sunday March 01 2015, @07:05PM (#151612)

    Well done. This should be held up as an example of how to write an article summary. I look forward to reading it.

    • (Score: 2) by pnkwarhall on Sunday March 01 2015, @10:41PM

      by pnkwarhall (4558) on Sunday March 01 2015, @10:41PM (#151658)

      Too bad it's not "news".

      I would have liked to see a couple of links to interesting and **recent** uses of Markov chains.

      --
      Lift Yr Skinny Fists Like Antennas to Heaven
      • (Score: 4, Informative) by francois.barbier on Monday March 02 2015, @12:00AM

        by francois.barbier (651) on Monday March 02 2015, @12:00AM (#151677)

        It doesn't have to be news. Remember our motto? News for nerds, stuff that... Ow! Sh*t! I beg your pardon.

        Anyway, I like this kind of algorithm visualization. It's not always easy to grasp a complex concept from text alone.
        Sometimes, having such a visualization helps greatly in understanding the underlying code.
        For example, here is a video of 15 sorting algorithms [youtube.com].
        Another one I like particularly: Amit's thoughts on pathfinding.
        He first described the A* algorithm with plain images and a lot of text [stanford.edu]. Very interesting read.
        Recently, he rewrote his articles and used dynamic animations [redblobgames.com] where you can move the origin, destination, walls, etc. and see how the code reacts in real time, with all the debugging information shown.

        Really helps you make sense of what is going on. His other articles are all equally interesting (hexagonal grids, procedural map generation, etc.)

  • (Score: 3, Funny) by davester666 on Sunday March 01 2015, @07:20PM

    by davester666 (155) on Sunday March 01 2015, @07:20PM (#151615)

    ...all states directly link to 'crying'. Including crying.

    • (Score: 2) by isostatic on Sunday March 01 2015, @09:44PM

      by isostatic (365) on Sunday March 01 2015, @09:44PM (#151643) Journal

      And pooping. It can also exist in multiple states at the same time.

    • (Score: 2) by cafebabe on Sunday March 01 2015, @10:05PM

      by cafebabe (894) on Sunday March 01 2015, @10:05PM (#151650) Journal

      They tend to do that for no discernible reason. However, there are checklists [raisingchildren.net.au] and flowcharts [dummies.com] for such circumstances.

      --
      1702845791×2
  • (Score: 3, Insightful) by PizzaRollPlinkett on Sunday March 01 2015, @08:32PM

    by PizzaRollPlinkett (4512) on Sunday March 01 2015, @08:32PM (#151626)

    Do kids not play Candy Land these days?

    --
    (E-mail me if you want a pizza roll!)
  • (Score: 5, Informative) by CirclesInSand on Sunday March 01 2015, @11:21PM

    by CirclesInSand (2899) on Sunday March 01 2015, @11:21PM (#151665)

    A Markov chain is a probabilistic description of a system where the probability at time t depends only on the prior state of the system, where (a) only a finite amount of prior time matters, and (b) that amount of prior time does not depend on t.

    Time is usually described in discrete steps from state to state, but it can be continuous. The important fact is that to determine the next state, you don't need a full history, only a finite history.

    The easiest to use (and very useful) type of Markov chain is the finite-state / discrete-time Markov chain. That means that there are only a finite number of states that the system can be in, and that time is described as an integer rather than a continuous real number.

    Such systems can be described using transition matrices, where the row index y is the state transitioned from, the column index x is the state transitioned to, and the entry is the probability of going into state x given that you are in state y.

    This description allows you to use all the easy tools of linear algebra to solve otherwise very difficult probability problems.
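
    A minimal NumPy sketch of that linear-algebra view, reusing the baby states from the summary (all probabilities are invented for illustration):

        import numpy as np

        # Row-stochastic transition matrix (rows: from-state y, columns: to-state x).
        # States reuse the baby example from the summary; all numbers are made up.
        states = ["playing", "eating", "sleeping", "crying"]
        P = np.array([
            [0.5, 0.2, 0.2, 0.1],   # from playing
            [0.3, 0.3, 0.3, 0.1],   # from eating
            [0.2, 0.2, 0.5, 0.1],   # from sleeping
            [0.1, 0.3, 0.1, 0.5],   # from crying
        ])

        # Distribution after n steps: left-multiply the current distribution by P^n.
        v = np.array([1.0, 0.0, 0.0, 0.0])        # start in "playing" with certainty
        print(v @ np.linalg.matrix_power(P, 10))  # distribution after 10 steps

        # Long-run behavior: the stationary distribution is the left eigenvector
        # of P for eigenvalue 1, normalized to sum to 1.
        w, vecs = np.linalg.eig(P.T)
        pi = np.real(vecs[:, np.argmin(np.abs(w - 1.0))])
        print(pi / pi.sum())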

    • (Score: 3, Interesting) by Geotti on Sunday March 01 2015, @11:49PM

      by Geotti (1146) on Sunday March 01 2015, @11:49PM (#151673) Journal

      I thought it'd be off-topic, but thanks for the great cue for distributed systems:

      As the principal requirements for implementing true, scalable distributed systems include nodes being unaware of the global system state and making decisions based exclusively on local information [1], your statement that:

      "The important fact is that to determine the next state, you don't need a full history, only a finite history."

      can lend itself quite nicely to fulfilling these requirements (with a bit of fantasy/creativity).

      Ok, anyway, I just wanted to reply with these two nice visualizations of the Raft consensus algorithm, which I was reminded of after watching the link from TFS:

      Link 1 [github.io]

      Link 2 [thesecretlivesofdata.com]

      I hope we'll get to see many more of these nice introductory visualizations of topics often made complex by our education system(s). (A few of them probably being made with D3.js [github.com], InfoVis [github.io], PhiloGL [senchalabs.org], and Co., btw.)

      [1] A. S. Tanenbaum's Distributed Systems: Principles and Paradigms, co-authored with Maarten van Steen [wikipedia.org]

    • (Score: 0, Disagree) by Anonymous Coward on Monday March 02 2015, @01:34AM

      by Anonymous Coward on Monday March 02 2015, @01:34AM (#151696)

      Way to go, you've taken a simple concept and unnecessarily overcomplicated it.

  • (Score: 4, Interesting) by GeorgeScuttles on Monday March 02 2015, @03:36AM

    by GeorgeScuttles (4499) on Monday March 02 2015, @03:36AM (#151713)

    At work, we make extensive use of Markov Chain Monte Carlo methods (successive draws from the prior distributions of statistical models). It's a tool in the belt for determining likely states from unknown prior values (e.g., in weather forecasting one might ask: what will the effect on the temperature Y at some location be if the prior temperature an hour before is X?). They work great, with two absolutely major downsides: (i) they tend to be pretty computationally intensive and (ii) if the prior distribution or model is wrong, they give you a completely incorrect answer. The second one can be mitigated by sensitivity studies, but that involves even more computation. Anyway, it's a great tool, discovered long ago but only recently usable thanks to modern computational infrastructure.
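
    For anyone who hasn't met MCMC: at its core is an ordinary Markov chain whose stationary distribution is the distribution you want to sample from. A bare-bones Metropolis sampler as a generic textbook sketch (not the forecasting workflow above; the standard-normal target is purely for illustration):

        import math
        import random

        # Bare-bones Metropolis sampler: the "Markov chain" in MCMC.
        # The standard-normal target below is purely for illustration.
        def log_density(x):
            return -0.5 * x * x  # log of a standard normal, up to a constant

        def metropolis(n_samples, step_size=1.0):
            x = 0.0
            samples = []
            for _ in range(n_samples):
                proposal = x + random.gauss(0.0, step_size)
                # Accept with probability min(1, p(proposal) / p(x)).
                if math.log(random.random()) < log_density(proposal) - log_density(x):
                    x = proposal
                samples.append(x)
            return samples

        draws = metropolis(10_000)
        print(sum(draws) / len(draws))  # should be near 0 for this target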

    If you want some interesting reading material, look up bred vectors in weather forecasting versus Lyapunov vectors elsewhere.

  • (Score: 2, Interesting) by Anonymous Coward on Monday March 02 2015, @09:51AM

    by Anonymous Coward on Monday March 02 2015, @09:51AM (#151785)

    A variation of Markov chains is the hidden Markov model (HMM). The principle is very similar, except you don't know the state of your system. It's a black box from which you can observe outputs, and you can deduce the most likely sequence of inner states that caused those outputs.

    To continue with the baby example, suppose the states are "happy", "sad", "hungry" and "sick", and the observations are "smiling", "neutral" and "crying".

    You cannot directly communicate with the baby (it doesn't speak), so you cannot know the current state. But you can look and listen for cries and smiles.

    So, from a sequence of "crying", "crying", "neutral", "crying", "smiling", knowing the transition probabilities (for instance, the probability of transitioning from "happy" to "hungry", just as for Markov chains) and the observation probabilities (for instance, the probability of the baby crying if sick), you can compute that the most likely sequence of states is "hungry", "hungry", "hungry", "hungry", "happy" (aka food time).
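
    For the curious, a minimal Viterbi decoder for that baby HMM; the states and observations are from the comment, but every probability is invented for illustration:

        import math

        # Viterbi decoding for the baby HMM above. States and observations come
        # from the comment; every probability here is invented for illustration.
        states = ["happy", "sad", "hungry", "sick"]
        start_p = {"happy": 0.4, "sad": 0.2, "hungry": 0.3, "sick": 0.1}
        trans_p = {
            "happy":  {"happy": 0.5, "sad": 0.1, "hungry": 0.3, "sick": 0.1},
            "sad":    {"happy": 0.3, "sad": 0.3, "hungry": 0.3, "sick": 0.1},
            "hungry": {"happy": 0.2, "sad": 0.2, "hungry": 0.5, "sick": 0.1},
            "sick":   {"happy": 0.1, "sad": 0.2, "hungry": 0.2, "sick": 0.5},
        }
        emit_p = {
            "happy":  {"smiling": 0.7, "neutral": 0.2, "crying": 0.1},
            "sad":    {"smiling": 0.1, "neutral": 0.3, "crying": 0.6},
            "hungry": {"smiling": 0.1, "neutral": 0.2, "crying": 0.7},
            "sick":   {"smiling": 0.05, "neutral": 0.25, "crying": 0.7},
        }

        def viterbi(observations):
            """Return the most likely hidden-state sequence for the observations."""
            # best[s] = (log-probability of the best path ending in s, that path)
            best = {s: (math.log(start_p[s]) + math.log(emit_p[s][observations[0]]), [s])
                    for s in states}
            for obs in observations[1:]:
                new_best = {}
                for s in states:
                    # Pick the predecessor that maximizes the path probability.
                    prev, (lp, path) = max(
                        ((p, best[p]) for p in states),
                        key=lambda t: t[1][0] + math.log(trans_p[t[0]][s]))
                    new_best[s] = (lp + math.log(trans_p[prev][s])
                                   + math.log(emit_p[s][obs]), path + [s])
                best = new_best
            return max(best.values())[1]

        print(viterbi(["crying", "crying", "neutral", "crying", "smiling"]))

    With these made-up numbers the decoder does print ['hungry', 'hungry', 'hungry', 'hungry', 'happy'], matching the sequence above.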