
SoylentNews is people

posted by LaminatorX on Sunday March 01 2015, @06:11PM   Printer-friendly
from the Eat-Pray-Love dept.

[Submitted via IRC]

Many of you will know about Markov chains. Named after Andrey Markov, they are mathematical systems that hop from one "state" (a situation or set of values) to another. For example, if you made a Markov chain model of a baby's behavior, you might include "playing," "eating," "sleeping," and "crying" as states, which together with other behaviors could form a "state space": a list of all possible states. On top of the state space, a Markov chain tells you the probability of hopping, or "transitioning," from one state to any other state, e.g., the chance that a baby currently playing will fall asleep in the next five minutes without crying first.
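To make the idea concrete, here is a minimal sketch using the baby states from the example above (the transition probabilities are made up for illustration, not taken from the article). The key point is that the sampler looks only at the current state, never at the history:

```python
import random

# Toy baby-behavior model: each row gives the probabilities of
# hopping from the current state to each possible next state.
# (Illustrative numbers only; each row sums to 1.)
transitions = {
    "playing":  {"playing": 0.5, "eating": 0.2, "sleeping": 0.2, "crying": 0.1},
    "eating":   {"playing": 0.3, "eating": 0.3, "sleeping": 0.3, "crying": 0.1},
    "sleeping": {"sleeping": 0.7, "crying": 0.2, "playing": 0.1},
    "crying":   {"crying": 0.4, "eating": 0.3, "sleeping": 0.3},
}

def step(state, rng):
    """Sample the next state using only the current state (the Markov property)."""
    next_states = list(transitions[state])
    weights = [transitions[state][s] for s in next_states]
    return rng.choices(next_states, weights=weights, k=1)[0]

def walk(start, n, seed=0):
    """Generate a chain of n hops starting from `start`."""
    rng = random.Random(seed)
    chain = [start]
    for _ in range(n):
        chain.append(step(chain[-1], rng))
    return chain

print(walk("playing", 10))
```

Running `walk` repeatedly with different seeds gives different trajectories, but the long-run fraction of time spent in each state is governed entirely by the transition table.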

Victor Powell and Lewis Lehe have produced a visual explanation of how Markov chains are built and how they are used across a variety of disciplines; they are useful to computer scientists, engineers, and many others. As they point out:

In the hands of meteorologists, ecologists, computer scientists, financial engineers and other people who need to model big phenomena, Markov chains can get to be quite large and powerful.

If you've not seen Markov chains in use before, or perhaps your knowledge is just a little rusty, then take a look at the link and see if they can be of any use to you.

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Interesting) by Geotti on Sunday March 01 2015, @11:49PM

    by Geotti (1146) (#151673) Journal

    I thought it'd be off-topic, but thanks for the great cue for distributed systems:

    As one of the principal requirements for implementing true, scalable distributed systems is that nodes remain unaware of the global system state and make decisions based exclusively on local information [1], your statement that:

    "The important fact is that to determine the next state, you don't need a full history, only a finite history."

    can lend itself quite nicely to fulfilling these requirements (with a bit of fantasy/creativity).
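    To make the "finite history, not full history" point concrete, here is a hypothetical sketch (the weather model and its numbers are invented for illustration): a process whose next step depends on the last *two* states is not Markov over single states, but it becomes an ordinary first-order Markov chain once the state is enlarged to the pair (yesterday, today):

```python
from itertools import product

states = ["sunny", "rainy"]

# Hypothetical 2nd-order model: P(tomorrow = sunny | yesterday, today).
p_sunny = {
    ("sunny", "sunny"): 0.9,
    ("sunny", "rainy"): 0.6,
    ("rainy", "sunny"): 0.5,
    ("rainy", "rainy"): 0.2,
}

def pair_transitions(pair):
    """First-order transition row over the enlarged state (yesterday, today).

    The next enlarged state is (today, tomorrow), so it depends only on
    the current enlarged state: the Markov property is restored.
    """
    _, today = pair
    ps = p_sunny[pair]
    return {(today, "sunny"): ps, (today, "rainy"): 1.0 - ps}

# Every row of the enlarged chain is a proper probability distribution.
for pair in product(states, repeat=2):
    row = pair_transitions(pair)
    assert abs(sum(row.values()) - 1.0) < 1e-9
```

    The same trick works for any fixed history length k: a chain with k steps of memory is a plain Markov chain over k-tuples of states, which is why a finite history always suffices.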

    OK, anyway, I just wanted to reply with these two nice visualizations of the Raft consensus algorithm, which I was reminded of after watching the link from TFS:

    Link 1 [github.io]

    Link 2 [thesecretlivesofdata.com]

    I hope we'll get to see many more of these nice introductory visualizations of topics often made complex by our education system(s). (A few of them probably being made with D3.js [github.com], InfoVis [github.io], PhiloGL [senchalabs.org], and Co., btw.)

    [1] A.S. Tanenbaum's Distributed Systems: Principles and Paradigms, co-authored with Maarten van Steen [wikipedia.org]
