
Show HN: Markov chains explained visually

1070 points by vicapow almost 11 years ago

42 comments

jgable almost 11 years ago
Beautiful. I had seen Markov chains mentioned before, but had not looked them up. Skimming the Wikipedia page made sense (it's a state machine with transitions determined by probabilities instead of defined events), but I would not have had an intuitive understanding of why they are useful. The explanation mid-way down about modeling the distribution of sunny and rainy days really made it click for me.
Comment #8103595 not loaded
Comment #8105137 not loaded
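A minimal sketch of the sunny/rainy chain described above; the transition probabilities here are made-up assumptions for illustration, not values from the article:

var P = {
  sunny: { sunny: 0.9, rainy: 0.1 },  // a sunny day is usually followed by another
  rainy: { sunny: 0.5, rainy: 0.5 }
};

// The next state depends only on the current one: that's the Markov property.
function nextState(state) {
  return Math.random() < P[state].sunny ? 'sunny' : 'rainy';
}

var state = 'sunny';
var days = [];
for (var i = 0; i < 20; i++) {
  days.push(state);
  state = nextState(state);
}
console.log(days.join(' '));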
cscheid almost 11 years ago
This is really nice.

Minor nit #1: https://www.dropbox.com/s/2meqa8hhen9ztba/Screenshot%202014-07-29%2010.37.58.png (it seems like the graph visualization is sticking to the wrong coordinates; dragging it to the left doesn't help, it moves back to the center).

Minor nit #2: I'd love to see a visualization of the "probability mixing" interpretation of Markov chains and stationary distributions, which is what PageRank is really about. That is, it'd be really nice to have a visualization of the fact that Markov chains are ultimately memoryless (it eventually doesn't matter in which state you start for the distribution of events). I think it could be done by exchanging "probabilities conditioned on the past", which is most easily done by multiplying the entire probability vector by the stochastic matrix and visualizing that.
Comment #8103759 not loaded
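The "probability mixing" view cscheid describes, sketched: push a distribution over states through the transition matrix until it stops changing. Any starting vector converges to the same stationary distribution, which is the memorylessness in action. The 2x2 matrix here is a made-up example:

var P = [
  [0.9, 0.1],  // row 0: P(0 -> 0), P(0 -> 1)
  [0.5, 0.5]   // row 1: P(1 -> 0), P(1 -> 1)
];

// One step of mixing: newDist[j] = sum over i of dist[i] * P[i][j]
function step(dist) {
  return dist.map(function (_, j) {
    return dist.reduce(function (acc, p, i) { return acc + p * P[i][j]; }, 0);
  });
}

var dist = [1, 0]; // start certain we're in state 0; try [0, 1] too
for (var n = 0; n < 50; n++) dist = step(dist);
console.log(dist); // ~[0.833, 0.167] regardless of the starting vector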
murbard2 almost 11 years ago
Now look up hidden Markov models: http://en.wikipedia.org/wiki/Hidden_Markov_model

How they can be calibrated in the finite case: http://en.wikipedia.org/wiki/Baum%E2%80%93Welch_algorithm

And how they can be evaluated for arbitrary models: http://en.wikipedia.org/wiki/Particle_filter
itodd almost 11 years ago
This is fantastic. I have encountered Markov chains in my career and always thought of them as a black box. This simple visualization makes it so easy to understand what has previously been so hard for me. Thank you.
tel almost 11 years ago
The one thing to add to this is that usually each state doesn't emit a single token ("I am in state 1" then "I am in state 2"); instead you assume that each state has a range of possible actions, and the likelihood of a choice of action varies with state.

So it might not be that your model is sunny versus rainy but instead cold front versus warm front. Since rain is more likely during a cold front, your observation of rain increases your belief that the system is in the "cold front" state.
Comment #8113101 not loaded
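A rough sketch of tel's point: the hidden state (cold vs. warm front) transitions as a Markov chain and is never observed directly; each state emits observations (rain or sun) with different likelihoods. All probabilities here are illustrative assumptions:

var trans = { cold: { cold: 0.7, warm: 0.3 }, warm: { cold: 0.2, warm: 0.8 } };
var emit  = { cold: { rain: 0.6, sun: 0.4 }, warm: { rain: 0.1, sun: 0.9 } };

// Draw one key from a {key: probability} map.
function sample(probs) {
  var r = Math.random(), acc = 0, last;
  for (var k in probs) { last = k; acc += probs[k]; if (r < acc) return k; }
  return last; // guard against floating-point round-off
}

var state = 'warm';
for (var t = 0; t < 10; t++) {
  console.log('hidden:', state, 'observed:', sample(emit[state]));
  state = sample(trans[state]);
}

Seeing "rain" makes "cold" the more likely hidden state, because cold emits rain far more often; that inference step is what HMM algorithms formalize.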
sinwave almost 11 years ago
Thought I'd point out a little typo. In your table with sliders for adjusting probabilities of state transitions, the P(B|B) probability reads "P(B|A)".

Edit 1: Also, P(A|B) reads "P(A|A)".

Edit 2: Not trying to be too nitpicky, though. It's a really nice visualization. Really excited about the growing use of d3 to visualize algorithms. Is this inspired by Mike Bostock's post by that title?
ajanuary almost 11 years ago
Presumably in the B row it should read "P(A|B)" and "P(B|B)".
beenpoor almost 11 years ago
Thank you! I understand what Markov chains are now. Nicely done, and in a simple, understandable fashion.

I am also trying to understand what they call the hidden Markov model (specifically, I just cannot wrap my head around how it gets used in speech; they just look like entirely different things). Would be awesome to see an update with the hidden MM.
Comment #8104013 not loaded
Comment #8104395 not loaded
Comment #8105601 not loaded
Comment #8105497 not loaded
devindotcom almost 11 years ago
I've seen Markov chains applied to language generation - producing sentences that make sense grammatically but not literally. Anyone know what the connection is here? I think I have an idea but would like to see if it gets independently verified by someone else.
Comment #8104759 not loaded
Comment #8106507 not loaded
Comment #8104692 not loaded
Comment #8104697 not loaded
Comment #8106952 not loaded
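The connection devindotcom asks about, sketched: treat each word as a state, estimate transition probabilities from adjacent word pairs in a corpus, then walk the chain. The tiny corpus below is a made-up stand-in:

// Word-level Markov text generator: states are words, and sampling
// uniformly from the list of observed successors reproduces the
// empirical transition probabilities.
var text = 'the cat sat on the mat and the cat ran off';
var words = text.split(' ');

var next = {};
for (var i = 0; i < words.length - 1; i++) {
  (next[words[i]] = next[words[i]] || []).push(words[i + 1]);
}

var word = words[0], out = [word];
for (var n = 0; n < 12 && next[word]; n++) {
  word = next[word][Math.floor(Math.random() * next[word].length)];
  out.push(word);
}
console.log(out.join(' '));

Local word pairs stay plausible, so the output tends to be grammatical in short stretches while the overall meaning drifts.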
sitk almost 11 years ago
This is an absolutely magical and intuitive (not to mention beautiful) way to present the complex mathematical concept of a Markov chain. This is exactly the sort of pedagogical tool that MOOCs and other educational software platforms need to build and adopt to bring education into the 21st century and finally replace traditional teaching methods. How does this compare to what even the best teacher could draw on a whiteboard? Teachers will still play an essential role in the emotional and social development of students, and they can then focus their energy on those things, which software will probably always struggle to replace.
romaniv almost 11 years ago
Good lecture on Markov models: https://class.coursera.org/modelthinking-006/lecture/60
dekhn almost 11 years ago
Back when I was in college (~20 years ago) I was struggling to understand generative models, and I asked my CS professor.

He said, "Imagine god is sitting around emitting DNA sequences. She has several 4-sided biased dice. She rolls one of the 4-sided dice, BAM, emits an A! Again, rolls the die, BAM, emits an A! Rolls again, BAM, emits a T! Now, imagine god is a fickle person, and between rolls, decides to roll a die to decide which of the biased dice to roll."

For some reason, that helped.
saganus almost 11 years ago
Very nice! I've never had to work with Markov chains, but I've read about them and they seem to pop up in lots of places.

A nice, simple, and interactive explanation.
jesuslop almost 11 years ago
Gian-Carlo Rota is always a pleasure to quote, despite him knowing it. One from his reminiscence about Jack Schwartz in his book "Indiscrete Thoughts" (TL;DR: Markov chains seen as random maps):

The first lecture by Jack I listened to was given in the spring of 1954 in a seminar in functional analysis. A brilliant array of lecturers had been expounding throughout the spring term on their pet topics. Jack's lecture dealt with stochastic processes. Probability was still a mysterious subject cultivated by a few scattered mathematicians, and the expression "Markov chain" conveyed more than a hint of mystery. Jack started his lecture with the words, "A Markov chain is a generalization of a function." His perfect motivation of the Markov property put the audience at ease. Graduate students and instructors relaxed and followed his every word to the end.

Beautiful visualizations.
granttimmerman almost 11 years ago
I created a Markov chain generator: https://gist.github.com/grant/561834963dc526495c45

var numNodes = 10;
var roundNum = 100;
var a = [];
for (var i = 0; i < numNodes; ++i) {
  var connections = [];
  var sum = 0;
  for (var j = 0; j < numNodes; ++j) {
    var randNum = Math.random() / numNodes;
    randNum = Math.round(randNum * roundNum) / roundNum;
    connections[j] = randNum;
    sum += randNum;
  }
  connections = connections.map(function (e) {
    var t = e * (1 / sum);
    t = Math.round(t * roundNum) / roundNum;
    return t;
  });
  sum = connections.reduce(function (e, t) { return e + t; });
  connections[numNodes - 1] += 1 - sum;
  connections[numNodes - 1] = Math.round(connections[numNodes - 1] * roundNum) / roundNum;
  a[i] = connections;
}
console.log(JSON.stringify(a));

Copy and paste the output into the sidebar.
Comment #8105547 not loaded
Comment #8104801 not loaded
pimgeek almost 11 years ago
It really helps. I studied Markov chain theory in graduate school, but either I wasn't paying attention or the tutor didn't explain it well; I never truly cared about its real-life applications.

But this tutorial, both visually attractive and explained with real-life examples, makes me want to re-learn the topic. Just to quote one passage:

[If you made a Markov chain model of a baby's behavior, you might include "playing," "eating," "sleeping," and "crying" as states, which together with other behaviors could form a 'state space': a list of all possible states.]

Thanks for sharing!
dnautics almost 11 years ago
This is really great, but could you put in a bit about how some transition matrices don't converge (e.g. [0 1; 1 0], which just oscillates forever), and the convergence criterion where you can take M^n as n -> infinity and get the occupancy of the states?
Comment #8104087 not loaded
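dnautics's point, sketched: for a well-behaved chain, every row of M^n approaches the long-run occupancy, while a periodic matrix like [0 1; 1 0] never settles. The 2x2 matrices below are illustrative examples:

// Plain matrix multiplication and repeated powering.
function matMul(A, B) {
  return A.map(function (row) {
    return B[0].map(function (_, j) {
      return row.reduce(function (acc, a, k) { return acc + a * B[k][j]; }, 0);
    });
  });
}

function matPow(M, n) {
  var R = M;
  for (var i = 1; i < n; i++) R = matMul(R, M);
  return R;
}

console.log(matPow([[0.9, 0.1], [0.5, 0.5]], 50)); // every row -> ~[0.833, 0.167]
console.log(matPow([[0, 1], [1, 0]], 50));         // periodic: alternates with n, no limit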
vicapow almost 11 years ago
There was a bug in the playground that I just fixed; you can now share your Markov chains via the URL hash. For example: http://setosa.io/markov/#%7B%22tm%22%3A%5B%5B0.9%2C0.1%2C0%2C0%2C0%2C0%5D%2C%5B0%2C0.9%2C0.1%2C0%2C0%2C0%5D%2C%5B0%2C0%2C0.9%2C0%2C0%2C0.1%5D%2C%5B0%2C0%2C0%2C0.8%2C0.1%2C0.1%5D%2C%5B0%2C0%2C0%2C0%2C0.9%2C0.1%5D%2C%5B0.1%2C0%2C0%2C0%2C0%2C0.9%5D%5D%7D
Comment #8106762 not loaded
bdavisx almost 11 years ago
Great! It would be nice to be able to stop the animations, though; they are distracting while you're trying to read the text.

The sunny/rainy probability example is perfect as a scenario.
nabeelahmed13 almost 11 years ago
This is at a tangent, but I'm a fresh CS undergrad and this simple explanation really hooked me.

So my question is: where can I find more of this stuff? MOOCs are tough to manage alongside university, but if I wanted to learn more about these mathematical concepts presented in an interesting way, where should I start looking?

I'm a tad indecisive about how good I am with CS theory, but I know if I took the leap and mastered some basics I would enjoy it. Any recommendations will help.
Comment #8105405 not loaded
Comment #8104805 not loaded
Comment #8105360 not loaded
lewis500 almost 11 years ago
Man, the text is really well written! Am I right, everyone?
Comment #8106387 not loaded
CharlesMerriam1 almost 11 years ago
Very nice concept; an MVP.

Just on first glance:

1. In the first diagram and others, the ball jumps from the beginning of the B-to-A arc to B without sliding along the arc.

2. The second diagram's box has no P(B|B); that is, the boxes are mislabeled.

3. Strangely, the arcs are sometimes at an angle. It appears to happen if they are scrolled to, but not if drawn on the initial screen.

4. While the R/S on the next diagram does settle to a steady state, it starts with random Rs and Ss marching across at random rates.

Good start!
kevinwang almost 11 years ago
That's pretty cool. The Markov chain diagrams seem very similar (identical?) to deterministic finite automata. Would it be correct or incorrect to say that a Markov chain can be thought of as a DFA where the changes in state are determined by probability?
Comment #8104859 not loaded
vitd almost 11 years ago
Great explanation, but wow, those animations were awful! The movement was great, but the seizure-like jump every time it hit a state was unwatchable. I had to cover them up to get through the text.
lobotryas almost 11 years ago
Has anyone thought about or attempted to model game AI with Markov chains instead of decision trees? E.g., NPCs, wildlife, or enemies that use Markov chains to react to their surroundings.
Comment #8106283 not loaded
Comment #8105714 not loaded
granttimmerman almost 11 years ago
Markov chain circle generator: https://gist.github.com/grant/29afaa91349b1928b05e
_nullandnull_ almost 11 years ago
Beautiful. What did you use to create the graphics?
Comment #8105788 not loaded
matthewcanty almost 11 years ago
I'm outta control on this page!

[
  [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
  [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
  [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
  [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
  [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
  [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
  [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
  [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
  [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
  [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
]
tigroferoce almost 11 years ago
Great explanation! Very easy. It would be perfect if you added some easy-to-understand real-life examples to play with.
Max_Horstmann almost 11 years ago
Very cool. Would love to see this generalized to an interactive visualization of Markov decision processes (MDPs).
elwell almost 11 years ago
And where are the TempleOS creator's comments? I believe they are Markov chains of a sort.
skriticos2 almost 11 years ago
This totally reminds me of SpaceChem on the higher levels (a puzzle game for programmers).
lnkmails almost 11 years ago
I would not "require" people to know Markov chains, but I am usually surprised by how many programmers have no idea what they are, how they work, and how they can be used. They are a very powerful tool for modeling queues, which is something most distributed systems deal with :).
_raoulcousins almost 11 years ago
Can I use this for my class? Creative Commons with attribution?
Comment #8108530 not loaded
chenshu_ivory almost 11 years ago
Looks so cool! Still hard to understand, though.
lynchdt almost 11 years ago
This is really cool, nice work.
suchetchachra almost 11 years ago
Excellent visualization!
dolom almost 11 years ago
Really cool: well done!
mrcactu5 almost 11 years ago
Can I fork these?
richcuteguy34 almost 11 years ago
Nice.

Here's a model of Chutes and Ladders using Markov chains: http://www.datagenetics.com/blog/november12011/index.html

And another for Candy Land: http://www.datagenetics.com/blog/december12011/index.html
Comment #8106251 not loaded
hyperliner almost 11 years ago
"For example, the algorithm Google uses to determine the order of search results, called PageRank, is a type of Markov chain."

I had to research that to understand it: http://en.wikipedia.org/wiki/PageRank

Here is some key text from Wikipedia:

Google recalculates PageRank scores each time it crawls the Web and rebuilds its index. As Google increases the number of documents in its collection, the initial approximation of PageRank decreases for all documents.

The formula uses a model of a random surfer who gets bored after several clicks and switches to a random page. The PageRank value of a page reflects the chance that the random surfer will land on that page by clicking on a link. It can be understood as a Markov chain in which the states are pages, and the transitions, which are all equally probable, are the links between pages.

If a page has no links to other pages, it becomes a sink and therefore terminates the random surfing process. If the random surfer arrives at a sink page, it picks another URL at random and continues surfing again.
Comment #8104052 not loaded
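The random-surfer chain from that Wikipedia excerpt, sketched as power iteration on a tiny made-up link graph (the graph is an assumption for illustration; 0.85 is the commonly cited damping factor):

// With probability d the surfer follows a random outlink;
// with probability 1-d they jump to a random page.
var links = [[1, 2], [2], [0]]; // page i links to the pages in links[i]
var d = 0.85, N = links.length;

var rank = Array(N).fill(1 / N);
for (var iter = 0; iter < 50; iter++) {
  var next = Array(N).fill((1 - d) / N); // random-jump mass
  links.forEach(function (outs, i) {
    outs.forEach(function (j) { next[j] += d * rank[i] / outs.length; });
  });
  rank = next;
}
console.log(rank); // the stationary distribution of the surfer chain

The converged vector is the stationary distribution of that Markov chain, which is what the PageRank score of each page represents. (This toy graph has no sink pages, so the sink-handling step from the excerpt is omitted.)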
Pinn2 almost 11 years ago
The problem with Markov chains is that they are named after a person, which makes math seem more like a "private club". For instance, why use "abelian group" when "commutative group" will do? The reasons for wanting to be a member of an exclusive group are psychological.