Markov Chains Explained Visually (2014)

213 points by mrcgnc 3 months ago

7 comments

ngriffiths 3 months ago
Markov chains are super useful in statistics but it isn't obvious at first what problem they solve and how - some further reading that I found helpful: https://twiecki.io/blog/2015/11/10/mcmc-sampling/

Note that the point of the Markov chain is that it's possible to compute *relative* probabilities between two given points in the posterior even when you don't have a closed-form expression for the posterior.

Also, the reason behind separating the proposal distribution and the acceptance probability is that it's a convenient method to make the Markov process stationary, which isn't true in general. (The Wikipedia page on MCMC is also useful here.)
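
A minimal sketch of the Metropolis-Hastings idea described here, in Python; the target density and names are illustrative, not taken from the article. The point about relative probabilities shows up in the acceptance step, where only the ratio of unnormalized densities is needed, so the normalizing constant cancels.

    # Minimal Metropolis-Hastings sketch (illustrative target and names).
    # Only an *unnormalized* log-density is needed: the acceptance ratio
    # uses relative probabilities, so the normalizing constant cancels.
    import math
    import random

    def log_unnorm_posterior(x):
        # Example target: standard normal, up to an additive constant.
        return -0.5 * x * x

    def metropolis_hastings(n_samples, step=1.0, x0=0.0):
        samples = []
        x = x0
        for _ in range(n_samples):
            proposal = x + random.gauss(0.0, step)     # symmetric proposal distribution
            log_ratio = log_unnorm_posterior(proposal) - log_unnorm_posterior(x)
            if random.random() < math.exp(min(0.0, log_ratio)):  # accept with prob min(1, ratio)
                x = proposal
            samples.append(x)
        return samples

    print(sum(metropolis_hastings(10000)) / 10000)     # should be near 0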
globalnode 3 months ago
This is timely! I have an assignment on these coming up soon. Can anyone with knowledge about this explain something? From what I can tell, many matrix multiplications move vectors so they are more in line with eigenvectors, if they exist. So Markov chains are just a continual movement in this direction. Some examples that don't do this that I can think of are the identity matrix and rotations. Is there a way to test if a matrix will have this effect? Is it just testing for existence of eigenvectors?
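
A toy numeric check of that eigenvector intuition, assuming numpy: repeatedly applying an irreducible, aperiodic transition matrix to a distribution converges to the eigenvector with eigenvalue 1 (the stationary distribution), whereas the identity and rotations mentioned above don't pull vectors toward a single direction like this.

    # Repeatedly applying a transition matrix pushes a distribution toward
    # the eigenvector with eigenvalue 1 (the stationary distribution),
    # provided the chain is irreducible and aperiodic.
    import numpy as np

    P = np.array([[0.9, 0.1],     # row-stochastic transition matrix (rows sum to 1)
                  [0.5, 0.5]])

    pi = np.array([1.0, 0.0])     # start entirely in state 0
    for _ in range(50):
        pi = pi @ P               # one step of the chain

    print(pi)                     # ~[0.833, 0.167]

    # Compare with the left eigenvector of P for eigenvalue 1:
    vals, vecs = np.linalg.eig(P.T)
    stat = np.real(vecs[:, np.argmax(np.real(vals))])
    print(stat / stat.sum())      # same stationary distribution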
brcmthrowaway 3 months ago
What is the secret sauce that makes an LLM better than a Markov chain?
alberto_ol 3 months ago
Previous submissions: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&query=markov%20chains%20explained%20visually&sort=byPopularity&type=story
s_dev 3 months ago
The relevance to me is that Markov chains are a remarkable way to explain why LLMs are both useful and very unreliable.

You train on a piece of text and the output 'sounds' like the text it was trained on, despite being pure gibberish.
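
A tiny word-level Markov chain text generator along the lines of this comment; the corpus string is just a placeholder. The output locally resembles the training text while carrying no global meaning.

    # Word-level Markov chain text generator: output mimics the style of the
    # training text locally, with no global coherence. Corpus is a placeholder.
    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the cat saw the dog on the mat".split()

    # Build transition table: word -> list of words that follow it.
    transitions = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        transitions[current_word].append(next_word)

    def generate(start, length=12):
        word, out = start, [start]
        for _ in range(length):
            followers = transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

    print(generate("the"))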
kuharich 3 months ago
Past comments: https://news.ycombinator.com/item?id=17766358
potatoicecoffee 3 months ago
Markov chains are used for my favourite financial algorithm: the allocation of overhead costs in cost accounting. I wish there was an easy way to visualise a model with 500 nodes.
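
One way such a reciprocal allocation can be set up, as a rough sketch only: treat service departments as transient states and production departments as absorbing states of a Markov chain, and use the fundamental matrix to push all overhead through. All numbers below are invented for illustration.

    # Sketch of reciprocal overhead allocation as an absorbing Markov chain.
    # Service departments are transient states; production departments are
    # absorbing states. All figures are made up for illustration.
    import numpy as np

    # Fraction of each service department's cost that goes to the other
    # service department (Q) and to the two production departments (R).
    Q = np.array([[0.0, 0.2],     # S1 sends 20% to S2
                  [0.1, 0.0]])    # S2 sends 10% to S1
    R = np.array([[0.5, 0.3],     # S1 sends 50% / 30% to P1 / P2
                  [0.4, 0.5]])    # S2 sends 40% / 50% to P1 / P2

    costs = np.array([100_000.0, 60_000.0])   # direct costs of S1, S2

    # Fundamental matrix: total cost flowing through each service department.
    N = np.linalg.inv(np.eye(2) - Q)
    allocated = costs @ N @ R                 # cost finally landing in P1, P2

    print(allocated, allocated.sum())         # sums to the original 160,000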