
Bayes’ Theorem – What is it and what is it good for?

102 points by SimplyUseless almost 10 years ago

11 comments

tjradcliffe, almost 10 years ago
Bayes' Theorem tells us that the quest for certain knowledge, which drove a great deal of science and philosophy in the pre-Bayesian era (before about 1990, when Bayesian methods started to gain real traction in the scientific community), is much like the alchemist's quest for the secret of transmutation: it is simply the wrong goal to have, even though it generated a lot of interesting and useful results.

One of the most important consequences of this is noted by the article: "Confirmation and falsification are not fundamentally different, as Popper argued, but both just special cases of Bayes' Theorem." There is no certainty, even in the case of falsification, because there are always alternatives. For example, superluminal neutrinos didn't prove special relativity false, although they did provide some evidence. But the alternative hypothesis that the researchers had made a mistake turned out to be much more plausible.

Bayesian reasoning--which is plausibly the only way of reasoning that will keep our beliefs consistent with the evidence--cannot produce certainty. A certain belief is one that has a plausibility of exactly 1 or 0, and those are only asymptotically approachable by applying Bayes' rule. Such beliefs would be immune from any further evidence for or against them, no matter how strong it was, essentially because Bayesian updating is multiplicative and anything times zero is still zero.

There is a name for beliefs of this kind, which to a Bayesian are the most fundamental kind of error: faith.
Comment #9781876 not loaded
Comment #9782122 not loaded
Comment #9781988 not loaded
Comment #9781970 not loaded
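The multiplicative updating described above can be sketched in a few lines. This is a minimal illustration of the comment's point, not code from the article: a prior of exactly 0 or 1 is immune to any finite amount of evidence, while every other prior only approaches certainty asymptotically.

```python
def update(prior: float, likelihood_ratio: float) -> float:
    """Update P(H) given a likelihood ratio P(E|H) / P(E|not H),
    using posterior odds = prior odds * likelihood ratio."""
    if prior in (0.0, 1.0):
        return prior  # degenerate priors cannot be moved by evidence
    odds = prior / (1.0 - prior)
    odds *= likelihood_ratio
    return odds / (1.0 + odds)

p = 0.5
for _ in range(10):
    p = update(p, 10.0)  # ten pieces of strong evidence in a row

print(p)                    # very close to, but strictly less than, 1.0
print(update(0.0, 10.0))    # a prior of exactly 0 stays 0
```

After ten strong updates the posterior odds are 10^10 to 1, yet the probability is still short of 1, which is exactly the asymptotic behaviour the comment describes.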
jhallenworld, almost 10 years ago
I recently started to think about connections between Bayes' Theorem and fuzzy logic:

http://sipi.usc.edu/~kosko/Fuzziness_Vs_Probability.pdf

(also from Wikipedia on fuzzy logic): "Bruno de Finetti argues[citation needed] that only one kind of mathematical uncertainty, probability, is needed, and thus fuzzy logic is unnecessary. However, Bart Kosko shows in Fuzziness vs. Probability that probability theory is a subtheory of fuzzy logic, as questions of degrees of belief in mutually-exclusive set membership in probability theory can be represented as certain cases of non-mutually-exclusive graded membership in fuzzy theory. In that context, he also derives Bayes' theorem from the concept of fuzzy subsethood. Lotfi A. Zadeh argues that fuzzy logic is different in character from probability, and is not a replacement for it. He fuzzified probability to fuzzy probability and also generalized it to possibility theory. (cf.[10])"
Comment #9785482 not loaded
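The subsethood-to-Bayes bridge mentioned in the quote can be glimpsed in the crisp special case. This is my own small illustration, not taken from the Kosko paper: for ordinary sets under a uniform measure, the subsethood degree S(A, B) = |A ∩ B| / |A| coincides with the conditional probability P(B | A), the quantity that Bayes' theorem relates to P(A | B).

```python
def subsethood(a: set, b: set) -> float:
    """Degree to which A is a subset of B (crisp special case of
    Kosko's fuzzy subsethood measure)."""
    return len(a & b) / len(a)

universe = set(range(20))
A = {n for n in universe if n % 2 == 0}   # even numbers, |A| = 10
B = {n for n in universe if n % 3 == 0}   # multiples of 3, |B| = 7

p_b_given_a = subsethood(A, B)            # plays the role of P(B|A)
p_a_given_b = subsethood(B, A)            # plays the role of P(A|B)

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B), with P from the uniform measure
p_a = len(A) / len(universe)
p_b = len(B) / len(universe)
assert abs(p_a_given_b - p_b_given_a * p_a / p_b) < 1e-12
```

The fuzzy case replaces set cardinality with the sum of graded memberships; the crisp identity above is just the easiest place to see that the two notions line up.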
shoo, almost 10 years ago
Here are a few tangentially related things that may be of interest:

(i) MacKay's book on Information Theory, Inference, and Learning Algorithms: http://www.inference.phy.cam.ac.uk/itila/

(ii) Probability Theory As Extended Logic: http://bayes.wustl.edu/

(iii) Causal Calculus: http://www.michaelnielsen.org/ddi/if-correlation-doesnt-imply-causation-then-what-does/

(iv) I recall reading a pretty good blog post a year or two ago that described how to implement some kind of Bayesian token recognition thing to parse screen captures from some database (or something roughly like that). The gist of the approach was like this:

1. define a model expressing that certain combinations of neighbouring tokens are more likely to occur than others
2. approximate the full Bayesian inference problem as MAP inference
3. the resulting combinatorial optimisation problem could be encoded as a relatively easy mixed integer program
4. easy mixed integer programs are very tractable to commercial solvers such as CPLEX, Gurobi, or sometimes even the open source COIN-OR CBC

At the time I found the idea fascinating as I was working with LPs/MIPs and had some interest in Bayesian inference, but hadn't figured out that the former could provide a way to computationally tackle certain approximations of the latter.

I cannot for the life of me find the link again for this.
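Steps 1-2 above can be made concrete with a toy version (my own reconstruction, not the lost blog post; all token names and scores are invented for illustration). When the neighbour structure is a simple chain, the MAP assignment can be found exactly by dynamic programming (Viterbi); the post reportedly encoded the more general problem as a mixed integer program instead.

```python
import math

tokens = ["ID", "NAME", "DATE"]

# Emission scores: log of how well each captured cell matches each token type
emit = {
    "1024":       {"ID": math.log(0.8), "NAME": math.log(0.1), "DATE": math.log(0.1)},
    "alice":      {"ID": math.log(0.1), "NAME": math.log(0.8), "DATE": math.log(0.1)},
    "2015-06-25": {"ID": math.log(0.1), "NAME": math.log(0.1), "DATE": math.log(0.8)},
}

# Pairwise compatibility scores (log-domain): some neighbouring token
# combinations are more likely than others (step 1 of the recipe)
trans = {t: {u: math.log(1 / 3) for u in tokens} for t in tokens}
trans["ID"]["NAME"] = math.log(0.6)
trans["NAME"]["DATE"] = math.log(0.6)

def map_assignment(obs):
    """Viterbi: argmax over token sequences of emission + transition scores
    (step 2: MAP inference instead of full posterior inference)."""
    score = {t: emit[obs[0]][t] for t in tokens}
    back = []
    for o in obs[1:]:
        best, ptr = {}, {}
        for u in tokens:
            prev = max(tokens, key=lambda t: score[t] + trans[t][u])
            ptr[u] = prev
            best[u] = score[prev] + trans[prev][u] + emit[o][u]
        score, back = best, back + [ptr]
    last = max(tokens, key=score.get)   # backtrack from the best final token
    seq = [last]
    for ptr in reversed(back):
        seq.append(ptr[seq[-1]])
    return list(reversed(seq))

print(map_assignment(["1024", "alice", "2015-06-25"]))  # ['ID', 'NAME', 'DATE']
```

For grid-shaped or loopy neighbour structures this exact dynamic program no longer applies, which is where the MIP encoding and solvers like CBC earn their keep.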
le0n, almost 10 years ago
"Seeing the world through the lens of Bayes' Theorem is like seeing The Matrix. Nothing is the same after you have seen Bayes."

I'm pretty sure this is an instance of cognitive bias.
Comment #9785142 not loaded
Comment #9783201 not loaded
dimino, almost 10 years ago
My biggest issue with Bayes' Theorem as a method of making everyday decisions is that it assumes the ability to accurately assess the underlying likelihoods of events taking place, especially on-the-fly.

I would even argue that it's actually providing a *false* sense of precision, because the sig figs are oftentimes not correctly represented.
Comment #9781740 not loaded
Comment #9782966 not loaded
Comment #9783893 not loaded
Comment #9783920 not loaded
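The false-precision worry is easy to demonstrate with a classic base-rate calculation. The numbers below are made up for illustration: misjudging the prior by an amount well within everyday on-the-fly guessing error swings the posterior from "probably fine" to "coin flip".

```python
def posterior(prior: float, sens: float, spec: float) -> float:
    """P(condition | positive test) via Bayes' theorem, given the
    prior prevalence, test sensitivity, and test specificity."""
    p_pos = sens * prior + (1 - spec) * (1 - prior)
    return sens * prior / p_pos

# "About 1 in 1000 have it, the test is ~99% accurate"
print(posterior(0.001, 0.99, 0.99))   # roughly 0.09
# Misjudge the base rate by one order of magnitude -- easy to do on the fly
print(posterior(0.01, 0.99, 0.99))    # roughly 0.50
```

The mechanics of Bayes' theorem are exact; the output is only as precise as the least precise input, which is the commenter's point.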
DennisP, almost 10 years ago
> the Standard Model of particle physics explains much, much more than thunderstorms, and its rules could be written down in a few pages of programming code.

As a programmer who doesn't know advanced math, I'd really like to see that code, in literate form.
Comment #9787104 not loaded
lucb1e, almost 10 years ago
Hint: changing the font to Arial improves readability a lot and actually displays italics where the author used them.
jrgnsd, almost 10 years ago
I recently had the need for a Bayes classifier[1] in a couple of projects, so I wrote a service that exposes one through an API. You can set up your prior set and then get predictions against that set.

I haven't gone through the trouble of making it suitable for public consumption yet. Would anyone be interested in consuming such a service?

[1]: https://en.wikipedia.org/wiki/Naive_Bayes_classifier
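The core of such a classifier is small. This is a minimal naive Bayes sketch assuming the train-on-a-prior-set, then-predict workflow the comment describes; the actual service's API is not public, so all names here are illustrative.

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # per-label word frequencies
        self.label_counts = Counter()            # document counts per label
        self.vocab = set()

    def train(self, text: str, label: str) -> None:
        words = text.lower().split()
        self.word_counts[label].update(words)
        self.label_counts[label] += 1
        self.vocab.update(words)

    def predict(self, text: str) -> str:
        words = text.lower().split()
        total = sum(self.label_counts.values())

        def score(label):
            s = math.log(self.label_counts[label] / total)  # log prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:  # Laplace-smoothed log likelihoods
                s += math.log((self.word_counts[label][w] + 1) / denom)
            return s

        return max(self.label_counts, key=score)

clf = NaiveBayes()
clf.train("cheap pills buy now", "spam")
clf.train("meeting agenda for tomorrow", "ham")
print(clf.predict("buy cheap pills"))  # spam
```

Wrapping this behind two HTTP endpoints (one for training examples, one for predictions) is essentially the service being proposed.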
gedrap, almost 10 years ago
Speaking of Bayes, there's a great book by Allen B. Downey, 'Think Bayes': http://www.greenteapress.com/thinkbayes/ -- available as a free PDF or (if you wish to support the author, which I did) a paperback from Amazon.

It teaches Bayes' theorem accompanied with Python code examples, which I found really useful.
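The book's style of worked example looks like this. Here is my own rendition of its well-known cookie problem (bowl 1 holds 30 vanilla and 10 chocolate cookies, bowl 2 holds 20 of each; you draw a vanilla cookie and ask which bowl it came from), done as a small Bayes table rather than with the book's Pmf class:

```python
hypotheses = {"bowl 1": 0.5, "bowl 2": 0.5}            # priors
likelihood_vanilla = {"bowl 1": 30 / 40, "bowl 2": 20 / 40}

# Multiply each prior by its likelihood, then normalise
unnorm = {h: p * likelihood_vanilla[h] for h, p in hypotheses.items()}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}

print(posterior["bowl 1"])  # 0.6
```

The vanilla cookie shifts the probability of bowl 1 from 0.5 to 0.6, and the same three-step table handles every discrete problem in the book.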
Pamar, almost 10 years ago
This is excellent and finally prompted me to ask how to use Bayes more in my life: https://news.ycombinator.com/item?id=9782767
mkramlich, almost 10 years ago
I'm in the middle of designing and building a system which uses Bayesian models.

One thing that struck me early is that while Bayes itself is rock solid, like arithmetic, when you go to apply it the results live or die on the quality of the models, and the relevance/realism of the evidence used to train them. GIGO.

But once you do have a good, relevant, signal-producing model, then using it is a bit like doing a multi-dimensional lookup, or function call. Conceptually easy to understand, and, in many cases (depending, of course, on the details) cache-friendly.
Comment #9782330 not loaded