
Stanford Lecture Notes on Probabilistic Graphical Models

336 points by volodia, about 8 years ago

8 comments

georgeek, about 8 years ago
An amazing text on this topic is Martin Wainwright/Michael Jordan's Graphical Models, Exponential Families, and Variational Inference: https://people.eecs.berkeley.edu/~wainwrig/Papers/WaiJor08_FTML.pdf
refrigerator, about 8 years ago
For anyone interested, here are the materials for the Graphical Models course at Oxford: http://www.stats.ox.ac.uk/~evans/gms/index.htm
philipov, about 8 years ago
OpenCourseOnline, different Stanford professor: https://www.youtube.com/watch?v=WPSQfOkb1M8&list=PL50E6E80E8525B59C&index=1

Carnegie Mellon: https://www.youtube.com/watch?v=lcVJ_zsynMc&list=PLI3nIOD-p5aoXrOzTd1P6CcLavu9rNtC-
beambot, about 8 years ago
For a practical application of using graphical models to "solve" a Bayesian problem, I recommend Frank Dellaert's whitepaper, which covers Simultaneous Localization and Mapping (SLAM, a robotics algorithm) using similar techniques: https://research.cc.gatech.edu/borg/sites/edu.borg/files/downloads/gtsam.pdf
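A minimal sketch of that factor-graph view of SLAM (my own illustration, not code from Dellaert's paper; the poses and measurement values are made up): each odometry measurement and each loop closure becomes a factor, i.e. one row of a linear least-squares problem, and the MAP estimate of all poses is the least-squares solution.

import numpy as np

# A tiny 1-D pose graph: 4 poses, a prior on x0, odometry factors between
# consecutive poses, and one loop-closure factor between x0 and x3.
# Each factor contributes one row of A @ x ~ b; under Gaussian noise the
# MAP estimate of the poses is the least-squares solution.
n_poses = 4
factors = [
    # (i, j, measured offset x_j - x_i); i = None means a prior on x_j
    (None, 0, 0.0),   # prior: x0 ~ 0
    (0, 1, 1.0),      # odometry: x1 - x0 ~ 1.0
    (1, 2, 1.1),      # odometry (slightly noisy)
    (2, 3, 0.9),      # odometry
    (0, 3, 3.05),     # loop closure: x3 - x0 ~ 3.05
]

A = np.zeros((len(factors), n_poses))
b = np.zeros(len(factors))
for row, (i, j, z) in enumerate(factors):
    A[row, j] = 1.0
    if i is not None:
        A[row, i] = -1.0
    b[row] = z

# Least-squares solve = MAP estimate of all poses given all factors.
x_map, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated poses:", np.round(x_map, 3))

Real SLAM systems do the same thing with nonlinear factors over 2-D/3-D poses and solve the relinearized system iteratively; libraries such as GTSAM (the subject of the linked paper) implement that idea.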
dirtyaura, about 8 years ago
For a novice, it's hard to assess how important PGMs currently are.

If I'm deciding where to invest my time, learning Deep Learning (CNNs, RNNs) vs. Random Forests vs. PGMs vs. Reinforcement Learning well enough to actually apply the chosen approach, it seems that PGMs are not high on the list. Is that correct?

Are there Kaggle competitions in which PGMs have been the best approach?

What real-world problem areas do PGMs currently excel at compared to other methods?
likelynew, about 8 years ago
We have a professor at our college who teaches this material with great passion; I took two related courses under him. The problem is that he uses his own mental imagery and notation in lectures and exams, and much of the internet seems affected by the same problem. Many of these concepts aren't hard, but it takes time to develop a feel for them. See the "monads are not burritos" essay, which describes the problem with explaining monads through analogies. These notes seem like a great resource in the sense that they mostly avoid the confusing analogies that are common in most material on Bayes' rule.
graycat, about 8 years ago
In their "Probability review" at

http://ermongroup.github.io/cs228-notes/preliminaries/probabilityreview/

I see two problems:

(1) First problem -- sample space

Their definition of a sample space is

"The set of all the outcomes of a random experiment. Here, each outcome ω can be thought of as a complete description of the state of the real world at the end of the experiment."

The "complete description" part is not needed, and even if included its meaning is not clear.

Instead, each possible experiment is one trial and one element of the set of all trials Ω. That's it: Ω is just a set of trials, and each trial is just an element of that set. There is nothing there about the outcomes of the trials.

Next the text has

"The sample space is Ω = {1, 2, 3, 4, 5, 6}."

That won't work: too soon one finds that an uncountably infinite sample space is needed. Indeed, an early exercise shows that the set of all events cannot be countably infinite.

Indeed, a big question was: can there be a sample space big enough to discuss random variables as desired? The answer is yes, and it is given by the famous Kolmogorov extension theorem.

(2) Second problem -- notation

An event A is an element of the set of all events F and a subset of the sample space Ω.

Then a probability measure P, or just a probability, is a function P: F --> [0,1], that is, into the closed interval [0,1].

So we can write the probability of event A as P(A). Fine.

Or, given events A and B, we can consider the event C = A ∪ B and thus write P(C) = P(A ∪ B). Fine.

But the notes have P(1,2,3,4), and that is undefined in the notes and, really, in the rest of probability. Why? Because

1, 2, 3, 4

is not an event.

For the set of real numbers R, a real random variable is a function X: Ω --> R that is measurable with respect to the sigma-algebra F and a specified sigma-algebra on R, usually the Borel sets (the smallest sigma-algebra containing the open sets) or the Lebesgue measurable sets.

Then an event would be "X in {1,2,3,4}" for the subset {1,2,3,4} of R, that is, the set of all ω in Ω such that X(ω) is in {1,2,3,4}:

{ω | X(ω) in {1,2,3,4}}

or the inverse image of {1,2,3,4} under X (this could all be written more clearly with D. Knuth's TeX), in which case we could write

P(X in {1,2,3,4}).

When the elementary notation is bad, it's a bit tough to take the more advanced parts seriously.

A polished, elegant treatment of these basics appears early in

Jacques Neveu, Mathematical Foundations of the Calculus of Probability, Holden-Day, San Francisco, 1965.

Neveu was a student of M. Loeve at Berkeley; see also Loeve, Probability Theory, I and II, Springer-Verlag. A fellow student of Neveu at Berkeley under Loeve was L. Breiman, so see also Breiman, Probability, SIAM.

These notes are from Stanford, but there have long been people at Stanford, e.g., K. Chung, who have these basics in very clear, solid, and polished terms, e.g.,

Kai Lai Chung, A Course in Probability Theory, Second Edition, ISBN 0-12-174650-X, Academic Press, New York, 1974.

K. L. Chung and R. J. Williams, Introduction to Stochastic Integration, Second Edition, ISBN 0-8176-3386-3, Birkhäuser, Boston, 1990.

Kai Lai Chung, Lectures from Markov Processes to Brownian Motion, ISBN 0-387-90618-5, Springer-Verlag, New York, 1982.
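A compact restatement of the notation point above, in TeX (my own summary of the standard definitions, not taken from the CS228 notes):

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
A probability space is a triple $(\Omega, \mathcal{F}, P)$, where
$\mathcal{F}$ is a $\sigma$-algebra of subsets of $\Omega$ and
$P\colon \mathcal{F} \to [0,1]$ is a probability measure.
A real random variable is a measurable map $X\colon \Omega \to \mathbb{R}$,
so for a Borel set $B \subseteq \mathbb{R}$, say $B = \{1,2,3,4\}$, the
corresponding event is the preimage
\[
  \{X \in B\} = X^{-1}(B) = \{\omega \in \Omega : X(\omega) \in B\} \in \mathcal{F},
\]
and the well-formed expression is $P(X \in B) = P\bigl(X^{-1}(B)\bigr)$,
not $P(1,2,3,4)$.
\end{document}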
mrcactu5, about 8 years ago
Courses like these make me wonder ... how much can one dress up basic probability? I think the answer is: a lot.