
An Introduction to Probabilistic Graphical Models (2003) [pdf]

214 points | by scvalencia | about 8 years ago

12 comments

diab0lic, about 8 years ago

A few comments have mentioned neural nets in this post. adamnemecek mentions in this thread that PGMs are a superset of neural networks, and Thomas Wiecki has a few excellent blog posts on creating Bayesian neural networks using PyMC3 [0][1][2]. If you're curious about how these two concepts can be brought together, I highly recommend reading through these three posts.

[0] http://twiecki.github.io/blog/2016/06/01/bayesian-deep-learning/

[1] http://twiecki.github.io/blog/2016/07/05/bayesian-deep-learning/

[2] http://twiecki.github.io/blog/2017/03/14/random-walk-deep-net/
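The core Bayesian idea behind those posts can be shown at toy scale without PyMC3 (this is an illustrative sketch, not Wiecki's code): put a prior on a single weight w in y = w * x + noise and compute its posterior by brute force on a grid, instead of using a sampler.

```python
import math

# Toy Bayesian "network" with one weight: infer the posterior over w in
# y = w * x + noise by grid approximation (illustration only; the data,
# noise level, and prior below are made-up assumptions).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]           # roughly y = 2x
sigma = 0.5                          # assumed observation noise

def log_likelihood(w):
    # Gaussian log-likelihood of the data under slope w (constants dropped).
    return sum(-((y - w * x) ** 2) / (2 * sigma ** 2) for x, y in zip(xs, ys))

grid = [i / 100 for i in range(0, 401)]   # candidate weights 0.00 .. 4.00
# Wide N(0, 10) prior over w, in log space (constants dropped).
log_prior = {w: -(w ** 2) / (2 * 10.0 ** 2) for w in grid}
log_post = {w: log_prior[w] + log_likelihood(w) for w in grid}

# Normalize (subtract the max first for numerical stability).
m = max(log_post.values())
unnorm = {w: math.exp(lp - m) for w, lp in log_post.items()}
z = sum(unnorm.values())
posterior = {w: p / z for w, p in unnorm.items()}

w_map = max(posterior, key=posterior.get)
print(round(w_map, 2))               # posterior mode lands near the true slope
```

The linked posts do the same thing for every weight of a real network at once, which is why they need PyMC3's variational inference and samplers rather than a grid.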
platz, about 8 years ago

PGMs are great, but my experience from Koller's course is that it is very hard to identify cases where they can be used.

Part of the reason is that you need a priori knowledge of the causal relationships (coarse-grained, i.e., their direction) between your variables.

Presumably, if you're doing ML you don't know those causal relationships to begin with.

Particularly good fits are things like physics, where the laws are known.
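The point about committing to edge directions up front can be made concrete with a tiny two-node network (a made-up sketch, not from the course): the a priori causal knowledge "rain causes wet grass" is baked into the factorization P(rain) * P(wet | rain), and inference then runs against that fixed structure.

```python
# Two-node Bayesian network with an assumed causal direction: rain -> wet.
# The CPTs below are illustrative numbers, not estimated from data.
p_rain = {True: 0.2, False: 0.8}            # prior P(rain)
p_wet_given_rain = {True: 0.9, False: 0.1}  # P(wet | rain), P(wet | no rain)

# Inference by enumeration: P(rain | wet) via Bayes' rule over the joint.
joint_wet = {r: p_rain[r] * p_wet_given_rain[r] for r in (True, False)}
p_rain_given_wet = joint_wet[True] / sum(joint_wet.values())
print(round(p_rain_given_wet, 3))           # 0.18 / (0.18 + 0.08) ≈ 0.692
```

If the true mechanism ran the other way, the factorization (and hence the CPTs you would have to elicit) would be different, which is exactly why the structure must be known, or learned, before inference.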
tachim, about 8 years ago

This is the best textbook on graphical models, also from Jordan but later (2008): https://people.eecs.berkeley.edu/~wainwrig/Papers/WaiJor08_FTML.pdf. It also covers some general theory of variational inference. Source: I worked on PGMs in grad school.
JustFinishedBSG, about 8 years ago

This course also referred to M. I. Jordan's book: http://imagine.enpc.fr/~obozinsg/teaching/mva_gm/fall2016/

One of the best courses I have ever taken; F. Bach and G. Obozinski are incredible teachers.
BucketSort, about 8 years ago

There's an excellent course on PGMs by Koller on Coursera. My friend took it and now he's a PGM evangelist. If you are wondering where PGMs lie in the spectrum of machine learning, you should research the difference between generative and discriminative modeling. We were driven to PGMs to solve an ML problem that was hard to frame as a NN, mainly because we had some priors we needed to encode to make the problem tractable. It reminds me a little of heuristics in search.

(The person I'm talking to: an early ML student.)
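The generative/discriminative contrast mentioned above can be sketched in a few lines (an illustrative example with made-up numbers, not the commenter's actual system): a generative model like naive Bayes models P(y) * P(x | y), so a domain prior P(y) slots in directly, which is exactly the kind of prior knowledge that has no natural home in a purely discriminative net.

```python
# Generative classification with an explicit, hand-chosen class prior.
# Both the prior and the word likelihoods below are illustrative assumptions.
prior = {"spam": 0.7, "ham": 0.3}            # encoded a priori belief P(y)
likelihood = {                               # P(word | class)
    "spam": {"free": 0.8, "hello": 0.2},
    "ham":  {"free": 0.1, "hello": 0.9},
}

def posterior(word):
    # Bayes' rule: P(y | x) ∝ P(y) * P(x | y), normalized over classes.
    scores = {c: prior[c] * likelihood[c][word] for c in prior}
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

post = posterior("free")
print(round(post["spam"], 3))                # 0.56 / (0.56 + 0.03) ≈ 0.949
```

Changing `prior` immediately shifts every prediction, which is the tractability lever the comment describes: prior knowledge narrows the hypothesis space before any data is seen.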
KasianFranks, about 8 years ago

Good article: "Big-data boondoggles and brain-inspired chips are just two of the things we're really getting wrong" - Michael I. Jordan. Ref: http://spectrum.ieee.org/robotics/artificial-intelligence/machinelearning-maestro-michael-jordan-on-the-delusions-of-big-data-and-other-huge-engineering-efforts
visarga, about 8 years ago

PGMs seem to me harder than neural nets, but the trend in the last couple of years is to include probabilities in neural nets, so they're hot.
MrQuincle, about 8 years ago

From: http://spectrum.ieee.org/robotics/artificial-intelligence/machinelearning-maestro-michael-jordan-on-the-delusions-of-big-data-and-other-huge-engineering-efforts

Jordan: Well, humans are able to deal with cluttered scenes. They are able to deal with huge numbers of categories. They can deal with inferences about the scene: "What if I sit down on that?" "What if I put something on top of something?" These are far beyond the capability of today's machines. Deep learning is good at certain kinds of image classification. "What object is in this scene?"

I think Jordan refers here to Bayesian models that incorporate gravity, occlusion, and other such concepts.

For example, http://www.cv-foundation.org/openaccess/content_cvpr_2013/html/Jiang_Hallucinated_Humans_as_2013_CVPR_paper.html postulates entire humans to improve scene understanding.

What I get out of this: deep learning has to be enriched with progress from other machine learning fields.
leecarraher, about 8 years ago

Nice high-level talk on statistical inference for big data by Jordan. He's been one of my favorites since his LDA/PLSA papers in 2003 with Andrew Ng. http://videolectures.net/colt2014_jordan_bigdata/?q=jordan
Chris2048, about 8 years ago

Related?:

https://www.coursera.org/specializations/probabilistic-graphical-models

http://openclassroom.stanford.edu/MainFolder/CoursePage.php?course=ProbabilisticGraphicalModels
shurtler, about 8 years ago

Note there's an exploding literature that reads these models as causal models: http://ftp.cs.ucla.edu/pub/stat_ser/r350.pdf
iconvalleysil, about 8 years ago
Michael I. Jordan was a mentor to Andrew Ng. Probabilistic Graphical Models are the next frontier in AI after deep learning.