>> JP: Correct. Formally, Bayesian networks are just efficient
evidence-to-hypothesis inference machines. However, in retrospect, their
success emanated from their ability to “secretly” represent causal knowledge.
In other words, they were almost always constructed with their arrows pointing
from causes to effects, thus achieving modularity. It is only due to our
current understanding of causality that we can reflect back and speculate on
why they were successful; we did not know it then.
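
(For reference, the kind of "evidence-to-hypothesis inference" JP means: a toy two-node disease/test network, with invented numbers, where the arrow points from cause to effect but the query runs the other way.)

```python
# Two-node network: Disease -> Test, arrow pointing from cause to effect.
# The conditional P(Test | Disease) is a stable local mechanism (the
# "modularity" JP mentions); inference then runs backwards, from evidence
# to hypothesis, via Bayes' rule. All numbers are invented for illustration.

p_disease = 0.01                    # prior P(D=1)
p_pos_given_d = {1: 0.95, 0: 0.05}  # mechanism P(T=1 | D)

# P(D=1 | T=1) = P(T=1 | D=1) P(D=1) / P(T=1)
p_pos = (p_pos_given_d[1] * p_disease +
         p_pos_given_d[0] * (1 - p_disease))
p_d_given_pos = p_pos_given_d[1] * p_disease / p_pos

print(f"P(disease | positive test) = {p_d_given_pos:.3f}")  # ~0.161
```

Hang on a sec. If Bayesian networks are perfectly capable of representing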
causality relations, and in fact they've been doing just that all along
(albeit "secretely") then why the hell do we need a different formalism to
represent causality?<p>To give an analogy - if we can represent context-free langugaes with regular
automata, then what's the point of context-free languages? Instead, we
classify languages that can be represented by both regular automata and
pushdown automata as regular, and reserve the context-free designation for
languages that *cannot* be represented by regular (or finite) automata.
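
(To make the analogy concrete: the textbook non-regular language a^n b^n. A regex, i.e. a finite automaton, can only check the shape a...ab...b; one counter, the cut-down stack of a pushdown automaton, is what enforces equal counts. A quick sketch:)

```python
import re

def is_anbn(s: str) -> bool:
    """Recognize { a^n b^n : n >= 0 } with a single counter, i.e. the
    stack discipline of a pushdown automaton."""
    count = 0
    seen_b = False
    for ch in s:
        if ch == 'a':
            if seen_b:          # an 'a' after a 'b' is never allowed
                return False
            count += 1
        elif ch == 'b':
            seen_b = True
            count -= 1
            if count < 0:       # more b's than a's so far
                return False
        else:
            return False
    return count == 0

# A regular expression (finite automaton) can only check the *shape*
# a...ab...b; it cannot enforce equal counts, which is exactly why
# a^n b^n is context-free but not regular.
shape_only = re.compile(r'^a*b*$')

for s in ["aabb", "aab", "abab", ""]:
    print(f"{s!r}: counter={is_anbn(s)}, regex={bool(shape_only.match(s))}")
```

In the same way, if causality relations can be represented by Bayesian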
networks, then higher-order representations are not really needed, or must be
reserved for some object that Bayesian networks can't represent.
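
Concretely, the textbook candidate for such an object is the direction of the arrows themselves: the two-node networks X -> Y and Y -> X encode exactly the same joint distribution, so no amount of observational data picks one out. A quick sketch, with made-up numbers:

```python
# Why a joint distribution alone underdetermines causal direction: the
# networks X -> Y and Y -> X can encode exactly the same joint P(X, Y).
# (All numbers below are made up for illustration.)

# Network 1: X -> Y, parameterized as P(X) and P(Y | X)
p_x = {0: 0.7, 1: 0.3}
p_y_given_x = {0: {0: 0.9, 1: 0.1},   # P(Y | X=0)
               1: {0: 0.2, 1: 0.8}}   # P(Y | X=1)

joint = {(x, y): p_x[x] * p_y_given_x[x][y]
         for x in (0, 1) for y in (0, 1)}

# Network 2: Y -> X, re-parameterized from the same joint via Bayes
p_y = {y: sum(joint[(x, y)] for x in (0, 1)) for y in (0, 1)}
p_x_given_y = {y: {x: joint[(x, y)] / p_y[y] for x in (0, 1)}
               for y in (0, 1)}

joint2 = {(x, y): p_y[y] * p_x_given_y[y][x]
          for x in (0, 1) for y in (0, 1)}

assert all(abs(joint[k] - joint2[k]) < 1e-12 for k in joint)
print("Both edge directions reproduce the same joint:", joint)
```

In any case, this is just a huge piece of ret-con. Bayesian networks always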
represented causality relations, only they did so "secretly"! That's up there
with the original Klingons' flat foreheads being the result of a virus infection;
or how Jean Grey didn't really die and it was the Phoenix Force that had taken
her form.