This article makes some excellent hypotheses about music that I think everyone can anecdotally confirm.<p>To explain the attributes of the brain that some of these hypotheses allude to, in programming terms, imagine you have a class. It consists of a list of data members and some member functions that are called by the user (the brain) in order to parse given (sensory) inputs.<p>For each sensory input a separate object is created, so the outputs returned after some computation are inherently associated with that object. So when you see a blue circle, the member functions are crunched using the .shape and .color data members, which are set to circle and blue respectively, say in the constructor of the ``blue circle`` object.
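<p>A minimal C++ sketch of that analogy (the class name, members, and parse() function are hypothetical, purely to make the picture concrete):

    #include <iostream>
    #include <string>
    #include <utility>

    // Hypothetical class standing in for one sensory input.
    // The brain is the caller; parse() is the black-box member function.
    struct SensoryObject {
        std::string shape;
        std::string color;

        // Data members are set when the object is created,
        // e.g. shape = "circle", color = "blue" for a blue circle.
        SensoryObject(std::string s, std::string c)
            : shape(std::move(s)), color(std::move(c)) {}

        // Member function crunching the data members into a parsed output.
        std::string parse() const {
            return "I see a " + color + " " + shape;
        }
    };

    int main() {
        SensoryObject blue_circle("circle", "blue");
        std::cout << blue_circle.parse() << '\n';  // "I see a blue circle"
    }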
<p>Suppose this model holds for processing all kinds of sensory inputs; we would have to imagine complicated classes with many member functions so that every kind of sensory input can be parsed. The author makes the interesting claim that the member functions take additional arguments, which are influenced by whether music is playing at the time of sensory object creation.<p>These arguments distort the outputs (the parsed sensory inputs). The member functions are black boxes, but common experience gives us some insight, based on the observed correlations between inputs and outputs (e.g. a "sad story" as a sensory input without music is parsed as "sad story"; with Bach's Chaconne, it is parsed as "extremely depressing, weirdly poignant story").
<p>One other very interesting claim made in the article is that there is a feedback loop, through evolution. Suppose the brains of human beings have performed Bayesian updates on the outputs of these black-box member functions. When such a Bayesian brain creates music, does it harness its updated functions so as to amplify the desired effects?
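<p>As a toy illustration of that last question (the Beta-Bernoulli model and the numbers are purely hypothetical, not anything from the article): the brain could keep a posterior over "does this piece amplify the intended effect?", update it from observed outcomes, and favor high-posterior pieces when composing.

    #include <iostream>

    // Toy Beta-Bernoulli update: the "brain" observes how often a piece
    // amplified the intended effect and updates its belief accordingly.
    struct Belief {
        double alpha = 1.0;  // prior pseudo-count of "amplified"
        double beta  = 1.0;  // prior pseudo-count of "did not amplify"

        void update(bool amplified) { amplified ? ++alpha : ++beta; }
        double mean() const { return alpha / (alpha + beta); }
    };

    int main() {
        Belief chaconne;
        // Hypothetical observations: the piece amplified the effect 8 of 10 times.
        for (int i = 0; i < 10; ++i) chaconne.update(i < 8);

        // A "Bayesian brain" composing for a desired effect could harness this
        // posterior, favoring pieces with a high posterior mean.
        std::cout << "P(amplifies effect) ~ " << chaconne.mean() << '\n';  // 0.75
    }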