That's a book review. Read the actual book.[1]<p>Notes:<p>- Prologue:<p><i>(Behaviorism) ended up being a terrible way to do psychology, but it was admirable for being an attempt at describing the whole business in terms of a few simple entities and rules. It was precise enough to be wrong, rather than vague to the point of being unassailable, which has been the rule in most of psychology.</i><p>- Thermostat:<p>An intro to control theory, but one which ignores stability. Maxwell's original paper, "On Governors" (1868), is still worth reading. He didn't just unify electromagnetism; he founded control theory.
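The thermostat loop can be sketched in a few lines (my toy sketch, not the book's). Note the deadband: that's the hysteresis that keeps the furnace from toggling rapidly around the setpoint, which also comes up in the Motivation chapter.

```python
def thermostat_step(temp, heating, setpoint=20.0, deadband=1.0):
    """Bang-bang control with hysteresis: furnace on below
    setpoint - deadband, off above setpoint + deadband, and
    inside the deadband keep the previous state (no chatter)."""
    if temp < setpoint - deadband:
        return True
    if temp > setpoint + deadband:
        return False
    return heating  # inside the deadband: hold the previous state

# Crude room model: furnace adds heat, heat leaks toward 10 C ambient.
temp, heating = 15.0, False
for _ in range(100):
    heating = thermostat_step(temp, heating)
    temp += (2.0 if heating else 0.0) - 0.1 * (temp - 10.0)
```

With the deadband, the loop settles into a slow limit cycle around the setpoint instead of switching the furnace every step.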
Has the usual problems with applying this to emotions, and the author realizes this.<p>OK, so living things have a lot of feedback control systems. This is not a new observation. The biological term is "homeostasis", a concept apparently first described in 1849 and named in 1926. (There are claims that this concept dates from Aristotle, who wrote about "habit", but Aristotle didn't really get feedback control. Too early.)<p>- Motivation:<p>Pick goals with highest need level, but have some hysteresis to avoid toggling between behaviors too fast.<p>- Conflict and oscillation:<p>Author discovers oscillation and stability in feedback systems.<p>- What is going on?<p>Author tries to derive control theory.<p>- Interlude<p>Norbert Wiener and cybernetics, which was peak fascination with feedback in the 1950s.<p>- Artificial intelligence<p><i>"But humans and all other biological intelligences are cybernetic minimizers, not reward maximizers. We track multiple error signals and try to reduce them to zero. If all our errors are at zero — if you’re on the beach in Tahiti, a drink in your hand, air and water both the perfect temperature — we are mostly comfortable to lounge around on our chaise. As a result, it’s not actually clear if it’s possible to build a maximizing intelligence. The only intelligences that exist are minimizing. There has never been a truly intelligent reward maximizer (if there had, we would likely all be dead), so there is no proof of concept. The main reason to suspect AI is possible is that natural intelligence already exists — us."</i><p>Hm. That's worth some thought. An argument against it is that there are clearly people driven by the desire for "more", with no visible upper bound.<p>- Animal welfare<p>Finally, "consciousness". It speaks well of the author that it took this long to bring that up. 
It's brought up in the context of whether animals are conscious, and, if so, which animals.<p>- Dynamic methods<p>Failure modes of multiple feedback systems, plus some pop psychology.<p>- Other methods<p>Much like the previous chapter.<p>- Help wanted<p><i>"If the proposal is more or less right, then this is the start of a scientific revolution."</i><p>Not seeing the revolution here. Most of these ideas have been seen before. Did I miss something?<p>Feedback is important, but the author doesn't seem to have done enough of it to have a good understanding.<p>If you want an intuitive grasp of feedback, play with some op amps set up as an analog computer and watch the output on a scope. Or find a simulator. If The Analog Thing came with a scope (which, at its price point, it should), that would be ideal. Watch control loops with feedback and delay stabilize, oscillate, or limit. There are browser-based tools which do this, but they assume basic electrical engineering knowledge.<p>[1] <a href="https://slimemoldtimemold.com/2025/02/06/the-mind-in-the-wheel-prologue-everybody-wants-a-rock/" rel="nofollow">https://slimemoldtimemold.com/2025/02/06/the-mind-in-the-whe...</a>
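If you don't have op amps handy, the stabilize-vs-oscillate behavior described above can be faked in a few lines of plain Python (a toy discrete-time sketch, not a circuit): a proportional controller driving an integrator, with the feedback reading delayed by a few timesteps. Low gain settles; the same loop with higher gain rings and then diverges.

```python
def simulate(gain, delay, steps=200, setpoint=1.0):
    """Integrator plant x[t+1] = x[t] + u[t], where the control u
    is proportional feedback computed from a DELAYED measurement
    of x. Returns the trajectory of x."""
    xs = [0.0]
    for t in range(steps):
        measured = xs[max(0, t - delay)]   # stale reading of the state
        u = gain * (setpoint - measured)   # proportional control
        xs.append(xs[-1] + u)
    return xs

stable = simulate(gain=0.2, delay=3)   # settles near the setpoint
ringing = simulate(gain=0.9, delay=3)  # same loop, higher gain: oscillates
```

Plot the two trajectories and you get the scope picture: the delay is what turns "more gain" into overshoot and, past a threshold, growing oscillation.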