Fascinating! Apparently central to the complete farce going on at OpenAI is this paper by one of the board members:<p><a href="https://cset.georgetown.edu/publication/decoding-intentions/" rel="nofollow noreferrer">https://cset.georgetown.edu/publication/decoding-intentions/</a><p>It's about my favourite topic: costly signalling!<p>How curious that such a topic would be the possible catalyst for one of the most extraordinary collapses of a leading tech company.<p>Specifically, the paper is about using costly signalling as a means to align (or demonstrate the alignment of) various kinds of AI-interested entities (governments, private corporations, etc.) with the public good.<p>The gist of costly signalling: to convince others you really mean what you say, you use a signal that is genuinely expensive to you in some respect. You don't just ask a girl to marry you, you buy a big diamond ring! The idea being that cheaters are much less likely to bear such an expense.<p>Apparently the troubles at OpenAI escalated when one of the board members, Helen Toner, published this paper. It is critical of OpenAI, and Sam Altman was pissed at the reputational damage to the company and wanted her removed. The board instead removed him.
The gist of the paper's criticism is that while OpenAI has invested in some costly signals to indicate its alignment with AI safety, those signals were ultimately rather weak (or cheap).<p>Now here is what I find fascinating about all this: up until reading this paper I had found the actions of the OpenAI board completely baffling, but suddenly they make a kind of insane sense. The board is just taking its thinking on costly signalling to its logical conclusion. By putting the entire fate of the company and its amazing market position at risk, they are sending THE COSTLIEST SIGNAL possible, relatively speaking: willingness to suffer self-annihilation.<p>Academics truly are wondrous people, able to lose themselves in a system of thought so deeply, in a way regular people can't. I can't help but have a genuine, sublime appreciation for this, even while thinking they are some of the silliest people on this planet.<p>Here's where I feel they went wrong. Costly signals by and large should be without explicit intention. If you are consciously sending costly signals, you are probably a weirdo. Systems of costly signalling work because they are implicit, shared, and in many respects innate. That's why even insects can engage in costly signalling.
But these folks treat costly signalling as an explicit activity to be engaged in as part of public policy, and unsurprisingly find it riddled with ambiguity. Of course it is: individual agents can't just make signals up and expect the system to understand them. Semiotics, biatch...<p>But rather than reflect on this, they double down on signalling as an explicit policy choice. How do they propose to reduce ambiguity? Even costlier signals! It's no wonder, then, that they see it as entirely rational to accept self-destruction as a possibility. That's how they escape the horrid existential dread of being doubted by the other.
In biology, though, no creature survived in the long run to reproduce if it invested in costly signals that didn't confer at least as much benefit as they cost in the first place.<p>Those that ignore this basic cost-benefit analysis in their signalling suffer the ignominy of being perceived as ABSOLUTE NUTTERS. Which is exactly how the world is thinking about the OpenAI board. The world doesn't see a group of highly safety-aligned AI leaders.<p>The world sees a bunch of dysfunctional crazy people.
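<p>Footnote for the formally inclined: the cost-benefit condition I'm gesturing at is just the standard separating-equilibrium condition from signalling games. Here is a minimal Python sketch; every number in it is a made-up illustration, not anything from the paper:<p><pre><code># Toy model of a costly signal, Spence-style. All payoffs are invented.
BENEFIT = 10.0  # payoff to a sender whose signal is believed

def net_payoff(signal_cost: float) -> float:
    """Net value of sending a signal that ends up being believed."""
    return BENEFIT - signal_cost

# A signal separates honest senders from cheaters only when it is
# affordable for the honest type but ruinous for the faker:
honest_cost, cheater_cost = 4.0, 15.0  # hypothetical per-type costs

assert net_payoff(honest_cost) > 0   # honest sender still comes out ahead
assert net_payoff(cheater_cost) < 0  # cheater would rather not fake it
</code></pre><p>On those numbers, the diamond ring (or the board coup) is only rational while the credibility it buys exceeds what it destroys; spend past that point and you have left the equilibrium and entered NUTTER territory.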