I appreciate the notion of a "pretty-hard" problem of consciousness; it nicely captures what people in the field, philosophers and scientists alike, are <i>actually</i> looking for.<p>Aaronson's counterarguments appear sound, especially the technical ones. (That's not particularly exciting, though: formalisations of philosophical arguments often crumble under the mathematician's lens. Even the best ones!) They're not exactly new, I'd say, but their rigour is refreshing. My key problem with the overall approach remains the shitty test set we have for any theory of consciousness. It ultimately consists of two elements, or three if we're being generous. Namely, almost everyone apart from the odd philosophical extremist agrees that we are conscious and that a stone isn't. IIT works quite well for these cases. Some people would also include, say, a cat as a conscious being while maintaining that its "level" of consciousness is reduced. This boils down to<p><pre><code> C_stone < [C_cat <] C_human
</code></pre>
which isn't much, and there's no real hope of extending it. For any other cases, like the ones Aaronson discusses, we don't even have strong intuitions. Sure, we'd like to think that none of the entities he describes are in fact conscious. But if that's the bullet I have to bite in order to get a decent theory of consciousness, then I might be OK with that.
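<p>To make that concrete, here's a minimal sketch of what the entire "test set" amounts to if you write it down as literal test cases. (The phi argument is just a stand-in for whatever number a candidate theory like IIT assigns to a system, not an implementation of Φ; the names and values are made up for illustration.)<p><pre><code>def passes_test_set(phi, include_cat=False):
    """Check a candidate consciousness measure against the only
    ordering we broadly agree on: C_stone < [C_cat <] C_human."""
    c_stone, c_human = phi("stone"), phi("human")
    if not c_stone < c_human:
        return False
    if include_cat:
        return c_stone < phi("cat") < c_human
    return True

# A toy measure that trivially satisfies the ordering.
toy_phi = {"stone": 0.0, "cat": 3.0, "human": 10.0}.get

print(passes_test_set(toy_phi))                    # True
print(passes_test_set(toy_phi, include_cat=True))  # True
</code></pre>
Practically any monotone assignment of numbers sails through, which is exactly why this test set constrains a theory so little.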