> Projects like OpenAI’s DALL-E and DeepMind’s Gato and LaMDA have stirred up many discussions of artificial general intelligence (AGI). These discussions tend not to go anywhere, largely because we don’t really know what intelligence is.

This doesn't make any sense. Any time consciousness or intelligence is discussed, someone comes along and says "we don't even have a definition, so how can we talk about it?" That's absurd and ignores all the research that came before.

The article then proceeds to offer the worst definition I've ever seen:

> However, to discover intelligent life, we would need a working definition of intelligence. The only useful definition I can imagine is “able to generate signals that can be received off planet and that are indisputably non-natural.” But by that definition, humans have only been intelligent for roughly 100 years,

What? A shitty operational definition? Intelligence is problem-solving ability: the more of it you have, the harder the problems you can solve. One day a machine will exist that no one would deny is intelligent. That's it. Imagine insisting that, no, you have to operationalize assignment to the category "intelligent or not" by what the entity is. Why? Intelligence is obviously relational. It used to be acceptable to say AGI is impossible; now everyone sees that claim is absurd (even though it always was). Now people quibble about competence in context and generalization. The horse is out of the barn. Intelligence is substrate-independent.