Interesting read. Time wasn't a variable I had considered missing from interactions with AI, but it makes sense.<p>I'd also add this: tools like the AI bots so prevalent today are flawed because they cannot consider things like context, limitations, dependencies, and scope. I ask a question... they attempt to spit out a complete answer with complete disregard for the context my question is coming from.<p>AI fails in the same way a monkey can't drive a car: abstraction. We humans know a red light ahead means stop at the light, not stop immediately wherever we happen to be. All AI can do is make a best guess at what its inputs pattern-match to. It's like always having an answer without ever asking for clarification or context.
I enjoyed reading your perspective and largely agree with your points. However, I believe the argument would be more compelling with concrete evidence, such as transcripts or results from actual LLM interactions. Many of the examples you've cited, such as those involving kings or presidents, feel somewhat dated and well-discussed. Drawing the strong conclusion that LLMs cannot reach AGI or understand concepts like time solely from these examples seems premature without specific results from modern LLMs. A demonstration of their limitations would strengthen your case.