eli5 from chatgpt: Imagine human thinking is like a super complicated puzzle. When cognitive science (the study of how we think and learn) was just starting, people treated Artificial Intelligence (AI) as a special toolbox that could help solve parts of this puzzle. But now, many people working on AI are trying to build robots or computers that can solve the entire puzzle by themselves, just like a human would. This paper says that's really, really hard: so hard that we probably can't do it. The paper also says that if we believe these robots or computers are just like us, we get the wrong idea about how our own minds work. It's like using a map of a different place to find your way home: it doesn't work and just makes things confusing. The paper suggests we should go back to using AI like a toolbox, to help us understand our minds better, but we need to be careful not to repeat the mistakes we made before.