Not to make everything about AI, but... he did mention LLMs. What happens if, after clearly defining the problem and listing all potentially relevant clues, you first give that information to an LLM before taking a walk or a weekend off?

This is probably another case where the answer depends on one's attitude towards LLMs, and maybe towards technology overall.

That's largely how I use LLMs for programming: I run aider, add the relevant source files, explain what I want to do and the approach I want, and ask it to do it (a rough sketch of a session is at the end of this comment). It does routinely miss obvious things, but it's also often fairly easy to ask it to correct itself. How well this works depends on the nature and complexity of the problem, but in theory the LLM should have a couple of useful ideas or some feedback if you really give it all of the context.

Maybe defining the problem and gathering the relevant information is the hard part, though. Perhaps significantly larger context windows are a bigger deal than some people realize. If the LLM or multimodal model has a very large context window, plus enough computing resources to routinely re-evaluate what the goal is, the other requirement is a large pool of potentially relevant information to select from when approaching the problem. Basically, with a large enough context window we might be able to skip the step where we select the relevant information ourselves and let the model do that selection (second sketch at the end).

Which might lead to the question of why we even got out of bed. But that's another problem.

I get the impression that diffusion transformers are a big deal. Do they allow for more sophisticated problem solving or "cognition" in some way?
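First sketch, the aider workflow: roughly what a session looks like for me. The file names and the request are invented for illustration; the commands are ordinary aider usage:

    $ aider src/parser.py src/lexer.py
    > Refactor tokenize() so it streams tokens lazily instead of building
    > the whole token list up front. Keep the public API unchanged.

aider proposes an edit and commits it. If it missed something obvious, /diff shows what it changed, /undo reverts the last commit, and a follow-up message pointing out the mistake is usually enough to get it to correct itself.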
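Second sketch, "give it all of the context and let the model do the selection": a minimal illustration assuming the OpenAI Python client, a problem statement in problem.md, and a pile of notes in notes/. The model name and paths are placeholders, not a recommendation:

    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # The clearly defined problem, written down before the walk.
    problem = Path("problem.md").read_text()

    # Dump the whole pool of potentially relevant material; no manual selection.
    pool = "\n\n".join(p.read_text() for p in sorted(Path("notes").glob("*.md")))

    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any model with a large enough context window
        messages=[
            {"role": "system",
             "content": "First list which parts of the material are actually "
                        "relevant to the problem, then propose approaches."},
            {"role": "user",
             "content": f"Problem:\n{problem}\n\nMaterial:\n{pool}"},
        ],
    )
    print(resp.choices[0].message.content)

The point is that the relevance filtering happens inside the model; the only hard constraint is that the whole pool fits in the context window.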