It's a good article. The author hedges his bets a bit on the title question: at the end he allows that someday a combination of LLMs, support scaffolding, and prompt engineering may be able to tackle even the problems they struggle with now. Still, it's an interesting tour through some seemingly straightforward problems that even the most advanced current LLMs are bad at, with some interesting speculation on the root cause.
It is an interesting article, and a good question: "What can AI never do?"
There is an ongoing debate on LLMs and trust that is worth looking at as well: how much can we trust an LLM, and the data behind it, to make decisions?
:)