I evaluated the JetBrains AI assistant. I was more impressed with its ability to chat about science fiction than with its actual programming.<p>It could do some impressive things. I was working on a codebase that used jOOQ to generate SQL code. The agent did not have access to the database or the SQL scripts, but it could figure out what the SQL schema was by looking at the jOOQ stubs.<p>I tried to use it to write a somewhat complex query that involved CTEs in Postgres; there is a chicken-and-egg element of circularity that makes these queries tricky to write. I was able to get it to write very simple jOOQ queries, but it never really understood the problem I had: it went through quite a few cycles of solutions that weren't right, even after I told it "this didn't compile" or "that won't work because...", and I read a lot of polite apologizing along the way.<p>I found it very tiresome to cut and paste code snippets, add imports, fix little things, have it not compile, cut and paste compilation errors, and then undo all the changes. With closer integration into the IDE it might be less painful to cycle through a large number of wrong answers.<p>My take is that LLMs are very strong at tasks that are basically linear operations from one end to the other. For instance, language translation is like that, at least at the entry level, since roughly every sentence in one text corresponds to a sentence in the other language.
It is like translating jOOQ stubs to a SQL script: you don't need to really understand very much, just replace one pattern with another.<p>Other tasks have an element of looping, which is really fundamental in computer science:<p><a href="https://en.wikipedia.org/wiki/Halting_problem" rel="nofollow">https://en.wikipedia.org/wiki/Halting_problem</a><p>I've found that people often get really offended when you point out that LLMs cannot repeal the fundamentals of computer science, but because they invest a finite computing budget in a problem, an LLM just can't do anything that requires running a program which might not complete. The old book<p><a href="https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach" rel="nofollow">https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach</a><p>has a running story about a conflict between the Tortoise and Achilles, who struggle deliciously for a long time with a problem isomorphic to the great logical paradoxes before finally understanding the impossibility of what they are doing. Many people misinterpret this book as a critique of the symbolic AI of the 1970s, but it will give you some insight into how "let's just write a loop with an LLM in it" will get you into problems that are just as intractable as symbolic AI seemed in the late 1980s.
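To make the chicken-and-egg circularity of recursive CTEs concrete, here is a toy sketch. The schema and query are hypothetical (not the query from the post), and it uses SQLite for self-containment, whose WITH RECURSIVE syntax matches Postgres for this case. The tricky part is that the CTE's body selects from the very CTE being defined:

```python
import sqlite3

# Hypothetical parts-explosion schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE component (assembly TEXT, part TEXT, qty INTEGER);
INSERT INTO component VALUES
  ('bike',  'wheel', 2),
  ('wheel', 'spoke', 32),
  ('wheel', 'rim',   1);
""")

rows = conn.execute("""
WITH RECURSIVE subparts(part, qty) AS (
    SELECT part, qty FROM component
    WHERE assembly = 'bike'                   -- base case: direct parts
  UNION ALL
    SELECT c.part, s.qty * c.qty              -- recursive step: the body
    FROM component c                          -- refers to 'subparts', the
    JOIN subparts s ON c.assembly = s.part    -- CTE still being defined
)
SELECT part, SUM(qty) FROM subparts GROUP BY part ORDER BY part
""").fetchall()

print(rows)  # [('rim', 2), ('spoke', 64), ('wheel', 2)]
```

Getting the base case and the self-referencing step right at the same time is exactly the circularity that the assistant kept failing to navigate.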
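The "finite computing budget" point has a classic concrete illustration (my example, not from the comment): the Collatz iteration is a loop that nobody has proven terminates for every input, so no budget fixed in advance is known to suffice in general:

```python
def collatz_steps(n: int) -> int:
    # Count iterations of the Collatz map until n reaches 1.
    # Whether this loop halts for EVERY positive n is an open problem,
    # so no fixed compute budget is known to cover all inputs.
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111
```

A system that must answer within a bounded amount of computation can at best guess about behavior like this; it cannot, in general, run the loop to find out.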