Any sufficiently advanced LLM is indistinguishable from Prolog.<p>I half-jest, but I envision LLM research heading towards a parser-oriented setup where the LLM merely extracts the entities and relations, and the actual logic is done by a logic engine such as Prolog.
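As a toy sketch of the split I mean (Python throughout; llm_extract is a fake stand-in for a model call, and the rule engine is a tiny forward-chainer rather than real Prolog):

    def llm_extract(text: str) -> set[tuple]:
        # Stand-in for an LLM call that parses prose into (relation, a, b) facts.
        return {("parent", "tom", "bob"), ("parent", "bob", "ann")}

    def infer(facts: set[tuple]) -> set[tuple]:
        # One hand-written rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for (r1, x, y) in list(derived):
                for (r2, y2, z) in list(derived):
                    if r1 == r2 == "parent" and y == y2:
                        new = ("grandparent", x, z)
                        if new not in derived:
                            derived.add(new)
                            changed = True
        return derived

    facts = llm_extract("Tom is Bob's parent. Bob is Ann's parent.")
    print(("grandparent", "tom", "ann") in infer(facts))  # True

The LLM never does any reasoning here; it only turns prose into facts, and the deterministic engine does the rest.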
Non-deterministic compilers, yay! Where do I sign up?<p>In more seriousness, miscompilations, and unexpected behavior caused by the layers below you in general, are expensive to find and fix. I think LLMs have a long way to go before such use cases seem appealing to me.
Reading the paper, the connection to compilers is more of an analogy than a direct technical link.<p>The authors propose using an LLM to reframe the task as high-level pseudocode, and then to reason over that code with the specific details of the task instance.<p>No compilers were used or compiled, and no real code was generated or executed. It's just the idea that programming-language syntax gives good structure for processing details, plus a way to interpret some of the results. Many of the other comments here read as if they didn't read the paper at all and are reacting to the headline.
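For flavor, here is a toy example of my own (not from the paper): for a task like "count how many words start with a vowel", the model first writes task-level pseudocode along the lines of

    def count_vowel_initial(words):
        count = 0
        for word in words:
            if word[0].lower() in "aeiou":
                count += 1
        return count

and then, in a second prompt, "executes" it by writing out the trace for the given input token by token. Nothing is ever handed to an actual interpreter.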
I think the title is a little misleading. The main difference between this paper and CoC (Chain of Code) is that here the LLM is instructed to make a plan for solving all of the given instances and then write that plan as pseudocode, while in CoC the plan is for solving the single given instance.<p>From the paper: "The main difference between THINK-AND-EXECUTE and CoC is that we use pseudocodes which are generated to express logic shared among the task's instances, while CoC incorporates pseudocode as part of the intermediate reasoning steps towards the solution of a given instance. Hence, the results indicate the advantages of applying pseudocode for the generation of task-level instruction over solely using them as a part of rationales."<p>I find the phrase "as a part of rationales" a little strange, but English is not my native language.
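To illustrate the difference with a toy example of my own (not from the paper), for instances like "how many negative numbers are in [3, -1, -4]?":

    # CoC-style: code written as part of the reasoning for this one instance
    nums = [3, -1, -4]
    answer = sum(1 for n in nums if n < 0)

    # THINK-AND-EXECUTE-style: a task-level plan, written once, applied to every instance
    def count_negatives(nums):
        return sum(1 for n in nums if n < 0)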
The phase 2 prompt is given in full, but the initial part of the phase 3 prompt is cut off at "When constructing the main function, ...", and there is no mention of random seeds, so I'd guess this paper is not reproducible at all.
This seems quite promising. Using pseudocode as an intermediary step isn't new, but this seems to take it a bit further. I'll need to see some code and test it out.
English is terribly imprecise, so it makes sense to use pseudo-instructions to improve the bounds/outcome of a language model's execution.<p>I do wonder how long hacks like this will be necessary; as it stands, many of these prompting techniques essentially expand the input artificially to enhance reasoning ability (more tokens, and thus a higher chance of success).
A couple of weeks ago I published a new programming language called Plang (as in pseudo language) that uses an LLM to translate user intent into executable code: basically, the LLM as a compiler.<p>It saves you an incredible amount of work, cutting code writing down by 90%+. The built code is deterministic (it will never change after build), and as a programmer you can validate the code that will be executed. It compiles to C#, so it handles GC, encoding, and the other things languages need to solve, and I can focus on other areas.<p>Plang also has some features that other languages don't have, e.g. events on variables, built-in identity, and an interesting (I think) approach to privacy.<p>I have not been advertising it much, since it is still in early development and I still make too many breaking changes, but help is welcome (and needed), so if this is something that interests you, the repo is at <a href="https://github.com/plangHQ">https://github.com/plangHQ</a>
It's going to be really fascinating to see this applied in place of chain of thought and other reasoning approaches, because it's generic: it should in principle work with every kind of LLM.
I wrote a toy language along these lines a while back[0]. Basically, types and function signatures, with comments in English, produce a valid program: you write a type and a comment, and the compiler goes through GPT to run the code. A fun, novel idea.<p>[0] - <a href="https://github.com/eeue56/neuro-lingo">https://github.com/eeue56/neuro-lingo</a>
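The core trick is small enough to sketch in a few lines of Python (ask_llm here is a placeholder for whatever model call you'd wire in, returning Python source for the body):

    import inspect

    def ask_llm(prompt: str) -> str:
        # Placeholder: a real implementation would call a model here.
        return "    return a + b"

    def neuro(fn):
        # Rebuild fn with a body generated from its signature and docstring.
        sig = inspect.signature(fn)
        prompt = f"Implement: def {fn.__name__}{sig}  # {fn.__doc__}"
        namespace = {}
        exec(f"def {fn.__name__}{sig}:\n{ask_llm(prompt)}\n", namespace)
        return namespace[fn.__name__]

    @neuro
    def add(a: int, b: int) -> int:
        """Add the two numbers."""

    print(add(2, 3))  # 5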
Researchers are trying their damnedest to build a "reasoning" layer using LLMs as the foundation. But they need to go back to the drawing board and understand from first principles what it means to reason. For this, in my view, they need to go back to epistemology (and refer to Peirce and logicians like him).
Seeing this makes me want to reactivate an old project[0]. Been thinking more and more that LLMs could give it superpowers.<p>[0] <a href="https://pypi.org/project/neulang/" rel="nofollow">https://pypi.org/project/neulang/</a>