Huh. This is a very... "interesting" application for an LLM. I'm not the brightest crayon in the box, but if anyone else would like to follow along with my non-expert opinion as I read through the paper, here's my take on it.

It's pretty important for compilers / decompilers to be reliable and accurate -- deterministic, predictable compiler behavior is a fundamental assumption of any build pipeline.

LLMs are inherently unpredictable, so using an LLM for compilation / decompilation -- even one with 99.99% accuracy -- feels like an odd piece to include in my build pipeline.

That said, let's look at the paper and see what they did.

They essentially started with Code Llama and then further trained the model on three tasks -- one primary and two downstream.

The first task is compilation: given input code and a set of compiler flags, can the model predict the output assembly? Since correctness can't be verified without running a traditional compiler anyway, this feels of limited use on its own. However, training on it as the primary task enables a couple of downstream tasks. Namely:

The second task (and first downstream task) is compiler flag tuning: predicting the set of flags that yields the smallest assembly. It's a bit disappointing that they only optimize for assembly size (and not execution speed), but it's not without its uses. And because the output of this task (a set of compiler flags) is then passed to a deterministic function (a traditional compiler), the unpredictability of the LLM is mitigated.

The third task (second downstream task) is decompilation. This is not the first time LLMs have been trained for decompilation -- but because of the pretraining on the primary task, the authors argue their approach has advantages over previous ones. Sadly, they only compare LLM Compiler against Code Llama and GPT-4 Turbo, and not against any other LLMs fine-tuned for decompilation, so it's difficult to see in context how much better their approach really is.

Regarding verifiability of the decompilation, the authors note there are correctness issues, so they employ round-tripping: recompiling the decompiled code (with the same compiler flags) and checking for an exact match against the original assembly (rough sketch of what that check might look like at the bottom of this comment). Even so, accuracy lands around 45% (if I'm reading their numbers right), so it's not entirely trustworthy yet -- but it might still be useful, especially alongside a traditional decompiler, with this model's output only used when it verifiably round-trips.

Overall I'm happy to see this model released, as it seems like an interesting use case. I may need to read more, but at first blush I'm not immediately excited by the possibilities it unlocks. Most of all, I'd like to see whether these methods could be extended to optimize for performance -- not just assembly size.
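
For anyone curious, here's roughly how I picture that round-trip check working -- a minimal sketch, not the authors' actual harness; the compiler (clang), file names, and flag handling are all my own assumptions:

    # Sketch of a round-trip check: recompile the model's decompiled C with the
    # same flags and compare against the original assembly (exact text match).
    # clang, file names, and flags are assumptions, not taken from the paper.
    import subprocess

    def compile_to_asm(source_path: str, flags: list[str]) -> str:
        """Compile a C source file to assembly text and return it."""
        result = subprocess.run(
            ["clang", "-S", *flags, source_path, "-o", "-"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    def round_trips(original_asm: str, decompiled_source: str, flags: list[str]) -> bool:
        """Write out the decompiled source, recompile it, and check for an exact match."""
        with open("decompiled.c", "w") as f:
            f.write(decompiled_source)
        return compile_to_asm("decompiled.c", flags).strip() == original_asm.strip()

Only outputs that pass a check like this would be trusted; everything else falls back to a traditional decompiler.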