The article contains a reference to a much more impressive task where a user automatically decompiled a binary exe game into Python. But I read their original post, and here is what that user said:

> Several critics seemed to assume I claimed Claude had "decompiled" the executable in the traditional sense. In reality, as I described in our conversation, it analyzed visible strings and inferred functionality - which is still impressive but different from true decompilation.

So I'm not sure that the implications are as big as the article author is claiming. It seems Claude is good at de-minifying JavaScript, but that is a long way away from decompiling highly optimized binary code.
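For the curious, "analyzed visible strings" is a far weaker operation than decompilation — it's essentially what the Unix `strings` tool does. A minimal sketch of the idea in Node/TypeScript (the `game.exe` path is hypothetical):

```ts
// Pull printable ASCII runs out of a binary - the same idea as `strings`.
// No control flow or logic is recovered; only embedded text.
import { readFileSync } from "node:fs";

function extractStrings(path: string, minLength = 4): string[] {
  const bytes = readFileSync(path);
  const results: string[] = [];
  let current = "";
  for (const b of bytes) {
    if (b >= 0x20 && b <= 0x7e) {
      current += String.fromCharCode(b); // printable ASCII: keep accumulating
    } else {
      if (current.length >= minLength) results.push(current);
      current = "";
    }
  }
  if (current.length >= minLength) results.push(current);
  return results;
}

console.log(extractStrings("game.exe").slice(0, 50)); // hypothetical binary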
I'm not sure why this is framed as an issue for security teams. Transpiling software has been a thing for ages, especially in the JS world. Decompiling has been a bit harder without automation, but unless you have black-box tests, it will take ages to verify that the result has matching functionality.

So why would the blue teams care beyond "oh fun, a new tool for speeding up malware decompilation"?

Edit: To be clear, I get that the reverse engineering and reimplementation possibilities just got much better and simpler. But the alarmist tone seems weird.
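To make the black-box-testing point concrete, verification would have to look something like the sketch below: run the original and the rewritten build against the same inputs and diff the output. The paths and test cases here are hypothetical, and real coverage would need far more than this:

```ts
// Minimal black-box equivalence check between two CLI builds.
// Matching stdout on a handful of cases proves very little; mismatches
// at least prove the rewrite is not faithful.
import { execFileSync } from "node:child_process";

const cases: string[][] = [["--help"], ["--version"]]; // hypothetical cases

for (const args of cases) {
  const original = execFileSync("node", ["original/cli.mjs", ...args], { encoding: "utf8" });
  const rewrite = execFileSync("node", ["rewrite/cli.mjs", ...args], { encoding: "utf8" });
  console.log(args.join(" "), original === rewrite ? "MATCH" : "MISMATCH");
}
```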
Erm, sure... so is the output actually any good? I don't think anyone doubted that the LLM could produce some output, but I would like to know if it is actually good output. Does it compile? Does it make sense?
Three years ago, you wrote:

> Systemically, I'm concerned that there is a lack of professional liability, rigorous industry best practices, and validation in the software industry which contributes to why we see Boeings flying themselves into the ground, financial firms losing everyone's data day in and out, and stories floating around our industry publications about people being concerned about the possibility of a remotely exploitable lunar lander on Mars.

> There's a heap of [comical?] tropes in the software industry that are illogical/counterproductive to the advancement of our profession and contribute to why other professions think software developers are a bunch of immature spoiled children that require constant supervision.

Three weeks ago you posted something titled "The future belongs to people who can just do things".

Today you post this:

> Because cli.mjs is close to 5mb - which is way bigger than any LLM context window out here. You're going to need baby sit it for a while and feed it reward tokens of kind words ("your doing good, please continue") and encourage it to keep on going on - even if it gives up. It will time out, lots...

I don't think you are someone who can just "do things" if you think a good way to de-obfuscate 5MB of minified JavaScript is to pass it to a massive LLM.

Do you think you are advancing your profession?
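(Aside: the babysitting could at least be scripted. A rough sketch of chunking the bundle to fit a context window — `sendToModel` is a hypothetical placeholder, not a real API, and fixed-size splitting would break statements mid-token in practice:)

```ts
// Rough sketch: split a ~5 MB minified bundle into pieces small enough for
// a model's context window and process each piece in order. A serious
// attempt would split on statement boundaries, not fixed offsets.
import { readFileSync, appendFileSync } from "node:fs";

const CHUNK_SIZE = 100_000; // characters per request, chosen arbitrarily

// Hypothetical placeholder - wire up an actual LLM API call here.
async function sendToModel(prompt: string): Promise<string> {
  throw new Error("not implemented: call your model of choice");
}

async function deobfuscate(inPath: string, outPath: string): Promise<void> {
  const source = readFileSync(inPath, "utf8");
  for (let offset = 0; offset < source.length; offset += CHUNK_SIZE) {
    const chunk = source.slice(offset, offset + CHUNK_SIZE);
    const readable = await sendToModel(
      `Rewrite this minified JavaScript fragment with readable names:\n${chunk}`
    );
    appendFileSync(outPath, readable + "\n");
  }
}

deobfuscate("cli.mjs", "cli.deobfuscated.js").catch(console.error);
```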
This feels very much like the work of someone with 'just enough knowledge to be dangerous'.

At no point in this process does the author seem to stop and inspect the results to see if they actually amount to what he's asking for. Claiming that this output represents a decompilation of the obfuscated target seems to require at least demonstrating that the resulting code produces an artifact that does the same thing.

Further, the claim that "Using the above technique you can clean-room any software in existence in hours or less." is horrifyingly naive. This would in no way be considered a 'clean room' implementation of the supplied artifact. It's explicitly a derived work based on detailed study of the published, copyrighted artifact.

Please step away from the LLM before you hurt someone.
> You might be wondering why I've dumped a transpilation of the source code of Claude Code onto GitHub and the reason is simple. I'm not letting an autonomous closed source agent run hands free on my infrastructure and neither should you.

Asking it for its source code (AI never lies, right?) and then buying it on your personal card so corporate security doesn't know what you're doing makes me feel a lot better about it.
> Please understand that restrictive software licenses no longer matter because these LLMs can be driven to behave like Bitcoin mixers that bypass licensing and copyright restrictions using the approach detailed in this blog post.

This reads to me like "Please understand that legal protections no longer matter because computers can now break the law for you automatically".
This is total bullshit. It's clear after spending two minutes with the output, located at https://github.com/ghuntley/claude-code-source-code-deobfuscation.

The AI has just made educated guesses about the functionality, written some sensible-looking code, and hallucinated a whole lot.

The provided code on GitHub does not compile, does not work in the slightest, does not include any of the prompts from the original source, does not contain any of the API URLs and endpoints from the original, and uses *Claude 3 Opus*! And this is just from a cursory five-minute look.
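A cheap way to check claims like this yourself: extract the long string literals (prompts, endpoints) from the original bundle and count how many survive in the rewrite. A rough sketch, with hypothetical paths and a deliberately crude regex:

```ts
// Crude fidelity check: if the rewrite really came from the original, its
// prompts and API URLs should reappear verbatim somewhere in the output.
import { readFileSync } from "node:fs";

function stringLiterals(source: string): Set<string> {
  const literals = new Set<string>();
  // Roughly match double-quoted literals of 8+ chars; not a real JS tokenizer.
  for (const match of source.matchAll(/"((?:[^"\\\n]|\\.){8,})"/g)) {
    literals.add(match[1]);
  }
  return literals;
}

const original = stringLiterals(readFileSync("original/cli.mjs", "utf8"));
// Hypothetical: concatenation of every source file in the rewritten repo.
const rewrite = readFileSync("rewrite/all-sources.txt", "utf8");

let kept = 0;
for (const lit of original) if (rewrite.includes(lit)) kept++;
console.log(`${kept}/${original.size} long string literals survived the rewrite`);
```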
A better writeup on reverse engineering CC: https://github.com/Yuyz0112/claude-code-reverse
> Please understand that restrictive software licenses no longer matter because these LLMs can be driven to behave like Bitcoin mixers that bypass licensing and copyright restrictions using the approach detailed in this blog post.

I'm pretty sure translation of a text into another language would still count as copyright infringement. It may be hard to prove, but this isn't a copyright bypass.
I don't understand Anthropic's decision to release this project as an npm package but not open-source it. Claude Code is such a great example of how agents could work in the future that the whole community could benefit from studying it. Plus, the work on integrating MCPs alone could create a huge network-effect opportunity for them, one that's much bigger than keeping the source code secret.

All they've done so far is add an unnecessary step by putting a bounty on who will be first to extract all the prompts and the agent orchestration layer.
I just inherited a Flutter project with no README, and I have no prior Flutter experience. AI helps, but adding new features and deploying is still a tall task. Having a conversation with the previous contributors is invaluable and somehow underrated these days.
> cli.mjs

> This is the meat of the application itself. It is your typical commonjs application which has been compiled from typescript.

Why is it .mjs then?
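(The point being: the .mjs extension tells Node to treat the file as an ES module, which is precisely not CommonJS. A quick contrast:)

```ts
// CommonJS style - what you'd expect in a .cjs (or default .js) file:
//   const { readFileSync } = require("node:fs");
//   module.exports = { run };

// ES module style - what the .mjs extension signals to Node:
import { readFileSync } from "node:fs";

export function run(path: string): string {
  return readFileSync(path, "utf8");
}
```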
> these LLMs are shockily good at transpilation and structure to structure conversions

I wonder if it is possible to transpile all the C extension modules for Python to an API version that has no GIL, this way.
It has always been possible to decompile and deobfuscate code. This makes it way, way easier, though it still requires effort. What this produces is not going to be perfect.

The author thinks this invalidates the business models of companies with closed source or mixed open and closed components. This misunderstands why companies license software. They want to be compliant with the license, and they want support from the team that builds the software.

Yes, hustlers can and will fork things just like they always have. There are hustlers that will fork open source software and turn it into proprietary stuff for app stores, for example. That's a thing right now. Or even raise investment money on it (IMHO this is borderline fraud if you aren't adding anything). Yet the majority of them will fail long term because they will not be good at supporting, maintaining, or enhancing the product.

I don't see why this is so apocalyptic. It's also very useful for debugging and for security researchers. It makes it a lot easier to hunt for bugs or back doors in closed software.

The stuff about Grok planning a hit on Elon is funny, but again not apocalyptic. The hard part about carrying out a hit is doing the thing, and someone who has no clue what they're doing is probably going to screw that up. Anyone with firearms and requisite tactical training probably doesn't need much help from an LLM. This is sensationalism.

I've also seen stuff about Grok spitting out how to make meth. So what? You can find guides on making meth -- whole PDF books -- on the clear web, and even more on dark web sites. There are whole forums. There's even subreddits that do *not not* (wink wink nudge nudge) provide help for people cooking drugs. This too is AI doom sensationalism. You can find designs for atomic bombs too. The hard part about making an a-bomb is getting the materials. The rest could be done by anyone with grad-level physics knowledge, a machine shop, and expertise in industrial and electrical engineering. If you don't have the proper facilities you might get some radiation exposure, though.

There is *one* area that does alarm me a little: LLMs spitting out detailed info on chemical and biological weapons manufacture. This is less obvious and less easy to find. Still: if you don't have the requisite practical expertise you will probably kill yourself trying to do it. So it's concerning but not apocalyptic.
I found this article [0] by the same author, linked in the post, more personally valuable - great insights into expert-level use of Cursor.

[0] https://ghuntley.com/stdlib/