TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


Converting C to ASM to specs and then to a working Z/80 Speccy tape

80 points · by ghuntley · 2 months ago

11 comments

ohmygoodniche · 2 months ago
The amount of cognitive dissonance here is interesting.

I compiled C to asm. Title says the LLM did this. It works! But it's broken. It generated a bunch of other files! But I only need one. It couldn't target Z80 so I was a human in the loop. You have to trust it and understand how the black box works to get n-factor gains. But no one knows how these tools actually work and general advice is NOT to trust LLM outputs, and the author didn't trust them either... And even the final result has the incorrect tax rates...

I'm not denying LLMs can sort of rewrite small chunks of code in other languages, add comments to code, etc., but the way people talk about them is so snake-oily.

Going by any of the major bullet points I would say that the title is wrong, and misleading at best.
Comment #43387804 not loaded
Comment #43388033 not loaded
Comment #43387756 not loaded
Comment #43387526 not loaded
beagle3 · 2 months ago
Interesting. Surprisingly, it decided to encode the multiplication and division as addition/subtraction loops, which is incredibly inefficient - multiplying e.g. 32,000 by 32,000 (ignoring the overflow ...) will take 1,024,000,000 iterations, so thousands of seconds on the Speccy's humble 4 MHz Z80 (8 instructions, each taking at least 2 T-states ...)

Here is the multiplication loop (division is similar but in reverse, subtracting instead of adding).

    mult_loop:
        ; Check if BC is zero
        ld a, b
        or c
        jr z, mult_done
        ; Add HL to result
        ex de, hl      ; DE = multiplier, HL = result
        add hl, de     ; Add multiplier to result
        ex de, hl      ; DE = result, HL = multiplier
        ; Decrement counter
        dec bc
        jr mult_loop
Comment #43391467 not loaded
Comment #43387651 not loaded
Comment #43403457 not loaded
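The repeated-addition loop above costs one iteration per unit of the multiplier; the classic Z80 fix is shift-and-add, which makes exactly one pass over the 16 bits of the multiplier regardless of its value. A minimal Python sketch of both approaches, modelling 16-bit register wraparound (this is an illustration of the general technique, not the article's generated code):

```python
def mult_repeated_add(a: int, b: int) -> tuple[int, int]:
    """What the generated Z80 code does: add `a` to the result `b` times.

    Returns (product mod 2**16, number of loop iterations taken).
    """
    result, count = 0, 0
    while b:
        result = (result + a) & 0xFFFF  # 16-bit add with wraparound
        b -= 1
        count += 1
    return result, count


def mult_shift_add(a: int, b: int) -> int:
    """Classic 16-bit shift-and-add multiply: always 16 iterations.

    Scans the multiplier's bits from the most significant down,
    doubling the running result and adding the multiplicand when
    the current bit is set - the software analogue of long
    multiplication in base 2.
    """
    result = 0
    for i in range(15, -1, -1):
        result = (result << 1) & 0xFFFF      # like: add hl, hl
        if (b >> i) & 1:                     # test next multiplier bit
            result = (result + a) & 0xFFFF   # like: add hl, de
    return result
```

For 300 × 200, the repeated-addition version loops 200 times while shift-and-add always does 16 iterations; both agree modulo 2^16, which is all a 16-bit register pair can hold anyway.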
WalterBright · 2 months ago
I've done three major conversions of assembly to C:

1. My Empire game was written in assembler on the PDP-11. To port it to the IBM PC, I rewrote it line by line into C.

2. FutureNet's schematic editor for the IBM PC was a very large assembler program. It was written in assembler because there was no other way to make it fit on the PC. I converted it line by line into C so it would run on a Sun. It took a couple of months.

3. I converted some of the Optlink linker, a 32-bit Windows linker, into C. The main difficulty, though, was that there was no test suite, and it was very, very hard to get right. I got maybe 10% of it converted. I eventually abandoned the project because 32-bit Windows code is now obsolete. The other hard part of the project was that it was not conventionally divided up into functions. Code would jump into the middle of other functions. Another problem was that its use of macros and conditional assembly was so hard to decipher that I would assemble the code, run it through my disassembler, and use the latter as my starting point. (Macros are Spawn of the Devil.)
Comment #43396839 not loaded
fancyfredbot · 2 months ago
As someone who is generally very excited about the potential of LLMs to improve developer productivity, I find this article a bit frustrating.

This isn't a productive way to use an LLM. The example was so trivial you could easily rewrite it from scratch in less time.

That would be irrelevant if the process scaled to more complex applications, but the blog shows it repeatedly fails to understand or implement even the simple example. This left me with little hope that the technique scales.

I worry that creating so much hype will lead to some kind of backlash.
Comment #43387939 not loaded
stevekemp · 2 months ago
At some point the food tax dropped from 10% to 5%, which I guess is good for people who need to eat!

I still do a fair bit of Z80 programming for myself, and I'm very familiar with the Spectrum, so this was a nice article to see, but it's a bit underwhelming how well this seemed to go.
Comment #43387593 not loaded
Comment #43387590 not loaded
aNoob7000 · 2 months ago
Are there any demos showing how to manage an existing codebase? Everyone loves to demo how AI can create new programs with a snap, but the elephant in the room is how well AI works with existing codebases and manages things like naming conventions, APIs from other apps, etc.
Comment #43387791 not loaded
hakaneskici · 2 months ago
Thanks for publishing this.

Can you also share your opinion comparing the "code to spec" vs "spec to code" phases?

I'm wondering if the LLM considers "code" and "spec" as two separate programming languages, or one as a programming language and the other as a human language? Not sure if it makes a difference for its internal translation logic, if that makes sense.

PS: I learned BASIC on a friend's ZX Spectrum, and your post brought back some forgotten childhood memories :) Extra thanks.
Comment #43388330 not loaded
consumer451 · 2 months ago
I just learned something about LLMs: start a new chat as often as possible. I knew that this was a best practice, but I didn't know how quickly the situation becomes dire.

> We evaluate 12 popular LLMs that claim to support contexts of at least 128K tokens. While they perform well in short contexts (<1K), performance degrades significantly as context length increases. At 32K, for instance, 10 models drop below 50% of their strong short-length baselines. Even GPT-4o, one of the top-performing exceptions, experiences a reduction from an almost-perfect baseline of 99.3% to 69.7%. [0]

I copied all the code that I could from TFA and pasted it into OpenAI's tokenizer. It counted ~15k tokens. Many other tokens were generated in TFA's chat, some of which are not visible to the user. I think it's fair to assume that the entire chat was at least 25k tokens, right? Therefore, I believe that by the end of that chat, 4o's performance was significantly degraded.

I think a major skill to develop for LLM-supported coding is to compress a chat after just a few thousand tokens into something like a step1.md file. Then start a new chat with "read step1.md" as the first prompt, and so on.

Is my logic sound here?

[0] https://arxiv.org/abs/2502.05167
Comment #43388243 not loaded
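The compress-and-restart workflow described above can be automated with a crude length check. A sketch, where the ~4-characters-per-token ratio and the 25k-token budget are rough assumptions (real tokenizers like tiktoken give exact counts; the function names here are hypothetical):

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for
    English prose and code. Good enough to decide when a chat
    is approaching the range where long-context degradation
    becomes a concern."""
    return len(text) // 4


def should_start_new_chat(chat_history: list[str], budget: int = 25_000) -> bool:
    """Return True once the accumulated chat likely exceeds `budget`
    tokens - the signal to summarize the session into a stepN.md file
    and open a fresh chat seeded with that summary."""
    total = sum(estimate_tokens(message) for message in chat_history)
    return total > budget
```

The point is not the exact threshold but making the decision mechanical: rather than guessing when a session has grown stale, check the running estimate after each exchange and roll over as soon as it trips.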
itishappy · 2 months ago
Describing an annotated ASM program is something I would have thought LLMs should be perfect for, but it failed immediately. The author really published a whole blog post without even mentioning that *core business logic* wasn't working.

I admittedly don't really understand the purpose behind any of the intermediate LLM steps here. I'd have assumed an ASM-to-ASM translation would make the most sense and be the least error-prone.
feverzsj · 2 months ago
Did he input all the prompts in the same session? It's kinda nonsense BS. Any disassembler is more useful than this.
Comment #43387565 not loaded
Comment #43387566 not loaded
DeathArrow · 2 months ago
TLDR: No, it can not.