Reverse engineering a neural network's clever solution to binary addition

562 points by Ameo over 2 years ago

25 comments

kens over 2 years ago
The trick of performing binary addition by using analog voltages was used in the IAS family of computers (1952), designed by John von Neumann. It implemented a full adder by converting two input bits and a carry-in bit into voltages that were summed. Vacuum tubes converted the analog voltage back into bits by using a threshold to generate the carry-out and more complex thresholds to generate the sum-out bit.
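For concreteness, a minimal sketch of that scheme (the threshold values here are illustrative, not the actual IAS circuit):

```python
def analog_full_adder(a, b, carry_in):
    """Full adder the IAS way: sum the bits as one analog level,
    then threshold that level back into digital outputs."""
    level = a + b + carry_in                 # "voltage" is just the analog sum, 0..3
    carry_out = 1 if level >= 2 else 0       # simple threshold at 2
    sum_out = 1 if level % 2 == 1 else 0     # the "more complex" threshold: odd level
    return sum_out, carry_out

# 1 + 1 with carry-in 1 -> sum 1, carry 1 (binary 11)
print(analog_full_adder(1, 1, 1))  # (1, 1)
```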
greenbit over 2 years ago
That's awesome - the network's solution is essentially to convert the input to analog, *perform the actual addition in analog*, and then convert that back to digital. And the first two parts of that all happened in the input weights, no less.
dagss over 2 years ago
Impressive. Am I right that what is really going on here is that the network is implementing the "+" operator by actually having the CPU of the host carry out the "+" when executing the network?

I.e., the network converts the binary input and output to floating point, and then it is the CPU of the host the network is running on that *really* does the addition in floating point.

So usually one does a bunch of FLOPs to get "emerging" behaviour that isn't doing arithmetic. But in this case, instead, the network does a bunch of transforms in and out so that at a critical point, the addition executed in the network runner is used for exactly its original purpose: addition.

And I guess saying it is "analog" is a good analogy for this.
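A toy illustration of that reading (hand-picked weights, not the trained network's): fold the powers-of-two "DAC" weights for both operands into a single weight row, and the floating-point accumulation inside the resulting dot product is where the host CPU actually does the addition.

```python
import numpy as np

powers = 2.0 ** np.arange(8)                      # 1, 2, 4, ..., 128 (LSB first)
W = np.concatenate([powers, powers])              # one weight row over both operands

x = np.array([1, 0, 1, 1, 0, 0, 0, 0,             # 13 (LSB first)
              0, 1, 1, 0, 0, 0, 0, 0], float)     # 6  (LSB first)

# The "addition" is the floating-point accumulation inside this dot product.
print(W @ x)                                      # 19.0
```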
jccalhoun over 2 years ago
These stories remind me of a story from Discover Magazine: https://www.discovermagazine.com/technology/evolving-a-conscious-machine A researcher used a process to "evolve" an FPGA, and the result was a circuit that was super efficient but worked in unexpected ways: part of the circuit seemed unconnected to the rest, yet if it was removed the whole thing stopped working, and it would only work at a specific temperature.
aargh_aargh over 2 years ago
The essay linked from the article is interesting: http://www.incompleteideas.net/IncIdeas/BitterLesson.html
puttycat over 2 years ago
Recent work on NNs in which applying Kolmogorov-complexity optimization leads to a network that solves binary addition with a carry-over counter and 100% provable accuracy (see the Results section):

https://arxiv.org/abs/2111.00600
moyix over 2 years ago
Related: an independent reverse engineering of a network that "grokked" modular addition:

https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking

One interesting thing is that this network similarly does addition using a Fourier transform.
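A compact way to see why Fourier features are natural for modular addition (a generic convolution-theorem sketch, not the circuit that post reverse engineers): adding two residues is a circular convolution of their one-hot encodings, and convolution becomes a pointwise product after an FFT. The modulus 113 is, if I remember right, the one used in that work; any modulus works for this sketch.

```python
import numpy as np

p = 113
a, b = 40, 100                      # (40 + 100) % 113 == 27

one_hot_a = np.zeros(p); one_hot_a[a] = 1.0
one_hot_b = np.zeros(p); one_hot_b[b] = 1.0

# Circular convolution of the two one-hots peaks at (a + b) mod p,
# and convolution is just a pointwise product in Fourier space.
spectrum = np.fft.fft(one_hot_a) * np.fft.fft(one_hot_b)
result = int(np.argmax(np.real(np.fft.ifft(spectrum))))
print(result)                       # 27
```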
rehevkor5 over 2 years ago
Not too surprising. Each layer can multiply each input by a set of weights and add them up. That's all you need in order to convert from base 2 to base 10. In base 2, we calculate the value of a set of digits by multiplying each by 2^0, 2^1, ..., 2^n; each weight is twice the previous. The activation function is just making it easier to center those factors on zero to reduce the magnitude of the regularization term. I am guessing you could probably get the job done with just two neurons in the first layer, which output the real value of each input, one neuron in the second layer to do the addition (weights 1, 1), and 8 neurons in the final layer to convert back to binary digits.
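Roughly the hand-built adder this comment sketches, with explicit weights (a sketch only: the re-binarization in the last step is done with plain bit extraction here, since a purely linear output layer can't produce digits on its own):

```python
import numpy as np

N_BITS = 8
powers = 2.0 ** np.arange(N_BITS)        # weights 1, 2, 4, ..., 128 (LSB first)

def tiny_adder(bits_a, bits_b):
    # Layer 1: two "neurons", each a dot product that decodes one operand to a real value.
    a_val = np.dot(bits_a, powers)
    b_val = np.dot(bits_b, powers)
    # Layer 2: one neuron with weights (1, 1) performs the addition.
    total = 1.0 * a_val + 1.0 * b_val
    # Layer 3: 8 output units re-binarize the sum; here each unit just reads off one bit.
    return [(int(total) >> i) & 1 for i in range(N_BITS)]

print(tiny_adder([1, 0, 1, 1, 0, 0, 0, 0],     # 13, LSB first
                 [0, 1, 1, 0, 0, 0, 0, 0]))    # 6  -> 19 = [1,1,0,0,1,0,0,0]
```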
MontyCarloHall over 2 years ago
A couple of questions:

1. How much of this outcome is due to the unusual (pseudo-)periodic activation function? A lot of the DAC-like behavior seems to come from the periodicity of the first layer's output, which in turn seems to be due to the unique activation function.

2. Would the behavior of the network change if the binary strings were encoded differently? The author encodes them as 1D arrays with 1 corresponding to 1 and 0 corresponding to -1, which is an unusual way of doing things. What if the author encoded them as literal binary arrays (i.e. 1 -> 1, 0 -> 0)? What about one-hot arrays (i.e. 2D arrays with 1 -> [1, 0] and 0 -> [0, 1]), which is the most common way to encode categorical data? (The three encodings are sketched below.)
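For reference, the three encodings being compared, applied to the 4-bit string 1010 (a toy illustration; the labels are this comment's, not the article's):

```python
import numpy as np

bits = [1, 0, 1, 0]                                            # the string 1010, MSB first

signed  = np.array([1 if b else -1 for b in bits])             # article's encoding: 1 -> 1, 0 -> -1
binary  = np.array(bits)                                       # literal bits: 1 -> 1, 0 -> 0
one_hot = np.array([[1, 0] if b else [0, 1] for b in bits])    # categorical: 1 -> [1,0], 0 -> [0,1]

print(signed)    # [ 1 -1  1 -1]
print(binary)    # [1 0 1 0]
print(one_hot)   # one row per bit: [1 0], [0 1], [1 0], [0 1]
```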
hoseja over 2 years ago
Reminded me of the 0xFFFF0000 0xFF00FF00 0xF0F0F0F0 0xCCCCCCCC 0xAAAAAAAA trick, the use of which I don't currently recall...

edit: several, but probably not very relevant: https://graphics.stanford.edu/~seander/bithacks.html
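One classic use of exactly that mask set, which appears on the linked bithacks page if I recall correctly, is computing the integer log2 of a power of two; a quick sketch:

```python
# Each mask has 1s exactly where bit i of a bit-position is 1, so each
# mask test contributes one bit of the answer.
MASKS = [0xAAAAAAAA, 0xCCCCCCCC, 0xF0F0F0F0, 0xFF00FF00, 0xFFFF0000]

def log2_of_power_of_two(v: int) -> int:
    """Position of the single set bit in v (v must be a power of two, < 2**32)."""
    r = 0
    for i, mask in enumerate(MASKS):
        r |= ((v & mask) != 0) << i
    return r

print(log2_of_power_of_two(1 << 21))  # 21
```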
AstralStorm over 2 years ago
When you expect your network to learn a binary adder and it learns FFT addition instead. :)

(For an ANN, FFT is more natural, as it's a projection algorithm.)
juunge over 2 years ago
Awesome reverse engineering project. Next up: reverse engineer an NN's solution to the Fourier transform!
WalterBright over 2 years ago
In a similar vein, in the 1980s a "superoptimizer" was built for optimizing snippets of assembler by doing an exhaustive search of all combinations of instructions. It found several delightful and unexpected optimizations, which were quickly incorporated into compiler code generators.
wwarner over 2 years ago
So important. Here’s a prediction: work like this on model explainability will continue and will eventually reveal patterns in the input that reliably produce features in the resulting models. That knowledge will tell us about *nature*, in addition to ML.
dejongh over 2 years ago
Interesting idea to reverse engineer the network. Are there other sources that have done this?
nerdponx over 2 years ago
I liked this article a lot, but

> One thought that occurred to me after this investigation was the premise that the immense bleeding-edge models of today with billions of parameters might be able to be built using orders of magnitude fewer network resources by using more efficient or custom-designed architectures.

Transformer units themselves are already specialized things. Wikipedia says that GPT-3 is a standard transformer network, so I'm sure there is additional room for specialization. But that's not a new idea either, and it's often the case that after a model is released, smaller versions tend to follow.
GeorgeTirebiter over 2 years ago
Small nit: the analog DAC circuit has MSB and LSB reversed. That is, the LSB goes into the top 128k resistor. If you think of it one bit at a time, the gain of the circuit is -Rf/R, where R is one of the resistors in the ladder. (Rf is the feedback resistor, which is unlabeled and unvalued in the article.) Clearly, you get bigger output when R is smaller.
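The arithmetic behind that nit, for an inverting binary-weighted DAC (a sketch with assumed values: Rf is unlabeled in the article, so the 1k here is made up, and the 1k-128k ladder just follows the 128k top resistor mentioned above):

```python
# Each set bit contributes -Rf/R_i * Vref to the output, so the smallest
# resistor gives the largest gain and must therefore carry the MSB.
RF = 1e3                                       # feedback resistor (assumed value)
VREF = 1.0                                     # logic-high voltage
resistors = [1e3 * 2**i for i in range(8)]     # 1k (MSB) ... 128k (LSB)

def dac_out(bits_msb_first):
    return -sum(VREF * RF / r for bit, r in zip(bits_msb_first, resistors) if bit)

print(dac_out([1, 0, 0, 0, 0, 0, 0, 0]))   # MSB alone: -1.0
print(dac_out([0, 0, 0, 0, 0, 0, 0, 1]))   # LSB alone: -0.0078125 (1/128 of the MSB)
```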
agapon over 2 years ago
Very interesting. But I missed how the network handled the overflow.

Spotted one small typo: "digital to audio converter" should be "digital to analog".
KhoomeiK over 2 years ago
A more theoretical but increasingly popular approach to identifying behavior like this is interchange intervention: https://ai.stanford.edu/blog/causal-abstraction/
imtringued over 2 years ago
I don't see how this is surprising. It's cool, but come on: you have a network consisting of addition and multiplication and you expect it not to use them? What else is it supposed to do? Memorize every input-output pair like a lookup table?
etherio over 2 years ago
Relevant: reverse engineering modular addition: https://twitter.com/NeelNanda5/status/1559060507524403200
YesThatTom2 over 2 years ago
Is this going to change how 8-bit adders are implemented in chips?
allisdust over 2 years ago
Totally off topic: any pointers to tutorials/articles that teach implementation of common neural network models in JavaScript/Rust (from scratch)?
nnurmanov over 2 years ago
I wonder if it is possible to reverse engineer passwords. One could input passwords and train on known values. What should one expect?
2bitencryption over 2 years ago
> One thought that occurred to me after this investigation was the premise that the immense bleeding-edge models of today with billions of parameters might be able to be built using orders of magnitude fewer network resources by using more efficient or custom-designed architectures.

I'm pretty convinced that something equivalent to GPT could run on consumer hardware today, and the only reason it doesn't is that OpenAI has a vested interest in selling it as a service.

It's the same as Dall-E and Stable Diffusion: Dall-E makes no attempt to run on consumer hardware because it benefits OpenAI to make it so large that you must rely on someone with huge resources (i.e. them) to use it. Then some new research shows that effectively the same thing can be done on a consumer GPU.

I'm aware that there are plenty of other GPT-like models available on Huggingface, but (to my knowledge) there is nothing that reaches the same quality that can run on consumer hardware - *yet*.