TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.
Diffusion on syntax trees for program synthesis

401 points | by pouwerkerk | 12 months ago

20 comments

zelphirkalt | 12 months ago
This sounds similar to what people have done with Racket and hint generation for MOOCs. I'm not sure which university it was, but I saw a presentation about how they generate hints for students by mutating the syntax tree and analyzing how it had to be modified to reach a target solution. It was presented at some RacketCon, maybe a decade ago already. Perhaps that knowledge of how to do it can be combined with newer machine learning approaches?

EDIT: I found the talk: https://invidious.baczek.me/watch?v=ijyFC36kVis
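A minimal sketch of that hint-generation idea, with toy ASTs as nested tuples (the node shapes and helper names here are made up for illustration): find the first subtree where the student's program diverges from a target solution and point the hint there.

```python
# Toy ASTs as nested tuples: (op, child, child) or a leaf value.
# tree_diff returns the path to the first diverging subtree -- the
# basis for a "look here" style hint.
def tree_diff(student, target, path=()):
    if student == target:
        return None
    if (isinstance(student, tuple) and isinstance(target, tuple)
            and len(student) == len(target) and student[0] == target[0]):
        # Same operator: recurse into children to localize the difference.
        for i, (s, t) in enumerate(zip(student[1:], target[1:]), start=1):
            p = tree_diff(s, t, path + (i,))
            if p is not None:
                return p
        return None
    return path  # this whole subtree differs

student = ("add", ("mul", "x", 2), 1)
target  = ("add", ("mul", "x", 3), 1)
print(tree_diff(student, target))  # (1, 2): second child of the first child
```

A real system would of course normalize equivalent trees first and rank candidate edits, but the core "diff the syntax trees, hint at the divergence" loop is this small.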
pmayrgundter | 12 months ago
It's funny: this kind of subtree mutation was explored pretty deeply by Koza and Adami in the 90s under the rubric of genetic algorithms, but with a slightly different optimization function.

There is one reference in the paper, from 2000, to GAs for fast generation of program trees, but that misses the main show.

I hope the authors are reading this and dig into that work.
JHonaker | 12 months ago
Markov chain Monte Carlo for program synthesis isn't exactly novel. The most immediate reference I thought of is Josh Tenenbaum's [1].

There are also a lot of demos in WebPPL (a web probabilistic programming language) [2], like [3], for the synthesis of 3D spaceships. I highly recommend the associated books, The Design and Implementation of Probabilistic Programming Languages [4] and Probabilistic Models of Cognition [5].

I also highly recommend taking a look at the publications of the MIT Probabilistic Computing Project [6].

[1] Human-level concept learning through probabilistic program induction. https://www.cs.cmu.edu/~rsalakhu/papers/LakeEtAl2015Science.pdf

[2] http://webppl.org/

[3] https://dritchie.github.io/web-procmod/

[4] https://dippl.org/

[5] http://probmods.org/

[6] http://probcomp.csail.mit.edu/
dinobones | 12 months ago
I don't understand the "magic" here.

In a traditional approach, you would generate random images, calculate some distance metric, then use an optimization method like simulated annealing to minimize the distance.

I get that the difference between the image representations is being optimized here, but how is it possible that changing tokens in a program is differentiable?
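For contrast, the traditional non-differentiable pipeline described above can be sketched in a few lines: propose a discrete "token" change, score it by a distance metric, and accept or reject it by the annealing rule. Everything here (the bit-vector "image", the temperature schedule) is a made-up toy, not the paper's method.

```python
import math
import random

random.seed(0)

def distance(img_a, img_b):
    # Sum of squared pixel differences as the distance metric.
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b))

target = [0, 1, 1, 0, 1, 0, 0, 1]            # "image" as a flat bit vector
state = [random.randint(0, 1) for _ in target]

temp = 1.0
while temp > 1e-3:
    # Propose a single-pixel flip: a discrete, non-differentiable change.
    cand = state[:]
    i = random.randrange(len(cand))
    cand[i] ^= 1
    delta = distance(cand, target) - distance(state, target)
    # Accept improvements always; accept regressions with Boltzmann probability.
    if delta <= 0 or random.random() < math.exp(-delta / temp):
        state = cand
    temp *= 0.99

print(distance(state, target))
```

The point of the contrast: annealing never needs gradients, it only needs to score samples, which is also roughly how discrete diffusion sidesteps the "how is a token change differentiable" question, by learning a denoising distribution over discrete edits rather than backpropagating through them.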
can16358p | 12 months ago
I wonder how this would apply to compiler/interpreter optimizations.

Is it possible that it could "dissect" some parts of the execution, perhaps at the assembly level, and come up with optimizations specific to the compiled code, without changing the output (I mean the expected program output, not the emitted binary), that modern compilers have not deterministically come up with?
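The deterministic version of that idea is superoptimization: search for a shorter instruction sequence with identical observable behavior on a set of test inputs. A brute-force toy sketch over a made-up two-operation "ISA" (not real assembly, and real superoptimizers verify equivalence rather than just testing):

```python
import itertools

def run(program, x):
    # Interpret a program for the toy ISA: a list of (op, argument) pairs.
    for op, arg in program:
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
    return x

original = [("add", 2), ("add", 2), ("mul", 2)]   # computes (x + 4) * 2

OPS = [("add", a) for a in range(-4, 9)] + [("mul", a) for a in range(-2, 5)]
tests = range(-5, 6)

found = None
for length in range(1, len(original)):
    # Enumerate all candidate programs of this length, shortest first.
    for cand in itertools.product(OPS, repeat=length):
        if all(run(cand, x) == run(original, x) for x in tests):
            found = list(cand)
            break
    if found:
        break

print(found)  # [('add', 4), ('mul', 2)] -- one instruction shorter
```

A diffusion or search model would replace the exhaustive enumeration with learned proposals, but the "same observable output, smaller program" objective is exactly this.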
whereismyacc | 12 months ago
There's been talk in the past about GitHub adding integrations with common build tools (automatically?). What if you could compile every LLVM-compiled project on GitHub and run a diffusion model over the intermediate representations?
gastonmorixe | 12 months ago
Could diffusion work at the binary level? I mean, could we train a diffusion model to generate the final binary of a program given a prompt? An AST may well be better, but a binary feels extremely easy to at least test quickly, to see whether it works or not. Though there may be a lot of drawbacks, if this is possible I can't wait until we can ask "give me an app that does that" and the diffusion model starts generating all the bytes of the app to do the job. Just wondering.
lwansbrough | 12 months ago
I’d like to see it with SDFs!
flakiness | 12 months ago
The PDF is super slow to render, I guess because it contains commands from programmatically generated figures. It gives it the kind of academic-paper feel I miss these days.

https://arxiv.org/pdf/2405.20519
aquarius0 | 12 months ago
The application to inverse graphics tasks reminds me of this paper, which was released one week earlier: https://arxiv.org/abs/2405.15306
sakras | 12 months ago
This is very cool! My first thought: can this be applied to converting raster graphics to vector graphics (e.g. PNG to SVG)? It seems like a very similar problem, though probably much more computationally expensive.
montyanderson | 12 months ago
This is fascinating. I've been trying to envisage how the new language models will take on a deeper or lower-level role in software production than simple code generation.
grondilu | 12 months ago
The application to graphics is interesting. It seems to me that current image-generation models struggle with stylized pictures ("ligne claire" in comics, geometric shapes, and so on). After all, these kinds of pictures should be easy to encode in vector formats (like SVG), which are basically programming languages.
pamelafox | 12 months ago
I would love to see them try this with the Processing/p5.js libraries. That's what the ASTs reminded me of. It could potentially be used to help students trying to figure out how to fix their programs. I used AST-based hints for my ProcessingJS courses on Khan Academy, but I hand-wrote the AST patterns for those hints.
machiaweliczny | 12 months ago
I had an idea about doing something similar based on the DiffusER paper. One would need to model code edits as an algebra: not add char / replace char / delete char, but something like define func, remove func, define var, etc. I'm undereducated to do it myself, but I have a feeling it could work.
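That edit algebra is easy to prototype as data: each edit is a value with an `apply` method, and a program is just the fold of its edit history. A toy sketch (all class and field names here are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Program:
    # Toy program state: named functions and variables.
    funcs: dict = field(default_factory=dict)
    vars: dict = field(default_factory=dict)

@dataclass
class DefineFunc:
    name: str
    body: str
    def apply(self, p):
        p.funcs[self.name] = self.body
        return p

@dataclass
class RemoveFunc:
    name: str
    def apply(self, p):
        p.funcs.pop(self.name, None)  # no-op if absent, so edits compose safely
        return p

@dataclass
class DefineVar:
    name: str
    value: object
    def apply(self, p):
        p.vars[self.name] = self.value
        return p

edits = [
    DefineFunc("square", "return x * x"),
    DefineVar("n", 3),
    DefineFunc("cube", "return x * x * x"),
    RemoveFunc("square"),
]
prog = Program()
for e in edits:
    prog = e.apply(prog)
print(sorted(prog.funcs), prog.vars)  # ['cube'] {'n': 3}
```

A DiffusER-style model would then learn a distribution over such edit sequences instead of over raw characters, which is exactly why the coarse-grained operations matter: the noise and denoising steps stay syntactically valid by construction.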
behnamoh | 12 months ago
How is it different from genetic algorithms that mutate the syntax tree until the target output is achieved?
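For reference, the genetic-programming baseline being compared against looks like this: mutation swaps a random subtree for a freshly grown one, and selection keeps the trees whose output is closest to the target. A toy sketch targeting 2*x + 1 (every constant and operator set here is arbitrary):

```python
import random

random.seed(1)

OPS = ("add", "mul")
LEAVES = ("x", 1, 2, 3)

def random_tree(depth=2):
    if depth == 0 or random.random() < 0.3:
        return random.choice(LEAVES)
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def mutate(tree, p=0.2):
    # Classic GP mutation: replace a random subtree with a fresh one.
    if not isinstance(tree, tuple) or random.random() < p:
        return random_tree()
    op, left, right = tree
    return (op, mutate(left, p), mutate(right, p))

def evaluate(tree, x):
    if tree == "x":
        return x
    if not isinstance(tree, tuple):
        return tree
    op, left, right = tree
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == "add" else a * b

def fitness(tree):  # squared error against the target 2*x + 1
    return sum((evaluate(tree, x) - (2 * x + 1)) ** 2 for x in range(5))

population = [random_tree() for _ in range(50)]
for _ in range(100):
    population.sort(key=fitness)
    if fitness(population[0]) == 0:
        break
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print(fitness(population[0]))
```

The structural difference from the paper is that here mutations are blind and selection does all the work, whereas a diffusion model learns *which* subtree edits to propose conditioned on the current tree and the target output.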
goexploration | 12 months ago
The beam-search idea is interesting. I'm curious whether beam search for reverse diffusion has been done before.

Could anyone clarify how they integrate beam search with reverse diffusion? Do they sample m > k nodes from a reverse-diffusion step and expand only the top k nodes?
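That reading matches the generic "oversample then prune" pattern. A toy sketch with a stand-in for the reverse-diffusion step (here the "program" is just an integer nudged toward a target; every constant is made up, and this is not the paper's actual procedure):

```python
import heapq
import random

random.seed(42)

def reverse_step(state):
    # Stand-in for one reverse-diffusion step: in the paper this would
    # propose a denoised syntax tree; here it perturbs an integer.
    return state + random.choice([-3, -1, 1, 3])

def score(state, target=100):
    return -abs(state - target)  # higher is better

def beam_search(start, k=4, m=16, steps=50):
    beam = [start]
    for _ in range(steps):
        # Oversample: draw m > k candidates from the current beam...
        candidates = [reverse_step(random.choice(beam)) for _ in range(m)]
        # ...then keep only the top k to expand at the next step.
        beam = heapq.nlargest(k, candidates, key=score)
    return max(beam, key=score)

result = beam_search(0)
print(result)
```

If that is indeed what they do, the interesting question is how the per-node score is defined mid-trajectory, since a partially denoised tree may not even execute.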
dwlg00 | 12 months ago
I'm failing to see how this is novel. It looks like they're doing diffusion on a representation system for 2D graphics, which is very different from an actual program (to be fair, they do address this limitation).
artninja1988 | 12 months ago
Surprised to see Stuart Russell's name on this, as I thought he was fully consumed by the doomsday cult. Although he's last author, so he's probably only on it because he's the head of the lab.
ofou | 12 months ago
Here's a GPT-4o summary of the paper:

https://www.emergentmind.com/papers/2405.20519