Paper: Augmenting Decompiler Output with Learned Variable Names and Types [pdf]

48 points | by ArdelleF | almost 3 years ago

1 comment

troelsSteegin | almost 3 years ago
This work reads to me like a neat, smart hack. It's the kind of thing I miss reading about in more ambitious systems and learning papers. It "magically" reverses elements of a transformation. If I read it right, it follows a pattern: take a structured input (source code), an unobserved but consistent transformation (compilation), and an output with latent structure (binary). They trained a transformer model to extract features from the output and label them with features that make sense in the representation of the input. This gets harder if the transformation is noisy, perhaps from different inlining behaviors, and harder still if the input's feature space lacks labels that make sense to the model's consumer. What they did seems like fun.
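A minimal sketch of the pairing idea the comment describes, under stated assumptions: compile source you control, decompile the binary, and align the decompiler's placeholder identifiers with the original names and types to build supervised training pairs for a transformer. The `align` helper, its `debug_info` input, and the toy strings below are all hypothetical stand-ins; a real pipeline would get the decompiled text from a tool like Hex-Rays or Ghidra and the alignment from DWARF debug info.

```python
from dataclasses import dataclass

@dataclass
class TrainingPair:
    decompiled: str          # decompiler output with placeholder names
    renames: dict[str, str]  # placeholder -> original identifier
    retypes: dict[str, str]  # placeholder -> original type

def align(decompiled: str,
          debug_info: dict[str, tuple[str, str]]) -> TrainingPair:
    """Toy alignment: debug_info maps each placeholder the decompiler
    emitted to the (name, type) it had in the source. In a real system
    this mapping would be recovered from debug symbols, not handed in."""
    renames = {ph: name for ph, (name, _) in debug_info.items()}
    retypes = {ph: ty for ph, (_, ty) in debug_info.items()}
    return TrainingPair(decompiled, renames, retypes)

# Toy example standing in for one compile-then-decompile round trip.
decompiled = "long v1; for (v1 = 0; v1 < a2; ++v1) v3 += *(a1 + v1);"
debug_info = {"v1": ("i", "size_t"), "a1": ("buf", "int *"),
              "a2": ("len", "size_t"), "v3": ("total", "int")}

pair = align(decompiled, debug_info)
print(pair.renames)  # {'v1': 'i', 'a1': 'buf', 'a2': 'len', 'v3': 'total'}
# A transformer is then trained to predict renames/retypes from the
# decompiled text alone, i.e. to invert the name/type loss of compilation.
```

The "noisy transformation" worry maps directly onto this sketch: different optimization levels or inlining decisions change which placeholders appear at all, which is why the alignment step is the fragile part.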