
How AI based programming could work

89 points by bjenik, almost 9 years ago

20 comments

jcranmer, almost 9 years ago

The keyword you want to search for is "program synthesis." It already exists, and no neural nets need apply (neural nets tend to be useful only if you can't get anything else to work). As a bonus, it's not probabilistic like AI techniques tend to be but exact, based on SMT solvers and verified proof correctness.

Examples of work showcased at this year's PLDI that's capable of doing this sort of stuff:

Fast synthesis of fast collections (https://homes.cs.washington.edu/~mernst/pubs/collection-synthesis-pldi2016-abstract.html): specify a database-like data structure for a collection of objects and the list of queries performed on that data structure, and get out code that's as fast as hand-tuned data structures with fewer bugs.

Programmatic and Direct Manipulation, Together at Last (http://dl.acm.org/citation.cfm?id=2908103&dl=ACM&coll=DL&CFID=609587099&CFTOKEN=40441624): take an image generated by code (e.g., periodic stripes) and be able to manipulate that image by drag-and-drop, e.g., changing stripe size or period.

Stratified synthesis: automatically learning the x86-64 instruction set (https://stefanheule.com/publications/pldi16-strata/): 60% of the x86-64 instruction set can be formally specified starting from about 60 base or pseudo-instructions (basically, describing each instruction as an assembly program of simpler instructions).
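The enumerative flavor of the synthesis jcranmer describes can be illustrated with a toy sketch (this is not the SMT-based approach of the linked papers, just the basic idea): search a small space of candidate programs for one consistent with a set of input-output examples. The grammar and `synthesize` helper below are invented for illustration.

```python
from itertools import product

# Toy enumerative synthesis: search a tiny expression grammar for a
# program consistent with input-output examples. Real synthesizers
# prune this search far more cleverly (e.g., with SMT solvers).
OPS = {
    "x + c": lambda x, c: x + c,
    "x * c": lambda x, c: x * c,
    "x - c": lambda x, c: x - c,
}

def synthesize(examples, const_range=range(-10, 11)):
    """Return an (expression, constant) pair matching all examples, or None."""
    for (name, fn), c in product(OPS.items(), const_range):
        if all(fn(x, c) == y for x, y in examples):
            return name, c
    return None

# Ask for a program mapping 2 -> 5 and 7 -> 10.
print(synthesize([(2, 5), (7, 10)]))  # -> ('x + c', 3)
```

The search is exhaustive, so any answer it returns is exactly consistent with the examples, which matches the "exact, not probabilistic" point above.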
sapphireblue, almost 9 years ago

The author proposes a lot of vague ideas in this article (for example, "I believe one of the biggest problems is the use of Error Propagation and Gradient Descent") without references or any solid proof of why they are necessary to solve the proposed problem (automating programming using ML).

In fact, there is already a lot of solid work on just this subject:

* Learning algorithms from examples: http://arxiv.org/abs/1511.07275 https://arxiv.org/abs/1410.5401

* Generating source code from natural language descriptions: http://arxiv.org/abs/1510.07211

* And, the closest work to what the author probably wants, a way to write a program in Forth while leaving some functions as neural black boxes to be learned from examples: http://arxiv.org/abs/1605.06640

* Also, there is a whole research program by nothing less than Facebook AI Research that explicitly aims at creating a conversational AI agent able to translate a user's natural-language orders into programs (asking the user additional questions if necessary): http://arxiv.org/abs/1511.08130 (there is also a summary here: http://colinraffel.com/wiki/a_roadmap_towards_machine_intelligence)

And DeepMind is also working on conversational agents: https://youtu.be/vQXAsdMa_8A?t=1265

Given the current success of such models, automating simple programming tasks may be not so much a research problem as an engineering and scaling-up problem.

There is a lot of exciting machine learning research out there nowadays. Almost all of it is available for free from papers posted on arXiv. It is a really good idea to read more about the state of the art before coming up with new ideas.
tiagoespinha, almost 9 years ago

Interesting idea, but to be honest I think it will never take off on a large scale.

If you want to write a vague and generic piece of code that can figure out by itself what its outputs ought to be with regard to its inputs, you are, in effect, creating artificial intelligence.

This, in turn, requires teaching.

What if suddenly you have new pages on your website? Your AI program wouldn't be able to serve them until you taught it what these pages are and in which situations the end user might be interested in viewing them. Imagine having a five-year-old kid handling your shop's cashier. Now a new product arrives and you have to explain to him/her that there's this new product, what it's called, the price, and which types of people are allowed to buy it (e.g., alcohol can't be sold to people younger than X years old). If you're really teaching your application with natural language, like you would a five-year-old, then the effort it'd take to get that info into your system would defeat the purpose of actually using a computer to do it (computers are good at storing and looking up stuff in large data sets, better than humans for really large data sets).

This whole machine learning hype is suitable for situations where you have large amounts of data and you want them crunched according to some basic pre-established and non-changing logic without relying on actual human labor. If the logic evolves, you'll always require humans to sit at the helm and steer between right and wrong.

My 2 cents.
cardigan, almost 9 years ago

Starts off with an interesting idea (programming is perfectly specified, versus human communication being underspecified but more efficient due to shared context) but then devolves significantly: neural networks are not the be-all and end-all of smart computers.

The interesting part is that a smart computer could resolve ambiguities in a human description of the desired program in a way similar to humans. The missing part of this idea is that unless it's perfect, it would need a way to explain its choices and results, and probably some dialog system to iteratively improve upon how it resolved ambiguities.

E.g., make me an app like Uber but for cats -> here is a version you can play with -> oh, I don't really like the cat icon, can we change it -> sure, which of these would you like; etc., etc.

The computer requires ever-changing human context, and maybe could have individualized context. So it has to learn over time. The point is to maximize the efficiency of getting things made: how little needs to be specified before the computer gives you something you accept / how fast you can go from some vague idea in your head to working, acceptable software.

You should at least be able to get as good as an arbitrarily large number of people who lived a very similar life to yours and were put in a time bubble where you could talk with them to have them write software for you, assuming they're organized very well and highly motivated and great programmers and all that.
userbinator, almost 9 years ago

Suppose you have an AI that generated some code for you, but it doesn't do exactly what you want. Now try to debug it...

> what if programming wouldn't involve defining exact steps, but instead just roughly defining what we have and what we want and maybe giving a few hints, and having the computer generally do the right thing - wouldn't that be awesome?

I experience enough frustration with things like debugging generated code, ostensibly "smart" devices which seem to "have a mind of their own" (maybe that's the point, but it's not doing what I want), and getting Google's search engine to find exactly what I want with its pseudo-AI machine-learning algorithms doing completely inexplicable things with my queries, that I think "generally do the right thing" is not a good idea. Edge cases matter a lot.
teddyknox, almost 9 years ago

Why do we need to jump to ML-based programming again? I'm confident that as we build simpler interfaces and workflows for replacing the most modular components in our programs with AI, we'll begin to see which components are the next lowest-hanging fruit and what concrete ML problems need to be solved to model them.

We imagine a future where AI becomes a dominant paradigm for "writing" software -- I think that will be the case, but not in the way everyone suspects. I think 80% of the new *value* of software in the future will be derived from AI components, but that 80% of the production *costs* will still go into the structural glue code that supports the value-powerhouse models. Thus, most of the software *written* in the future will look similar to most of the code written today.

I also suspect that as the complexity of AI models increases, the structural code required to support these models will keep pace in complexity. For this reason, I see a future where the unit cost of high-quality software does not see any big drop, but where the value of this software continues to increase exponentially. A corollary to this is that valuable software will be no easier to program in the future than it is today.
anthk, almost 9 years ago

> what if programming wouldn't involve defining exact steps, but instead just roughly defining what we have and what we want and maybe giving a few hints, and having the computer generally do the right thing - wouldn't that be awesome?

So, Prolog?
wellsjohnston, almost 9 years ago

The whole point of programming is to get exact results by solving all edge cases. Machine learning/neural nets are only good at guessing results, and cannot solve all edge cases without specific direction.
iammyIP, almost 9 years ago

Since the C language is already a human-readable abstraction away from the actual hardware instructions, which in turn are abstractions of what actually happens on the chip with the transistors and electrons, you could argue that programming in C is already pretty much 'talking to the computer human-style'.

As others said, the use of frameworks and even higher-level scripting languages with an IDE is already a dumb-AI kind of human-friendly conversation with the machine.

A problem with driving this further down the road, as the article suggests, could be that if I wanted to talk to the machine like I talk with other people, natural language might not be very well suited to express what I actually want. I would need special definitions and terminology, and in some cases a general description would not suffice, so the need for precision of expression might remain.

The actual dialogue with the machine might then last as long as a programmer would have needed to program it. I could see how this might work for very simple programs (like some Uber app which is not much more than a wrapper for a telephone call to the taxi central), but the inherent complexity other programs need cannot be simplified away by switching the language to a conversational chit-chat level.
nemaar, almost 9 years ago

In a sense we already do this. For example, every time you create a website in HTML/CSS and JavaScript, you use high-level instructions and a lot of declarative stuff. It is already very far away from the actual hardware. If you use some framework with built-in templates, all you need is the actual content and some plumbing. Everything else is handled by the lower layers, and it is much more declarative than you realize. It also means that you do not specify thousands of things/little details and you trust the lower layers to do the right thing. It is already happening and it will only get better.

People freak out every time this idea appears, and the recurring argument is that "the AI will not do the right thing, it is too stupid". They forget that our frameworks/libraries and complicated software stacks already work as dumb AIs. They follow hardcoded rules and try to please us, and most of the time it actually works, otherwise we would not use libraries. Using a neural network may not be the right choice for this problem, but the general idea is correct. A good library should hide the "how" as much as it can and only require the user to specify the "end goal".
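The "specify the end goal, not the how" contrast can be made concrete in a few lines of Python; the product data here is invented for illustration. The imperative version spells out every step, while the declarative version only states what is wanted and lets `sorted()` and the comprehension decide how.

```python
# Invented example data: find the three cheapest in-stock items.
products = [
    {"name": "cable", "price": 9, "stock": 3},
    {"name": "mouse", "price": 25, "stock": 0},
    {"name": "hub",   "price": 19, "stock": 5},
    {"name": "case",  "price": 12, "stock": 8},
]

# Imperative: spell out *how* step by step.
cheapest = []
for p in products:
    if p["stock"] > 0:
        cheapest.append(p)
cheapest.sort(key=lambda p: p["price"])
cheapest = cheapest[:3]

# Declarative: state *what* we want; the lower layer chooses the
# algorithm, iteration order, and memory handling.
cheapest2 = sorted(
    (p for p in products if p["stock"] > 0),
    key=lambda p: p["price"],
)[:3]

assert cheapest == cheapest2
print([p["name"] for p in cheapest2])  # -> ['cable', 'case', 'hub']
```

SQL and HTML templates push the same trade further: the user writes only the end goal, and the engine "fills in" everything else.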
chris_va, almost 9 years ago

I suspect we'll see supervised seq2seq generated code first.

Like:

---

Programmer inputs on left:

compute std dev of x please

AI on right proposes edit to code:

+ import numpy as np

...

+ stddev_x = np.std(x)

---

Not super complicated to start with, but you can see where it will go from there.
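The interaction sketched in this comment can be mimicked today without any learning at all. Below is a toy stand-in (the `propose_edit` helper and its pattern table are invented, not any real product's API) that maps a natural-language request to a proposed code edit; a trained seq2seq model would replace the hand-written patterns.

```python
import re

# Toy stand-in for the assistant in the comment: match a request
# against hand-written patterns and emit a proposed diff-style edit.
PATTERNS = [
    (re.compile(r"compute std(?: |\.? ?)dev of (\w+)"),
     lambda m: ["+ import numpy as np",
                f"+ stddev_{m.group(1)} = np.std({m.group(1)})"]),
    (re.compile(r"compute mean of (\w+)"),
     lambda m: ["+ import numpy as np",
                f"+ mean_{m.group(1)} = np.mean({m.group(1)})"]),
]

def propose_edit(request):
    """Return proposed edit lines for a request, or [] if nothing matches."""
    for pattern, emit in PATTERNS:
        m = pattern.search(request.lower())
        if m:
            return emit(m)
    return []

for line in propose_edit("compute std dev of x please"):
    print(line)
```

The learned version differs mainly in where the pattern table comes from: instead of being hand-written, it is induced from many (request, edit) training pairs.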
orasis, almost 9 years ago

My project, improve.ai, is a first concrete step forward in this direction.

We start by replacing if/then statements with a decide() function and some goal events.

I've used this approach in my app, 7 Second Meditation, to achieve a solid 5-star rating across 248 reviews and 40% first-month retention.

https://itunes.apple.com/us/app/7-second-meditation-daily/id667197203?mt=8

I'll be writing some articles deep-diving into 7 Second Meditation and breaking down how I'm achieving these results. For now, here is a taste:

https://blog.improve.ai/hi-world-how-to-write-code-that-learns-efdb8b5af940#.1upmk4vxo
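The comment does not show improve.ai's actual API, so the following is only a hypothetical sketch of the "replace if/then with decide() plus goal events" shape, using a simple epsilon-greedy bandit: decide() picks a variant, and reward() feeds back a goal event so future choices favor what worked.

```python
import random
from collections import defaultdict

# Hypothetical sketch of the decide()/goal-event idea; the real
# improve.ai API may look nothing like this.
class Decider:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon          # probability of exploring
        self.rewards = defaultdict(float)
        self.counts = defaultdict(int)

    def decide(self, variants):
        if random.random() < self.epsilon:
            return random.choice(variants)  # explore a random variant
        # Exploit: pick the variant with the highest observed mean reward.
        return max(variants,
                   key=lambda v: self.rewards[v] / max(self.counts[v], 1))

    def reward(self, variant, value=1.0):
        # A "goal event": credit the variant that was shown.
        self.counts[variant] += 1
        self.rewards[variant] += value

d = Decider(epsilon=0.0)  # no exploration, to keep the demo deterministic
d.reward("short reminder", 1.0)
d.reward("long reminder", 0.2)
print(d.decide(["short reminder", "long reminder"]))  # -> short reminder
```

The point of the pattern is that the branch condition is no longer hardcoded: which variant "wins" is learned from goal events instead of being an if/then written by the programmer.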
ThePhysicist, almost 9 years ago

Very interesting ideas! I also think that in the future we will see a lot more declarative programming languages that model the high-level concepts of a given program, while low-level "plumbing" code is generated automatically. I'm not yet sure about the role that AI will play in this process, though, and I think there are many things we have to solve before we'll see something that can write generic programs.

Program synthesis is a very active field of research and many AI-based methods have been proposed in recent decades; IMHO, what most systems lack is applicability to real-world programming languages and use cases.

Having worked on static program analysis, I know that even our current ability to understand and reason about existing programs is still very limited. A main reason for this is that most real-world systems are composed of many parts that are not easily specifiable under a single paradigm (e.g., templates, database code, configuration files).

To build a usable AI-based programming system, we will need:

* A description language that is able to model ALL aspects of a given real-world system under a single paradigm

* A system to analyze and understand the artifacts produced by the above system

* A way to generate real-world code from the specification above, including a way to "fill in the blanks" that the user did not specify (as leaving out the details is the whole point of such a system)

* A way to test the generated code against the specifications provided by the user and further "reasonable" assumptions, which will be needed as the specifications from the user will not be complete, see above

* A way to guide this process towards a reasonable program through user feedback within a reasonably small number of steps

While none of these things are impossible, implementing them is a significant challenge with a lot of unknowns for which we don't have good solutions yet.

I therefore think the first AI-based systems we'll see in the coming years will be limited to specific problem domains (e.g., data analysis, logic programming) for which we can more easily build a system such as the above.
brett40324, almost 9 years ago

I feel dumber after reading this article. So much so that my own human net of neurons can't figure out or explain why.

A partial explanation is that I don't like the assumption that once work is being performed at higher and higher abstracted layers, machine hardware, OS kernels, compilers and linkers, package and dependency managers, and all the other systems software supporting high-level programming paradigms (including AI-based methods) are going to fall in line at the application layer.
YeGoblynQueenne, almost 9 years ago

>> I believe a lot of research today is limited by first looking at, and becoming an expert in, the status quo and then building small iterative improvements. It would be better to first find a goal and then looking for a way of getting there - at least that's the way we got to the moon.

Nope.
brett40324, almost 9 years ago

#3 on HN at the moment is very relevant to some of the discussion in this thread pertaining to websites.

https://news.ycombinator.com/item?id=12198572
BucketSort, almost 9 years ago

I've been toying with the idea of making user agents that simulate the functions a user must be able to accomplish through a piece of software, and using that as a cost function for some evolutionary algorithm.
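This idea can be sketched with a minimal (1+1) evolutionary loop in which a simulated user agent supplies the cost function: fitness is the fraction of tasks the agent can complete against a candidate design. The tasks, action set, and genome encoding below are all invented for illustration.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Invented task list and action vocabulary for the sketch.
TASKS = ["login", "search", "checkout", "logout"]
ACTIONS = ["login", "search", "browse", "checkout", "logout", "share"]

def fitness(design):
    # The "user agent": a task succeeds if the design exposes its action.
    return sum(task in design for task in TASKS) / len(TASKS)

def mutate(design):
    # Toggle one randomly chosen action in or out of the design.
    child = set(design)
    child.symmetric_difference_update({random.choice(ACTIONS)})
    return child

def evolve(generations=200):
    # (1+1) evolution: keep the child whenever it is at least as fit.
    parent = set()
    for _ in range(generations):
        child = mutate(parent)
        if fitness(child) >= fitness(parent):
            parent = child
    return parent

best = evolve()
print(sorted(best), fitness(best))
```

Because a mutation that drops a needed action strictly lowers fitness and is rejected, the loop converges on a design that passes every simulated task; a real version would replace the one-line agent with scripted UI interactions.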
viach, almost 9 years ago

Interesting: how could AI-based project management work? It would be much simpler to implement and could really save lots of money for organizations, imo.
ww520, almost 9 years ago

It's kind of vague. It would be better if he used a concrete example to illustrate how AI helps programming.
jcoffland, almost 9 years ago
neural networks != AI