FYI The full paper can be found here:
<a href="http://delivery.acm.org/10.1145/2740000/2737988/p43-sidirogloudouskos.pdf?ip=80.113.211.165&id=2737988&acc=OPEN&key=4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E6D218144511F3437&CFID=524572299&CFTOKEN=51965653&__acm__=1435665069_096eebca6644a0f5e695dea4e5bdc5c8" rel="nofollow">http://delivery.acm.org/10.1145/2740000/2737988/p43-sidirogl...</a><p>In summary, for an app with an bug. You must know a) input that causes to bug to show up, b) input that doesnt cause an error. Then it will look for similar code in github, and try inserting the checks done from that code into the new code. And then it reruns the program, hoping that the bug is resolved.<p>This approach is very cool, and harnesses the power of lots of developer. But its also very limited. However thats what research is for. Small steps together are a big leap for mankind :p<p>I also like this conlusion in the article:<p>"""
In recent years the increasing scope and volume of software development
efforts has produced a broad range of systems with similar or
overlapping goals. Together, these systems capture the knowledge
and labor of many developers. But each individual system largely
reflects the effort of a single team and, like essentially all software
systems, still contains errors.
We present a new and, to the best of our knowledge, the first,
technique for automatically transferring code between systems to
eliminate errors. The system that implements this technique, CP,
makes it possible to automatically harness the combined efforts of
multiple potentially independent development efforts to improve
them all regardless of the relationships that may or may not exist
across development organizations. In the long run we hope this
research will inspire other techniques that identify and combine the
best aspects of multiple systems. The ideal result will be significantly
more reliable and functional software systems that better serve the
needs of our society.
"""
The old adage "fixing a bug introduces at least two more" comes to mind. Now fully automated! ;)<p>But seriously: autogenerating fixes as observed by fuzzing does sound cool.
If I exaggerate just a little bit, that means the end of us programmers (though not right away). Just imagine: anyone could make a rough sketch of a “computer program” and have a system like CodePhage fill in the blanks. There would still be CS people for fundamental research and new discoveries, but the rest of the software industry would collapse into one automated know-it-all software replicator. Someone wake me up, please!
Related: <a href="http://dijkstra.cs.virginia.edu/genprog/" rel="nofollow">http://dijkstra.cs.virginia.edu/genprog/</a><p>Edit; this one is open source; is the MIT one ? Couldn't find references on the page?
Related, or just a coincidence?

> An Obama Administration official tells Re/code that recent advances in using automated methods to analyze software code for vulnerabilities have spurred interest in government circles to see if there’s a way to standardize how software is tested for security and safety.

https://recode.net/2015/06/29/famed-security-researcher-mudge-leaves-google-for-white-house-gig/

I just wonder what will happen to Google's Project Vault [1] now that Mudge is gone. Hopefully it will still be on track.

[1] https://www.youtube.com/watch?v=V6qrQzn8uBo
I wonder if this works mainly for C, C++, ObjC, Java, C#, Python and Ruby, or whether it would also work for Lisp/Scheme and other more powerful languages. And does it only work for crashes?

In any case, CS is awesome. I love it when research that a few years ago would have been purely theoretical gets applied.