Lots of people grumping about Copilot, has anyone actually used it?
For a personal project writing a transpiler from an old language (LambdaMOO) to JS, I've actually found it quite helpful. I can write code like:<p>convertList(node: MooASTNode): List {<p>And then it just spits out:<p><pre><code>  const entries: ASTNode[] = node.children.map((child) => {
    return this.convertNode(child);
  });
  return new List(entries, this.sourceLocation(node));
}
</code></pre>
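For context, convertList hangs off a convertNode dispatcher, something like the sketch below (the case names here are placeholders, not my grammar's real node types):<p><pre><code>  convertNode(node: MooASTNode): ASTNode {
    switch (node.type) {
      case "list":
        return this.convertList(node);
      case "if":
        return this.convertIf(node);
      // ...one case per parse-tree node kind
      default:
        throw new Error(`unhandled node type: ${node.type}`);
    }
  }
</code></pre>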
None of this is exactly difficult to write or reason about, but it did save me quite a bit of time: I dashed out the code that converts the parse tree to an intermediate tree in an hour or so, then quickly looked it over and made a couple of corrections.<p>It also helped with making the ASTNodes. For instance:<p>export class If extends ASTNode {<p>resulted in:<p><pre><code>  constructor(
    public condition: Compare,
    public then: ASTNode,
    public elseDo?: ASTNode,
    public override loc: SourceLocation | null = null
  ) {
    super();
    condition.parent = this;
    then.parent = this;
    if (elseDo) {
      elseDo.parent = this;
    }
  }

  @logCall
  toEstree() {
    return builders.ifStatement(
      this.condition.toEstree(),
      this.then.toEstree(),
      this.elseDo?.toEstree()
    );
  }
}</code></pre>
Again, this code is not gonna win any prizes, but it sure did save me a good chunk of time.
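For anyone who wants to sanity-check what toEstree() emits without the rest of the transpiler, here's a standalone sketch; I'm assuming ast-types for the builders import (its ifStatement has the same shape as the call above) and recast for printing:<p><pre><code>  import { builders as b } from "ast-types";
  import { print } from "recast";

  // Build the estree for: if (x == 1) one(); else other();
  const est = b.ifStatement(
    b.binaryExpression("==", b.identifier("x"), b.literal(1)),
    b.expressionStatement(b.callExpression(b.identifier("one"), [])),
    b.expressionStatement(b.callExpression(b.identifier("other"), []))
  );

  console.log(print(est).code);
  // prints roughly: if (x == 1) one(); else other();
</code></pre>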
Why so much hate for the tool?
I stopped using GitHub Copilot. Not because of the accuracy of its predictions (or the lack thereof), but because the cognitive overhead and distraction of the inline suggestions meant that using Copilot wasn't a clear net positive for productivity.<p>And that's before GitHub starts charging for it.
AI writes code like a bad intern: pasting in "likely" blocks of code with no understanding of the intent. The difference is that the AI pastes "likely" blocks from GitHub examples, whereas bad interns paste from SO.
I haven't used GitHub Copilot, but from what I understand it generates a completion from the text already in your file. Since OpenAI Codex can "understand" natural-language instructions, I've found that putting code into a prompt with extra natural-language context about what needs to be achieved can give decent results. E.g., to generate a docstring, you might use a Markdown-ish prompt like:<p><pre><code>  # Writing a good docstring

  This is an example of writing a really good docstring that follows
  best practice for the given language. Attention is paid to detailing
  things like:

  * parameter and return types (if applicable)
  * any errors that might be raised or returned, depending on the language

  I received the following code:

  ```{{{language}}}
  {{{snippet}}}
  ```

  The code with a really good docstring added is below:

  ```{{{language}}}
</code></pre>
If you wanted a docstring in a particular format, you could add some specific examples as context to your prompt to get even better results.
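For reference, wiring a template like that up to the completions endpoint is only a few lines. A sketch in TypeScript, where the model name and the prompt-file path are placeholders for whatever you have access to:<p><pre><code>  import { readFileSync } from "node:fs";

  // The Markdown-ish prompt above, saved to a file (path is a placeholder).
  const TEMPLATE = readFileSync("docstring-prompt.md", "utf8");

  // Substitute Mustache-style {{{name}}} placeholders, without escaping.
  function fill(template: string, vars: Record<string, string>): string {
    return template.replace(/\{\{\{(\w+)\}\}\}/g, (_, key: string) => vars[key] ?? "");
  }

  async function addDocstring(language: string, snippet: string): Promise<string> {
    const res = await fetch("https://api.openai.com/v1/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify({
        model: "code-davinci-002", // placeholder: whichever Codex engine you can use
        prompt: fill(TEMPLATE, { language, snippet }),
        max_tokens: 512,
        temperature: 0,
        stop: ["```"], // stop at the closing code fence, so only the code comes back
      }),
    });
    const data = await res.json();
    return data.choices[0].text.trim();
  }
</code></pre>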
More snake oil. Coding needs abstract thinking that only the smartest humans can do, while the AI crowd still can't replicate ant-level intelligence.<p>What would be an immense multiplier of software-eng productivity is an intelligent auto-fixer tool: the compiler gives you an error, you know how to fix it, but it's tedious work that wastes most of your time. Think of fixing build deps, rewriting method signatures to match the parent class, or properly adding a library to your project. You'd write "include ssl.h; encrypt(message)" and the tool would add all the plumbing around it, in line with project guidelines.
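To make the "rewriting method signatures to match the parent class" case concrete, here's the flavor of fix I mean, with made-up class names:<p><pre><code>  // The compiler already knows the parent's signature, so rewriting the
  // override to match is pure plumbing.
  abstract class Transport {
    abstract send(payload: Uint8Array, timeoutMs: number): Promise<void>;
  }

  class TlsTransport extends Transport {
    // Before: `send(payload: string): void` -- TS2416, signature mismatch.
    // The auto-fixer would rewrite the signature, leaving only the
    // interesting body for the human:
    async send(payload: Uint8Array, timeoutMs: number): Promise<void> {
      // ...actual work goes here
    }
  }
</code></pre>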
Why does everyone assume developers won't read and edit the generated code to make it more correct?<p>Analyzing, debugging, and fixing code is what we do all day.<p>It reminds me of the people who were strongly against autocomplete for variable/function names.
We’ve been able to write buggy automated software for a very long time. That’s not interesting. It’s also not that interesting that stealing/replicating code replicates bugs.<p>When AI/ML programs can actually create lines of code without references, and do it with quality, that’s when there’s a real story that isn’t a trick played on tech authors for stock gains.<p>Tricks like this are why everyone always thinks the big innovation is 5 years away, while no one is really working on the things that would actually make it 5 years away.
It is clear that technical progress is inevitable. I can show you an example of a site where almost all of the articles are written using AI (GPT-2, GPT-3): <a href="https://www.vproexpert.com/what-can-ai-do-today/" rel="nofollow">https://www.vproexpert.com/what-can-ai-do-today/</a> Are they badly written?
hot take: rather than AI, software development would be better served by just going into LTS maintenance mode. For each domain (web development, Android, CSS, etc.), pick a tool and everyone agrees it will be "the" tool for a decade or two.<p>this would reduce the number of "possibilities" w.r.t. code by orders of magnitude, and therefore make it easier to read code, write code, and develop tools that automate code.<p>of course this would necessarily reduce innovation, but in exchange we'd get higher-quality code, easier-to-maintain code, and skills that are more transferable.