I have used both it and Copilot, and it is a bit behind Copilot.<p>First, Copilot supports many more languages (a big part of the utility of such tools is that you can write code in an unfamiliar language much more quickly).<p>Second, it fails more often with incorrect suggestions, and on non-trivial things it often falls back to going line by line.
This will impact copilot pricing. I will be keeping my copilot subscription (since it is pretty cheap), but I hope and expect there to be some good competition here.<p>With copilot being embedded in all office software in the near future, MS may as well make GH copilot free. Interesting times!
the next leap in developer productivity won’t come from reducing the drudgery of typing. that was never the problem, nor the source of bugs, nor the damper on productivity. so long as we still evaluate code in our heads (or pay the expensive compile-run-observe cost) we’re generations behind what architecture has achieved with computers. so long as i can’t evaluate the consequences of any line of code as i write it, dumping lines in industrial quantities is searching under the streetlight.
Do they train on your code that they read for context? Do they retain the code snippets you generate? Can't find any mention in regards to privacy in their copy.
Here is someone not affiliated with Amazon doing a live demo of CodeWhisperer: <a href="https://www.youtube.com/watch?v=E0jCIPaIaiA">https://www.youtube.com/watch?v=E0jCIPaIaiA</a>
Just a thought that I have not fleshed out and can even see some problems with, but: will we arrive at a point (and would it even be desirable to get to a point) where the text in your source "code" just remains the instruction you've given to the AI? In other words, Amazon's example on the linked page is:<p>> /*Create a lambda function that stores the body of the SQS message into a hash key of a DynamoDB table.<p>Now, obviously that is not valid Java syntax and javac will fail on it, but could/would it be possible to build an intermediate tool that expands this into Java (or whatever other language) so that you never even see the expanded code in your editor, the same way you don't need to see bytecode?<p>I get that, practically, right now that would be ill-advised, since the AI may not be reliable enough, and there are probably more cases than not where you need to tweak or add some logic specific to your domain, etc. But still, theoretically, is that where we are heading, i.e. a world in which even what are now considered high-level langs get shoved down further below and are considered internal/low-level details?
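For comparison, here is a minimal sketch of what such a prompt might plausibly expand to, in Python/boto3 rather than Amazon's Java example; the table name `messages` and the key field names are assumptions, not anything from the announcement:

```python
def build_item(record):
    """Turn one SQS record into a DynamoDB item dict (pure helper).

    Uses the message ID as the hash key and stores the body alongside it.
    """
    return {
        "id": {"S": record["messageId"]},
        "body": {"S": record["body"]},
    }


def handler(event, context):
    """Lambda entry point: store each SQS message body in a DynamoDB table."""
    import boto3  # AWS SDK; deferred so the pure helper is testable without it

    dynamodb = boto3.client("dynamodb")
    for record in event["Records"]:
        dynamodb.put_item(TableName="messages", Item=build_item(record))
```

The interesting part of the thought experiment is that none of this boilerplate would ever appear in the editor; only the comment would.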
Great to see them giving attribution and reference links to open source. I don't think co-pilot does this at all.<p>"To help you code responsibly, CodeWhisperer filters out code suggestions that might be considered biased or unfair, and it’s the only coding companion that can filter or flag code suggestions that may resemble particular open-source training data."
Anybody having any luck with this for Rust? Tried to prompt it to give me an MQTT hello world and instead it just keeps suggesting more & more commented-out imports:<p>// Send string via mqtt<p>// use async_std::task;<p>// use async_std::prelude::*;<p>// use async_std::net::TcpStream;<p>// use async_std::io::prelude::*;<p>// use async_std::io;<p>// use async_std::sync::Mutex;<p>// use async_std::sync::Arc;
I find constant copilot suggestions kind of annoying but I really enjoy keeping a GPT-4 tab open as a pair programmer. The 25 request limit per 3 hours has been fitting my workflow just fine since I mostly use it when I'm stuck on a problem just to get ideas or when I'm working on an unfamiliar language/framework.
I’m actually really interested to try this. I’ve been putting off paying for Copilot because GitHub does not make it convenient to pay for their services without a credit card and I do not have a large enough open source project to qualify for a free license.<p>I also haven’t used these tools at all so if CodeWhisperer is a little “dumber” than copilot, I doubt I will even notice.
The era of code generators is likely to usher in an era of legal work surrounding them.<p>I was just thinking about this before reading the announcement. Part of our work is in aerospace, hardware and software both being a part of that. All of it goes through layers upon layers of design, testing, verification, and qualification for flight.<p>In my mind I saw this scenario where something happens and it ends up in the courts. And then, in the process of ripping the code apart during the lawsuit, we come to a comment that changes it all. Something like this:<p><pre><code>  // Used Amazon CodeWhisperer to generate the framework of this state machine.
// Modified as needed. See comments.
</code></pre>
That's when the courtroom goes quiet and one side thinks "Oh, shit!".<p>What does the jury think?<p>They are not experts. All they heard is you just used AI to write part of the code for this device that may have been responsible for a horrific accident. Are their minds, at that point, primed for the prosecution to grab onto that and build it up to such a level that the jury becomes convinced a guilty verdict is warranted?<p>Don't know.<p>Does this mean we have to be very careful about using these tools, even if the code works? Does this mean we have to ban the use of these tools out of concerns for legal liability?<p>Personal example:<p>A year or so ago I wrote a CRC calculation program in ARM assembler. It could calculate anything from CRC-8 to CRC-32. This was needed because we were dealing with critical high speed communications and there was a finite real-time window to compute the CRC checksum. The code was optimized using every trick in the books, from decades of doing such work. Fast, accurate, did exactly what it was supposed to do. In production. Working just fine.<p>I was curious. A couple of weeks ago I asked ChatGPT to write a CRC-32 calculation routine given some constraints (buffer size, polynomial, etc.). It took a few seconds for it to generate the code. I ran it through some tests. It seemed to work just fine.<p>That's when the question first occurred to me: Would it expose us to liability if that code were to be used in our system? I don't know. I have a feeling it would be unwise to use any of it at all.<p>Wouldn't it be funny, interesting and perhaps even tragic if we had to have "100% organically-coded" disclaimers on our work in the future?
I've been trying to hold on to using Sublime Text because I really appreciate the super clean interface which helps me stay productive, but I think I'm probably going to have to either go back to VSCode or possibly JetBrains Fleet. Every time I learn a new technology there seems to be an official VSCode Plugin for it that gives you some superpowers. Sometimes they'll get some sort of port to Sublime but I feel like it's falling too far behind now. It does make me pretty sad though because I love it. I think it'll be a long time til anything pries Sublime Merge from my fingers though.
Maybe I'm missing something, but isn't it supposed to output multiline suggestions as in the example for everyone? I can't find any option in vscode and just get line by line results (any supported language).