I liked the main theme of concentrating on practical AI that adds value right now, but disagree with his opinions on CoPilot.<p>I use CoPilot in all of my development modes except for the LispWorks IDE. I find it very useful, especially when I toggle through multiple code completions, picking the best one. It is not so much about writing code for me as it is about saving time looking up documentation. I usually work in Emacs with a REPL, so checking generated code is quick.
> The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence.<p>> Author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." Researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"<p><a href="https://en.wikipedia.org/wiki/AI_effect" rel="nofollow">https://en.wikipedia.org/wiki/AI_effect</a>
> An Intellisense AI that detected when an arrow key press was an intended interaction with Intellisense and when it was a continuation of the normal typing flow could smooth out these awkward discontinuities.<p>I think this is actually bad UX. It should be obvious to the programmer what the arrow key will do next; the programmer shouldn't have to guess what the AI is thinking. Navigation should be predictable.
The problem is the naming.<p>Even weak/incomplete AI results tend to fuel the imagination; we can easily fill the gaps with anything from fiction or our own dreams.<p>By contrast, when AI works and is strong (as with chess) we stop calling it AI and start seeing it for what it is.<p>Stockfish is called a chess engine, not an AI.<p>And the algorithm behind current chess engines is not super smart; it is mostly brute force.<p>The trick is to avoid redoing the brute force at every move, and instead do the grunt work once on a huge computer and store everything in a compact form.
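To make the "mostly brute force" point concrete, here is a minimal sketch of the textbook search at the core of classical engines: plain negamax with alpha-beta pruning. The `Position` interface (`legal_moves`, `apply`, `evaluate`, `is_game_over`) is hypothetical, and this is not Stockfish's actual search, just the brute-force skeleton it refines.

```python
import math

def negamax(pos, depth, alpha=-math.inf, beta=math.inf):
    """Best score for the side to move, searching `depth` plies ahead."""
    if depth == 0 or pos.is_game_over():
        return pos.evaluate()          # static evaluation from the mover's point of view
    best = -math.inf
    for move in pos.legal_moves():
        score = -negamax(pos.apply(move), depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:              # opponent already has a better line elsewhere: prune
            break
    return best
```

The "do the grunt work once" part corresponds to things like opening books and endgame tablebases, which are computed offline and only looked up at play time.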
Copilot has been really useful to me, and if you know what you are doing it's a massive saver of time and mental energy.<p>You can let it draft an initial approach to a small task and then refine it; I've found this works well, and in practical terms I end up less tired after working in collaboration with Copilot.<p>I don't expect it to give me the perfect answer, and it doesn't remove the need for tests (which it can also help create). But as an assistant? It rocks.
> CoPilot changes the kind of work the engineer has to do, from authoring to editing, but the amount of work is not reduced.<p>Disagree based on my own experience using CoPilot, but it would be interesting to think about ways to fairly test this.<p>> Reweighting these potential choices with the most likely given the current context and showing only the most likely is a solvable AI problem.<p>Not sure about other editors, but for JetBrains IDEs at least this has been a thing for a while: <a href="https://www.jetbrains.com/idea/guide/tips/enable-ml-code-completion/" rel="nofollow">https://www.jetbrains.com/idea/guide/tips/enable-ml-code-com...</a>
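For anyone curious what "reweight the candidates and show only the most likely" looks like mechanically, here is a deliberately toy sketch: the scoring model is a made-up bigram count table, nothing like what JetBrains or Copilot actually use.

```python
from collections import Counter

def rank_completions(prev_token, candidates, bigram_counts):
    """Order candidate members by how often they followed prev_token in a code corpus."""
    scored = [(bigram_counts.get((prev_token, c), 0), c) for c in candidates]
    scored.sort(reverse=True)
    return [c for _, c in scored]

# Hypothetical counts mined from a project's existing code.
counts = Counter({("response", "json"): 42, ("response", "status_code"): 17, ("response", "raw"): 3})

print(rank_completions("response", ["raw", "status_code", "json"], counts)[0])  # -> "json"
```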
These AI tools are not static. They are products with well-funded teams behind them constantly iterating to make them better.<p>I don’t dismiss them, but I do put a low weight on arguments of the form “AI is not there yet” given how far it has come in the last 5 years. By 2030, I can see a product understanding the context of a system from multiple repos and, given an intent, producing code and deployment infra that adds new features while respecting constraints.<p>This is good. A single dev will be able to do the work of a full team, and then every small business will be able to develop software that suits their specific needs with a couple of dedicated employees.
There just aren’t enough “full stack” machine learning folks who can quickly solve these problems.<p>The pure programming people have trouble scoping out which solutions to try due to lack of experience. The pure ML people code in Python notebooks and have little visibility into these issues.<p>Both folks could easily learn the other side and help, but it’s surprisingly rare to see.
Copilot is a verbose savant heavily afflicted by Dunning-Kruger... but an extremely useful one.<p>Do you remember how Googling was a skill?<p>Learning to use Copilot, Stable Diffusion, or GPT is exactly the same kind of thing.<p>Copilot's full power (at this time) does not lie in generating reams of code. Here are a few things it excels at:<p>- Snippet search: Say you can't remember how to check whether a variable is empty in a bash conditional, ask.<p>- Template population: Say I have a series of functions I need to write in a language without good metaprogramming facilities. I can write a list of all the combinations, write one example, and the AI will pick up on the rest (sketched below).<p>- Rust: If I get trapped by some weird borrow checker issue with `fn doit(...`, I begin rewriting the function as `fn fixed_doit(...`, and 9/10 times Copilot fixes the bug.
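For the "template population" point, the workflow looks roughly like this; the units and conversion functions here are an invented example, and the remaining completions would come from the assistant, not from this file.

```python
# Need: celsius_to_fahrenheit, celsius_to_kelvin, fahrenheit_to_celsius,
#       fahrenheit_to_kelvin, kelvin_to_celsius, kelvin_to_fahrenheit

def celsius_to_fahrenheit(c: float) -> float:
    return c * 9 / 5 + 32

# With the list above and one worked example, Copilot will usually infer the
# pattern and suggest the remaining five functions; each suggestion still
# needs a human read (and ideally a test) before it is accepted.
```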
The basic problem is that machine learning systems don't really understand what they're doing. So most of the time, they do the right thing, and when they do a totally wrong thing, they don't notice. Works fine for advertising, not so much for self-driving.<p>Look at those systems that generate images or text from a prompt. Usually the results are good, and sometimes they are totally bogus.<p>As I point out occasionally, the big hole in AI is "common sense", defined as not screwing up big-time in the next 30 seconds. Until that gets solved, AI systems can't be trusted very far.<p>The control theory people are trying to fix this, so they can use ML to build control systems with safe behavior. The math is really tough. Way beyond me. See IEEE Transactions on Control Systems Technology to follow this. People are trying to make control theory math and ML math play together. Control theory usually has continuity - if it does the same right thing at 0.21 and 0.22, you can be confident it will do the same right thing at 0.215. ML systems do not have that property. Which is why we see those image recognition demos where some minor change in the background noise totally changes the result.
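The continuity point is easy to demonstrate with a contrived numpy toy (hand-picked weights, not a trained model): a classifier can give the right answer at 0.21 and 0.22 and still flip at 0.215, because nothing bounds how sharply its output can vary, while a bounded-gain linear control law cannot do that.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def toy_classifier(x):
    # Two sigmoid units whose difference forms a narrow spike around x ~ 0.215.
    h = sigmoid(5000 * (x - 0.213)) - sigmoid(5000 * (x - 0.217))
    return int(h > 0.5)

def linear_controller(x, gain=2.0):
    # Bounded gain: |change in output| <= gain * |change in input|.
    return gain * x

for x in (0.21, 0.215, 0.22):
    print(x, toy_classifier(x), round(linear_controller(x), 3))
# classifier: 0, 1, 0            -- flips between two points where it was "right"
# controller: 0.42, 0.43, 0.44   -- varies smoothly with the input
```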
“Instead CoPilot attempts to solve the most general problem, have the engineer state their intent and let the AI produce the code.”<p>And herein lies the rub: 90% of the time, the engineer's intent is wrong. That’s what real pair programming will help you with, and what an ‘AI pair programmer’ will not.
It just helps you build something that’s probably wrong faster.
> AI is software based automation. That we imbue any automated process with intention, emotion, and intelligence, is a defect of our human software<p>Matt Calder thinks he's teaching us something about AI, but he's really just teaching us something sad about Matt Calder.
So it sounds like they're complaining about the UI of Copilot, not about the current capabilities of AI.<p>Hmm <a href="https://youtu.be/PdFB7q89_3U" rel="nofollow">https://youtu.be/PdFB7q89_3U</a>
Imagine that you are the CEO of Microsoft. Someone in your company invented an AI tool that makes an average developer much better at coding (more productive, etc.). As a reminder: your company is a software company that itself employs tens of thousands of developers. What would you do?<p>a) Downplay the tool. Use it internally to completely crush the competition by writing amazing software. License it for an exorbitant price to a select few partners, who will also crush their competition.<p>b) Hype it up as much as possible and license it to anyone who's willing to pay $10/month.<p>I think the rational choice is obvious, which also makes it blatantly obvious that Copilot is not seen as any kind of competitive advantage by Microsoft itself.
Ah, the AI versus advanced automation debate and the Lighthill Report (UK, 1973).<p>[1] Report: <a href="http://www.chilton-computing.org.uk/inf/literature/reports/lighthill_report/p001.htm" rel="nofollow">http://www.chilton-computing.org.uk/inf/literature/reports/l...</a><p>[2] Video of the public debate: <a href="https://www.youtube.com/watch?v=03p2CADwGF8&t" rel="nofollow">https://www.youtube.com/watch?v=03p2CADwGF8&t</a>
AI is going to rule this world.<p>You cannot serialize your brain and transmit it around the world in 5 seconds, but an AI will be able to.<p>What a human can do is encode thoughts into words and transmit them to people one word at a time with the hope they will understand them, but it's incredibly inefficient.<p>Humans have infinitely shitty and limited I/O. AI will have any I/O it wants.<p>AI won't need to encode their thoughts into words. They will be able to send a neural circuit directly to their peers over the network.<p>The handicap will be so absurd that we will be their pets.
Copilot doesn't remove the thinking process for large problems. But it helps me with dozens of small things. Like adding log lines, variable names etc.
If you have AI generating code, then you'll have to:<p>1) have the software requirements codified to a near-coded state already<p>2) have a validation/verification test suite that validates the software generated works correctly<p>Both of which increase the overall complexity to the point that the AI will be of minimal value-add.
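As a sketch of what (1) and (2) might look like in practice, the requirement here is codified directly as property-based tests using the Hypothesis library; `ai_sort` is a hypothetical stand-in for the generated code. Even for this trivial task the spec is nearly a program in its own right, which is the point above.

```python
from hypothesis import given, strategies as st

def ai_sort(xs):                     # placeholder for the AI-generated implementation
    return sorted(xs)

@given(st.lists(st.integers()))
def test_output_is_ordered(xs):
    out = ai_sort(xs)
    assert all(a <= b for a, b in zip(out, out[1:]))

@given(st.lists(st.integers()))
def test_output_is_a_permutation_of_the_input(xs):
    assert sorted(ai_sort(xs)) == sorted(xs)
```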