AI Hyperopia

91 points by mvcalder over 2 years ago

22 comments

mark_l_watson over 2 years ago
I liked the main theme of concentrating on practical AI that adds value right now, but I disagree with his opinions on CoPilot.

I use CoPilot in all of my development modes except for the LispWorks IDE. I find it very useful, especially when I toggle through multiple code completions, picking the best one. It is not as much about writing code for me as it is saving time looking up documentation. I usually work in Emacs with a REPL, so checking generated code is quick.
cercatrova over 2 years ago
> The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence.

> Author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." Researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"

https://en.wikipedia.org/wiki/AI_effect
woojoo666 over 2 years ago
> An Intellisense AI that detected when an arrow key press was an intended interaction with Intellisense and when it was a continuation of the normal typing flow could smooth out these awkward discontinuities.

I think this is bad UX, actually. It should be obvious to the programmer what the arrow key will do next; the programmer shouldn't have to guess what the AI is thinking. Navigation should be predictable.
stephc_int13 over 2 years ago
The problem is the naming.

Even weak/incomplete AI results tend to fuel the imagination; we can easily fill the gaps with anything from fiction or our own dreams.

On the contrary, when AI works and is strong (as with chess) we stop calling it AI and start seeing it for what it is.

Stockfish is called a chess engine. Not an AI.

And the algorithm behind current chess engines is not super smart; it is mostly brute force.

The trick is to avoid doing brute force at every move, and instead to do the grunt work once on a huge computer and store everything in a compact form.
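To make the "grunt work once, look up at move time" idea concrete, here is a minimal Python sketch using a toy take-1-or-2-stones game instead of chess; the game, the function names, and the table size are purely illustrative and are not how Stockfish actually works:

```python
# Sketch: brute-force the game once offline, store the answers compactly,
# then answer at "move time" with a table lookup instead of a fresh search.
# Toy game: players alternately remove 1 or 2 stones; taking the last stone wins.
from functools import lru_cache

def moves(n):
    """Legal moves: remove 1 or 2 stones, never more than remain."""
    return [m for m in (1, 2) if m <= n]

@lru_cache(maxsize=None)
def search(n):
    """Brute force: return (best_move, True if the side to move can win)."""
    if n == 0:
        return None, False            # no stones left: side to move has lost
    for m in moves(n):
        _, opponent_wins = search(n - m)
        if not opponent_wins:
            return m, True
    return moves(n)[0], False          # every move loses; pick any

# "Grunt work once on a huge computer": precompute best moves into a table.
TABLE = {n: search(n)[0] for n in range(1, 101)}

def play(n):
    """At play time there is no search at all, just a lookup."""
    return TABLE[n]

if __name__ == "__main__":
    print(play(7))   # 1 -- leave the opponent a multiple of 3
```

The expensive search runs once to fill TABLE; choosing a move afterwards is a constant-time lookup, which is roughly the role opening books and endgame tablebases play for real engines.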
hugozap over 2 years ago
Copilot has been really useful to me, and if you know what you are doing it's a massive time and mental energy saver.

You can let it draft an initial approach to a small task and then refine it. I've found this works well, and in practical terms I end up less tired after working in collaboration with Copilot.

I don't expect it to give me the perfect answer, and it doesn't remove the need for tests (which it can also help create). But as an assistant? It rocks.
Ukv over 2 years ago
> CoPilot changes the kind of work the engineer has to do, from authoring to editing, but the amount of work is not reduced.

Disagree based on my own experience using CoPilot, but it would be interesting to think about ways to fairly test this.

> Reweighting these potential choices with the most likely given the current context and showing only the most likely is a solvable AI problem.

Not sure about other editors, but for JetBrains IDEs at least this has been a thing for a while: https://www.jetbrains.com/idea/guide/tips/enable-ml-code-completion/
vslira over 2 years ago
These AI tools are not static. They are products with well-funded teams behind them constantly iterating to make them better.

I don't dismiss them, but I do put a low weight on arguments of the form "AI is not there yet" given how far it has come in the last 5 years. By 2030, I can see a product understanding the context of a system from multiple repos and, given an intent, producing code and deployment infra that adds new features respecting constraints.

This is good. A single dev will be able to do the work of a full team, and then every small business will be able to develop software that suits their specific needs with a couple of dedicated employees.
brrrrrm over 2 years ago
There just aren't enough "full stack" machine learning folks who can quickly solve these problems.

The pure programming people have trouble scoping out which solutions to try due to lack of experience. The pure ML people code in Python notebooks and have little visibility into these issues.

Both groups could easily learn the other side and help, but it's surprisingly rare to see.
fny over 2 years ago
Copilot is a verbose savant heavily afflicted by Dunning-Kruger... but an extremely useful one.

Do you remember how Googling was a skill?

Learning to use Copilot, Stable Diffusion, or GPT is exactly the same kind of thing.

Copilot's full power (at this time) does not lie in generating reams of code. Here are a few things it excels at:

- Snippet search: Say you can't remember how to check whether a variable is empty in a bash conditional; ask.

- Template population: Say I have a series of functions I need to write in a language without good metaprogramming facilities. I write out the list of all the combinations and one example, and the AI will pick up the rest.

- Rust: If I get trapped because of some weird borrow checker issue with `fn doit(...`, I begin rewriting the function as `fn fixed_doit(...`, and 9/10 times Copilot fixes the bug.
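The template-population point is easy to picture with a toy example: you hand-write the list of combinations and one seed function, and a completion tool can usually fill in the rest of the pattern. The unit-conversion functions below are invented purely for illustration:

```python
# Template population, sketched: one seed function plus a listed pattern;
# the remaining bodies are the kind of thing a completion tool fills in.

# combinations: meters<->feet, meters<->miles

def meters_to_feet(meters: float) -> float:
    return meters * 3.28084          # written by hand: the seed example

# ...the rest follows the same pattern:

def feet_to_meters(feet: float) -> float:
    return feet / 3.28084

def meters_to_miles(meters: float) -> float:
    return meters / 1609.344

def miles_to_meters(miles: float) -> float:
    return miles * 1609.344

print(meters_to_feet(100))   # 328.084
```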
Animats over 2 years ago
The basic problem is that machine learning systems don't really understand what they're doing. So most of the time they do the right thing, and when they do a totally wrong thing, they don't notice. Works fine for advertising, not so much for self-driving.

Look at those systems that generate images or text from a prompt. Usually the results are good, and sometimes they are totally bogus.

As I point out occasionally, the big hole in AI is "common sense", defined as not screwing up big-time in the next 30 seconds. Until that gets solved, AI systems can't be trusted very far.

The control theory people are trying to fix this, so they can use ML to build control systems with safe behavior. The math is really tough. Way beyond me. See IEEE Transactions on Control Systems Technology to follow this. People are trying to make control theory math and ML math play together. Control theory usually has continuity - if it does the same right thing at 0.21 and 0.22, you can be confident it will do the same right thing at 0.215. ML systems do not have that property. Which is why we see those image recognition demos where some minor change in the background noise totally changes the result.
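A toy numerical illustration of the continuity point: a classical proportional controller is a smooth function of its input, so nearby inputs are guaranteed to give nearby outputs, while a learned classifier can flip its answer across a sharp decision boundary. Both functions below are stand-ins invented for the illustration, not real controllers or trained networks:

```python
# Smooth controller vs. sharp decision boundary, evaluated at nearby inputs.

def proportional_controller(error: float, gain: float = 2.0) -> float:
    """Smooth in its input: |f(a) - f(b)| <= gain * |a - b|."""
    return gain * error

def toy_classifier(x: float) -> str:
    """Sharp boundary at 0.213: a tiny input change flips the label,
    the kind of discontinuity the comment above is pointing at."""
    return "pedestrian" if x > 0.213 else "background"

for x in (0.21, 0.215, 0.22):
    print(x, round(proportional_controller(x), 3), toy_classifier(x))
# 0.21  0.42  background
# 0.215 0.43  pedestrian
# 0.22  0.44  pedestrian
```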
simon_000666 over 2 years ago
"Instead CoPilot attempts to solve the most general problem, have the engineer state their intent and let the AI produce the code."

And herein lies the rub: 90% of the time, the engineer's intent is wrong. That's what real pair programming will help you with and what an 'AI pair programmer' will not. It just helps you build something that's probably wrong faster.
thekoma over 2 years ago
Tangential note, but I would really like an AI writing my commit messages and pull request summaries. Is that a thing?
dqpb over 2 years ago
> AI is software based automation. That we imbue any automated process with intention, emotion, and intelligence, is a defect of our human software

Matt Calder thinks he's teaching us something about AI, but he's really just teaching us something sad about Matt Calder.
imglorp over 2 years ago
Wake me up when AI will join the customer call, negotiate features with product managers, and figure out exactly what needs to get built.
AIorNot over 2 years ago
So it sounds like they're complaining about the UI of Copilot - it doesn't seem to be a complaint about the current capabilities of AI.

Hmm: https://youtu.be/PdFB7q89_3U
BrainVirus over 2 years ago
Imagine that you are the CEO of Microsoft. Someone in your company invented an AI tool that makes an average developer much better at coding (more productive, etc.). As a reminder: your company is a software company that itself employs tens of thousands of developers. What would you do?

a) Downplay the tool. Use it internally to completely crush the competition by writing amazing software. License it for an exorbitant price to a select few partners, who will also crush their competition.

b) Hype it up as much as possible and license it to anyone who's willing to pay $10/month.

I think the rational choice is obvious, which also makes it blatantly obvious that Copilot is not seen as any kind of competitive advantage by Microsoft itself.
masswerk over 2 years ago
Ah, the AI versus advanced automation debate and the Lighthill Report (UK, 1973).

[1] Report: http://www.chilton-computing.org.uk/inf/literature/reports/lighthill_report/p001.htm

[2] Video of the public debate: https://www.youtube.com/watch?v=03p2CADwGF8&t
29athrowaway over 2 years ago
AI is going to rule this world.

You cannot serialize your brain and transmit it around the world in 5 seconds, but AI will.

What a human can do is encode thoughts into words and transmit them to people one word at a time with the hope they will understand them, but it's incredibly inefficient.

Humans have infinitely shitty and limited I/O. AI will have any I/O it wants.

AI won't need to encode their thoughts into words. They will be able to send a neural circuit directly to their peers over the network.

The handicap will be so absurd that we will be their pets.
holoduke over 2 years ago
Copilot doesn't remove the thinking process for large problems, but it helps me with dozens of small things, like adding log lines, variable names, etc.
AtlasBarfed over 2 years ago
If you have AI generating code, then you'll have to:

1) have the software requirements codified to a near-coded state already

2) have a validation/verification test suite that validates the generated software works correctly

Both of these increase the overall complexity to the point that the AI will be of minimal value-add.
kache_ over 2 years ago
We'll get there :)
obiefernandez over 2 years ago
Stopped reading once the author started saying CoPilot is not useful.