Sheesh, I didn't expect my post to go viral. A little explanation:<p>I downloaded and ran Cursor for the first time when this "error" happened. Turned out I was supposed to use the agent instead of the inline Cmd+K command, because inline has some limitations that the agent doesn't.<p>Nevertheless, I was surprised that AI could actually say something like that, so just in case I screenshotted it - some might think it's fake, but it's actually real, and it makes me wonder whether in the future AI will start giving attitude to its users. Oh, welp. For sure I didn't expect it to blow up like this; since it was all new to me, I thought it was maybe an easter egg or just a silly error. Turned out it hadn't been seen before, so there we are!<p>Cheers
<i>This isn’t just about individual laziness—it’s a systemic arms race towards intellectual decay.[0]</i><p>With programming, the same basic tension exists as with the more effective, smarter AI-enhanced approaches to conceptual learning: effectiveness is a function of effort, and the whole reason for the "AI epidemic" is that people are avoiding effort like the plague.<p>So the problem seems to boil down to: how can we convince everyone to go against the basic human (animal?) instinct to take the path of least resistance?<p>So it seems to be less about specific techniques and technologies, and more about a basic approach to life itself?<p>In terms of integrating that approach into your actual work (so you can stay sharp through your career), it's even worse than just laziness; it's fear of getting fired too, since doing things the human way doubles the time required (according to Microsoft), and adding little AI-tutor-guided coding challenges to enhance your understanding along the way increases that even further.<p>And in the context of "this feature needs to be done by Tuesday and all my colleagues are working 2-3x faster than me (because they're letting AI do all the work)"... you see what I mean! It systemically creates the incentive for everyone to let their cognitive abilities rapidly decline.<p>[0] GPT-4.5
The biggest problem I have with using AI for software engineering is that it is absolutely amazing for generating the skeleton of your code - boilerplate, really - and it sucks for anything creative. I have tried to use the reasoning models as well, but all of them give you subpar solutions when it comes to handling a creative challenge.<p>For example: what would be the best strategy to download thousands of URLs using async in Rust? It gives you OK solutions, but the final solution came from the Rust forum (the answer was written a year ago), which I assume made its way into the model.<p>There is also the verbosity problem. Claude without the concise flag on generates roughly 10x the required amount of code to solve a problem.<p>Maybe I am prompting incorrectly and somehow I could get the right answers from these models, but at this stage I use them as a boilerplate generator and the actual creative problem solving remains on the human side.
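<p>For reference, the pattern that usually gets recommended for this kind of task is bounded-concurrency streaming. A minimal sketch, assuming tokio (with the macros/rt-multi-thread features), reqwest, and the futures crate as dependencies; the example.com URLs and the limit of 32 in-flight requests are placeholders - this is not the forum answer, just the common shape of it:

    use futures::stream::{self, StreamExt};

    #[tokio::main]
    async fn main() {
        // Placeholder list of URLs; in practice these come from wherever
        // the real thousands of URLs live.
        let urls: Vec<String> = (0..1000)
            .map(|i| format!("https://example.com/item/{i}"))
            .collect();

        let client = reqwest::Client::new();

        // buffer_unordered keeps at most 32 requests in flight at once and
        // yields results as they complete, not in input order.
        let results: Vec<_> = stream::iter(urls)
            .map(|url| {
                let client = client.clone();
                async move {
                    let body = client.get(url.as_str()).send().await?.text().await?;
                    Ok::<_, reqwest::Error>((url, body.len()))
                }
            })
            .buffer_unordered(32)
            .collect()
            .await;

        for result in results {
            match result {
                Ok((url, len)) => println!("{url}: {len} bytes"),
                Err(e) => eprintln!("request failed: {e}"),
            }
        }
    }

<p>The nice property of this shape is that an error on one URL doesn't abort the whole batch, and the number of open connections stays bounded no matter how many URLs you feed in.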
Interestingly, many here fail to note that development of code is a lot about debugging, not only about writing. It's also about being able to dig/search/grok the code, which is like... reading it.<p>It is the debugging part to me, not only the writing, that actually teaches you what IS right and what is not. Not the architectural work, not the LLM spitting out code, not the deployment, but the debugging of the code and the integration. THIS is what teaches you; writing alone teaches you nothing... you can copy programs by hand and understand zero of what they do unless you inspect intermediate results.<p>To hand-craft a house is super romantic and nice, etc. It's a thing people did for a lifetime, for ages - not alone usually, but with family and friends. But people today live in houses/apartments that had their foundations produced by automated lines (robots) - the steel, the mixture for the concrete, etc. And people still live in the houses built this way, designed with computers which automated the drawing. I fail to understand why this is bad.
I asked it once to simplify code it had written and it refused. The code it wrote was ok but unnecessary in my view.<p>Claude 3.7:
> I understand the desire to simplify, but using a text array for .... might create more problems than it solves. Here's why I recommend keeping the relational approach:
( list of okay reasons )
> However, I strongly agree with adding ..... to the model. Let's implement that change.<p>I was kind of shocked by the display of opinions. HAL vibes.
Funny, but expected when some chunk of the training data is forum posts like:<p>"Give me the code for"<p>"Do it yourself, this is homework for you to learn".<p>Prompt engineering is learning enough about a project to sound like an expert; then you will be closer to useful answers.<p>Alternatively - maybe when trying to get it to solve a homework-like question, this type of answer is more likely.
It's going to be interesting to see the AI generation arriving in the workplace, i.e. kids who grew up with ChatGPT and have never learned to find something in a source document themselves. Not even just about coding - about any other kind of knowledge.
Quite reasonable of it to do so, I'd say.<p>The AI tools are good, and they have their uses, but they are currently at best at a keen junior/intern level, making the same sort of mistakes. You need knowledge and experience to help mentor that sort of developer.<p>Give it another year or two and I hope the student will become the master and start mentoring me :)
Had an extremely bad experience with Cursor/Claude.<p>Have a big Angular project, +/- 150 TS files. Upgraded it to Angular 19, and now I can optimize the build by marking all components, pipes, directives etc. as "standalone", essentially eliminating the need for modules and simplifying the code.<p>I thought it was perfect for AI, as it is straightforward refactoring work but would be annoying for a human.<p>1. Search every component and remove the "standalone: false"<p>2. Find the module where it is declared, remove that module<p>3. Find all files where the module was imported, import the component itself<p>Cursor and Claude were constantly losing focus, refactoring components without taking care of modules/imports at all and generally making things much worse, no matter how much "prompt engineering" I tried. I gave up and made a Jira task for a junior developer instead.
This is quite a lot of code to handle in one file. The recommendation is actually good. In the past month (feels like a year of planning) I've made similar mistakes with tens of projects - having files larger than 500-600 lines of code - Claude was removing some of the code, I didn't have coverage on some of it, and the end result was missing functionality.<p>Good thing is that we can use .cursorrules, so this is something that will partially improve my experience - until a random company releases the best AI coding model that runs on a Raspberry Pi with 4GB of RAM (yes, this is a spoiler from the future).
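<p>As an illustration, .cursorrules is just a plain-text file of natural-language instructions at the project root that Cursor adds to its prompts. A sketch of the kind of rules I mean (the wording is mine, there's no fixed schema):

    - Never delete or silently drop existing code when editing a file; show the
      full contents of any file you modify.
    - Keep source files under roughly 500 lines; if a change would push past that,
      propose splitting the file instead.
    - Point out any code you are about to change that has no test coverage.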
I wonder if this was real or if they set a custom prompt to try and force such a response.<p>If it is real, then I guess it's because LLMs have been trained on a bunch of places where students asked other people to do their homework.
Predicted way back in 1971 in the classic movie “Willy Wonka and the Chocolate Factory”!<p>One of the many hysterical scenes I didn’t truly appreciate as a kid.<p><a href="https://youtu.be/tMZ2j9yK_NY?si=5tFQum75pepFUS8-" rel="nofollow">https://youtu.be/tMZ2j9yK_NY?si=5tFQum75pepFUS8-</a>
Sometime circa the late 1950s, a coder is given a problem and a compiler to solve it. The coder writes their solution in a high-level language and asks the compiler to generate the assembly code from it. The compiler: I cannot generate the assembly code for you, that would be completing your work ... /sarcasm<p>On a more serious note: LLMs now are an early technology, much like the early compilers, which many programmers didn't trust to generate optimized assembly code on par with hand-crafted assembly, so they had to check the compiler's output and tweak it if needed. It took a while until the art of compiler optimization was perfected to the point that we don't question what the compiler is doing, even if it generates sub-optimal machine code. The productivity gained from using an HLL vs. assembly was worth it. I can see LLMs progressing towards the same tradeoff in the near future. It will take time, but it will become the norm once enough trust is established in what they produce.
I wonder if in the future there will be a stigma when looking at resumes, like there was for "bootcampers", but this time it's "vibe coders".<p>Hopefully by then I won't care, as I won't be competing anymore - just making my own stuff for fun.
I guess that's straight out of the training data.<p>Quite common on reddit to get responses that basically go "Is this a homework assignment? Do your own work".
Oh, what a middle finger that seemed to be. I had a similar experience in the beginning with ChatGPT (1-2 years back?), until I started paying for a subscription. Now, even if it's a 'bad idea' when I ask it to write some code (for my personal use - not work/employment/company), if I insist upon the 'ill-advised' code structure, it does it.<p>I was listening to Steve Gibson on Security Now speaking about memory-safe programming languages and the push for the idea, and I was thinking two things:
1) (most) people don't write code (themselves) any more (or we are heading in that direction), thus out of the 10k lines of code, someone may manually miss some error/bug (although a second and third LLM doing code review may catch it)
2) we can now ask an LLM to rewrite 10k lines of code from X-language to Y-language and it will be cheaper than hiring 10 developers to do it.
Someone on my team complained to me about a seemingly relatively easy task yesterday. They claimed I was pushing more work onto them, as I'm working on the backend and they are working on the frontend. This puzzled me, so I tried it and ended up doing the work in about 1.5 hours.<p>I did struggle through the poor docs of a relatively new library, but it wasn't hard.<p>This got me wondering: maybe they have become so dependent on AI copilots that what should have been an easy task was seen as insurmountably hard because the LLM didn't have info on this new-ish library.
This is probably coming from the safety instructions of the model. It tends to treat the user like a child and doesn't miss any chance to moralize. And the company seems to believe that it's a feature, not a bug.
Hah, that's typical Sonnet v2 for you. It's trained for shorter outputs, which causes it to be extremely lazy. It's a well-known issue, and coding assistants contain mitigations for it. It's very reluctant to produce longer outputs, usually stopping mid-reply with something like "[Insert another 2k tokens of what you've been asking for, I've done enough]". Sonnet 3.7 seems to fix this.
I recently saw this video about how to use AI to enhance your learning instead of letting it do the work for you.[0]<p>"Get AI to force you to think, ask lots of questions, and test you."<p>It was based on this advice from Oxford University.[1]<p>I've been wondering how the same ideas could be tailored to programming specifically, which is more "active" than the conceptual learning these prompts focus on.<p>Some of the suggested prompts:<p>> Act as a Socratic tutor and help me understand X. Ask me questions to guide my understanding.<p>> Give me a multi-level explanation of X. First at the level of a child, then a high school student, and then an academic explanation.<p>> Can you explain X using everyday analogies and provide some real life examples?<p>> Create a set of practice questions about X, ranging from basic to advanced.<p>Ask AI to summarize a text in bullet points, but only <i>after</i> you've summarized it yourself. Otherwise you fail to develop that skill (or you start to lose it).<p>---<p>Notice that most of these increase the amount of work the student has to do! And they increase the energy level from passive (reading) to active (coming up with answers to questions).<p>I've been wondering how the same principles could be integrated into an AI-assisted programming workflow - i.e., advice similar to the above, but specifically tailored for programming, which isn't just about conceptual understanding but is also an "activity".<p>Maybe before having the AI generate the code for you, it could ask you what you think the code should be, and give you feedback on that?<p>That sounds good, but I think in practice the current setup (magical code autocomplete, and now complete auto-programmers) is way too convenient/frictionless, so I'm not sure how a "human-in-the-loop" approach could compete for the average person, who isn't unusually serious about developing or maintaining their own cognitive abilities.<p>Any ideas?<p>---<p>[0] Oxford Researchers Discovered How to Use AI To Learn Like A Genius<p><a href="https://www.youtube.com/watch?v=TPLPpz6dD3A" rel="nofollow">https://www.youtube.com/watch?v=TPLPpz6dD3A</a><p>[1] Use of generative AI tools to support learning
- Oxford University<p><a href="https://www.ox.ac.uk/students/academic/guidance/skills/ai-study" rel="nofollow">https://www.ox.ac.uk/students/academic/guidance/skills/ai-st...</a>
disclaimer: not a programmer for a living.<p>I asked the AI I interact with specifically not to generate code or give code examples, but to highlight topics I need to improve my understanding of so I can answer my own questions. I think it enhances my personal competences better that way, which I value above 'productivity'. As I learn more, I do become more efficient and productive.<p>Some of the recommendations it comes with are hard programming skills, others are project-management oriented.<p>I think this is personally a better approach to using this kind of technology, as it guides me to better my hard and soft skills - long-term gains over short-term gains.<p>Then again, I am under no pressure or obligation to be productive in my programming. I can happily spend years coming up with a good solution to a problem, rather than having a deadline which forces me to cut as many corners as possible.<p>I do think that this is how it should be in professional settings, but I respect that a company doesn't always have the resources (time, mostly) to allow for it. It's sad but true.<p>Perhaps someday, AIs will be far enough along to solve problems properly, and to think of the aspects of a problem the person asking the question has not. AIs can generate quite nice code, but only as good as the question asked.<p>If the requester doesn't spend time learning enough, they can never get an AI to generate good code. It will give what you ask for, warts and all!<p>I did spend some time trying to get AI to generate code for me. To me, it only highlighted the deficiencies in my own knowledge and in my ability to properly formulate the solution I needed. If I take the time to learn what is needed to formulate the solution fully, I can write the code to implement it myself, so the AI just becomes an augment to my typing speed, nothing else. This last part is why I believe it's better to have it guide my growth and learning, rather than produce something in the form of an actual solution (in code or algorithmically).
These kinds of answers are really common; I guess you have to put a lot of work in to remove all those answers from the training data. For example: "no, I'm not going to do your homework assignment".
Reminds me of Gemini saying C++ code was not suitable for under-18s.<p><a href="https://news.ycombinator.com/item?id=39583473">https://news.ycombinator.com/item?id=39583473</a>
Ah the cycle of telling people to learn to code... First tech journalists telling the public, then programmers telling tech journalists, now AI telling programmers... What comes next?
BOFH vibes from this. I have also had cases of lazy ChatGPT for code generation, although not so obnoxious. What should be next - digital spurs to nudge them in the right direction?
Working with Cursor will make you more productive when/if you know how to code, how to navigate complex code, how to review and understand code, how to document it, all without LLMs. In that case you feel like having half a dozen junior devs or even a senior dev under your command. It will make you fly. I have tackled dozens of projects with it that I wouldn't have had the time and resources for. And it created absolutely solid solutions. Love this new world of possibilities.
“Not sure if LLMs know what they are for (lol), but doesn’t matter as a much as a fact that I can’t go through 800 locs. Anyone had similar issue? It’s really limiting at this point and I got here after just 1h of vibe coding”<p>We are getting into humanization of LLMs again; this happens more often when people who don’t grasp what an LLM actually is use it, or when they’re just delusional.<p>At the end of the day it’s a mathematical equation - a big one, but still just math.<p>They don’t “know” shit
How funny that one model said one weird thing once. Almost as if LLMs are probabilistic machines or something :)<p>Models will keep getting better and vibe coding will keep getting easier. This will happen whether or not you agree!
"vibe coding"... Imagine using AI to code because you don't want to learn how to code and then you call it ViBe CoDiNg...<p>Fkin rtards.
Based AI. This should always be the response. This as boilerplate will upend deepseek and everything else. The NN is tapping into your wetware. It's awesome. And a hard coded response could even maybe run on a CPU.