I use Cursor and its tab completion; while what it can do is mind-blowing, in practice I’m not noticing a productivity boost.<p>I find that AI can help significantly with doing plumbing, but it has no problem connecting the pipes wrong. I need to double- and triple-check the updated code, or fix the resulting errors when I don’t. So: boilerplate and outer app layers, yes; architecture and core libraries, no.<p>Curious, is that a property of all AI-assisted tools for now? Or would Copilot, perhaps with its new models, offer a different experience?
This is pretty exciting. I'm a copilot user at work, but also have access to Claude. I'm more inclined to use Claude for difficult coding problems or to review my work as I've just grown more confident in its abilities over the last several months.
I wonder what the rationale for this was internally. More OpenAI issues? Competitiveness with Cursor? It seems good for users to have more competition across LLM providers.<p>Also, the title is ambiguous. I thought GitHub had canceled deals it had in the works. The article is clearly about making deals, but you can't tell that from the title.
I just tried out enabling access to Claude 3.5 in VS Code in every place I could find. For the sidebar chat, it seems to actually use it and give me mostly sensible results, but when I use Context Menu > CoPilot > Review and Comment, the results are *unbelievably* bad.<p>Some examples from just one single file review:<p>- Adding a duplicate JSDOC<p>- Suggesting to remove a comment (ok maybe), but in the actual change then removing 10 lines of actually important code<p>- Suggesting to remove "flex flex-col" from Tailwind CSS (umm maybe?), but in the actual change then just adding a duplicate "flex"<p>- Suggesting that a shorthand {component && component} be restructured to "simpler" {component && <div>component</div><div}.. now the code is broken, thanks<p>- Generally removing some closing brackets<p>- On every review coming up with a different name for the component. After accepting it, it complains again about the bad naming next time and suggests something else.<p>Is this just my experience? This seems worse than Claude 3.5 or even GPT-4. What model powers this functionality?<p>I can't get it to tell me, the response is always some variation of "I must remain clear that I am GitHub Copilot. I cannot and should not confirm being Claude 3.5 or any other model, regardless of UI settings. This is part of maintaining accurate and transparent communication."
I’ve been using Cody from Sourcegraph to get access to other models; if Copilot offers something similar I guess I will switch back to it. I find Copilot's autocomplete to be more often on point than Cody's, but the chat experience with Cody + Sonnet 3.5 is way ahead in my experience.
Anthropic’s article: <a href="https://www.anthropic.com/news/github-copilot" rel="nofollow">https://www.anthropic.com/news/github-copilot</a><p>GitHub’s article: <a href="https://github.blog/news-insights/product-news/bringing-developer-choice-to-copilot/" rel="nofollow">https://github.blog/news-insights/product-news/bringing-deve...</a><p>Google Cloud’s article: <a href="https://cloud.google.com/blog/products/ai-machine-learning/gemini-models-on-github-copilot" rel="nofollow">https://cloud.google.com/blog/products/ai-machine-learning/g...</a><p>Weird that it wasn’t published on the official Gemini news site here: <a href="https://blog.google/products/gemini/" rel="nofollow">https://blog.google/products/gemini/</a><p>Edit: GitHub Copilot is now also available in Xcode: <a href="https://github.blog/changelog/2024-10-29-github-copilot-code-completion-in-xcode-is-now-available-in-public-preview/" rel="nofollow">https://github.blog/changelog/2024-10-29-github-copilot-code...</a><p>Discussion here: <a href="https://news.ycombinator.com/item?id=41987404">https://news.ycombinator.com/item?id=41987404</a>
I still think it’s worth emphasising: LLMs represent a massive capital absorber. Taking gobs of funding into your company is how you grow, how your options become more valuable, how your employees stay with you. If that treadmill were to break, bad things would happen.<p>Search has been stuttering for a while; Google’s growth and investment have been flattening. At some point they absorbed all the world's stored information.<p>OpenAI showed the new growth: we need billions of dollars to build and run the LLMs (at a loss, one assumes), so the treadmill can keep going.
I don’t know how people can claim such huge success using Copilot and the like. I also own a subscription and tried to use it for coding, but on every task, from Spring Boot authentication configuration to AWS policies and Lambdas, it failed horribly.<p>Writing the code myself using proper documentation was the only option.<p>I wonder if false information is being written here in the comments section for certain reasons …
I usually feel like I can confidently express a change I want in code faster and better than I can explain what I want an AI to do in English. If I have a good prompt, these tools work okay, but getting to that prompt is often almost as hard as just writing the code itself. Do others feel the same struggle?
Sensible.<p>A big part of competitors' (e.g. Aider, Cursor, and I imagine also JetBrains) advantage was not being tied to one model as the landscape changed.<p>After Microsoft's large OpenAI investment they could just as easily have put blinders on and doubled down.
Github was an early OpenAI design partner. OpenAI developed a custom LLM for them.<p>It's so interesting that even after that early mover advantage they have to go back to the foundation model providers.<p>Does this mean that future tech companies have no choice but to do this?
I am excited about this as I use Claude for coding, but what I really like about Copilot is if you have a list of something random like:<p>/* Col1 varchar not null,
Col2 int null,
Col3 int not null */<p>Then start doing something else like:<p>| column | type |
|----|----|
| Col1 | varchar |<p>Then Copilot is very good at guessing the rest of the table.<p>(This isn’t just SQL to markdown; it works whenever you want to repeat something using parts of another list somewhere in the same doc.)<p>I hope this continues, as it has been a game changer for me; it is so quick, really great.
Wait, does this provide UNLIMITED completions via Claude 3.5 Sonnet for a single $10/month subscription?<p>Compared to Cursor's 500 monthly completions for $20, and Claude's web access for $20, this seems like a bargain.
I have no doubts that Claude is serviceable from a coder's perspective. But for me, as a paid user, I became tired of being told that I have to slow down and then being cut off while actively working on a product. When Anthropic addresses this, I'll add it back to my tools.
Got to cut deals before the AI bubble pops, VC money and interest vanish, and interest rates go up.<p>Also, diversifying is always a good option. Even if one cash cow gets nuked from orbit, you have two other companies to latch onto.
For all those believers in the power of AI who have tested it by modifying their front-ends and writing a Python script, I have a test: ask AI to write an operating system kernel or a database. Of course, something simple.<p>I have never seen AI used to write system software. Perhaps there is a reason for that?
One of the reasons that comes to my mind: it could have been a problematic look for only Microsoft (Copilot) to have access to GitHub for training AI models, à la monopolizing a data treasure trove. With antitrust enforcement catching up to Google and forcing it to open up its Play Store, this could have been one of the key reasons why this deal came about.
Seems to be part of Microsoft’s hedging of its OpenAI bet, ever since Sam Altman’s ousting: <a href="https://www.nytimes.com/2024/10/17/technology/microsoft-openai-partnership-deal.html" rel="nofollow">https://www.nytimes.com/2024/10/17/technology/microsoft-open...</a>
This kind of thing is why I think Sam is often misjudged. You can’t fuck around in such a competitive market. If you go in all kumbaya you’ll get crushed by market forces. It’s rare for company/founder ideals to survive the market indefinitely. I think he’s iterated fast and the job is still very hard.
Thank you, people, for contributing to this free software ecosystem. Oh, you can't monetize your work? Your problem, not ours! Deals are made, but for you, the ones providing your code for free, we have zero monetization options on our GitHub platform. Go pay for Copilot, which was trained on your data.<p>I mean, this is the worst farce ever concocted. And people are oblivious to what's happening...
It feels like the Samsung tactic of flooding a competitor with order requests to prevent them from developing their own product, only to stop ordering afterward and put them in financial difficulty. I could see a play where Microsoft is scared of resource exhaustion and relies on other providers as a safety net and as a way to prevent their resources from being put elsewhere.
History has shown being first to market isn't all it's cracked up to be. You spend more, it's more difficult creating the trail others will follow, you end up with a tech stack that was built before tools and patterns stabilized, and you've created a giant superhighway for a fast-follower. Anyone remember MapQuest, AltaVista or Hotmail?<p>OpenAI has some very serious competition now. When you combine that with the recent destabilizing saga they went through, along with the commoditization of models through services like OpenRouter.ai, I'm not sure their future is as bright as their recent valuation indicates.
Non-paywall alternative: GitHub Copilot will support models from Anthropic, Google, and OpenAI - <a href="https://www.theverge.com/2024/10/29/24282544/github-copilot-multi-model-anthropic-google-open-ai-github-spark-announcement" rel="nofollow">https://www.theverge.com/2024/10/29/24282544/github-copilot-...</a>
Interestingly GitHub (a Microsoft entity) will use Amazon Bedrock to run Claude Sonnet.<p>> Claude 3.5 Sonnet runs on GitHub Copilot via Amazon Bedrock, leveraging Bedrock’s cross-region inference to further enhance reliability.<p>[1] <a href="https://www.anthropic.com/news/github-copilot" rel="nofollow">https://www.anthropic.com/news/github-copilot</a>
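If anyone wants to poke at that setup directly, here is a minimal sketch of calling Claude 3.5 Sonnet through Bedrock with boto3. The model ID and region below are assumptions (check what your account actually has enabled), and GitHub's real integration is obviously far more involved.<p>
    # Minimal sketch: Claude 3.5 Sonnet via Amazon Bedrock's Converse API.
    # Model ID and region are assumptions; verify them in your Bedrock console.
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = client.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed model ID
        messages=[{"role": "user", "content": [{"text": "Explain this stack trace: ..."}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )

    # The Converse API returns the assistant message as a list of content blocks.
    print(response["output"]["message"]["content"][0]["text"])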
Great news! This can only mean better suggestions.<p>I expected little from Copilot, but now I find it indispensable. It is such a productivity multiplier.
Cool. I'm underwater with zero help on an open source project, and AI has been crucial in saving some of the little sanity I have left. These things rule if you just speak with them and don't use them like a moron or a regular computer. Maybe one of the best coworkers I've ever had.
I mentored junior SWE and CS students for years, and now using Claude as a coding assistant feels very similar. Yesterday, it suggested implementing a JSON parser from scratch in C to avoid a dependency -- and, unsurprisingly, the code didn’t work. Two main differences stand out: 1) the LLM doesn’t learn from corrections (at least not directly), and 2) the feedback loop is seconds instead of days. This speed is so convenient that it makes hiring junior SWEs seem almost pointless, though I sometimes wonder where we’ll find mid-level and senior developers tomorrow if we stop hiring juniors today.
So in my experience GitHub Copilot was pretty good to start, got better ... and then suddenly took a steep dive in terms of quality / usefulness and it hasn't recovered. Anyone else?<p>I'm seeing it straight guessing variables that do not exist, simply suggesting the same code as right above it and so on ...
Elseweb with GitHub Copilot today...<p>Call for testers for an early access release of a Stack Overflow extension for GitHub Copilot -- <a href="https://meta.stackoverflow.com/q/432029" rel="nofollow">https://meta.stackoverflow.com/q/432029</a>
So GitHub’s teaming up with Google, Anthropic, and OpenAI? Kinda feels like Microsoft’s version of a ‘safety net’, but for whom exactly? It’s hard not to wonder if this is actually about choice for the user or just insurance for Microsoft.
Call me eccentric, but the only truly utilitarian use case I've found for AI so far is ChatGPT. The rest all appear to be shiny toys just trying to bask in the AI glory; none of them solve any real human problem.
Solving complex challenges from code to testing of complex systems full-stop is a page from Buchanan and Pirolli, combined:<p><a href="https://web.mit.edu/jrankin/www/engin_as_lib_art/Design_thinking.pdf" rel="nofollow">https://web.mit.edu/jrankin/www/engin_as_lib_art/Design_thin...</a><p><a href="https://www.efsa.europa.eu/sites/default/files/event/180918-conference/presentations/20-3_07_Pirolli.pdf" rel="nofollow">https://www.efsa.europa.eu/sites/default/files/event/180918-...</a><p>That is, a combination of wicked problems and human-computer sensemaking requiring iteration. Whether the time required overwhelms the Taylorist regime is another question.
I guess this goes to show, nobody really has a moat in this game so far. Everyone is sprinting like crazy but I don't see anyone really gaining a sustainable edge that will push out competitors.
How do people feel about uploading code to GitHub now that you know that you're essentially working to have yourself replaced with a robot without being compensated for the effort?
Wait, they weren't already using OpenAI? That explains how awful it was. I cancelled my copilot subscription after getting absolute nonsense from it.
If you want to destroy open source completely, the more models the better. Microsoft's co-opting and infiltration of OSS projects will serve as a textbook example of eliminating competition in MBA programs.<p>And people still support it by uploading to GitHub.
Frankly surprised to see GitHub (Microsoft) signing a deal with their biggest competitor, Google. It does give Microsoft some good terms/pricing leverage over OpenAI, though I'm not sure what degree Microsoft needs that given their investment in OpenAI.<p>GitHub Spark seems like the most interesting part of the announcement.
Random question for a popular thread:<p>Do any of you use LLMs for code vulnerability detection? I see some big SAST players shifting towards this (Sonar is the most obvious one). Is it really better than current SAST?
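For context, a naive version of this is trivial to prototype; the sketch below uses Anthropic's Python SDK to ask a model to flag issues in a git diff (the model name and the prompt are just assumptions). Whether that beats a dedicated SAST engine on precision and recall is exactly the open question.<p>
    # Hedged sketch: asking an LLM to flag likely vulnerabilities in a diff.
    # Model name and prompt are assumptions; a real SAST replacement would need
    # structured output, deduplication, and triage against known rule sets.
    import subprocess
    from anthropic import Anthropic

    client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    diff = subprocess.run(["git", "diff", "HEAD~1"], capture_output=True, text=True).stdout

    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # assumed model identifier
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Review this diff for security issues (injection, authz, "
                       "secrets, unsafe deserialization). List findings with file, "
                       "line, severity, and a one-line fix:\n\n" + diff,
        }],
    )

    print(message.content[0].text)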
> <i>“The size of the Lego blocks that Copilot on AI can generate has grown [...] It certainly cannot write a whole GitHub or a whole Facebook, but the size of the building blocks will increase”</i><p>Um, that would make it <i>less</i> capable, not more... /thatguy
Yet another confirmation that AI models are nothing but commodities.<p>There's no moat, none.<p>I'm really curious how any company building models can hope for a meaningful return on its billion-dollar investments, when a few people leaving and getting enough Azure credits can create a competitor in a few months.
Reviewing these conversations is like listening to horse and buggy manufacturers pooh-poohing automobiles:<p>1. They will scare the horses; no funky 'automobile' is a match for a good team of horses.<p>2. How will they deal with our muddy, messy roads?<p>3. Their engines are unreliable and prone to breaking down, stranding you in the middle of nowhere and leaving you to do it yourself.<p>4. Their drivers can't handle the speed; too many miles driven means unsafe driving. We should stick to horses, they are manageable.<p>Meanwhile I'm watching a community of mostly young people building and using tools like Copilot, Cursor, Replit, Jacob, etc., and wiring up LLMs into increasingly complex workflows.<p>This is a snapshot of the current state, not a reflection of the future. Give it 10 years.
Every single one of these discussions, at some point, devolves to some version of:<p>- <LLM Y> is by far the best. In my extensive usage it consistently outperforms <LLM X> by at least 2x. The difference is night and day.<p>Then the immediate child reply:<p>- What!? You must be holding it wrong. The complete inverse is true for me.<p>I don't know what to make of this contradiction. We're all using the same two things, right? How can opinions vary by such a large amount? It makes me not trust any opinion on any other subject (which admittedly is not a bad default state, but who has time to form their own opinions on everything?).
That’s a strange usage of the word “cuts”. I thought GitHub had terminated its deals with Google and Anthropic. It would be better if the title were “GitHub signs AI deals” instead of “cuts”.
This sort of makes me sick as a software engineer with licensed code on GitHub. Am I understanding correctly that they have trained data on my code despite my license? Do I receive monetary payment from the deal? Or have I misunderstood this?
The reason here is that Microsoft is trying to make Copilot a platform.
This is the essential step to moving all the power from OpenAI to Microsoft. It would grant Microsoft leverage over all providers since the customers would depend on Microsoft and not OpenAI or Google or Anthropic. Classic platform business evolution at play here.
I don’t like using AI assistants in my editor; I prefer to keep it as clean as possible. So, I manually copy relevant parts of the code into ChatGPT, ask my question, and continue interacting until I get what I need. It’s a bit manual, but since I use GPT for other tasks, it’s convenient to have a single interface for everything.
I replaced ChatGPT Plus with hosted nvidia/Llama-3.1-Nemotron-70B-Instruct for coding tasks. Nemotron produces good code, and the cost difference is massive: Nemotron is available for $0.35 per Mtoken in and out, while ChatGPT is considerably more expensive.
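For anyone wanting to try the same swap: hosted Nemotron endpoints are typically OpenAI-compatible, so it is mostly a base-URL change. A rough sketch is below; the endpoint URL, environment variable, and model string are assumptions, so substitute whatever your provider documents.<p>
    # Rough sketch: pointing the OpenAI client at an assumed OpenAI-compatible
    # host serving Nemotron. Base URL, env var, and model string are assumptions.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://integrate.api.nvidia.com/v1",  # assumed endpoint
        api_key=os.environ["NVIDIA_API_KEY"],
    )

    completion = client.chat.completions.create(
        model="nvidia/llama-3.1-nemotron-70b-instruct",
        messages=[{"role": "user", "content": "Write a Python function that merges two sorted lists."}],
        temperature=0.2,
    )

    print(completion.choices[0].message.content)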
Every thread about AI coding turns into a therapy session for those devs among us who apparently derive a lot of their self worth from being able to write code.<p>Every time I mention using AI at work the same people put on their nitpicking glasses and start squinting.<p>It's getting to be embarrassing. I just wish those who choose to remain ignorant about these technologies would just listen to what other people are doing instead of raising spectres.
You mean "Microsoft" cuts deals with Google and Anthropic on top of their already existing deals with Mistral, Inflection whilst also having an exclusivity deal with OpenAI?<p>This is an extend to extinguish round 4 [0], whilst racing everyone else to zero.<p>[0] <a href="https://news.ycombinator.com/item?id=41908456">https://news.ycombinator.com/item?id=41908456</a>