> Copilot excels at low-to-medium complexity tasks in well-tested codebases, from adding features and fixing bugs to extending tests, refactoring, and improving documentation.<p>Bounds bounds bounds bounds. The important part for humans seems to be maintaining boundaries for AI. If your well-tested codebase has its tests built through AI, it's probably not going to work.<p>I think it's somewhat telling that they can't share numbers for how they're using it internally. I want to know that Microsoft, the company famous for dog-fooding, is using this day in and day out, with success. There's real stuff in there, and my brain has an insanely hard time separating the trillion dollars of hype from the usefulness.
I tried doing some vibe coding on a greenfield project (using gemini 2.5 pro + cline). On one hand - super impressive, a major productivity booster (even compared to using a non-integrated LLM chat interface).<p>I noticed that LLMs need a very heavy hand in guiding the architecture, otherwise they'll add architectural tech debt. One easy example is that I noticed them breaking abstractions (putting things where they don't belong). Unfortunately, there's not much self-retrospection on these aspects if you ask about the quality of the code or whether there are better ways of doing it. Of course, if you notice that something is in the wrong spot and prompt accordingly, they'll fix it immediately.<p>I also ended up blowing through $15 of LLM tokens in a single evening. (Previously, as a heavy LLM user including coding tasks, I was averaging maybe $20 a month.)
I wish they optimized things before adding more crap that will slow things down even more. The only thing that's fast with Copilot is the autocomplete; it sometimes takes several minutes to make edits on a 100 line file regardless of the model I pick (some are faster than others). If these models had a close to 100% hit rate this would be somewhat fine, but going back and forth with something that takes this long is not productive. It's literally faster to open claude/chatgpt in a new tab, paste the question and code there, and paste the result back into vscode than to use their ask/edit/agent tools.<p>I cancelled my Copilot subscription last week, and when it expires in two weeks I'll most likely shift to local models for autocomplete/simple stuff.
Some example PRs if people want to look:<p><a href="https://github.com/dotnet/runtime/pull/115733">https://github.com/dotnet/runtime/pull/115733</a>
<a href="https://github.com/dotnet/runtime/pull/115732">https://github.com/dotnet/runtime/pull/115732</a>
<a href="https://github.com/dotnet/runtime/pull/115762">https://github.com/dotnet/runtime/pull/115762</a>
Major scam alert: they are training on your code in private repos if you use this.<p>You can tell because they advertise "Pro" and "Pro+", but then the FAQ reads:<p>> Does GitHub use Copilot Business or Enterprise data to train GitHub's model?
> No. GitHub does not use either Copilot Business or Enterprise data to train its models.<p>In other words, even paid individual plans are getting their code harvested for training.
I’ve been trying to use Copilot for a few days to get some help writing against code stored on GitHub.<p>Copilot has been pretty useless. It couldn’t maintain context for more than two exchanges.<p>Copilot: here’s some C code to do that<p>Me: convert that to $OTHER_LANGUAGE<p>Copilot: what code would you like me to convert?<p>Me: the code you just generated<p>Copilot: if you can upload a file or share a link to the code, I can help you translate it …<p>It points me in a direction that’s a minimum of 15 degrees off true north (“true north” being the goal for which I am coding), usually closer to 90 degrees. When I ask for code, it hallucinates over half of the API calls.
I played around with it quite a bit. It is both impressive and scary. Most importantly, it tends to indiscriminately use dependencies from random tiny repos, and often enough not the correct ones, even for major projects. Buyer beware.
"Drowning in technical debt?"<p>Stop fighting and sink!<p>But rest assured that with Github Copilot Coding Agent, your codebase will develop larger and larger volumes of new, exciting, underexplored technical debt that you can't be blamed for, and your colleagues will follow you into the murky depths soon.
My buddy is at GH working on an adjacent project & he hasn't stopped talking about this for the last few days. I think I've been reminded to 'make sure I tune into the keynote on Monday' at least 8 times now.<p>I gave up trying to watch the stream after the third authentication timeout, but if I'd known it was this I'd maybe have tried a fourth time.
I love Copilot in VSCode. I have it set to use Claude most of the time, but it lets you pick your favorite LLM for it to use. I just open the files I'm going to refactor, type into the chat window what I want done, and click 'accept' on every code change it recommends in its answer, causing VSCode to auto-merge the changes into my code. Couldn't possibly be simpler. Then I scrutinize and test. If anything went wrong I just use GitLens to roll back the change, but that's very rare.<p>Especially now that Copilot supports MCP I can plug in my own custom "Tools" (i.e. function calling done by the AI agent), and I have everything I need. Never even bothered trying Cursor or Windsurf, which I'm sure are great too, but _mainly_ because they're just forks of VSCode as the IDE.
The biggest change Copilot has brought about for me so far is having me replace my VSCode with VSCodium, to be sure it doesn't sneak in any uploading of my code to a third party without my knowing.<p>I'm all for new tech getting introduced and made useful, but let's make it all opt-in, shall we?
This is quite alarming:
<a href="https://www.cursor.com/security" rel="nofollow">https://www.cursor.com/security</a><p>And this one too:
<a href="https://docs.github.com/en/site-policy/privacy-policies/github-general-privacy-statement" rel="nofollow">https://docs.github.com/en/site-policy/privacy-policies/gith...</a>
I'm building RSOLV (<a href="https://rsolv.dev" rel="nofollow">https://rsolv.dev</a>) as an alternative approach to GitHub's Copilot agent.<p>Our key differentiator is cross-platform support - we work with Jira, Linear, GitHub, and GitLab - rather than limiting teams to GitHub's ecosystem.<p>GitHub's approach is technically impressive, but our experience suggests organizations derive more value from targeted automation that integrates with existing workflows than from tools that require teams to change their processes. This is particularly relevant for regulated industries, where security considerations supersede feature breadth. Not everyone can jump off of Jira at a moment's notice.<p>Curious about others' experiences with integrating AI into your platforms and tools. Has ecosystem lock-in affected your team's productivity or tool choices?
These kinds of patterns allow the model to spend much more compute than a single chat turn, since they are asynchronous by nature. I think that's necessary to get to working solutions on harder problems.
Which GitHub subscription level is required for the agent?<p>I found it very confusing - we have GH Business, with Copilot active, and I could not find a way to upgrade our Copilot to the level required by the agent.<p>I tried using my personal Copilot to trial the agent - again, a no-go, as my Copilot is "managed" by the organization I'm part of.<p>Also, you will want to add more control over who can assign things to the Copilot agent - just having write access to the repository is a poor discriminator, I think.
Is there anything that satisfies the people here? Copilot today is perhaps the only AI that is actually assisting with something productive.<p>Microsoft, besides maybe Google and OpenAI, is among the only ones actually exploring the practical usefulness of AIs. Other kiddies like Sonnet and whatnot are still chasing meaningless numbers and benchmark scores. That sort of stuff may appeal to high school kids or the immature, but burning billions of dollars and energy resources just to sound like a cool kid?
In the early days of LLMs, I had developed an "agent" using a GitHub Actions + Issues workflow[1], similar to how this works. It was very limited but kinda worked, i.e. you assigned it a bug and it fired off an action, did some architect/editing tasks, validated changes, and finally sent a PR.<p>Good to see an official way of doing this.<p>1. <a href="https://github.com/asadm/chota">https://github.com/asadm/chota</a>
So, fun thing.. LinkedIn doesn't use Copilot.<p>I recently created a course for LinkedIn Learning on using generative AI for creating SDKs[0]. When I was onsite with them to record it, I found my GitHub Copilot calls kept failing.. with a network error. Wha?<p>Turns out that LinkedIn doesn't allow people onsite to connect to Copilot, so I had to put my MiFi in the window and connect to that to do my work. It's wild.<p>Btw, I love working with LinkedIn and have 15+ courses with them in the last decade. This is the only issue I've ever had.. but it was the least expected one.<p>0: <a href="https://www.linkedin.com/learning/build-with-ai-building-better-sdks-with-generative-ai/from-theory-to-practice-building-sdks-with-generative-ai" rel="nofollow">https://www.linkedin.com/learning/build-with-ai-building-bet...</a>
I don't know, I feel this is the wrong level to place the AI at this moment. Chat-based AI programming (such as Aider) offers more control, while being almost as convenient.
Anthropic just announced the same thing for Claude Code, same day: <a href="https://docs.anthropic.com/en/docs/claude-code/github-actions" rel="nofollow">https://docs.anthropic.com/en/docs/claude-code/github-action...</a>
Is Copilot a classic case of a slow megacorp getting outflanked by more creative and unhindered newcomers (i.e. Cursor)?<p>It seems Copilot could have really owned the vibe coding space. But that didn't happen. I wonder why? Lots of ideas gummed up in organizational inefficiencies, etc?
On another note:
<a href="https://github.com/github/dmca/pull/17700">https://github.com/github/dmca/pull/17700</a> GitHub's automated auto-merged DMCA sync PRs get automated copilot reviews for every single one.<p>AMAZING
I go back and forth between ChatGPT and Copilot in VS Code. It really makes the grammar guessing much easier in objc. It's not as good on libraries, and nonexistent on 3rd party libraries, but maybe that's because I don't challenge it enough. It makes tons of flow and grammar errors, which are so easy to spot that I end up using the code most of the time after a small correction. I'm optimistic about the future, especially since this is only costing me $10 a month. I have dozens of iOS apps to update. All of them are basically productivity apps that I use and sell, so double plus good.
Which model does it use? Will this let me select which model to use? I have seen a big difference in the type of code that different models produce, although their prompts may be to blame/credit in part.
So far, i am VERY unimpressed by this. It gets everything completely wrong and tells me lies and completely false information about my code. Cursor is 100000000x better.
> Copilot coding agent is rolling out to GitHub Mobile users on iOS and Android, as well as GitHub CLI.<p>Wait, is this going to pollute the `gh` tool? Please tell me this isn't happening.
So can I switch this to high-contrast black on white on mobile instead? I cannot read any of this (in the bright sunlight where I am) without pulling it through a reader app. People do get why books and other reading materials are not published grey on black, right?
I wonder what the coding agent story will be for bespoke hardware. For instance, I'd like to test some things out on a specific GPU which isn't available on GitHub. Can I configure my own runners and hope for the best? What about a bespoke microcontroller?
In hindsight it was a mistake that Google killed Google Code. Then again, I guess they wouldn't have put enough effort into it to develop it into a real GitHub alternative.<p>Now Microsoft sits on a goldmine of source code and has the ability to offer AI integration even for private repositories. I can upload my code into a private repo and discuss it with an AI.<p>The only thing Google can counter with would be to build tools which developers install locally, but even then I guess the integration would be limited.<p>And considering that Microsoft owns the "coding OS" VS Code, it makes Google look even worse. Let's see what they come up with tomorrow at Google I/O, but I doubt that it will be serious competition for Microsoft. Maybe for OpenAI, if they're smart, but not for Microsoft.
How does that compare to using agent mode in VS Code?
Is the main difference that the files are being edited remotely instead of on your own machine, or is there something different about the AI powering the remote agent compared to the local one?
UX-wise...<p>I kind of love the idea that all of this works in the familiar flow of raising an issue and having a magic coder swoop in and making a pull request.<p>At the same time, I have been spoiled by Cursor. I feel I would end up preferring that the magic coder is right there with me in the IDE where I can run things and make adjustments without having to do a followup request or comment on a line.
It could be an amazing product. But Microsoft's aggressive marketing approach of plastering "Copilot" everywhere makes me want to try every alternative.
I have so far been disappointed by Copilot's offerings. It's just not good enough for anything valuable. I don't want it to write my getters and setters and call it a day.
Looks like their GitHub Copilot Workspace.<p><a href="https://githubnext.com/projects/copilot-workspace" rel="nofollow">https://githubnext.com/projects/copilot-workspace</a>
I'm honestly surprised by so much hate. IMHO it's more important to look at 1) the progress we've made plus what this can potentially do in 5 years, and 2) how much it's already helping people write code, than to dismiss it based on its current state.
How good do your test suite and code base have to be for the agent to verify the fix properly, including testing things that can be broken elsewhere?