I stumbled upon this repo by accident a few days ago when its source code appeared to contain the only usage of the term "GH_COPILOT_TOKEN" in any repo on GitHub:<p><a href="https://github.com/search?q=GH_COPILOT_TOKEN&type=code">https://github.com/search?q=GH_COPILOT_TOKEN&type=code</a><p>(My Copilot was broken and this was in the error output I was seeing; see: <a href="https://github.com/community/community/discussions/41878">https://github.com/community/community/discussions/41878</a>)<p>What I found there was some truly impressive reverse-engineering work by a single individual. I really like the "JOURNAL" daily diary they kept of progress and random thoughts, so you can see the progression day by day.<p>--------<p>One thing I found interesting: the author says that it queries only the 20 most recently opened files of the same language.<p>But in an AMA, I asked how much "context" Copilot has available, and one of the devs said it can, for example, read header files that pair with C/C++ files that are open in separate tabs:<p><a href="https://github.com/orgs/community/discussions/29932#discussioncomment-3473154">https://github.com/orgs/community/discussions/29932#discussi...</a><p><pre><code> > "I assume Copilot uses the contents of the current project (IE, all files) as contextual information to offer suggestions. Is this the case?"
> "Yes, copilot looks at what we call "related tabs", for example .c files are often paired with .h files, so copilot considers the contents of other tabs open in the editor to try to feed a more complete prompt to the model."</code></pre>
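To make that concrete, here's a rough sketch of the file selection the post describes: take the open tabs, keep only documents in the same language, order by most recent access, and cap at 20. All names here are my own, not from the actual extension.

```typescript
// Hypothetical "neighbor files" selection, per the post's description.
interface OpenDoc {
  path: string;
  languageId: string;   // e.g. "c", "cpp", "typescript"
  lastAccessed: number; // ms timestamp of when the tab was last focused
}

const MAX_NEIGHBOR_FILES = 20;

function pickNeighborDocs(docs: OpenDoc[], currentLanguageId: string): OpenDoc[] {
  return docs
    .filter(d => d.languageId === currentLanguageId)
    .sort((a, b) => b.lastAccessed - a.lastAccessed) // most recently used first
    .slice(0, MAX_NEIGHBOR_FILES);
}
```

Note that in this simple form, a `.c`/`.h` pairing falls out of "same editor session" recency rather than any explicit header-matching logic, which would square with the dev's "related tabs" answer.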
Heh, this blew up here :D Didn't know till a friend told me about it.<p>I'd love to know if you have any specific questions about Copilot's internals that I can try to answer by staring at the code, or if you have any feedback for the tool/post!
Honestly, I get more value out of ChatGPT than I do from Copilot, even though both generate the wrong stuff now and then. I'd rather describe the desired functionality in plain English than try to goad Copilot in the right direction by coming up with method and variable names that it will "like".
Amazing if this is only a 12B model. If this already increases coding productivity by up to 50% (depending on the kind of work), imagine what a 1T model will be capable of! I do wonder if some programmers at FAANG already have access to much more powerful coding assistants, and whether they code much at all at this point, or only write high-level code specifications and then fix up the automatically generated code.
One problem I’ve always had with Copilot is that it tends to introduce extra parentheses and braces. Say I already have a pair of braces for a function body, and Copilot decides to write the whole function: it will write everything including the closing brace, leaving me with an extra brace and a syntax error to fix. It really shouldn’t be that hard to tell that I already have a closing brace, especially when they’re already considering the suffix.
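Since the suffix is already in the prompt, a fix seems straightforward: trim the completion wherever its tail duplicates the text that already follows the cursor. Purely illustrative, not Copilot's actual code:

```typescript
// Drop any tail of the completion that merely repeats the start of the
// existing suffix (e.g. a closing brace that's already in the buffer).
function trimOverlapWithSuffix(completion: string, suffix: string): string {
  const trimmedSuffix = suffix.trimStart();
  if (trimmedSuffix.length === 0) return completion;
  // Find the longest tail of the completion that is a prefix of the suffix.
  for (let k = Math.min(completion.length, trimmedSuffix.length); k > 0; k--) {
    if (completion.endsWith(trimmedSuffix.slice(0, k))) {
      return completion.slice(0, completion.length - k);
    }
  }
  return completion;
}
```

With an existing `}` after the cursor, `trimOverlapWithSuffix("  return a + b;\n}", "}\n")` would keep just the function body and leave the user's brace alone.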
Free idea for GitHub: a huge bit of missing context for the model right now is the last few edits made by the user. If you move your cursor to a different part of a long file, Copilot immediately forgets about that part of the file. If it knew the last few edits you made, it would be able to make much more intelligent suggestions based on the task you're working on, rather than just the current cursor position.
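Roughly, I imagine something like this: a small ring buffer of recent edits whose contents get rendered into the prompt alongside the text around the cursor. All names are made up, nothing like this exists in the extension as described:

```typescript
// Hypothetical recent-edit history that could be fed into the prompt.
interface Edit {
  file: string;
  line: number;
  removed: string;
  inserted: string;
}

class RecentEdits {
  private buffer: Edit[] = [];
  constructor(private readonly capacity: number = 5) {}

  record(edit: Edit): void {
    this.buffer.push(edit);
    if (this.buffer.length > this.capacity) this.buffer.shift(); // drop oldest
  }

  // Render the history as comment lines for the prompt, oldest first.
  asPromptLines(): string[] {
    return this.buffer.map(
      e => `// edit in ${e.file}:${e.line}: "${e.removed}" -> "${e.inserted}"`
    );
  }
}
```

The capacity cap matters because prompt space is the scarce resource here; a handful of recent edits would likely beat another stale file snippet.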
My biggest issue with Copilot is that it only wants to add code.<p>That's useful, but I edit code a lot. If I have 10 similar lines and have just made one edit, it'd be very convenient for Copilot to suggest the same edit on the following line, or even on several lines.
This is about the VSCode extension, which is obfuscated (maybe compiled) JS.<p>Is the plugin for IntelliJ (PyCharm etc.) written in Java? Decompiling that might give some additional insights.
Huh, was this post revived? I remember seeing it (and upvoting it) in /new yesterday, but it didn't reach critical mass for the front page then. It seems to be gone from /new now.