Emacs-copilot: Large language model code completion for Emacs

377 points by yla92 over 1 year ago

27 comments

HarHarVeryFunny over 1 year ago
I'm sure this, and other LLM/IDE integrations, have their uses, but I'm failing to see how it's really any kind of major productivity boost for normal coding.

I believe average stats for programmer productivity of production-quality, debugged and maybe reusable code are pretty low - around 100 LOC/day - although it's easy to hit 1000 LOC/day or more when building throwaway prototypes etc.

The difference in productivity between production-quality code and hacking/prototyping comes down to the quality aspect, and for most competent/decent programmers coding something themselves is going to produce better quality code, that they understand, than copying something from Substack or an LLM. The amount of time it'd take to analyze the copied code for correctness, lack of vulnerabilities, or even just decent design for future maintainability (much more of a factor in total lifetime software cost than writing the code in the first place) would seem to swamp any time gained by not having to write the code yourself (which is basically the easiest and least time-consuming part of any non-trivial software project).

I can see the use of LLMs in some learning scenarios, or for cases when writing throwaway code where quality is unimportant, but for production code I think we're still a long way from the point where the output of an LLM is going to be developer-level and doesn't need to be scrutinized/corrected to such a degree that the speed benefit of using it is completely lost!
imiric over 1 year ago
Just what I've been looking for!

Thanks for pushing the tooling of self-hosted LLMs forward, Justine. Llamafiles specifically should become a standard.

Would there be a way of connecting to a remote LLM that's hosted on the same LAN, but not on the same machine? I don't use Apple devices, but do have a capable machine on my network for this purpose. This would also allow working from less powerful devices.

Maybe the Llamafile could expose an API? This steps into LSP territory, and while there is such a project[1], leveraging Llamafiles would be great.

[1]: https://github.com/huggingface/llm-ls
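A later comment in this thread mentions that invoking a llamafile directly starts a web server on localhost:8080. As a rough sketch of the LAN idea - assuming that server also accepts llama.cpp-style /completion requests, which this thread does not confirm - querying it from another machine might look like the following; the host address is made up:

    # Hypothetical sketch: query a llamafile served on another LAN machine.
    # Assumes the llamafile runs in server mode on 192.168.1.50:8080 (made-up
    # address) and exposes a llama.cpp-style /completion endpoint.
    import json
    import urllib.request

    def lan_complete(prompt, host="192.168.1.50", port=8080, n_predict=128):
        req = urllib.request.Request(
            f"http://{host}:{port}/completion",
            data=json.dumps({"prompt": prompt, "n_predict": n_predict}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["content"]

    print(lan_complete("def is_prime(n):"))

Pointing an editor at such an endpoint would keep the model on the capable machine while letting less powerful devices request completions.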
theYipster over 1 year ago
I'm running a MacBook Pro M1 Max with 64GB RAM and I downloaded the 34B Q55 model (the large one) and can confirm it works nicely. It's slow, but usable. Note I am running it on my Asahi Fedora Linux partition, so I do not know if or how it is utilizing the GPU. (Asahi has OpenGL support but not Metal.)

My environment is configured with ZSH 5.9. If I invoke the LLM directly as root (via sudo), it loads up quickly into a web server and I can interact with it via a web browser pointed to localhost:8080.

However, when I try to run the LLM from Emacs (after loading the LISP script via M-x ev-b), I get a "Doing vfork: Exec format error." This is when trying to follow the demo in the Readme by typing C-c C-k after I type the beginning of the isPrime function.

Any ideas as to what's going wrong?
vocx2tx over 1 year ago
Unrelated to the plugin, but wow, the is_prime function in the video demonstration is awful. Even if the input is not divisible by 2, it'll still check it modulo 4, 6, 8, ... which is completely useless. It could be made literally 2x faster by adding a single line of code (a parity check), and then making the loop go over odd numbers only. I hope you people using these LLMs are reviewing the code you get before pushing to prod.
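For concreteness, a minimal sketch of the fix described above; the demo function itself is not reproduced in this thread, so the signature and the plain trial-division structure are assumed:

    # Hypothetical reconstruction: reject even numbers once, then trial-divide
    # by odd candidates only, roughly halving the number of modulo checks.
    def is_prime(n):
        if n < 2:
            return False
        if n == 2:
            return True
        if n % 2 == 0:            # the single added parity check
            return False
        for i in range(3, n, 2):  # odd divisors only
            if n % i == 0:
                return False
        return True

The usual further improvement of stopping the loop at the square root of n is a separate optimization beyond the 2x speedup the comment mentions.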
dack over 1 year ago
This is great for what it does, but I want a more generic LLM integration that can do this and everything else LLMs do.

For example, one keystroke could be "complete this code", but other keystrokes could be:

- send current buffer to LLM as-is
- send region to LLM
- send region to LLM, and replace with result

I guess there are a few orthogonal features: getting input into the LLM in various ways (region, buffer, file, inline prompt), and then outputting the result in various ways (append at point, overwrite region, put in new buffer, etc.). And then you can build various automatic system prompts on top of that, like code completion, prose, etc.
Plankaluel over 1 year ago
Super interesting and I will try it out for sure!

But: the mode of operation is quite different from how GitHub Copilot works, so maybe the name is not very well chosen.

It's somewhat surprising that there isn't more development happening around integrating large language models with Emacs. Given its architecture, Emacs appears to be an ideal platform for such integration, yet most projects haven't been worked on for months. But maybe the crowd that uses Emacs is mostly also the crowd that would be against utilizing LLMs?
mg over 1 year ago
For vim, I use a custom command which takes the currently selected code and opens a browser window like this:

https://www.gnod.com/search/ai#q=Can%20this%20Python%20function%20be%20improved%3F%0A%0Adef%20sum_of_squares(n)%3A%0A%20%20%20%20result%20%3D%200%0A%20%20%20%20for%20i%20in%20range(1%2C%20n%2B1)%3A%0A%20%20%20%20%20%20%20%20result%20%2B%3D%20i**2%0A%20%20%20%20return%20result

So I can comfortably ask different AI engines to improve it.

The command I use in my vimrc:

    command! -range AskAI '<,'>y|call system('chromium gnod.com/search/ai#q='.substitute(iconv(@*, 'latin1', 'utf-8'),'[^A-Za-z0-9_.~-]','\="%".printf("%02X",char2nr(submatch(0)))','g'))

So my workflow, when I have a question about some part of my code, is to highlight it, hit the : key (which puts :'<,'> on the command line), then type AskAI<enter>.

All a matter of a second, as it is already in my muscle memory.
098799 over 1 year ago
This is quite intriguing, mostly because of the author.

I don't understand very well how llamafiles work, so it looks a little suspicious to just call one every time you want a completion (model loading etc.), but I'm sure this is somehow covered within the llamafile system. I wonder about the latency and whether it would be much impacted if a network call were introduced so that you could use a model hosted elsewhere. Say a team uses a bunch of models for development, shares them in a private cluster, and uses them for code completion without the necessity of leaking any code to OpenAI etc.
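One hedged sketch of the private-cluster idea: point an OpenAI-compatible client at an internal host instead of api.openai.com, so no source code leaves the network. This assumes the shared server speaks the OpenAI chat-completions protocol (llama.cpp's server offers such an endpoint); the hostname and model name below are made up:

    # Hypothetical: use the OpenAI Python client against a self-hosted endpoint.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://llm.internal.example:8080/v1",  # assumed internal server
        api_key="unused",  # self-hosted servers generally ignore the key
    )

    resp = client.chat.completions.create(
        model="wizardcoder-python-13b",   # whatever model the cluster serves
        messages=[{"role": "user", "content": "Complete: def fib(n):"}],
    )
    print(resp.choices[0].message.content)

Latency then becomes one LAN round trip plus inference time, rather than loading the model on every call.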
jhellan over 1 year ago
Does anyone else get "Doing vfork: Exec format error"? Final-gen Intel Mac, 32 GB memory. I can run the llamafile from a shell. Tried both wizardcoder-python-13b and phi.
phissenschaft over 1 year ago
I use Emacs for most of my work related to coding and technical writing. I've been running phind-v2-codellama and openhermes using ollama and gptel, as well as GitHub's Copilot. I like how you can send an arbitrary region to an LLM and ask things about it. Of course the UX is at an early stage, but just imagine if a foundation model could take all the context (i.e. your org-mode files and open file buffers) and use tools like LSP.
kramerger over 1 year ago
> You need a computer like a Mac Studio M2 Ultra in order to use it. If you have a mere Macbook Pro, then try the Q3 version.

The intersection between people who use Emacs for coding and those who own a Mac Studio Ultra must be minuscule.

Intel MKL + some minor tweaking gets you really excellent LLM performance on a standard PC, and that's without using the GPU.
shepmaster over 1 year ago
What is the upgrade path for a Llamafile? Based on my quick reading and fuzzy understanding, it smushes llama.cpp (smallish, updated frequently) and the model weights (large, updated infrequently) into a single thing. Is it expected that I will need to re-download multiple gigabytes of unchanged models when there's a fix to llama.cpp that I wish to have?
bekantan over 1 year ago
Also worth checking out for more general use of LLMs in Emacs: https://github.com/karthink/gptel
spit2wind over 1 year ago
How does one get this recommended WizardCoder-Python-13b llamafile? Searching turns up many results from many websites. Further, it appears that the llamafile is a specific type that somehow encapsulates the model and the code used to interface with it.

Is it the one listed here? https://github.com/Mozilla-Ocho/llamafile
m463 over 1 year ago

    ;;; copilot.el --- Emacs Copilot

    ;; The `copilot-complete' function demonstrates that ~100 lines of LISP
    ;; is all it takes for Emacs to do that thing Github Copilot and VSCode
    ;; are famous for doing except superior w.r.t. both quality and freedom

> ~100 lines

I wonder if emacs-copilot could extend itself, or even bootstrap itself from fewer lines of code.
3836293648 over 1 year ago
Can I build my own llamafile without the cosmopolitan/actually-portable-executable stuff? I can't run them on NixOS.
amelius over 1 year ago
How well does Copilot work for refactoring?

Say I have a large Python function and I want to move a part of it to a new function. Can Copilot do that, and make sure that all the referenced local variables from the outer function are passed as parameters, and all the changed variables are passed back through e.g. return values?
accelbred over 1 year ago
Looks cool!

If it gets support for ollama or the llama-cpp server, I'll give it a go.
pama over 1 year ago
Excellent work, thanks!

Have you perhaps thought about the possibility of an extension that could allow an Emacs user to collect data to be used on a different machine/cluster for human finetuning?
wisty over 1 year ago
It's going to be like self-driving cars all over again.

Tech people said it would never happen, because even if the car is 10x safer than a normal driver, people will never trust it unless it's almost perfect. But once self-driving cars were good enough to stay in a lane and maybe even brake at the right time, people were happy to let them take over.

Remember how well sandboxed we thought we'd make anything even close to a real AI, just in case it decided to take over the world? Now we're letting it drive Emacs. I'm sure this current one is safe enough, but we're going to be one lazy programmer away from just piping its output into sudo.
spencerchubb over 1 year ago
This has some really nice features that would be awesome to have in GitHub Copilot: namely streaming tokens, customizing the system prompt, and pointing to a local LLM.
looofooo over 1 year ago
Can I run the LLM on an SSH server and use it with this plugin?
steren over 1 year ago
jart, you rock.
osener over 1 year ago
On a related note, is there a Cursor.sh equivalent for Emacs?
dfgdfg34545456 over 1 year ago
How does it work with Haskell? Has anyone tried?
mediumsmart over 1 year ago
Just a reminder: LLMs are not really useful for programmers in general. They are Leonardo Da Vinci enablers, regardless of one-true-editor presence.
nephyrin over 1 year ago
Note that this isn't for GitHub's Copilot, but rather for running your own LLM engine locally. It's going to get confused with the unofficial copilot-for-emacs plugin pretty quickly: https://github.com/zerolfx/copilot.el