
LocalPilot: Open-source GitHub Copilot on your MacBook

249 points by charlieirish over 1 year ago

13 comments

mcbuilder over 1 year ago
Okay, I actually got local copilot set up. You will need these 4 things.

1) CodeLlama 13B or another FIM model: https://huggingface.co/codellama/CodeLlama-13b-hf. You want "Fill in the Middle" models because you're looking at context on both sides of your cursor.

2) HuggingFace llm-ls: https://github.com/huggingface/llm-ls, a large language model Language Server (is this making sense yet?).

3) HuggingFace inference framework: https://github.com/huggingface/text-generation-inference. At least when I tested, you couldn't use something like llama.cpp or exllama with llm-ls, so you need to break out the heavy-duty badboy HuggingFace inference server. Just config and run. Now config and run llm-ls.

4) Okay, I mean you need an editor. I just tried nvim, and this was a few weeks ago, so there may be better support. My experience was that it was full honest-to-god copilot. The CodeLlama models are known to be quite good for their size. The FIM part is great: boilerplate works so much easier with the surrounding context. I'd like to see more models released that can work this way.
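To make the FIM idea concrete, here is a minimal sketch of how an infilling prompt is assembled. The `<PRE>`/`<SUF>`/`<MID>` spellings follow CodeLlama's documented infilling format; in practice llm-ls builds this for you from your editor's cursor position, so treat this as illustration rather than the exact wire format:

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle (FIM) prompt: the model is asked
    to generate the code that belongs between the text before and
    after the cursor, instead of only continuing from the left."""
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# Everything before the cursor in the editor...
before = "def add(a, b):\n    "
# ...and everything after it.
after = "\n\nprint(add(2, 3))"

print(build_fim_prompt(before, after))
```

The model's completion fills the gap between `before` and `after`, which is why FIM models handle boilerplate so much better than plain left-to-right completion.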
SushiHippie over 1 year ago
FWIW: you can use any other proxy server for this, pointing at any OpenAI-compatible API server.

E.g. with mitmproxy and the llama-cpp-python server:

```shell
python -m llama_cpp.server --n_ctx 4096 --n_gpu_layers 1 --model ./path/to/..gguf
```

and then with mitmproxy in another terminal:

```shell
mitmproxy -p 5001 --mode reverse:http://127.0.0.1:8000
```

and then set this in your vscode settings.json (the same as for localpilot):

```json
"github.copilot.advanced": {
  "debug.testOverrideProxyUrl": "http://localhost:5001",
  "debug.overrideProxyUrl": "http://localhost:5001"
}
```

Works way better for me than localpilot.
imrehg over 1 year ago
I guess this is similar to ollama (recently discussed: https://news.ycombinator.com/item?id=36802582), which also has support for code-focused models (see: https://ollama.ai/library).

I tried pretty much all of them with Continue in VSCode, and it's a bit hit and miss, but the main difference is the way the workflows work (Copilot is mostly line completion, Continue is mostly chat or patches). So the main value-add here for me would be a more Copilot-like workflow (which seems to align better with my day-to-day experience so far).
amelius over 1 year ago
Why does it say "Use GitHub Copilot locally on your Macbook with one-click!" when it obviously doesn't use the Copilot model?
raincole over 1 year ago
I find the fact that Copilot is closed source -- not just the model, but even the plugins -- very worrying. Good to see efforts on the alternatives.
james2doyle over 1 year ago
Looks cool! Always like to see these local alternatives. I'm a Sublime Text user (it is still amazing!) so there aren't many options for LLM assistants. The only one I found that works for me on Sublime is https://codeium.com/ and it is also free for basic usage.

They have a great list of supported editors:

- Android Studio
- Chrome (Colab, Jupyter, Databricks and Deepnote, JSFiddle, Codepen, Codeshare, and StackBlitz)
- CLion
- Databricks
- Deepnote
- Eclipse
- Emacs
- GoLand
- Google Colab
- IntelliJ
- JetBrains
- Jupyter Notebook
- Neovim
- PhpStorm
- PyCharm
- Sublime Text
- Vim
- Visual Studio
- Visual Studio Code
- WebStorm
- Xcode

I have found that the completions are decent enough. I do find that sometimes the completion suggestions are too aggressive and try to complete more than I want, so I end up leaving it off until I feel like I could use it.
zerop over 1 year ago
Another OSS alternative: https://continue.dev/
wokwokwok over 1 year ago
Hm… the q4 34B CodeLlama (which is used here) performs quite poorly in my experience.

Using a heavily quantised larger model gives you the unrealistic impression that smaller models and larger models are roughly equally capable… but it's a trade-off. The larger CodeLlama model is *categorically* better, if you don't lobotomise it.

It'd be better if, instead of making opinionated choices (which aren't great), it guided you on how to select an appropriate model…
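The size trade-off behind that choice is easy to see with back-of-the-envelope arithmetic. A quick sketch (the ~4.5 bits/weight figure for q4-style formats is an approximation that includes scale overhead; real footprints also need room for KV cache and activations):

```python
def approx_weight_gb(params_billions: float, bits_per_weight: float) -> float:
    """Rough size of the weights alone, in GB: parameters * bits / 8.
    Ignores KV cache, activations, and runtime overhead."""
    return params_billions * bits_per_weight / 8

for name, params, bits in [
    ("13B fp16", 13, 16),
    ("13B q4  ", 13, 4.5),
    ("34B q4  ", 34, 4.5),
    ("34B fp16", 34, 16),
]:
    print(f"{name}: ~{approx_weight_gb(params, bits):.1f} GB")
```

A q4 34B (~19 GB) fits on a 32 GB MacBook where an fp16 34B (~68 GB) never would, which is why opinionated defaults reach for heavy quantisation, at the quality cost described above.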
shortrounddev2 over 1 year ago
Why does it seem like a lot of the local AI tools target Mac specifically? Why don't AI developers seem to be able to write cross-platform software?
BaculumMeumEst over 1 year ago
I would love to be able to take a base model and fine-tune it on a handful of hand-picked repositories that are A) in a specific language I want to use and B) stylistically similar to how I want to write code.

I'm not sure how possible that is to do, but I hope we can get there at some point.
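The data-gathering half of that is quite doable today. A minimal sketch (the extensions and chunk size are arbitrary illustrative choices, and the actual training step, e.g. a parameter-efficient LoRA fine-tune, is left out):

```python
from pathlib import Path

def collect_samples(repo_dir: str, extensions=(".py",), max_chars=2000):
    """Walk a checked-out repository and return fixed-size text chunks
    from files in the chosen language(s), as raw fine-tuning samples."""
    samples = []
    for path in sorted(Path(repo_dir).rglob("*")):
        if not path.is_file() or path.suffix not in extensions:
            continue
        text = path.read_text(errors="ignore")
        # Split long files into chunks that fit a small context window.
        for start in range(0, len(text), max_chars):
            chunk = text[start:start + max_chars]
            if chunk.strip():
                samples.append(chunk)
    return samples
```

Feed the resulting chunks to whichever trainer you use; stylistic adaptation on a handful of repos is usually done with parameter-efficient methods rather than a full fine-tune.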
shim__ over 1 year ago
Does this require the M2's AI capabilities, or can it also run on other platforms?
Havoc over 1 year ago
Haven't managed to get the official Copilot extension to use anything local. It always seems to ask for a login.
ShamelessC over 1 year ago
Cue fifty comments about how the language on the repository is misleading, "does this download Microsoft weights?", and other friends.