
Ask HN: What's the best self hosted/local alternative to GPT-4?

328 points by surrTurr, almost 2 years ago
Constant outages and the model seemingly getting nerfed[^1] are driving me insane. Which viable alternatives to GPT-4 exist? Preferably self-hosted (I'm okay with paying for it) and with an API that's compatible with the OpenAI API.

[^1]: https://news.ycombinator.com/item?id=36134249

44 comments

wokwokwok, almost 2 years ago
There is literally no alternative.

You're stuck with OpenAI, and you're stuck with whatever rules, limitations, or changes they give you.

There are other models, but *specifically* if you're actively using GPT-4 and find GPT-3.5 to be below the quality you require…

Too bad. You're out of luck.

Wait for better open-source models, wait patiently for someone to release a meaningful competitor, or wait for OpenAI to release a better version.

That's it. Right now, no one else is letting people have access to models equivalent to GPT-4.
jonathan-adly, almost 2 years ago
I don't know the licensing and all that jazz (even if you self-host for your personal use, it shouldn't matter). But this paper[0], released a week ago, claims "99.3% of the performance level of ChatGPT while only requiring 24 hours of finetuning on a single GPU" (QLoRA).

A quick test of the Hugging Face demo gives reasonable results[1]. The actual model behind the space is here[2], and should be self-hostable with reasonable effort.

0. https://arxiv.org/abs/2305.14314
1. https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi
2. https://huggingface.co/timdettmers/guanaco-33b-merged
TradingPlaces, almost 2 years ago
As people note, you cannot substitute locally for the Azure GPU cloud that GPT-4 runs on. But I believe that will change, and maybe quickly. After years of explosive exponential growth in model size, all of a sudden, small is beautiful.

The precipitating factor is that running large models for research is very expensive, but that pales in comparison to putting these things into production. Expenses rise exponentially with model size, so everyone is looking for ways to make the models smaller and run them at the edge. I will note that PaLM 2 is smaller than PaLM, the first time I can remember something like that happening. The smallest version of PaLM 2 can run at the edge. Small is beautiful.
weystrom, almost 2 years ago
https://github.com/oobabooga/text-generation-webui/

Works on all platforms, but runs much better on Linux.

Running this in Docker on my 2080 Ti, I can barely fit 13B-4bit models into 11 GB of VRAM, but it works fine and produces around 10-15 tokens/second most of the time. It also has an API that you can use with something like LangChain.

It supports multiple ways to run the models: purely with CUDA (I think AMD support is coming too) or on CPU with llama.cpp (it's also possible to offload part of the model to GPU VRAM, but the performance is still nowhere near CUDA).

Don't expect open-source models to perform as well as ChatGPT though; they're still pretty limited in comparison. A good place to get the models is TheBloke's page: https://huggingface.co/TheBloke. Tom converts popular LLM builds into multiple formats that you can use with textgen, and he's a pillar of the local LLM community.

I'm still learning how to fine-tune/train LoRAs. It's pretty finicky but promising; I'd like to be able to feed personal data into the model and have it reliably answer questions.

In my opinion, these developments are way more exciting than whatever OpenAI is doing. No way I'm pushing my chat logs into some corp datacenter, but running locally and storing checkpoints safely would achieve my end goal of having it "impersonate" myself on the web.
davepeck, almost 2 years ago
There are no viable self-hostable alternatives to GPT-4, or even to GPT-3.5.

The "best" self-hostable model is a moving target. As of this writing it's probably one of Vicuña 13B, Wizard 30B, or maybe Guanaco 65B. I'd like to say that Guanaco is wildly better than Vicuña, what with its 5x larger size. But… that seems very task dependent.

As anecdata: my experience is that none of these is as good as even GPT-3.5 for summarization, extraction, sentiment analysis, or assistance with writing code. Figuring out how to run them is painful. The speed at which their unquantized variants run on any hardware I have access to is painful. Sorting through licensing is… also painful.

And again: they're nowhere close to GPT-4.
amilios, almost 2 years ago
How much GPU memory do you have access to? If you can run it, Guanaco-65B is probably as close as you can get in terms of something publicly available: https://github.com/artidoro/qlora. But as other comments mention, it's still noticeably worse in my experience.
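[Editor's note] As a rough rule of thumb for the GPU-memory question, not from any one comment in the thread: the memory needed just to hold the weights is roughly parameter count × bits per weight / 8, and the runtime adds overhead for activations and the KV cache on top of that.

```python
def weight_vram_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate memory (in GB) needed just to hold the model weights."""
    # params * bits / 8 gives bytes; 1e9 params per "billion" and 1e9 bytes per GB cancel.
    return params_billions * bits_per_weight / 8

# A 65B model at 4-bit needs ~32.5 GB for weights alone, so it won't fit on
# a single 24 GB consumer card; a 13B model at 4-bit (~6.5 GB) fits in 11 GB
# of VRAM with room to spare, which matches reports elsewhere in the thread.
print(weight_vram_gb(65, 4))   # 32.5
print(weight_vram_gb(13, 4))   # 6.5
```

This ignores activation and KV-cache overhead, so treat the numbers as a lower bound.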
DebtDeflation, almost 2 years ago
LLM leaderboard:

https://chat.lmsys.org/?leaderboard

The short answer is that nothing self-hosted comes close to GPT-4. The only thing that comes close, period, is Anthropic's Claude.
deet, almost 2 years ago
In our experimentation, we've found that it really depends what you're looking for; you really need to break down evaluation by task. Local models don't have the power yet to just "do it all well" like GPT-4.

There are open-source models that are fine-tuned for different tasks, and if you're able to pick a specific model for a specific use case, you'll get better results.

---

For example, for chat there are models like `mpt-7b-chat`, `GPT4All-13B-snoozy`, or `vicuna` that do okay for chat but are not great at reasoning or code.

Other models, like `mpt-7b-instruct`, are designed for direct instruction following but are worse at chat.

Meanwhile, there are models designed for code completion, like those from Replit and Hugging Face (`starcoder`), that do decently for programming but not other tasks.

---

For UI, the easiest way to get a feel for the quality of each of the models (or the chat models, at least) is probably https://gpt4all.io/.

And as others have mentioned, for providing an API that's compatible with OpenAI, https://github.com/go-skynet/LocalAI seems to be the frontrunner at the moment.

---

For the project I'm working on (in bio), we're currently struggling with this problem too, since we want a nice UI, good performance, and the ability for people to keep their data local.

So at least for the moment, there's no single drop-in replacement for all tasks. But things are changing every week and every day, and I believe that open source and local can be competitive in the end.
simonw, almost 2 years ago
The answer to this question changes every week.

For compatibility with the OpenAI API, one project to consider is https://github.com/go-skynet/LocalAI

None of the open models are close to GPT-4 yet, but some of the LLaMA derivatives feel similar to GPT-3.5.

Licenses are a big question though: if you want something you can use for commercial purposes, your options are much more limited.
Gijs4g, almost 2 years ago
> Preferably self-hosted (I'm okay with paying for it)

I'm the founder of Mirage Studio, and we created https://www.mirage-studio.io/private_chatgpt, a privacy-first ChatGPT alternative that can be hosted on-premise or on a leading EU cloud provider.
cypress66, almost 2 years ago
Nothing self-hosted is even remotely close to GPT-3.5, let alone GPT-4.

Wizardlm-uncensored-30B is fun to play with.
MacsHeadroom, almost 2 years ago
Guanaco-65B[0] using Basaran[1] for your OpenAI-compatible API.

(You can use any ChatGPT front-end that lets you change the OpenAI endpoint URL.)

[0] https://huggingface.co/TheBloke/guanaco-65B-HF - a QLoRA finetune of LLaMA-65B by Tim Dettmers, from the paper here: https://arxiv.org/abs/2305.14314

[1] https://github.com/hyperonym/basaran
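[Editor's note] The appeal of an "OpenAI-compatible" server like Basaran or LocalAI is that only the base URL changes; the request shape stays the same. A minimal sketch of the idea (the localhost port and model name here are placeholders for illustration, not the actual defaults of any particular server):

```python
def chat_completion_request(base_url: str, model: str, user_message: str) -> dict:
    """Build a request for an OpenAI-style /v1/chat/completions endpoint."""
    return {
        "url": base_url.rstrip("/") + "/v1/chat/completions",
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        },
    }

# Swapping api.openai.com for a local server leaves client code untouched:
req = chat_completion_request("http://localhost:8080", "guanaco-65b", "Hello")
# Then, e.g.: requests.post(req["url"], json=req["json"]).json()
```

Any front-end that lets you override the endpoint URL works the same way.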
zorrobyte, almost 2 years ago
What's the best self-hosted option for ingesting a local codebase and wiki so you can ask questions of it? Some of the projects linked here have ingest scripts for doc and PDF files, but it'd be cool to ingest a whole git repo and wiki and have a little chat interface to ask questions about the code.
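[Editor's note] None of the projects in the thread are shown here; this is just a toy sketch of the "ingest a repo, ask questions" idea, with naive term-overlap scoring standing in for the embedding-based retrieval that real tools use:

```python
import re
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    """Split a file into overlapping word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, max(len(words), 1), size // 2)]

def top_chunk(question: str, files: dict[str, str]) -> str:
    """Return the chunk from any file sharing the most terms with the question."""
    q_terms = set(re.findall(r"\w+", question.lower()))
    best, best_score = "", -1
    for _, text in files.items():
        for c in chunk(text):
            counts = Counter(re.findall(r"\w+", c.lower()))
            score = sum(counts[t] for t in q_terms)
            if score > best_score:
                best, best_score = c, score
    return best

# Hypothetical repo contents for illustration:
repo = {
    "auth.py": "def login(user, password): check the password hash and issue a session token",
    "db.py": "def connect(): open a connection pool to the postgres database",
}
print(top_chunk("how does login check the password?", repo))
```

A real pipeline would embed the chunks and feed the top matches to an LLM as context, but the retrieve-then-answer shape is the same.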
sgd99, almost 2 years ago
Not self-hosted/local, but Claude by Anthropic is, from what I've heard, really good. The API is not publicly available, but it's apparently accessible via Poe (https://poe.com).

As for open models, Hugging Face has a nice leaderboard to see which ones are decent: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
ijk, almost 2 years ago
"Okay with paying for it" gives you a wide range of options.

Most of the open-source stuff people are talking about is things like running a quantized 33B-parameter LLaMA model on a 3090. That can be done on consumer hardware, but it isn't quite as good at general-purpose queries as GPT-4. Depending on your use case and your ability to fine-tune it, that might be sufficient for a number of applications, particularly if you've got a very specific task.

However, if you're willing to spend, there are bigger models available (e.g. Falcon 40B, LLaMA 65B) that can be run on datacenter-class machines, if you're willing to spend $15-20K.

Will that get you GPT-4-level inference? Probably not (though it is difficult to quantify); will it get you a high-quality model that can be further fine-tuned on your own data? Yes.

For the smaller models, the fine-tunes for various tasks can be fairly effective; in a few more weeks I expect they'll have continued to improve significantly. There are new capabilities being added every week.

The biggest weakness that's been highlighted in research is that the open-source models aren't as good at the wide range of tasks that OpenAI's RLHF has covered; that's partly a data issue and partly a training issue.
f0e4c2f7, almost 2 years ago
Nothing open source is quite as good as GPT-4 yet, but the community continues to edge closer.

For general use, Falcon seems to be the current best:

https://huggingface.co/tiiuae

For code specifically, Replit's model seems to be the best:

https://huggingface.co/replit/replit-code-v1-3b
CSSer, almost 2 years ago
There is a model that was just released called Falcon-40B that is available for commercial use. It outperforms every other open LLM available today. Buyer beware, however, because the license is custom[1] and has restrictions on "attributable revenues" over $1M/year. I'll leave that for you to interpret as you will.

[0]: https://huggingface.co/tiiuae/falcon-40b-instruct
[1]: https://huggingface.co/tiiuae/falcon-40b-instruct/blob/main/LICENSE.txt

EDIT: I just realized you seem to be asking for a fully realized, turn-key commercial solution. Yeah, refer to others who say there's no alternative. It's true. Something like this gives you a lot more power and flexibility, but at the cost of a lot more work building the solution as you try to apply it.
captainmuon, almost 2 years ago
I think you have to distinguish between self-hosted models that run on CPU (like LLaMA), on consumer GPUs, or on big GPUs. I find the market currently very confusing.

I'm especially interested since the data center I work for is sitting on a bunch of A100s, and I get daily requests from people asking for LLMs tuned to specific cases who can't or won't use OpenAI for various reasons.
anotheryou, almost 2 years ago
Here you can easily try Vicuna (and quite a few others): https://chat.lmsys.org/

They also have A/B testing with a leaderboard, where Vicuna wins among the self-hostable ones: https://chat.lmsys.org/?leaderboard
nabakin, almost 2 years ago
I would monitor and research each of these top models to determine which best fits your use case.

https://lmsys.org/blog/2023-05-25-leaderboard/

https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard

https://assets-global.website-files.com/61fd4eb76a8d78bc0676b47d/64547b623e779885728099ec_image5.png

https://www.mosaicml.com/blog/mpt-7b

Also keep up to date with r/LocalLLaMA, where new best open models are posted all the time.
kertoip_1, almost 2 years ago
You can check out this leaderboard to see the current state of LLM alternatives to GPT-4:

https://lmsys.org/blog/2023-05-25-leaderboard/

But unfortunately, for now it seems there aren't any viable self-hosted options...
AndroTux, almost 2 years ago
https://gpt4all.io/ works fairly well on my 16 GB M1 Pro MacBook. It's certainly not on a level with ChatGPT, but what is?

It's a simple app download and allows you to select from multiple available models. No hacking required.
samwillis, almost 2 years ago
If you want or need to go CPU-only, then llama.cpp, and the assorted front ends people are building for it, is looking like a good project: https://github.com/ggerganov/llama.cpp
Veen, almost 2 years ago
It depends what you mean by "viable alternatives" and how much money you are prepared to spend on hardware to self-host. As others have mentioned, you can try llama.cpp and LocalAI, but for most ChatGPT-like applications you won't get anything like as good results. I've found that using GPT-4 via the OpenAI API is somewhat more reliable than ChatGPT, either via the Playground or via a local chat interface like https://github.com/mckaywrigley/chatbot-ui
RecycledEle, almost 2 years ago
I often worry about a "The Machine Stops" scenario.

GPT AI actually gives me hope. What if we can store and run an AI in a phone-sized device that is superior to a similarly sized library of books? Can we have a rugged, solar-powered device that could survive the fall of civilization and help us rebuild?

It would certainly have military applications in warfare. Imagine being the 21st-century equivalent of a 1940s US Marine on Guadalcanal who needed to know some survival skills. ChatGPT-on-a-phone would be handy if you could keep the battery charged.
0xbadc0de5, almost 2 years ago
I'll +1 the votes for Guanaco and Vicuna running with the Oobabooga text-generation-webui.

With a 4090, you can get ChatGPT-3.5-level results from Guanaco 33B. Vicuna 13B is a solid performer on more resource-constrained systems.

I'd urge the naysayers who tried the OPT and LLaMA models only to give up to note that the LLM field is moving very quickly - the current set of models is already vastly superior to the LLaMA models from just two months ago. And there is no sign the progress is slowing - in fact, it seems to be accelerating.
vs4vijay, almost 2 years ago
You can find more details here: https://old.reddit.com/r/LocalGPT/
colesantiago, almost 2 years ago
The best self-hosted/local alternative to GPT-4 is a (self-hosted) GPT-X variant by OpenAI.

No kidding, and I am calling it on the record right here: OpenAI will release an 'open source' model to try to recoup their moat in the self-hosted/local space.

https://www.theinformation.com/briefings/openai-readies-new-open-source-ai-model
ludovicianul, almost 2 years ago
This is a good candidate: https://github.com/imartinez/privateGPT
meroes, almost 2 years ago
This is like an artist getting used to Adobe's products before they're put behind a wall. And borrowing HN's attitude to that, you apparently deserve it.
FieryTransition, almost 2 years ago
You can at least fine-tune an open-source model for your task and get better results than using it directly. But they are still not close to the OpenAI models in generality. Hugging Face is the place for exploring models; I recently went through a lot of them for my use case, and they are simply not good enough yet.
born-jre, almost 2 years ago
There is so much parallel progress happening left and right, but at the same time the models are not there yet. With things like SparseGPT, and models fine-tuned on data with tool-use ability (not just instruction data), maybe we'll get there soon. As long as there is progress, I am hopeful. Some sort of inference-optimized hardware would also help.
danpalmer, almost 2 years ago
> Preferably self-hosted (I'm okay with paying for it)

The big models, if even available, need >100 GB of graphics memory to run and would likely take minutes to warm up.

The pricing available via OpenAI/GCP/etc. is only effective when you can multi-tenant many users. The cost to run one of these systems for private use would be ~$250k per year.
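[Editor's note] The ~$250k/year figure is easy to sanity-check with back-of-envelope math. The hourly rate below is an assumption for illustration (roughly what a multi-GPU A100 cloud box cost on demand circa 2023), not a quote from the comment:

```python
# Why single-tenant hosting of a GPT-4-class model is expensive:
# a private deployment pays for the hardware around the clock,
# whether or not anyone is sending it queries.
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_cost(dollars_per_hour: float) -> float:
    """Cost of keeping an always-on inference box running for a year."""
    return dollars_per_hour * HOURS_PER_YEAR

# At an assumed ~$30/hour for a multi-GPU instance:
print(round(annual_cost(30.0)))  # 262800 -- in the ballpark of ~$250k/yr
```

Multi-tenant providers amortize that idle time across many users, which is the pricing advantage the comment describes.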
anon291, almost 2 years ago
I admittedly haven't used GPT-4 yet, but I've replaced several uses of GPT-3 with RWKV on the Raven dataset. I can load it onto my RTX 2060 with 12 GB of memory (quantized, of course) and use it to whittle down or summarize data for GPT.
MagicMoonlight, almost 2 years ago
OpenAssistant is pretty good. It still has some censorship, but nowhere near the levels of commercial models.

It's actually impressive how good it is considering the limited resources they have.
paulus-saulus, almost 2 years ago
https://huggingface.co/tiiuae/falcon-7b
cl42, almost 2 years ago
Have you tried using GPT-4 via Azure? My understanding is that it's faster and more reliable.
airgapstopgap, almost 2 years ago
There really do not exist any alternatives, self-hosted or not. But more importantly, there may never be, what with the rising tide of AI-risk and regulation discourse. It seems that soon, training and open-sourcing (or otherwise making accessible) a model of that class will be impossible, even as the cost of its production falls.
leros, almost 2 years ago
Is anyone using a self-hosted model to assist with parsing?
0xferruccio, almost 2 years ago
Buy a tinybox from tiny corp: https://tinygrad.org/
Saruto, almost 2 years ago
Falcon 40B
Marlon1788, almost 2 years ago
OpenAI: not so open. They should rebrand to ClosedAI.
boringuser2, almost 2 years ago
I've gone down this rabbit hole, and I want to reaffirm what the other commenters are saying: even if you use a massive model and have the compute to back it up at a reasonable pace (you likely don't), it sucks. It can't even hold a candle to GPT-3.5.
Y_Y, almost 2 years ago
You could hire a human to manually respond to the queries.