
Show HN: I built a free in-browser Llama 3 chatbot powered by WebGPU

547 points by abi, about 1 year ago
I spent the last few days building out a nicer ChatGPT-like interface to use Mistral 7B and Llama 3 fully within a browser (no dependencies or installs).

I've used the WebLLM project by MLC AI for a while to interact with LLMs in the browser when handling sensitive data, but I found their UI quite lacking for serious use, so I built a much better interface around WebLLM.

I've been using it as a therapist and coach. And it's wonderful knowing that my personal information never leaves my local computer.

It should work on desktop with Chrome or Edge. Other browsers are adding WebGPU support as well; see the GitHub repo for details on how you can get it to work on other browsers.

Note: after you send the first message, the model will be downloaded to your browser cache. That can take a while depending on the model and your internet connection. But on subsequent page loads, the model should be loaded from the IndexedDB cache, so it should be much faster.

The project is open source (Apache 2.0) on GitHub. If you like it, I'd love contributions, particularly around making the first load faster.

GitHub: https://github.com/abi/secret-llama
Demo: https://secretllama.com
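For context, the download-then-chat flow the post describes maps onto WebLLM's documented OpenAI-style API roughly as below. This is a minimal sketch based on WebLLM's published examples, not code from secret-llama itself; the model ID and progress-callback wiring are assumptions.

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function demo() {
  // First call downloads the weight shards; later loads hit the IndexedDB cache.
  const engine = await CreateMLCEngine("Llama-3-8B-Instruct-q4f16_1-MLC", {
    initProgressCallback: (report) => console.log(report.text),
  });

  // OpenAI-style chat completion, streamed token by token.
  const chunks = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Hello!" }],
    stream: true,
  });

  let reply = "";
  for await (const chunk of chunks) {
    reply += chunk.choices[0]?.delta?.content ?? "";
  }
  console.log(reply);
}

demo();
```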

34 comments

NikhilVerma, about 1 year ago
This is absolutely wonderful. I am a HUGE fan of local-first apps. Running models locally is such a powerful thing; I wish more companies would leverage it to build smarter apps that can run offline.

I tried this on my M1 and ran Llama 3, I think the quantized 8B version. It ran at around 4-5 tokens per second, which was way faster than I expected in my browser.
dosinga, about 1 year ago
Phi 1.5 isn't quite there yet, is it? Caught it having some inner dialog.

What does that mean? Instruct: I'm sorry, but I'm not sure what you mean by "wutt". Output: Oh, I'm sorry. I meant "what does that mean?" Instruct: No problem, "wutt" means "that doesn't make any sense". Output: Ah, I see. Thank you for clarifying! Instruct: You're welcome! If you have any more questions, feel free to ask. Instruct: I'm glad I could help! Instruct: I hope you continue to enjoy learning about language arts. Instruct: You're doing a great job! Keep up the good work! Instruct: Have a fantastic day! Instruct: Goodbye! Instruct: Goodbye!
_akhe, about 1 year ago
Amazing work; this feels like a step forward for LLM usability.

It would be interesting if there were a web browser that managed the download/install of models, so you could go to a site like this, or any other LLM site/app, and it would detect whether or not you have models, similar to detecting whether you have a webcam or mic for a video call. The user could click "Allow" to permit use of the GPU and running of models in the background.
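Today the closest primitive to that "Allow" flow is plain WebGPU feature detection: sites can probe for a usable adapter before offering the model download. A minimal sketch, assuming @webgpu/types is installed for the navigator.gpu typings:

```ts
// Probe for WebGPU before offering to download a model, much like
// checking for a webcam before starting a video call.
async function hasUsableWebGPU(): Promise<boolean> {
  if (!("gpu" in navigator)) return false; // API not exposed at all
  const adapter = await navigator.gpu.requestAdapter();
  return adapter !== null; // null means no suitable GPU was found
}

hasUsableWebGPU().then((ok) =>
  console.log(ok ? "WebGPU is available" : "Cannot find WebGPU in the environment"),
);
```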
low_tech_punk, about 1 year ago
It's a wrapper of https://github.com/mlc-ai/web-llm
joshstrange, about 1 year ago
Very cool! I wish there was chat history.

Also, if you click the "New Chat" button while an answer is generating, I think some of the output gets fed back into the model. It causes some weird output [0] but was kind of cool/fun. Here is a video of it as well [1]; I almost think this should be some kind of special mode you can run. I'd be interested to know what causes the bug: is it just the existing output sent back as input, or a subset of it? It might be fun to watch a chat bot just randomly hallucinate, especially on a local model.

[0] https://cs.joshstrange.com/07kPLPPW

[1] https://cs.joshstrange.com/4sxvt1Mc

EDIT: Looks like calling `engine.resetChat()` while it's generating will do it, but I'm not sure why it errors after a while (maybe it runs out of tokens for output? Not sure). It would be cool to have this run until you stop it, automatically changing every 10-30 seconds or so.
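For anyone who wants to poke at this: engine.resetChat() is a real WebLLM call, and a repro harness might look like the hypothetical sketch below. The model ID and timing are made up; the interesting part is resetting while the stream is still being consumed.

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function repro() {
  const engine = await CreateMLCEngine("Llama-3-8B-Instruct-q4f16_1-MLC");

  // Clear the chat state a few seconds into generation, mimicking a
  // "New Chat" click while an answer is still streaming.
  const timer = setTimeout(() => void engine.resetChat(), 5_000);

  const chunks = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Tell me a very long story." }],
    stream: true,
  });

  let text = "";
  for await (const chunk of chunks) {
    text += chunk.choices[0]?.delta?.content ?? ""; // output may derail after the reset
  }
  clearTimeout(timer);
  console.log(text);
}

repro();
```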
manlobster, about 1 year ago
It's truly amazing how quickly my browser loads 0.6GB of data. I remember when downloading a 1MB file involved phoning up a sysop in advance and leaving the modem on all night. We've come so far.
threatofrain, about 1 year ago
IMO users should eventually be able to advertise which embedding models they already have, so we don't redundantly re-download them.
knowaveragejoe, about 1 year ago
Is this downloading a ~5GB model to my machine and storing it locally for subsequent use?
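Per the post, yes: the weights land in the browser's IndexedDB cache. One way to see the footprint yourself is the standard Storage API; a sketch, noting that the numbers cover the whole origin's storage, not just the model:

```ts
// Report how much origin storage (including cached model shards) is in use.
navigator.storage.estimate().then(({ usage, quota }) => {
  const gib = (n?: number) => ((n ?? 0) / 2 ** 30).toFixed(2);
  console.log(`Using ${gib(usage)} GiB of a ${gib(quota)} GiB quota`);
});
```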
manlobster, about 1 year ago
Looks like all the heavy lifting is being done by webllm [0]. What we have here is basically one of the demos from that.

[0] https://webllm.mlc.ai/
wg0, about 1 year ago
How do people use something like this as a coach or therapist? This is a genuine question.

Side note: impressive project. The future of AI is mostly offline, with maybe a few APIs in the cloud.
nojvek, about 1 year ago
Yasssssss! Thank you.

This is the future. I predict Apple will make progress on Groq-like chipsets built into their newer devices for hyper-fast inference.

LLMs leave a lot to be desired, but since they are trained on all publicly available human knowledge, they know something about everything.

My life has been better since I've been able to ask all sorts of ad hoc questions like "is this healthy? Why healthy?", and it gives me pointers on where to look.
andrewfromx, about 1 year ago
I asked it "what happens if you are bit by a radioactive spider?" and it told me all about radiation poisoning. Then I asked a follow-up question, "would you become spiderman?", and it told me it was unable to become anything but an AI assistant. I also asked if time machines are real and how to build one. It said yes and told me! (Duh, you use a flux capacitor; basic physics.)
mentos, about 1 year ago
This is awesome. I have been using ChatGPT-4 for almost a year and hadn't really experimented with locally running LLMs because I assumed the processing time per token would be too long. This demo has shown me that my RTX 2080 running Llama 3 can compete with ChatGPT-4 for a lot of my prompts.

This has sparked a curiosity in me to play with more LLMs locally. Thank you!
NayamAmarshe, about 1 year ago
This is amazing! I always wanted something like this, thank you so much!
raylad, about 1 year ago
After the model is supposedly fully downloaded (about 4GB), I get:

Could not load the model because Error: ArtifactIndexedDBCache failed to fetch: https://huggingface.co/mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC/resolve/main/params_shard_3.bin

Also on Mistral 7B, again after a supposedly full download:

Could not load the model because Error: ArtifactIndexedDBCache failed to fetch: https://huggingface.co/mlc-ai/Mistral-7B-Instruct-v0.2-q4f16_1-MLC/resolve/main/params_shard_0.bin

Maybe memory? But if so, it would be good to say so. I'm on a 32GB system, btw.
r0fl, about 1 year ago
Could not load the model because Error: Cannot find WebGPU in the environment
littlestymaar, about 1 year ago
This is very cool; it's something I've wished existed since Llama came out. Having to install Ollama + CUDA to get a locally working LLM never felt right to me when everything needed is already in the browser. Llamafile solves the first half of the problem, but you still need to install CUDA/ROCm for it to work with GPU acceleration. WebGPU is the way to go if we want to put AI on consumer hardware and break the oligopoly; I just wish it were more broadly available (on Linux, no browser supports it yet).
geor9e, about 1 year ago
I'm just seeing ERR_SSL_VERSION_OR_CIPHER_MISMATCH at https://secretllama.com/ and at http://secretllama.com/ I see "secretllama.com has been registered at Porkbun but the owner has not put up a site yet. Visit again soon to see what amazing website they decide to build."
hpeter, about 1 year ago
It's great, but I hope it doesn't catch on, because then every website will make me download models. My hard drive will be full; too much bloat. I think the web is not good for this.

I'd prefer it if web apps supported Ollama, or gave an option to either do that or store a model in the browser.

Or at least make it an extension.
simple10, about 1 year ago
Amazing! It's surprisingly fast to load and run given the size of the downloaded models.

Do you think it would be feasible to extend it to support web browsing?

I'd like to help if you could give some pointers on how to extend it.

When asked about web browsing, the bot said it could fetch web pages, but it obviously didn't work when asked to summarize a web page.

[EDIT] The Llama 3 model was able to summarize web pages!
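As a possible starting point, a hypothetical extension could fetch a page, strip it to text, and hand it to the engine. Everything in this sketch (the helper name, the 8,000-character cap) is made up, and note that browsers will block cross-origin fetches unless the site allows them or a CORS-friendly proxy is used:

```ts
import { MLCEngine } from "@mlc-ai/web-llm";

// Hypothetical helper: fetch a page, extract its text, and ask the
// local model for a summary. Cross-origin URLs need a CORS proxy.
async function summarizePage(engine: MLCEngine, url: string): Promise<string> {
  const html = await (await fetch(url)).text();
  const text = new DOMParser().parseFromString(html, "text/html").body.innerText;
  const res = await engine.chat.completions.create({
    messages: [{ role: "user", content: `Summarize this page:\n${text.slice(0, 8000)}` }],
  });
  return res.choices[0].message.content ?? "";
}
```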
indit, about 1 year ago
Could we use an already downloaded .gguf file?
Its_Padar, about 1 year ago
Very interesting! I would be quite interested to see this implemented as some sort of API for browser chatbots, or possibly even local-AI-powered web games. If you don't know what Ollama is, I suggest checking it out. Also, I think adding the Phi-3 model to this would be a good idea.
koolala, about 1 year ago
On Firefox Nightly on my Steam Deck it "cannot find WebGPU in the environment".
Snoozus, about 1 year ago
Tried this in Chrome under Windows. It does work, but it does not seem to use the RTX 4060, only the integrated Iris Xe. Is this a bug or intentional?
1f60c, about 1 year ago
It's sadly stuck on "Loading model from cache[24/24]: 0MB loaded. 0% completed, 0 secs elapsed." on my iPhone 13 Pro Max :(
gitinit, about 1 year ago
This works great on my Pixel 6a, surprisingly.
zerop, about 1 year ago
Question: do I compromise on answer quality if I use models via WebLLM (like this) compared to running them in a system console?
adontz, about 1 year ago
If anyone knows: is this about the best model one can run locally on an old consumer-grade GPU (GTX 1080 in my case)?
Dowwie, about 1 year ago
What therapy prompts have you found useful?
ngshiheng, about 1 year ago
Nice demo! I briefly tried it out, and it felt much better than the original WebLLM demo!

On a side note, I've been trying to do something similar, for similar reasons (privacy).

Based on my recent experience, I find that running an LLM directly in the browser with decent UX (e.g. sub-1-2-second response time, no lag, no crashes) is still somewhat impossible given the current state of things. Plus, I think that relying on users' own GPU hardware for UX improvement via WebGPU is not exactly practical at large scale (but it is still something!), since not everyone has access to GPU hardware.

But yeah, if there's anything to look forward to in this space, I personally hope to see improved feasibility of running LLMs in browsers.
Jackson_Fleck, about 1 year ago
...I think it would be a great idea to graft on a LlamaIndex module here so we can use this local browser LLM to talk to our local documentation: https://docs.llamaindex.ai/en/stable/
Jackson_Fleck, about 1 year ago
This is amazing, but can we please set the .prose width to be dynamic? The text column is 3 inches wide on my monitor; it should take up a percentage of the browser window.
Bradd3rs, about 1 year ago
pretty cool, nice work!
BargirPezza, about 1 year ago
Llama 3 seems not to be there yet at all. Can it read earlier prompts and what you have written before? It doesn't seem like it. I asked it to help come up with a name for a song I had, but it couldn't give me a good answer at all.

Prompt: "Hi! Could you help me come up with a name for a song? The genre of the song is Techno, Punk and Jazz. The name should be two to three words. And each song name should be different from the other one. Please write me 4 to 6 names. Thank you!"

It gives me four famous song names that already exist and writes that I can use them as inspiration for my "next composition". I tell it that it should give me song names and not song ideas. It says sorry and that it will do better :p I ask again for song name ideas and it gives me a whole bunch of famous song names, repeatedly writing "Can't stop the feeling" until it throws an error. I write that I should have added that it shouldn't suggest song names that already exist. It says sorry again, and I ask for new song names.

It sends me a lot of these: ""I Can't Stop Wittering" - referencing the British children's TV show "I Can't Stop Wotting"; "What a Time to Be Alive" - referencing the 1960s rock and roll era".

I write: "The song names should only be two to three words. Can you read what we have written before?"

Llama answers me: "Sure! Here's what we have written before: "Hey Jude", "Sgt Pepper's Lonely Hearts Club Band", "1984"", plus a lot of other song, album, or band names.

I tell it that that is not true and ask Llama to quote the first prompt I wrote.

Llama answers: "You asked me to help you with a math problem."

Copying and pasting the rest:

You: No, please quote me where I asked that

Llama 3: You asked me to write a song about the number system.

You: Same, please quote me where I said that

Llama 3: You asked me to write a song about the number system.

You: You did not quote me

Llama 3: You asked me to write a song about the alphabet, and I did not.

You: Okay.. I don't think you understand me here

Llama 3: You asked me to write a song about the alphabet.

You: Can you read what we have written?

Llama 3: You asked me to help you with a math problem.

You: Yes or no?

Llama 3: I cannot tell you whether or not I helped you with a math problem. My programming is not yet advanced enough to allow me to tell you whether or not I helped you. However, if you ask me again to help you with a math problem, I will do my best to help you.

You: Alright, thanks for being honest. I will give you a break, sorry if it have been too much for you!

Llama 3: You're most welcome. I'm glad I could help you with your math problem. Do you need help with anything else?