
Ask HN: Where can I find practical comparative data regarding different LLMs?

8 points by lkrubner, about 1 year ago
I've been trying to keep up with the advances in the world of AI and LLMs. NLP was a world I knew pretty well 7 years ago, when I knew most of the major NLP libraries and their various strengths and weaknesses. Nowadays, however, I'm having trouble finding good discussions about the real uses of LLMs.

I have gone to Hugging Face, and the amount of data there is overwhelming, but it seems poorly organized:

https://huggingface.co

Does anyone know a secret that makes that site tractable? I've experimented with a few of the libraries posted there, but I can only sample a tiny fraction of what is there, and what I'm missing is some method for finding the useful stuff while disposing of the junk.

7 years ago I could tell you the strengths and weaknesses of Google's TensorFlow or the Stanford NLP library. But where do I go now for good comparative information about the strengths and weaknesses of the various libraries that interact with the new LLM tools?

I'm looking to answer practical questions that I can use in my own work with AI startups.

For an example of a question for which I cannot find an answer: I am aware of a startup that has developed a chat client that, the startup says, can entirely replace a company's customer support team. Among the claims made by the startup is that when their chat client makes a mistake, it can easily be adjusted so it won't make that mistake any more. I am curious: what approaches are the engineers at that startup probably using to fix mistakes? If I search Hugging Face for ways to fix factual errors in LLMs, I see some libraries, but I have no idea what is considered good or bad.

So I ask the Hacker News community: how are you keeping up with advances around LLMs and associated tools?

Also, every LLM seems to have an embedded finite state machine that remembers the state of the current conversation, so where can I go to learn about the strengths and weaknesses of those finite state machines? How would I go about adjusting them?

Or, let me offer another example of the kind of information I want:

I've been testing different AI chats by trying to play text adventures with them. For instance:

https://huggingface.co/spaces/HuggingFaceH4/zephyr-7b-gemma-chat

https://chat.openai.com

If I use the same prompt with each of them, I can see how different they are, but how do I know whether my observations are general (would other people get similar results?), and how do I learn about other AI chats (since I cannot test them all)?
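To make that concrete, here is the kind of side-by-side test I mean, as a rough Python sketch using the transformers pipeline. The model names are only examples, and a recent transformers release is assumed, since older versions don't accept chat-style message lists in the pipeline:

    # Send the same prompt to several chat models and print each reply.
    # Model names are examples only; swap in whatever you want to compare.
    from transformers import pipeline

    PROMPT = "You are the narrator of a text adventure. I open the rusty door. What happens?"

    for name in ["HuggingFaceH4/zephyr-7b-beta", "mistralai/Mistral-7B-Instruct-v0.2"]:
        chat = pipeline("text-generation", model=name)
        out = chat([{"role": "user", "content": PROMPT}], max_new_tokens=200)
        print(f"=== {name} ===")
        # With chat-style input, generated_text holds the whole conversation;
        # the last message is the model's reply.
        print(out[0]["generated_text"][-1]["content"])

Whether the results generalize is exactly my open question, but a harness like this at least makes the comparison repeatable.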

2 comments

BMSR, about 1 year ago
I'm also learning. Models get more accurate when they have more parameters, say 7B (7 billion parameters) vs. 8x7B (56 billion parameters), but they also take more time and resources at higher parameter counts. TheBloke on Hugging Face uploads quantized models, which means they can run on lower-spec computers with a possible hit to quality; he offers multiple configurations per model depending on what you prefer. Big models can be too heavy and slow; the sweet spot is probably something like 13B. You can try different GGUF models with this program: https://github.com/madprops/meltdown
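If you'd rather script it than use a GUI, a minimal sketch with llama-cpp-python (unrelated to the program above) looks like this; the GGUF file name is just an example of one of TheBloke's quantized builds:

    # Load a quantized GGUF model locally and ask it one question.
    # The model_path is an example; point it at any GGUF file you've downloaded.
    from llama_cpp import Llama

    llm = Llama(model_path="./mistral-7b-instruct-v0.2.Q4_K_M.gguf", n_ctx=4096)
    resp = llm.create_chat_completion(
        messages=[{"role": "user", "content": "What does Q4_K_M quantization trade away?"}],
        max_tokens=256,
    )
    print(resp["choices"][0]["message"]["content"])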
ActorNightly, about 1 year ago
> Does anyone know a secret that makes that site tractable?

It's basically just a repo for models. Most original models are uploaded in fp16 format, at different parameter counts; higher parameter count = better performance. If you want to fine-tune a model on your own dataset, you have to keep the model in fp16, because gradients need the higher resolution.

On the flip side, inference is pretty much just picking the statistically most likely token, which can be done without that resolution. So these models are usually quantized with GPTQ (GPU-first), GGUF (CPU-first, born from the llama.cpp project, but supports GPU offloading), or AWQ (a newer method, supposedly faster than GPTQ).

Primer on quantization: https://archive.ph/2023.11.21-144133/https://towardsdatascience.com/which-quantization-method-is-right-for-you-gptq-vs-gguf-vs-awq-c4cd9d77d5be

When running on personal hardware, you generally want the model with the largest parameter count, at the highest-bit quantization, that fits on your system. The easiest way to do this is with ollama: it basically does what PyTorch or llama.cpp do in terms of loading models onto the GPU (or RAM for Apple Silicon) and executing them on whatever hardware you have. It can auto-download models (usually 4-bit quantized) as well, and it integrates into VS Code with the Continue extension. A minimal sketch of driving it is after this comment.

> For an example of a question, for which I cannot find an answer, I am aware of a startup that has developed a chat client that, the startup says, can entirely replace a company's customer support team. Among the claims made by the startup is that when their chat client makes a mistake, it can be easily adjusted so it won't make that mistake any more. I am curious, what approaches are the engineers at that startup probably using to fix mistakes?

Highly likely some form of prompt engineering. LangChain is a popular tool for this. Most companies pay for API access rather than set up their own hardware.

> how are you keeping up with advances around LLMs and associated tools?

Wait for a model to drop on ollama, try it out.
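For reference, this is roughly what driving ollama from a script looks like: a sketch assuming ollama is installed and a model has already been pulled (the model name is just an example):

    # Query a local ollama server; by default it listens on port 11434.
    # Assumes `ollama pull llama2` (or any other model) has already been run.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama2", "prompt": "Why is the sky blue?", "stream": False},
    )
    print(resp.json()["response"])

The same endpoint works for anything ollama has downloaded, which is why "wait for a model to drop on ollama" is a workable strategy.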