
Self-host StableLM-2-Zephyr-1.6B. Portable across GPUs, CPUs, OSes

2 points | by 3Sophons | over 1 year ago

1 comment

3Sophons | over 1 year ago
“Small” LLMs are the ones with 1-2B parameters (instead of 7-200B). They are still trained on trillions of words. The idea is to push the envelope on “information compression” to develop models that can be much faster and much smaller for specialized use cases, such as a “pre-processor” for larger models on the edge.

StableLM-2-Zephyr-1.6B is one such model. The video shows a LlamaEdge app running this model at real-time speed on a MacBook. With the LlamaEdge cross-platform runtime, you can customize the app on a MacBook and deploy it on a Raspberry Pi or Jetson Nano device!
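
For reference, a minimal sketch of the workflow the comment describes, following LlamaEdge's published quick-start pattern. The Hugging Face repo, GGUF filename, release URL, and prompt-template name are assumptions drawn from LlamaEdge/second-state documentation, not from this thread:

```bash
# Install the WasmEdge runtime with its GGML (llama.cpp) plugin, which LlamaEdge apps run on.
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh \
  | bash -s -- --plugin wasi_nn-ggml

# Download a quantized GGUF build of StableLM-2-Zephyr-1.6B (repo/filename assumed).
curl -LO https://huggingface.co/second-state/stablelm-2-zephyr-1.6b-GGUF/resolve/main/stablelm-2-zephyr-1_6b-Q5_K_M.gguf

# Download the portable chat app -- a single .wasm binary.
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-chat.wasm

# Chat with the model: --nn-preload hands the GGUF weights to the Wasm app,
# and -p selects the prompt template (name assumed for this model family).
wasmedge --dir .:. \
  --nn-preload default:GGML:AUTO:stablelm-2-zephyr-1_6b-Q5_K_M.gguf \
  llama-chat.wasm -p stablelm-zephyr
```

The portability claim in the comment rests on the app being a Wasm binary: the same llama-chat.wasm file can be copied unchanged to a Raspberry Pi or Jetson Nano, with only the installed WasmEdge runtime and its hardware backend differing per device.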