
AMD Open-Source 1B OLMo Language Models

78 points by ebalit 7 months ago

4 comments

duchenne 7 months ago
Training a 1B model on 1T tokens is cheaper than people might think. An H100 GPU can be rented for $2.50 per hour and can train around 63k tokens per second for a 1B model. So you would need around 4,400 hours of GPU time, costing only about $11k. And costs will keep going down.
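
A quick sanity check of that arithmetic, treating the $2.50/hour rate and 63k tokens/s throughput as the commenter's stated assumptions:

```python
# Back-of-the-envelope training cost using the figures quoted in the comment above.
tokens = 1e12                # 1T training tokens
tokens_per_second = 63_000   # assumed H100 throughput for a 1B-parameter model
usd_per_gpu_hour = 2.50      # assumed rental price

gpu_hours = tokens / tokens_per_second / 3600
cost = gpu_hours * usd_per_gpu_hour
print(f"{gpu_hours:,.0f} GPU-hours, ~${cost:,.0f}")  # ≈ 4,409 GPU-hours, ~$11,023
```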
throwaway888abc 7 months ago
"Furthermore, AMD OLMo models were also able to run inference on AMD Ryzen™ AI PCs that are equipped with Neural Processing Units (NPUs). Developers can easily run Generative AI models locally by utilizing the AMD Ryzen™ AI Software."

Hope these AI PCs will also run something better than a 1B model.

What is it useful for? Spellcheck?
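
For reference, a minimal local-inference sketch with Hugging Face transformers; the Hub ID "amd/AMD-OLMo-1B-SFT" is assumed from AMD's release, and a plain CPU is enough for a 1B model (no NPU required):

```python
# Minimal sketch: running AMD's 1B OLMo model locally via transformers.
# Model ID is an assumption based on AMD's release; verify on the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amd/AMD-OLMo-1B-SFT"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Write one sentence about NPUs:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```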
sireat 7 months ago
Baby steps, but how useful is a 1B model these days?

It seems actual domain-specific usefulness (say, a specific programming language, translation, etc.) starts at 3B models.
adt 7 months ago
https://lifearchitect.ai/models-table/