Explaining Llm.c in Layman Terms

38 points by publius_0xf3 about 1 year ago

5 comments

saagarjha about 1 year ago
I'm curious who the target audience of this tweet is. Are there a lot of people who know all about tensors, deep learning, and PyTorch but have no idea what C is?
teleforce about 1 year ago
Very interesting notes, and as one of the sibling comments points out, it is really a rumination on abstraction rather than on the implementation of llm.c itself.

It would be very interesting to implement this small LLM in the D language using the new tiny autograd library in D [1]. Since the D compiler now supports C, this should result in fewer lines of code, excluding the small autograd library in D [2]. Given that D code can inline C code, and even CTFE it, the implementation should be faster as well if done correctly.

[1] A tiny autograd library: https://code.dlang.org/packages/tiny-autodiff

[2] Adding an ANSI C11 compiler to D so it can import and compile C files directly: https://news.ycombinator.com/item?id=27102584
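For readers unfamiliar with what a "tiny autograd library" provides, here is a minimal sketch of reverse-mode autodiff over scalar values in C. It is not code from llm.c and not the tiny-autodiff API; the names (Value, add, mul, backward) are purely illustrative, and the simple recursion assumes an expression tree with no shared subexpressions.

```c
/* Minimal scalar reverse-mode autodiff sketch (illustrative only). */
#include <stdio.h>

typedef struct Value Value;
struct Value {
    double data;        /* forward value                  */
    double grad;        /* accumulated d(output)/d(this)  */
    Value *lhs, *rhs;   /* operands (NULL for leaf values) */
    char   op;          /* '+', '*', or 0 for a leaf       */
};

static Value val(double x)           { return (Value){ x, 0.0, NULL, NULL, 0 }; }
static Value add(Value *a, Value *b) { return (Value){ a->data + b->data, 0.0, a, b, '+' }; }
static Value mul(Value *a, Value *b) { return (Value){ a->data * b->data, 0.0, a, b, '*' }; }

/* Propagate the output gradient back to the leaves (tree only, no DAG sharing). */
static void backward(Value *v)
{
    if (v->op == '+') {
        v->lhs->grad += v->grad;
        v->rhs->grad += v->grad;
    } else if (v->op == '*') {
        v->lhs->grad += v->rhs->data * v->grad;
        v->rhs->grad += v->lhs->data * v->grad;
    }
    if (v->lhs) backward(v->lhs);
    if (v->rhs) backward(v->rhs);
}

int main(void)
{
    Value x = val(2.0), w = val(3.0), b = val(1.0);
    Value xw = mul(&x, &w);   /* x * w     */
    Value y  = add(&xw, &b);  /* x * w + b */

    y.grad = 1.0;             /* seed d(y)/d(y) = 1 */
    backward(&y);

    printf("y = %.1f, dy/dx = %.1f, dy/dw = %.1f, dy/db = %.1f\n",
           y.data, x.grad, w.grad, b.grad);   /* 7.0, 3.0, 2.0, 1.0 */
    return 0;
}
```

A library built on this idea adds tensors, more operators, and a topological ordering instead of naive recursion; the point of the sketch is only that a few dozen lines suffice for the core mechanism the comment refers to.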
ChrisArchitect about 1 year ago
Actual link: https://twitter.com/karpathy/status/1778153659106533806

It's not a thread, no nitter needed.
ChrisArchitect about 1 year ago
Related: Llm.c – LLM training in simple, pure C/CUDA

https://news.ycombinator.com/item?id=39973467
fefe23 about 1 year ago
This does not explain llm.c. It is more like a rumination on abstraction.