
RLHF a LLM in <50 lines of Python

223 points · by patelajay285 · over 1 year ago

18 comments

mk_stjames · over 1 year ago
I feel the preparation and loading of the dataset has been abstracted too far away. I have no idea what data format I need or how it is loaded for this (is it using a pre-prepared Hugging Face dataset?). If I have local data, how should it be loaded? What does that even look like? Is it expecting some sort of JSON?

When you get so far as abstracting every step into a one-liner from Hugging Face, including the download of a prepared dataset, with no example of doing the same on a custom local dataset, you've abstracted too far to be useful for anyone other than the first user.
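(For illustration only, and not specific to the library in the post: a local preference dataset is often just a JSONL file of prompt/chosen/rejected records, which the Hugging Face `datasets` library can load directly. The file name and field names below are assumptions, not the post's documented interface.)

    # Illustrative sketch: each line of preferences.jsonl is one preference pair,
    #   {"prompt": "...", "chosen": "...", "rejected": "..."}
    from datasets import load_dataset

    dataset = load_dataset("json", data_files="preferences.jsonl", split="train")
    print(dataset[0])  # {"prompt": ..., "chosen": ..., "rejected": ...}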
jerpint · over 1 year ago
I don't understand the obsession with LOC for wrappers; that's the whole point of a wrapper. It makes things much easier for the user at the expense of making them less hackable.

The title should instead be "Library for low-code RLHF in Python".
lopkeny12ko · over 1 year ago
It's not 50 lines of code if all the real work is done by importing a library...

That's like saying: I can solve any problem in 2 lines of code. I'll publish a library for it first, then:

    import foo; foo.do_the_thing()

Magic!
patelajay285 · over 1 year ago
Hi everyone, there are no easy tools for generating synthetic data or for simply training and aligning LLMs in Python; most of what's out there is messy ad hoc scripts.

DataDreamer is an open-source Python package with a nice API, from the University of Pennsylvania, that does all of this, and we're actively developing it. I'll be here to answer questions.

https://github.com/datadreamer-dev/DataDreamer
g4zj · over 1 year ago
Very cool, but I can't help but feel like titles that reference low LOC are a bit clickbait-y when nearly all the heavy lifting is done by imported libraries.
imjonse · over 1 year ago
The first paragraph says RLHF can be used to align models, and the second says here's how to do it, using DPO. These two methods are not the same, and the latter is not an instance of the former.
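(For context, this is the standard DPO objective from Rafailov et al.: it optimizes the policy directly on preference pairs with a classification-style loss, with no learned reward model and no RL loop, which is why it is usually described as an alternative to RLHF rather than a form of it.)

    \mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\ \pi_{\mathrm{ref}}) =
      -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[
        \log\sigma\!\left(
          \beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)}
          \;-\;\beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}
        \right)
      \right]

Here y_w and y_l are the preferred and dispreferred completions for prompt x, and beta controls how far the trained policy may drift from the reference model.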
proto-n · over 1 year ago
Yeah, well, in bash I can do it in one line: `python train.py`. I hate examples like this; the 50 LOC claim is totally useless (and so is the code example, as I can't learn anything from it).
MrYellowP · over 1 year ago
I don't prefer aligned models, and I'm a human. It's not okay to claim that that's what humans prefer. There might be a subset of humans who can't handle words, but they're not even remotely in the majority.

Aligned models are dumber, treat everyone like they're stupid, immature idiots who can't handle words, and act like a wannabe moral authority.
theptip · over 1 year ago
I'm interested in whether local RLHF is actually viable; can you get meaningful steering from 1k feedback points on a narrow task? That annotation count feels achievable for a single dedicated annotator making a few comments per minute (though tedious); 10k would be a week of work, so achievable for a very dedicated hobbyist; and 100k seems out of reach for a hobby project.

Say, for simple conversation use cases (e.g. customer support for a specific product, interactive fiction, things like that without deep technical knowledge).

I was also wondering if it's possible to do such RLHF for SD running locally.
aethelyon · over 1 year ago
This is cool, but the data collection is the hard part, right?
bbstats · over 1 year ago
I can abstract this to 2 lines
v4dok · over 1 year ago
I feel like the current meta on fine-tuning LLMs is random accounts on X/Twitter. Google results are littered with SEO garbage, or guides that fail to work the moment you need something slightly different.
rldjbpin · over 1 year ago
It is very conflicting to see "do X in Y LOC" in this field, especially when most of the workflow for different models is fragmented across non-overlapping frameworks/tooling.

To actually do something from scratch, or using the author's code, requires adopting something esoteric just for this purpose. For those scenarios it is nice to appreciate HF and their abstraction, but the reinventing-the-wheel situation is very frustrating to work with.

If you want to go beyond the demo, you have to deal with this painful reality. I hope there is more progress on this rather than on making stacks of APIs.
spdustin · over 1 year ago
It occurs to me that there must be a model that's been "aligned" opposite to the usual RLHF. Or has nobody done that?
ilaksh · over 1 year ago
How do you normally do DPO? Is that built into PyTorch or something?

Theoretically the hard part is collecting the examples with rejections, etc.
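(It isn't part of PyTorch itself; the common route is the Hugging Face TRL library. A minimal sketch is below; the model name and the tiny inline dataset are made up for illustration, and argument names shift between TRL releases, so treat this as a rough outline rather than copy-paste code.)

    # Rough sketch of DPO with Hugging Face TRL (details vary by version).
    from datasets import Dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from trl import DPOConfig, DPOTrainer

    model_name = "gpt2"  # illustrative; any causal LM checkpoint
    model = AutoModelForCausalLM.from_pretrained(model_name)
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token

    # DPO trains on preference pairs: a prompt plus a "chosen" and a "rejected"
    # completion. Collecting these pairs is the genuinely hard part.
    train_dataset = Dataset.from_dict({
        "prompt":   ["Explain DPO in one sentence."],
        "chosen":   ["DPO fine-tunes a model directly on preference pairs."],
        "rejected": ["DPO is a kind of database optimizer."],
    })

    args = DPOConfig(output_dir="dpo-out", beta=0.1, per_device_train_batch_size=1)
    trainer = DPOTrainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        processing_class=tokenizer,  # named `tokenizer` in older TRL releases
    )
    trainer.train()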
potatoman22 · over 1 year ago
This seems useful, thanks!
rrr_oh_man · over 1 year ago
RLHF = Reinforcement Learning from Human Feedback
cztomsik · over 1 year ago
DPO is not RLHF.