I feel the preparation and loading of the dataset has been abstracted too far away. I have no idea what data format I need or how it is loaded here (is it using a pre-prepared Hugging Face dataset?). If I have local data, how should it be loaded? What does that even look like? Is it expecting some sort of JSON?

When you get as far as abstracting every step into a one-liner that downloads a prepared dataset from Hugging Face, with no example of doing the same on a custom local dataset, you've abstracted too far to be useful to anyone other than the first user.
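For what it's worth, if the example is built on the Hugging Face `datasets` library (which the one-liner suggests), local data can usually be loaded through the same API. A minimal sketch, assuming a local JSONL preference file; the file name and column names here are illustrative guesses, not whatever the article actually expects:

    from datasets import load_dataset

    # Load a local JSONL file instead of a hub dataset.
    # Each line is one JSON object; the fields below are assumptions.
    dataset = load_dataset(
        "json",
        data_files={"train": "my_preferences.jsonl"},
    )["train"]

    print(dataset[0])  # e.g. {"prompt": ..., "chosen": ..., "rejected": ...}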
I don’t understand the obsession with LOC for wrappers - that's the whole point of a wrapper. It makes things much easier for the user at the expense of making them less hackable.

The title should instead be "Library for low-code RLHF in Python".
It's not 50 lines of code if all the real work is done by importing a library...

That's like saying I can solve any problem in 2 lines of code. I'll publish a library for it first, then:

    import foo; foo.do_the_thing()

Magic!
Hi everyone, there are no easy tools for generating synthetic data or for training and aligning LLMs simply in Python; most of what's out there is messy ad-hoc scripts.

DataDreamer is an open-source Python package from the University of Pennsylvania with a nice API that does all of this, and we're actively developing it. I'll be here to answer questions.

https://github.com/datadreamer-dev/DataDreamer
Very cool, but I can't help but feel like titles that reference low-LOC are a bit clickbait-y when nearly all the heavy lifting is done by imported libraries.
The first paragraph says RLHF can be used to align models, and the second says here's how to do it using DPO. These two methods are not the same, and the latter is not an instance of the former.
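To make the distinction concrete: RLHF in the original sense trains a separate reward model and then optimizes the policy against it with an RL algorithm like PPO, while DPO skips both and optimizes a preference loss directly from log-probabilities. A minimal sketch of the DPO objective, assuming you already have per-sequence log-probs from the policy and a frozen reference model:

    import torch.nn.functional as F

    def dpo_loss(policy_chosen_logps, policy_rejected_logps,
                 ref_chosen_logps, ref_rejected_logps, beta=0.1):
        # No reward model, no rollouts: just a classification-style loss on
        # how much the policy prefers the chosen response over the rejected
        # one, relative to the reference model.
        chosen_logratio = policy_chosen_logps - ref_chosen_logps
        rejected_logratio = policy_rejected_logps - ref_rejected_logps
        return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()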
Yeah, well, in bash I can do it in one line: `python train.py`. I hate examples like this; the 50 LOC claim is totally useless (and so is the code example, as I can't learn anything from it).
I don't prefer aligned models, and I'm a human. It's not okay to claim that that's what humans prefer. There might be a subset of humans who can't handle words, but they're not even remotely in the majority.

Aligned models are dumber, treat everyone like they're stupid, immature idiots who can't handle words, and act like a wannabe moral authority.
Interested whether local RLHF is actually viable; can you get meaningful steering from 1k feedback points on a narrow task? That annotation count feels achievable for a single dedicated annotator making a few comments per minute (though tedious), 10k would be a week of work, so achievable for a very dedicated hobbyist, and 100k seems out of reach for a hobby project.

Say, for simple conversation use cases (e.g. customer support for a specific product, interactive fiction, things like that without deep technical knowledge).

I was also wondering if it's possible to do this kind of RLHF for SD running locally.
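As a rough sanity check on those annotation estimates (the rates are my own assumptions, not anything from the article):

    # Back-of-envelope: how many preference annotations fit in a work week?
    rate_per_min = 3        # "a few comments per minute"
    hours_per_day = 8
    days = 5
    per_week = rate_per_min * 60 * hours_per_day * days
    print(per_week)         # 7200 -- so ~10k in a week is plausible but grueling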
I feel like the current meta on finetuning LLMs is random accounts on X/Twitter. Google results are littered with SEO garbage or guides that fail to work the moment you need something slightly different.
I feel very conflicted seeing "do x in y LOC" in this field, especially when most of the workflows for different models are fragmented across non-overlapping frameworks and tooling.

To actually do something from scratch, or using the author's code, requires adopting something esoteric just for this purpose. In those scenarios it is nice to appreciate HF and their abstractions, but the constant reinventing of the wheel is very frustrating to work with.

If you want to go beyond the demo, you have to deal with this painful reality. I hope there is more progress on that rather than on ever-taller stacks of APIs.