Training and aligning LLMs with RLHF and RLHF alternatives

102 points | by rasbt | over 1 year ago

2 comments

scoresmoke | over 1 year ago
Discussions about LLM alignment often overlook data quality and quantity. It turns out that current models like Llama 2 use 10K+ prompts and responses for supervised fine-tuning (SFT) and 100K+ human preference pairs. While the preferences are fairly easy to annotate, producing a good SFT dataset is not easy.

https://evalovernite.substack.com/p/rlhf-math-aint-enough

https://doi.org/10.5281/zenodo.8186168
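For readers unfamiliar with the two data regimes the comment contrasts, here is a minimal sketch of what individual records look like, assuming the common prompt/response and chosen/rejected conventions used by open SFT and preference datasets (the field names and example text are illustrative, not taken from any specific dataset):

```python
# Illustrative only: hypothetical records contrasting SFT data
# (prompt -> single reference response) with preference data
# (prompt -> chosen/rejected pair) used for reward modeling in RLHF.

# Supervised fine-tuning (SFT): ~10K+ of these; each one requires
# a carefully written reference response, which is the expensive part.
sft_example = {
    "prompt": "Explain what RLHF is in two sentences.",
    "response": (
        "RLHF fine-tunes a language model against a reward model trained "
        "on human preference judgments. It is typically applied after "
        "supervised fine-tuning."
    ),
}

# Human preference pair: ~100K+ of these; annotators only choose the
# better of two candidate responses, which is why this data is cheaper
# to collect per example than SFT data.
preference_example = {
    "prompt": "Explain what RLHF is in two sentences.",
    "chosen": (
        "RLHF fine-tunes a model using a reward model learned from human "
        "comparisons of candidate responses."
    ),
    "rejected": "RLHF is when a model is trained.",
}
```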
jamesblonde | over 1 year ago
I read here that Yann LeCun claimed that even with RLHF, LLMs will still hallucinate - that it's an unavoidable consequence of their autoregressive nature.

https://www.hopsworks.ai/dictionary/rlhf-reinforcement-learning-from-human-feedback