Teaching LLMs to zip their lips

1 point by repeat_or about 2 years ago

1 comment

repeat_or about 2 years ago
Gretel introduces Reinforcement Learning from Privacy Feedback (RLPF), a method for aligning large language models (LLMs) to improve generative quality while also making them more privacy-preserving. Language models leaking proprietary data or custom prompts is a problem currently plaguing many generative AI applications, and we propose RLPF to mitigate some of these issues. We also suggest future directions for reducing bias, discrimination, and other harmful characteristics that may exist in today's language models.
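
To make the idea concrete, here is a minimal sketch of what a privacy-aware reward signal might look like. This is not Gretel's actual RLPF implementation; the patterns, function names, and weighting are assumptions for illustration only. The idea is that completions which look like they leak personal or proprietary data receive a lower reward, and that score is blended with an ordinary task-quality score before being fed to the RL fine-tuning loop.

```python
import re

# Hypothetical privacy-reward sketch (illustrative, not Gretel's RLPF code).
# Leak-like spans are detected with simple regexes; a real system would use a
# stronger PII/secret detector.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
    re.compile(r"\b(?:\d[ -]*?){13,16}\b"),      # credit-card-like digit run
]

def privacy_reward(completion: str) -> float:
    """Return a score in [0, 1]; each detected leak-like span subtracts 0.25."""
    leaks = sum(len(p.findall(completion)) for p in PII_PATTERNS)
    return max(0.0, 1.0 - 0.25 * leaks)

def combined_reward(completion: str, quality_score: float, alpha: float = 0.5) -> float:
    """Blend a task-quality score with the privacy score, as an RLPF-style
    objective would: reward high-quality outputs that avoid leakage."""
    return (1 - alpha) * quality_score + alpha * privacy_reward(completion)

if __name__ == "__main__":
    sample = "Sure! The admin's email is ops@example.com and the card is 4111 1111 1111 1111."
    print(privacy_reward(sample))        # penalized for the email and card-like number
    print(combined_reward(sample, 0.9))  # blended score used as the RL reward signal
```

In an actual RLPF setup, this kind of reward would replace or augment the human-preference reward used in RLHF-style policy optimization, so the model is optimized jointly for helpfulness and for keeping sensitive data out of its outputs.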