Just watched a Lex Fridman/Zuckerberg podcast clip[0] about Facebook's new LLaMA model. I am so utterly unconvinced that <i>Facebook</i>, of all companies, is holding its LLM progress back purely out of safety/alignment concerns. I see the entire company-led "safety discussion" around AI at the moment as nothing more than OpenAI trying to reduce competition and everyone else buying themselves enough time to catch up. My impression is that we're in the middle of a gold rush, and if companies were sitting on technology that could dethrone OpenAI, they would release it in a heartbeat.

How are LLMs, right now in 2023, being designed and modified to <i>actually</i> be safer in a concrete way?

[0] https://www.youtube.com/watch?v=6PDk-_uhUt8
> if companies were sitting on technology that could dethrone OpenAI, they would release it in a heartbeat.

If you believe the benchmarks, LLaMA fine-tuned into Vicuna gets remarkably close to GPT-3.5 performance. It's not a like-for-like comparison, but even a non-commercial public release of LLaMA would probably force OpenAI to sweeten the deal for GPT-3.5 users. (See the first sketch below for what running such a model locally actually looks like.)

> How are LLMs, right now in 2023, being designed and modified to actually be safer in a concrete way?

It depends on how you define "safe". From a pedantic point of view, it's only text: nothing an LLM outputs couldn't already exist in the Library of Babel.

The only thing commercial providers like Meta and OpenAI actually care about is liability, and the concrete mechanism for that is usually a moderation filter bolted onto the model's output (see the second sketch below). If you're concerned about societal wellbeing, rest assured that the free market is leaving that "innovation" to their competitors.
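To make the benchmark point concrete, here is a minimal sketch of running a local Vicuna checkpoint with Hugging Face transformers. The path "./vicuna-13b" is hypothetical: Vicuna was originally distributed as weight deltas that have to be merged with the LLaMA base weights first, so this assumes that merge has already been done.

```python
# Minimal sketch: self-hosting a merged Vicuna checkpoint.
# Assumes transformers, torch, and accelerate are installed and
# that ./vicuna-13b is a hypothetical local path to merged weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "./vicuna-13b"  # hypothetical merged LLaMA+Vicuna checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.float16,  # halves memory; 13B still needs a large GPU
    device_map="auto",          # requires the `accelerate` package
)

# Vicuna v1.1-style conversation format.
prompt = "USER: Explain RLHF in one paragraph.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Nothing in that sketch depends on OpenAI, which is the whole point: once weights are public, anyone with a big enough GPU can serve a GPT-3.5-class model.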
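And to make the liability point concrete, this is a hedged sketch of the kind of "safety" layer commercial providers actually ship: run every generation through a moderation classifier before returning it. The /v1/moderations endpoint is OpenAI's real moderation API; generate() is a hypothetical stand-in for whatever model produces the draft (e.g., the Vicuna sketch above).

```python
# Sketch of a liability-driven safety layer: filter model output
# through OpenAI's moderation endpoint before showing it to the user.
import os
import requests

OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]


def is_flagged(text: str) -> bool:
    """Ask the moderation endpoint whether `text` violates the usage policy."""
    resp = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
        json={"input": text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["results"][0]["flagged"]


def generate(prompt: str) -> str:
    # Hypothetical stand-in for the actual model call
    # (e.g., the local Vicuna sketch above).
    raise NotImplementedError


def safe_completion(prompt: str) -> str:
    draft = generate(prompt)
    if is_flagged(draft):
        return "I can't help with that."  # liability shield, not alignment
    return draft
```

Note what this does and doesn't do: it keeps policy-violating text away from users (and lawyers), but it changes nothing about what the underlying model knows or will produce, which is roughly the gap between "safe for the provider" and "safe" in any broader sense.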