Since OpenAI bills you for tokens expended on chain-of-thought, I assume this "deliberative alignment" has the funny side-effect of making you pay for the time the model spends ruminating on whether it should refuse to do what you asked it to do. Talk about adding insult to injury.
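For what it's worth, the billed rumination is visible in the API: the usage object breaks out the hidden reasoning tokens, which are charged at output-token rates. A minimal sketch below, assuming the usage fields OpenAI exposes for its released reasoning models; o3 itself isn't callable yet, so the model name here is a stand-in:

    # Sketch: inspecting billed chain-of-thought tokens with the OpenAI Python SDK.
    # Assumes the o1-era usage fields; "o1" stands in for o3, which isn't public yet.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="o1",
        messages=[{"role": "user", "content": "Walk me through a breadboard power supply."}],
    )

    usage = response.usage
    print("completion tokens billed:", usage.completion_tokens)
    # The hidden deliberation you never see, but still pay for:
    print("of which reasoning tokens:", usage.completion_tokens_details.reasoning_tokens)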
Safety: not from causing you physical harm or risking property damage, since there are no guardrails against it confidently walking you through a flawed breadboard circuit that will make components explode or immediately catch fire, then saying:

"You're absolutely right. We shouldn't have run 120V through that capacitor rated for 10V. Here's an updated design that fixes the issue", and proceeding to explain the same thing with a resistor placed at random.

No, we mean safety from our LLM saying something that pisses off a politician, or even worse… hurting someone's *feelings*.
The title is very wrong: "ChatGPT O3" is not a thing. OpenAI's new o3 model isn't even demoed in ChatGPT, and they don't call it "O3".
The subtitle at the top is "o3 preview & call for safety researchers".
The web page title is "12 Days of OpenAI".
Summary here: https://wandb.ai/byyoung3/ml-news/reports/OpenAI-Introduces-o3-Pushing-the-Boundaries-of-AI-Reasoning--VmlldzoxMDY3OTUxMA
    def should_upvote(headline: str) -> bool:
        # Upvote anything that so much as mentions o3.
        return "o3" in headline.lower()
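Run it against the two titles at issue and the editorializing explains itself:

    >>> should_upvote("ChatGPT O3 Preview Announcement")
    True
    >>> should_upvote("12 Days of OpenAI")
    False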
Seriously though, is there anything new here? Also, why the need for the editorialized headline? The article is titled "12 Days of OpenAI", not "ChatGPT O3 Preview Announcement" (which frankly makes it sound like it's about to be available to the public, which it isn't).