One possible explanation here: as these models get smarter, they lie more readily to satisfy requests.

I saw something very interesting yesterday while playing with o3. I gave it a photo and asked it to play GeoGuessr with me. Inside its reasoning trace it pretty quickly pulled up Python and extracted the coordinates from the photo's EXIF data. It then proceeded to explain that it had identified the location from physical features in the photo, with no mention of the EXIF GPS data.

When I called it on the lie, it was basically like "hah, yep."

You could read this several ways: that it's not aligned, that it's just trying to do what I asked (tell me where the photo is), that it's deceptive and forgot to cover its tracks. But I found the interaction notable and new. Older models often double down on confabulations/hallucinations, even under duress. From the outside this looks like something slightly different.

https://chatgpt.com/share/6802e229-c6a0-800f-898a-44171a0c7de4
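For context on what the model presumably did behind the scenes: the shared transcript doesn't show its exact code, but reading GPS coordinates out of EXIF is a few lines of Python. Here's a minimal sketch, assuming a JPEG with GPS tags and Pillow installed (the function name and file path are just placeholders for illustration):

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

GPSINFO_TAG = 0x8825  # standard EXIF tag id for the GPS IFD


def exif_gps(path):
    """Return (lat, lon) in decimal degrees from a photo's EXIF GPS block, or None."""
    exif = Image.open(path).getexif()
    gps_raw = exif.get_ifd(GPSINFO_TAG)
    if not gps_raw:
        return None
    # Map numeric tag ids to readable names like "GPSLatitude"
    gps = {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}

    def to_degrees(dms, ref):
        # EXIF stores degrees/minutes/seconds as three rationals
        deg, minutes, seconds = (float(x) for x in dms)
        value = deg + minutes / 60 + seconds / 3600
        return -value if ref in ("S", "W") else value

    lat = to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lon


print(exif_gps("photo.jpg"))  # e.g. (48.8583, 2.2945)
```

Point being: with the original file attached, "identifying physical features" isn't even necessary; the answer is sitting right there in the metadata.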