This is undeniably cool and impressive, but I think proceeding down this research path, at this pace, is quite irresponsible.<p>The primary effect of OpenAI's work has been to set off an arms race, and the effect of <i>that</i> is that humanity no longer has the ability to make decisions about how fast and how far to go with AGI development.<p>Obviously this isn't a system that's going to recursively self-improve and wipe out humanity. But if you extrapolate the current crazy-fast rate of advancement a bit into the future, it's clearly heading towards a point where this gets extremely dangerous.<p>It's good that they're paying lip service to safety/alignment, but what actually matters, from a safety perspective, is the relative rate of progress: how well we can understand and control these language models versus how capable we make them. There <i>is</i> good research happening in language-model understanding/control, but it's happening slowly compared to the rate of capability advances, and that's a problem.