Seeing someone as trustworthy as Scott choose to work on AI safety is a pretty good sign for the state of the field IMO. It seems like a lot of studious people agree AI alignment is important but then end up shoehorning the problem into whatever framework they are most expert in. When all you have is a hammer etc... I feel like he has good enough taste to avoid this pitfall.

Semi-related: I'd want to see some actual practical application for this research to prove they're on the right track. But maybe conceptually that's just impossible without a strong AI to test with, at which point it's already over? Alignment papers are impressively complex and abstract, but I have this feeling while reading them that it's just castles made of sand.
"When you start reading about AI safety, it’s striking how there are two separate communities—the one mostly worried about machine learning perpetuating racial and gender biases, and the one mostly worried about superhuman AI turning the planet into goo" - great quote.
The debate around "What is AGI?" is becoming increasingly irrelevant. If in two iterations DALL-E can do 30% of graphic design work just as well as a human, who cares if it really "understands" art? It is going to start making an impact on the world.

Same thing with self-driving. If the car doesn't "understand" a complex human interaction but still achieves 10x safety at 5% of the cost of a human, it is going to have a huge impact on the world.

This is why you are seeing people like Scott change their tune. As AI tooling continues to get better and cheaper and Moore's law continues for a couple more years, GPT will be better than humans at MANY tasks.
"[...] where the misuse of AI for spambots, surveillance, propaganda, and other nefarious purposes is already a major societal concern [...]"<p>I'm curious what he will do and whether for example he approves of the code laundering CoPilot tool. I also hope he'll resist being used as an academic promoter of such tools, explicitly or implicitly (there are many ways, his mere association with the company buys goodwill already).
'OpenAI, of course, has the word “open” right in its name, ... don’t expect me to share any proprietary information'

Yeah, Mr. Aaronson just lost quite a bit of respect from my side. Going into AI is a great move, but moving to the ClosedAI corporation...? Why?

(Edit: Removed an outdated reference to Elon Musk, thanks @pilaf!)
I'm really happy this is happening and hope to see more. Namely, the AI safety & alignment challenge attracting our best minds, who would previously have prioritized other fields like math, physics, and comp sci.
I'm not sure how Scott ended up buying the party line of the weird AGI Doomsday Cult but so be it. In any case, none of the things he says about verifying AI in this post make any sense at all, and if OpenAI actually cares about verifying AI and not just about hiring people who believe in the AGI Doomsday party line, probably they should hire verification people. Alas, that is not the point.
AI is not going to become self-aware and destroy the world.

AI is going to cause something like the industrial revolution of the 19th century: massive changes in who is rich, massive changes in the labor market, massive changes in how people make war, etc.

It’s already started, really.

What worries me most is that as long as society is capitalist, AI will be used to optimize for self-enrichment, likely causing an even greater concentration of capital than what we have today.

I wouldn’t be surprised if the outcome is a new kind of aristocracy, where society is divided between those who have access to AI and those who don’t.

And that, I don’t think, falls into the “AI safety” field. Especially since OpenAI is VC-backed.
Working at OpenAI instead of trying unconventional options such as decentralized model governance might increase inequalities. Why would the community decide to repeat what they denounce in big tech?
So, physics is a dead end? Given that Scott is running his own research lab, a year is a very long time, and his working outside his field is an indication that physics is in big trouble.
From the website header:

> Also, next pandemic, let's approve the vaccines faster!

This is obviously very important to them. Is there some proof that the vaccine was unnecessarily delayed, or do they just believe that if we mess up and humanity suffers, so what?