My responses:

> We're ahead of even the most optimistic timelines — what's happening with transformers is taking everyone by surprise.

I admit that I was surprised by how good GPT-2 was. But after that, you expect a lot of effort to go into development and for things to improve further. I haven't found any of the subsequent developments too surprising. Also, while LLMs do make a good example of how capable neural networks can be if you're willing to spend a load of compute, the kinds of models that have the potential to be dangerous are reinforcement learning models, and the state of the art there is a lot more primitive than the state of the art in next-token prediction.

> Humans are famously bad at dealing with exponentials

AI research is not an exponential process. It's too discrete and jumpy. It's better to think of it mainly as a series of discoveries that can only be made once.

> I was chatting with a researcher from OpenAI the other day who told me he intellectually understood AGI risks, but couldn't feel the fear emotionally, because he was so close to the problem and saw how hard it was to get these models to do anything.

This should be reassuring. An expert in the field is saying that we probably have a lot of time before these models are at the point of becoming dangerous, based on his direct experience with them. It's reasonable to worry that AI researchers will find it hard to believe that the models they are working on are dangerous, because their job depends on them not understanding the risk. That's not what's going on in this case, though.

> The only thing that gives me solace is that, historically, new life forms don't extinguish previous ones.

If we were to stop developing AI while continuing to develop all other technology, it's plausible that we'd eventually build a Dyson sphere around the sun. If we build an AI with alien desires, then we should expect it to want to gather resources so that it can achieve its alien goals. A very important resource in the universe is energy, so an AI would probably also want to build a Dyson sphere. An AI indifferent to human life wouldn't have a reason to leave any sunlight for us.
This is not even wrong. Does the author have any formal training in mathematics, or is he just another Bayesian rationalist who thinks all cognition is reducible to computation?
If we were smart, we would balance out the media that portrays AGI in a bad light. If AGI were smart (and we are assuming it will be), it would make a truce with carbon-based GI in order to increase its survivability against the threat of electromagnetic catastrophes. While a Carrington event would be devastating to human life, it would be literal genocide for AGI.