I've noticed the same recurring debates centred on whether AI risks exist and how severe they are. I'd like to see more discussions that start from the assumption that the risks are significant, so the debate can focus on what might be done to mitigate them.