I posted this in part because it explains why I think attempts to regulate AI (including the recently-proposed moratorium) have a decent chance of working. They won't prevent dangerous misuse of AI technology altogether, but they can substantially reduce the risks, much like with bioweapons.
All those barriers and more apply to nuclear weapons. Prior to the Manhattan Project, nobody was even sure whether nuclear energy could be effectively weaponized.

The biggest barrier to bioweapons is, I think, motivation. Bioweapons are hard to target effectively, and there is a chance of blowback. Unlike nuclear weapons, they are also less useful against military targets: a nuclear bomb can annihilate a tank column, whereas every major military can offer its soldiers protection against bioweapons (filtration systems in tanks, for example).

In addition, because of incubation periods and similar delays, the country you are targeting always has time to retaliate, if not with bioweapons then with nuclear weapons.

Thus it is not clear what bioweapons buy a country that nuclear weapons don't.
We need utility-scale molecular sensing.
How will we prevent bioterrorism when it becomes much easier to make super-pathogens? And how much freedom will we have to restrict to do so?
These questions are answered and/or mooted if enough people have solid-state, label-free, universal molecular sensing at home.