Looks like the actual threat is that it's hard to get currently known chemical weapons synthesized because labs will refuse to do so, while it could be much easier to have some novel AI-generated molecule synthesized because the labs don't know what it does.

Seems easily countered by running the same toxicity-prediction software over incoming synthesis requests (but I'm not sure whether this actually matters, or whether skilled chemists can easily synthesize anything themselves anyway).
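To make that countermeasure concrete, here's a minimal sketch in Python of what gating synthesis orders on a toxicity predictor might look like. Everything here is hypothetical: `predict_toxicity` is a stub standing in for whatever trained model a lab would actually trust, and the 0.8 threshold is an arbitrary example, not a real cutoff.

```python
# Minimal sketch: screen incoming synthesis requests with the same kind of
# toxicity-prediction model that could be used to design the molecule.
# predict_toxicity() is a hypothetical stub -- a real lab would swap in a
# trained model -- and the threshold is likewise an arbitrary example.

TOXICITY_THRESHOLD = 0.8  # assumed cutoff; choosing this is the hard part


def predict_toxicity(smiles: str) -> float:
    """Return a predicted toxicity score in [0, 1] for a molecule given
    as a SMILES string. Stubbed out here; plug in a real model."""
    raise NotImplementedError("plug in a real toxicity model")


def should_accept(smiles: str) -> bool:
    """Accept a synthesis request only if predicted toxicity is below the
    threshold -- even for molecules on no controlled-substance list."""
    return predict_toxicity(smiles) < TOXICITY_THRESHOLD
```

The obvious weakness is that an attacker could optimize against the same model until it scores their molecule as benign, so this would be a speed bump rather than a wall.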
For me, AI has also suggested countless methods with step-by-step instructions to achieve xyz (like exporting data from one program to another) whilst hallucinating buttons that don't exist, functions that don't exist, disregarding file incompatibilities, and so on.

I would take whatever it has to say about untested chemical weapons with a very large pinch of salt.
The worst-case scenario I can think of is a generated prion disease... a respiratory version of mad cow disease, or something like that.

Fortunately the training dataset for that is extremely small, and protein folding/generation is a different beast, but it still doesn't seem that far away.
Makes you wonder if you could get an LLM to find you common ingredients for making these things, but then I remember that chlorine gas is already easily accessible and easy to make. Surely, like many things, info hazards are often contained: if you know how to do one part, you'd be able to do the rest. I'm not really sure this is a real issue. What does everyone think?
Here's alternate reporting:

"AI is dreaming up drugs that no one has ever seen. Now we've got to see if they work."

https://www.technologyreview.com/2023/02/15/1067904/ai-automation-drug-development/