Given how shit GPT is at programming, despite the amount of training data available in that domain, I highly doubt it would be more useful in this area than a Google search.
Can't read the article because of the paywall, so this may or may not be related.

Is it fair to say that the real liability here is a dataset mapping protein/molecule structures to outcomes/effects? Hypothetically, the govt could always require OpenAI to blur its responses to queries with malicious intent. But if the underlying corpus is available, what's stopping a bad actor from training another model to do the same thing?

I guess the question I'm asking is: what risk is unique to their model, if not the data it was trained on?