“ART provides tools that enable developers and researchers to defend and evaluate their ML models and applications against a number of adversarial threats, such as evasion, poisoning, extraction, and inference.”

The first two attacks, evasion and poisoning, highlight how incredibly important high-quality data is when training models. Evasion attacks succeed (producing false negatives) when the model was not trained on a diverse enough selection of data, and poisoning can occur when the data sources are not well vetted. Data quality is probably the single biggest problem with ML models, and I wish we'd see more of a focus on it.
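To make the evasion idea concrete, here is a minimal, self-contained sketch of the classic fast-gradient-sign style of evasion attack (the kind ART implements, though this toy example uses plain NumPy rather than ART's API). The linear logistic model, weights, and inputs below are all made up for illustration: the attacker nudges each input feature by a small `eps` in the direction that increases the model's loss, which for a linear model is guaranteed to push the input toward a misclassification.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # logistic loss for a label y in {-1, +1} under a linear model w·x
    return np.log1p(np.exp(-y * np.dot(w, x)))

def fgsm(w, x, y, eps):
    # gradient of the logistic loss with respect to the INPUT x (not the weights)
    grad = -y * w * sigmoid(-y * np.dot(w, x))
    # perturb each feature by eps in the direction that increases the loss
    return x + eps * np.sign(grad)

w = np.array([1.0, -2.0, 0.5])   # hypothetical "trained" model weights
x = np.array([0.3, -0.4, 1.1])   # an input the model classifies correctly
y = 1                            # its true label

x_adv = fgsm(w, x, y, eps=0.1)
# each feature moves by at most eps, yet the model's loss strictly increases
```

A defense has to cope with the fact that `x_adv` looks almost identical to `x` (every coordinate differs by at most 0.1 here), which is why naive input filtering tends to fail against these attacks.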
I am very glad to see this. I looked for techniques to counter adversarial AI and was disappointed to find a lot of approaches that turned out to be useless and nothing actually effective. Many people have published ideas without seriously trying to attack them themselves. I hope someone can identify better approaches.
I realize that IBM is effectively the US government's IT department, but their involvement doesn't instill a great deal of confidence that anything this program has created is more than a heavily documented dumpster fire.
A book about IBM and Nazi Germany <a href="https://en.wikipedia.org/wiki/IBM_and_the_Holocaust" rel="nofollow">https://en.wikipedia.org/wiki/IBM_and_the_Holocaust</a>