Are they trying to justify bombing kids and civilians in the future by blaming it on AI? The outcome is terrible; it's like saying a car is driven by AI while it keeps hitting walls and pedestrians every time it drives.
The article is utterly silent on what seems a critical question to me: how is the outcome evaluated? Number of targets generated? Fraction accepted by a human analyst? The outcome of a detailed investigation after a strike as to what was actually hit (yeah right)? The effect that striking a target has on Israel's long-term security (sounds likely to be negative at this point to me)?

How is a "target" defined anyway?