Years ago, scholars such as Didier Bigo raised concerns about the targeting of individuals based merely on (indirect) association with a "terrorist" or "criminal". Originally used in the context of surveillance (see the Snowden revelations), such systems would target anyone who was, say, fewer than three steps removed from an identified individual, thereby removing any sense of due process or targeted surveillance. Now such AI systems are being used to actually kill people, not just surveil them.<p>IHL actually prohibits the killing of persons who are not combatants or "fighters" of an armed group. Only those who have the "continuous function" to "directly participate in hostilities"[1] may be targeted for attack at any time. Everyone else is a civilian who can only be directly targeted when, and for as long as, they directly participate in hostilities, such as by taking up arms, planning military operations, laying mines, etc.<p>That is, only members of the armed wing of Hamas (not recruiters, weapon manufacturers, propagandists, financiers, …) can be targeted for attack - all the others must be arrested and/or tried. Otherwise, the list of permissible civilian targets becomes so wide that in any regular war pretty much any civilian could be targeted, such as the bank employee whose company has provided loans to the armed forces.<p>Lavender is so scary because it enables Israel's mass targeting of people who are protected against attack by international law, with only a flimsy (political, not legal) justification: their association with terrorists.<p>[1]: <a href="https://www.icrc.org/en/doc/assets/files/other/icrc-002-0990.pdf" rel="nofollow">https://www.icrc.org/en/doc/assets/files/other/icrc-002-0990...</a>
Never thought I'd even consider this, but is this a case where those involved in producing and developing this software should be tried for murder or crimes against humanity?<p>My understanding is that AI in its current form is not a technology that should be anywhere near this type of use.<p>Again, my understanding: inference models are by their very nature largely non-deterministic when it comes to evaluating accurately against specific desired outcomes. They need large-scale training data to provide even low levels of accuracy, and that type of training data just isn't available; my take is that it's all likely built on one big hallucination. I'd be surprised if this AI model was even 10% accurate. It wouldn't surprise me if it was less than 1% accurate. Not that accuracy appears to be critical, from what I've read.<p>The Guardian article (<a href="https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes" rel="nofollow">https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai...</a>) makes me wonder whether AI development should be allowed at all. I didn't even have that thought before today.<p>This specific application and the claimed rationale are as close as I have come to seeing what I consider a true and deliberate "evil application" of technology out in the open.<p>Is this a naive take?
As bad as this story makes the Israelis sound, it still reads like ass-covering to make it sound like they were at least trying to kill militants. It's been clear from the start that they've been targeting journalists, medical staff and anyone involved in aid distribution, with the goal of rendering life in Gaza impossible.
I'm disturbed by the idea that an AI could be used to make decisions that could proactively kill someone. (Presumably computers already make decisions that passively kill people by, for example, navigating a self-driving car.) Though there was a human sign-off in this case, it seems one step away from people being killed by robots with zero human intervention, which is about one step away from the plot of Terminator.<p>I wonder what the alternative is in a case like this. I know very little about military strategy: without the AI, would Israel have been picking targets less haphazardly, or more so? I think there may be some misreading of this article where people imagine that if Israel weren't using an AI they wouldn't drop any bombs at all; that's clearly unlikely given that there's a war on. Obviously people, including innocents, are killed in war, which is why we all loathe war and pray for the current one to end as quickly as possible.
I know many people won't read past the headline, but please try to.<p>This is the second paragraph:<p>"In addition to talking about their use of the AI system, called Lavender, the intelligence sources claim that Israeli military officials permitted large numbers of Palestinian civilians to be killed, particularly during the early weeks and months of the conflict."
I suggest everyone listen to the current season of the Serial podcast.<p>>processing masses of data to rapidly identify potential “junior” operatives to target. Four of the sources said that, at one stage early in the war, Lavender listed as many as 37,000 Palestinian men who had been linked by the AI system to Hamas or PIJ.<p>This is really no different from how things worked after 2001, when deciding who to send to Gitmo and other more secretive prisons, or which locations to bomb.<p>More than anything else it feels like, just as in the corporate world, the engineers in the army are overselling the AI buzzword to do exactly what they were doing before it existed.<p>If you use your PayPal account to send money to an account identified as ISIS, you're going to get a visit from a three-letter organization really quickly. This sounds exactly like that, from what the users are testifying to. Any decision to bomb or not bomb a location wasn't up to the AI, but to humans.
> “We were not interested in killing [Hamas] operatives only when they were in a military building or engaged in a military activity,” A., an intelligence officer, told +972 and Local Call. “On the contrary, the IDF bombed them in homes without hesitation, as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”
I wonder how accurate this technology really is, or whether they care so little about the results and more about the optics of being seen as advanced. On one hand, it’s scary to think this technology exists; on the other, it might just be a pile of junk, since the output is so biased. What’s even scarier is that it’s proof that people in power don’t care about being “correct”; they care about having a justification to confirm their biases. It’s always been the case, but it’s even more damning that this extends to AI. Previously, you were limited by how many humans could lie; now you’re limited by how fast your magic black box runs.
In 2018, Google CEO Sundar Pichai, SVP Diane Greene, SVP Urs Hölzle, and top engineer Jeff Dean built a system like Lavender for the US military (Project Maven). The US military planned to use it to analyze mass-surveillance drone footage to pick suspects in Pakistan for assassination. They had already dropped bombs on hundreds of houses and vehicles, murdering thousands of suspects and their families and friends [0].<p>I was working in Urs's Google Technical Infrastructure division. I read about the project in the news. Urs had a meeting about it where he lied to us, saying the contract was only $9M. It had already been expanded to $18M and was on track for $270M. He and Jeff Dean tried to downplay the impact of their work. Jeff Dean blinked constantly (lying?) while downplaying the impact. He suddenly stopped blinking when he began to talk about the technical aspects. I instantly lost all respect for him and the company's leadership.<p>Strong abilities in engineering and business often do not come with well-developed morals. Sadly, our society is not structured to ensure that leaders have the necessary moral education, or to remove them when they fail so completely at moral decisions.<p>[0] <a href="https://en.wikipedia.org/wiki/Drone_strikes_in_Pakistan" rel="nofollow">https://en.wikipedia.org/wiki/Drone_strikes_in_Pakistan</a>
The Guardian also has this story on its front page; they were given details about it pre-publication:<p><a href="https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes" rel="nofollow">https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai...</a><p>And, personally, I think that stories like this are of public interest - while I won’t ask for it directly, I hope the flag is removed and the discussion can happen.
The difference between the previously revealed 'Gospel' and this 'Lavender' is explained here:<p>> "The Lavender machine joins another AI system, “The Gospel,” about which information was revealed in a previous investigation by +972 and Local Call in November 2023, as well as in the Israeli military’s own publications. A fundamental difference between the two systems is in the definition of the target: whereas The Gospel marks buildings and structures that the army claims militants operate from, Lavender marks people — and puts them on a kill list."<p>It's one thing to use these systems to mine data on human populations for who might be in the market for a new laptop, so they can be targeted with advertisements - it's quite another to target people with bombs and drones based on this technology.
Given the total failure to achieve any of its stated objectives, has this use of AI benefited the IDF at all?<p>I would argue that the only outcome it has had that directly relates to IDF objectives has probably been negative (i.e., the unintended killing of hostages).<p>Sadly, I think the continued use of this AI is supported because it helps provide cover for individuals involved in war crimes. I wouldn't be surprised if the AI really isn't very sophisticated at all; for the purpose of providing cover, that doesn't matter.
<i>Two sources said that during the early weeks of the war they were permitted to kill 15 or 20 civilians during airstrikes on low-ranking militants. Attacks on such targets were typically carried out using unguided munitions known as “dumb bombs”, the sources said, destroying entire homes and killing all their occupants.</i><p>The world should not forget this.
"zero-error policy" as described here is a remarkable euphemism. You might hope that the policy is not to make any errors. In fact the policy is not to acknowledge that errors can occur!
AI generated kill lists are sadly inevitable. Had hoped we'd get a few more years before we'd actually see it being deployed. Lots to think about here
Getting all these reports about atrocities, I wonder if the conflict in the area has grown more brutal over the decades or if this is just business as usual. I'm in my late 30s and grew up in the EU, where the conflict in the region was always present. I don't remember hearing the kind of stories that come to light these days, though: indiscriminate killings, food and water being targeted, aid workers being killed. I get that it's hard to know what's real and what's not, and that we live in the age of information, but I'm curious how, at a high level, the conflict is developing. Does anyone have a good source that deals with that?
My question is:<p>How far does the AI system go… is it behind the decision to starve the population of Gaza?<p>And if it is behind the strategy of starvation as a tool of war, is it also behind the decision to kill the aid workers who are trying to feed the starving?<p>How far does the AI system go?<p>Also, can an AI commit a war crime? Is it any defence to say, “The computer did it!” or “I was just following the AI’s orders!”?<p>There’s so much about this death-machine AI I would like to know.
@dang Please consider that this is an important and well sourced article regarding military use of AI and machine learning and shouldn't disappear because some users find it upsetting.
that would explain the news today of how Israel killed seven aid workers in Gaza [0]<p>[0] <a href="https://www.reuters.com/world/middle-east/what-we-know-so-far-about-seven-aid-workers-killed-gaza-by-israel-2024-04-03/" rel="nofollow">https://www.reuters.com/world/middle-east/what-we-know-so-fa...</a>
As someone working in the AI field, I find this use of AI truly terrifying. Today it may be used to target Hamas, accepting a relatively large number of civilian deaths as permissible collateral damage, but nothing guarantees that it won't be exported and used somewhere else. On top of that, I don't think anything is done to alleviate biases in the data (if you're used to targeting people from a certain group, then your AI system will keep targeting people from that group) or to validate the predictions after a "target" is bombed. I wish there were more regulation of these use cases. Too bad the EU AI Act doesn't address military uses at all.
> One source stated that human personnel often served only as a “rubber stamp” for the machine’s decisions, adding that, normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing [...]<p>Brings the Ironies of Automation paper to mind: <a href="https://en.m.wikipedia.org/wiki/Ironies_of_Automation" rel="nofollow">https://en.m.wikipedia.org/wiki/Ironies_of_Automation</a><p>Specifically: If _most_ of a task is automated, human oversight becomes near useless. People get bored, are under time pressure, don't find enough mistakes etc and just don't do the review job they're supposed to do anymore.<p>A dystopian travesty.
Is the same system used to direct bombing in Lebanon against Hezbollah?<p>If so, it's worth noting that we have much better data on that campaign. We know exactly how many Hezbollah members have died because that organization actually releases that information. We have good numbers on civilian casualties. Naturally there are many different factors, but I think Israel has done a much better job over there in terms of minimizing civilian casualties. There have been some notable incidents, like (IIRC) journalists getting hit, but the overall numbers, I think, are weighted significantly towards military targets.
The capacity for computers to make errors has now far exceeded that of tequila and handguns.<p>I'm sorry. This is so terrible that humor is the only recourse left to me. We were once afraid of AI drones with guns murdering the wrong people, but now we have an AI that is being used to plan a systematic bombing campaign. Human pilots and all the associated support personnel are its tools, and liberal quotas have been set on how many of the wrong people each strike is permitted to hit. Yet again, reality has surpassed the science-fiction nightmare.
The name of Lavender makes this so surreal to me for some reason. I'm of the opinion that algorithms shouldn't determine who lives and dies, but it's so common even outside of war.
perhaps apocryphal quote from IBM:<p><pre><code> "A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION"
</code></pre>
It's sort of irrelevant whether some shitty computer system is killing people - the people who need to be arrested are the people who allowed the shitty computer system to do that. We obviously cannot allow "oh, it's not my fault, I chose to let a computer kill people" to be an excuse or a defence for murder or manslaughter or ... anything.
I don’t want to talk about the war — mostly, I don’t know enough about the history to discuss it. But I want to talk about the use of technology with the intention to exterminate life. AI shows great promise for humanity, but can also extinguish it if misused.<p>Over a thousand years ago, gunpowder was invented. This technology enabled humans to finally break through mountains and build tunnels. It enabled the beautiful display of fireworks. But the misuse of this technology ultimately led to the destruction of cultures and civilizations.<p>This latest development with AI, as implemented in Lavender, is one that’s exceptionally dangerous. This latest misuse of technology should concern us all.<p>We must not allow the proliferation of this brilliant technology to be used for the purpose of destruction. It concerns me greatly.<p>I hope that we can resolve conflicts and differences in ways that are civil.
The sad and simple truth (trying not to sound political, but it's pretty damned hard given the context) is that not so long ago, lists and very flimsy justifications were at the root of a lot of pain and suffering for the very people now perpetrating the same.
Apart from all the horribleness and the knowing murder of civilians, the idea of a 9-to-5 soldier who performs military activity, then goes home to his family well within range of the enemy's weapons and intelligence, expecting that he and his family will be safe there while he sleeps, is a bit insane. I can't imagine any army hellbent on winning fast would pass up that opportunity.<p>The USA didn't exactly have much stricter conditions or much better accuracy in its intelligence. It did nothing qualitatively different; it just labeled anyone in the blast radius as an unknown enemy combatant in the reports. And the USA never had to operate at this volume. I guess that's just how modern war looks from a position of superior firepower.
Monstrous. From some of the quotes alone, let alone the numbers, it's clear that Palestinian lives matter about as much to the Israeli government as they do to the machines. If this is the future of warfare we've taken a dark new path.
<a href="https://www.cfr.org/article/us-aid-israel-four-charts" rel="nofollow">https://www.cfr.org/article/us-aid-israel-four-charts</a>
Wild how much money we (US taxpayers) give them.
These descriptions are chilling. The mechanistic theme of efficiency is reminiscent of death camps.<p>We can kill more. Feed us targets. We can do it cheaply and fast. 10-20 civilians per speculative target is acceptable to us.
<p><pre><code> Additional automated systems, including one called “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.
</code></pre>
This means they are actually targeting the children's phones at night, presupposing that their father is nearby. They are doing this because Hamas operatives probably don't take their own phones home with them.
It is interesting to see how cell phone data was used as features and inputs to the model (along with other surveillance data), and how the model's parameters were adjusted to achieve high levels of correlation. Human behavior around sharing cell phones appears to have created a false-positive bias. It's too late now, but the first thing the entire Palestinian population should have done was smash all their phones and go completely dark.
Next step is to automate this entire chain. Not far away from some military deploying fully autonomous identify, target & kill systems now. The pieces are all in place. Human rights and oversight are not the first priority in all militaries.<p>AI system says person X in location Y needs to be taken out due to "terrorist association". Check if location Y is cleared for operations. Command has given general authority for operations in this region.<p>An autonomous drone is deployed like a Patriot missile shooting out from some array into the night sky, quietly flies to location Y, identifies precise GPS coordinates and sends itself including a sizeable warhead into the target. Later, some office dude sits down at his desk at 8:30am, opens some reporting program.<p>"Ah, 36 kills last night." <i>Takes a sip of coffee.</i>
Accepting technological barbarism is a choice. Among engineers there should be a broad refusal to work on such systems and a blacklist for those who do.
I am reminded of Poindexter's[1] Total Information Awareness project, which I thought at the time was too interesting to wholly disappear. I must admit this knowledge influenced one or two of my own blog postings on what I call "Strategic Software"[2].<p>[1]: <a href="https://en.wikipedia.org/wiki/Total_Information_Awareness" rel="nofollow">https://en.wikipedia.org/wiki/Total_Information_Awareness</a>
[2]: <a href="https://blog.eutopian.io/tags/strategic-software/" rel="nofollow">https://blog.eutopian.io/tags/strategic-software/</a>
Related from earlier:<p><i>Israel used AI to identify 37,000 Hamas targets</i><p><a href="https://news.ycombinator.com/item?id=39917727">https://news.ycombinator.com/item?id=39917727</a>
So much technological power, and still no approach to preventing violence and imprisoning aggressors and murderers instead of killing them.<p>In the past there was all this talk of non-lethal weaponry, but nowadays it seems to be used at best "in the small", by police and not the military.<p>Killing will only ever get easier, faster, and more remote from human action, oversight, and consequence for the perpetrator. Too fast for humans to understand, too remote to feel.
Meanwhile, China is working on automated production facilities that can make 1,000 cruise missiles per day:<p><a href="https://twitter.com/Aryan_warlord/status/1774859594747273711" rel="nofollow">https://twitter.com/Aryan_warlord/status/1774859594747273711</a><p>A perfect match for a targeting AI: the AI could even customize each missile as it's being built, according to the target it selected.
I don't like anything about this war, but in a way I think concerns about AI in warfare are, at this stage, overblown. I'm more concerned about the humans doing the shooting.<p>Let's face it: in any war, civilians are really screwed. It's true here; it was true in Afghanistan, Vietnam, and WWII. They get shot at, they get bombed, by accident or not, they get displaced. Milosevic in Serbia didn't need an AI to commit genocide.<p>The real issue to me is what the belligerents are OK with. If they are OK with killing people on flimsy intelligence, I don't see much difference between perfunctory human analysis and a crappy AI. Are we saying that somehow Hamas gets brownie points for <i>not</i> using an AI?
How does this system get its input? Are Palestinians using IDF-tapped cell towers? Or is it possible to use roaming towers for this? Is e.g. Google or Facebook involved at a mobile OS or app level? Maybe backdoors local to the area?<p>It seems like the whole cell phone infrastructure needs to be torn down.
Technology like this raises a moral conundrum.<p>Minimizing deaths is the humane approach to war. So we move away from broad killing mechanisms (shelling, crude explosives, carpet bombing) in favor of precise killing machines. Drones, targeted missiles, and now AI allow you to be ruthlessly efficient in killing an enemy.<p>The question is: how cold and inhuman can these methods be, if they are in fact reducing overall deaths?<p>I won't pretend an answer is obvious.<p>The West hasn't seen a real war in a long time. Its impression of war is either WWI-style mass deaths on both sides or overnight annihilation like America's attempts in the Middle East. So our vocabulary limits us to words like genocide, overthrow, insurgency, etc. This is war. It might not map onto our intuitions from recent memory, but this is exactly what it looks like.<p>When you're in a long, drawn-out war with a technological upper hand, you leverage all technology to help you win. At the same time, once Pandora's box is open, it tends to stay open for your adversaries as well. We did well to maintain global consensus on chemical and nuclear warfare. I don't see any such consensus coming out of the AI era just yet.<p>All I'll say is that I won't be quick to make judgements on the morality of such tech in war. What do you think happened to the spies who were caught due to the decoding of Enigma?
>While humans select these features at first, the commander continues, over time the machine will come to identify features on its own. This, he says, can enable militaries to create “tens of thousands of targets,”<p>So overfitting or hallucinations as a feature. Scary.
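To make the worry concrete: if a system's own high-scoring outputs get folded back in as new "positives", the effective decision boundary drifts outward and the flagged set balloons. A deliberately crude toy simulation of that feedback loop (nothing below is from the article; the population, traits, and thresholds are invented purely for illustration):<p><pre><code>import random

random.seed(0)

# Toy population: each "person" is just a count of traits shared with a
# hand-labelled seed set (a stand-in for whatever features a real system uses).
population = [random.randint(0, 10) for _ in range(100_000)]

# Seed labels: only people sharing 9+ traits were hand-labelled as positives.
threshold = 9
flagged = {i for i, t in enumerate(population) if t >= threshold}

# Self-training loop: each round, near-miss outputs are folded back in as
# "positives", which effectively loosens the boundary.
for round_no in range(1, 4):
    threshold -= 1  # stand-in for the machine "discovering" ever-weaker features
    flagged |= {i for i, t in enumerate(population) if t >= threshold}
    print(f"round {round_no}: threshold={threshold}, flagged={len(flagged):,}")
</code></pre>
The point of the toy is only that, without an external check on precision, "the machine identifying features on its own" is a mechanism for the target list to grow, not for it to get more accurate.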
"This will get flagged to death in minutes as what happens to all mentions of israel atrocities here" (now dead)<p>It maybe worth noting that there is at least one notification service out there to draw attention to such posts. Joel spolsky even mentioned such a service that existed back when stackoverflow was first being built.<p>Human coordination is arguably the most powerful force in existence, especially when coordinating to do certain things.<p>Also interesting: it would seem(!) that once an article is flagged, it isn't taken down but simply disappears from the articles list. This is quite interesting in a wide variety of ways if you think about it from a global cause and effect perspective, and other perspectives[1]!<p>Luckily, we can rest assured that all is probably well.<p>[1] <a href="https://plato.stanford.edu/entries/perception-problem/" rel="nofollow">https://plato.stanford.edu/entries/perception-problem/</a>
It’s dark but so obligatory…<p><a href="https://youtube.com/watch?v=dub8fBuXK_w&pp=ygUZaXRzIGxhdmVuZGVyIG5vdCBsYXZlbmRlcg%3D%3D" rel="nofollow">https://youtube.com/watch?v=dub8fBuXK_w&pp=ygUZaXRzIGxhdmVuZ...</a>
> the system makes what are regarded as “errors” in approximately 10 percent of cases<p>This statement means little without knowing the accuracy of a human doing the same job.<p>Without that information this is an indictment of military operational procedures, not of AI.
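For scale, a back-of-the-envelope check using two figures quoted elsewhere in the article - roughly 37,000 people flagged and errors in "approximately 10 percent of cases" (the numbers are the article's; the arithmetic and variable names are mine):<p><pre><code># Figures quoted in the article; the arithmetic is only an illustration.
flagged = 37_000      # people Lavender reportedly marked at one stage of the war
error_rate = 0.10     # "errors in approximately 10 percent of cases"

wrongly_flagged = flagged * error_rate
print(f"Wrongly flagged at a 10% error rate: {wrongly_flagged:,.0f}")
# -> Wrongly flagged at a 10% error rate: 3,700
</code></pre>
Whatever the comparable human error rate is, a 10% rate applied at this scale still means thousands of people misidentified, before counting anyone killed alongside a "correct" target.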
“One day, totally of my own accord, I added something like 1,200 new targets to the [tracking] system, because the number of attacks [we were conducting] decreased,” the source said.<p>So they had daily quotas for killings. Literally a killing machine with an input capacity of 1,200 targets per day that has to be fed. Just like the Nazis during WW2.
+972 magazine is EXTREMELY anti-Israel and anti-semitic, so this article is written through the lens of despising Israel and Jews. Here are some of their other article titles, which you can find on their home page:<p>1. Hebrew University’s Faculty of Repressive Science
2. The spiraling absurdity of Germany’s pro-Israel fanaticism
3. The first step toward disintegrating Israel’s settler machine<p>As such, their view is not at all balanced or even-handed. Objective truth obviously matters very little to them since they exhibit such open bias and loathing towards Israel and the Jewish people.
From what I understand, there appears to be little to no oversight of how these models are trained and evaluated.<p>If the markers, i.e. the features, discussed in the article are anything to go by, it is a very disturbing method of classifying a target. If human evaluators use the same approach to target bombings, then there is no supporting the way this war is being fought.
Unfortunately, Big Tech has been very effective in spreading a message that helps Israel maintain the plausible deniability that comes from a system like Lavender.<p>For at least 15 years we've had personalized newsfeeds in social media. For even longer we've had search engine ranking, which is also personalized. Whenever criticism is levelled against Meta or Twitter or Google or whoever for the results of that ranking, it's simply blamed on "the algorithm". That serves the same purpose: to provide moral cover for human actions.<p>We've seen the effects of direct human intervention in cases like Google Panda [1]. We also know that search engines and newsfeeds filter out and/or downrank objectionable content. That includes obvious categories (eg CSAM, anything else illegal) but it also includes value-based judgements on perfectly legitimate content (eg [2]).<p>Lavender is Israel saying "the algorithm" decided what to strike.<p>I want to put this in context. In ~20 years of the Vietnam War, 63 journalists were killed or lost (presumed dead) [3]. In the 6 months since October 7, at least 95 journalists have been killed in Gaza [4]. In the years prior, a large number were still killed [5], famously including the American citizen Shireen Abu Akleh [6].<p>None of this is an accident.<p>My point here is that anyone who blames "the algorithm" or deflects to some ML system is purposely deflecting responsibility from the human actions that created it and that allow it to continue.<p>[1]: <a href="https://en.wikipedia.org/wiki/Google_Panda" rel="nofollow">https://en.wikipedia.org/wiki/Google_Panda</a><p>[2]: <a href="https://www.hrw.org/report/2023/12/21/metas-broken-promises/systemic-censorship-palestine-content-instagram-and" rel="nofollow">https://www.hrw.org/report/2023/12/21/metas-broken-promises/...</a><p>[3]: <a href="https://en.wikipedia.org/wiki/List_of_journalists_killed_and_missing_in_the_Vietnam_War" rel="nofollow">https://en.wikipedia.org/wiki/List_of_journalists_killed_and...</a><p>[4]: <a href="https://cpj.org/2024/04/journalist-casualties-in-the-israel-gaza-conflict/" rel="nofollow">https://cpj.org/2024/04/journalist-casualties-in-the-israel-...</a><p>[5]: <a href="https://en.wikipedia.org/wiki/List_of_journalists_killed_during_the_Israeli%E2%80%93Palestinian_conflict" rel="nofollow">https://en.wikipedia.org/wiki/List_of_journalists_killed_dur...</a><p>[6]: <a href="https://en.wikipedia.org/wiki/Killing_of_Shireen_Abu_Akleh" rel="nofollow">https://en.wikipedia.org/wiki/Killing_of_Shireen_Abu_Akleh</a>
<i>“But when it comes to a junior militant, you don’t want to invest manpower and time in it,” he said. “In war, there is no time to incriminate every target. So you’re willing to take the margin of error of using artificial intelligence, risking collateral damage and civilians dying, and risking attacking by mistake, and to live with it.”</i><p>Oh, very noble of you to take on that risk, from that side of the bomb sight.
Fascinating article.<p>> Second, we reveal the “Where’s Daddy?” system, which tracked these targets and signaled to the army when they entered their family homes.<p>This sounds immoral at first, but if proportionality is taken into consideration, the long-term effects might be positive, i.e. fewer deaths in the long run due to the elimination of Hamas staff. The devil is in the details, however, as there is clearly a point beyond which this becomes unacceptable. Sadly, collective punishment is unavoidable in war, and one could argue that, between future Israeli victims and current Palestinian ones, the IDF has a moral obligation to choose the latter.<p>> Fourth, we explain how the army loosened the permitted number of civilians who could be killed during the bombing of a target.<p>The article below states the civilian-to-militant death ratio in Gaza is 1:1; for comparison, the usual figure in modern war is 9:1, such as during the Battle of Mosul against ISIS. They may still be within the realm of moral action here, but the fog of war makes it very difficult to assess.<p><a href="https://www.newsweek.com/israel-has-created-new-standard-urban-warfare-why-will-no-one-admit-it-opinion-1883286" rel="nofollow">https://www.newsweek.com/israel-has-created-new-standard-urb...</a><p>I’m unsure why the UN and Arab nations don’t take control of the situation, get rid of Hamas, provide peacekeeping, integrate Palestine into Israel, and enforce property rights. All this bloodshed is revolting.
Using the latest advances in technology and computing to plan and execute an ethnic cleansing and genocide? Sounds familiar? If not, check "IBM and the Holocaust".
So an article by an organization that is pro-Palestinian (“working to oppose occupation and apartheid”) publishes a story relying on multiple anonymous sources - is there any reason we shouldn’t consider this propaganda? Has this magazine ever published a story that goes against its preferred narrative?
<i>… normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing — just to make sure the Lavender-marked target is male. …</i>
Had a minor panic; I got to a final stage of an interview for a company called "Lavender AI". They were doing email automation stuff, but seeing the noun "Lavender" and "AI" in combination with "bombing" made me think that they might have been part of something horrible.<p>ETA:<p>I wonder if this is going to ruin their SEO...it might be worth a rebrand.
> The following investigation is organized according to the six chronological stages of the Israeli army’s highly automated target production in the early weeks of the Gaza war. First, we explain the Lavender machine itself, which marked tens of thousands of Palestinians using AI. Second, we reveal the “Where’s Daddy?” system, which tracked these targets and signaled to the army when they entered their family homes. Third, we describe how “dumb” bombs were chosen to strike these homes.<p>> Fourth, we explain how the army loosened the permitted number of civilians who could be killed during the bombing of a target. Fifth, we note how automated software inaccurately calculated the amount of non-combatants in each household. And sixth, we show how on several occasions, when a home was struck, usually at night, the individual target was sometimes not inside at all, because military officers did not verify the information in real time.<p>Tbh this feels like making a machine that points at a random point on the map by rolling two sets of dice, and then yelling "more blood for the blood god" before throwing a cluster bomb
Probably going to be flame city in this thread, but I think it’s worth asking: is it possible that, even with collateral damage (killing women and children because of hallucinations), AI-based killing technology is actually more ethical and safer than warfare that doesn’t use AI? But AI is really just another name for math, so maybe it’s not a useful conversation. Militaries use advanced tech, and that’s nothing new.
No human being would accept this. If it is happening to the Palestinian people, it will happen to any other country in the world. Israel is committing genocide in front of the world. Fifty years from now, some people will be sorry while committing another genocide.<p>Be ready to be targeted by AI, from another state, in another war.
What prevents Lavender from being deployed in the EU or US to target Hamas operatives abroad? People would get assassinated randomly and nobody would know why.
What was the code name for the AI that slaughtered 1,200 Israelis and took hundreds hostage? What kind of decision-making went into Hamas raping dozens of women? What kind of AI chose targets in Israel to rocket? One thing's for certain: humans, no matter how "enlightened", can only take so much before they go absolutely postal. "Humanity" and "rules of war" go right out the window when humans are pushed too far. It was going on before this war and will go on afterwards. What, now that we have "precise" weapons, an all-out war of one country vs. another will adhere to some kind of code of ethics? Give me a break. The Dresden bombings, Hiroshima, Nanking, etc.: civilians will ALWAYS get slaughtered a thousand to one in an all-out war.
Automation of target selection is dangerous and brings ethical concerns, but it isn't inherently worse than conventional methods, and the killing of civilians (collateral damage) isn't new. I'd like to see how the Israel-Hamas war compares with other recent wars, especially the Russo-Ukrainian one. Is this new process really worse? Does it lead to more civilian deaths per legitimate target?<p>972mag is a left-wing outlet, and what they say should be viewed with skepticism because they follow a pro-Palestine narrative.
Given Israel's well-documented history and proclivity to commit genocides against the innocent (ironic given what happened in WW2), why is this time in particular so egregious? I don't get it. Poor AI accuracy is an accepted reality, and not just in civilian systems.<p>One silver lining for those who lost their lives to this particular holocaust: these technologies in particular have a tendency to end up being used against the very people who created them or authorized their use.
I'm probably pro-Israel because I'm a realpolitik American who wants what's in America's best interest. (But I don't feel strongly either way.)<p>I just watched someone get their post deleted for criticizing Israel's online PR/astroturfing.<p>Israel's ability to shape online discussion has left a bad taste in my mouth. Trust is insanely low. I think the US should get a real military base in Israel in exchange for our effort. If the US gets nothing for its support, I'd be disgusted.
There are two dimensions of horror here. One is that we as a tech community are building systems that are able to automatically kill human beings. It’s not only this system. I’ve seen images of drones with sniper guns shooting at everyone who moves: kids, women, innocent men. Drones constantly humming above the heads of Palestinians, always observing them. The feeling that death can come at any time. What a f-ing nightmare. Can we in the West even imagine living a life like that?<p>The second is this: why is a western ally allowed to have apartheid, allowed to kill thousands of women and children with or without AI, to besiege (medieval-style) 2.3 million civilians, to starve and dehydrate them to death, all the while comparing a tiny area without warplanes, without a standing military, without statehood to Nazi Germany, and Gaza to Dresden, in order to justify completely leveling Gaza? To Nazi Germany, which had the most advanced technology of its time and threatened the whole world? Dehumanising Palestinians by declaring them all "terrorists", mocking their dead, mutilated bodies in Telegram groups with 125k Israelis (imagine 4 million US citizens in a group mocking other nations’ dead children). Why do we allow this to happen? Why is a western ally allowed to do this while almost all our western governments fund and support it and silence protest against it?
I am more curious about the “compute” behind an AI system like this. It must be extremely complicated to do real-time video feed auditing and classification of targets, etc.<p>How is this even possible to do without having the system make a lot of mistakes? With as much AI talk as there is on HN these days, I would have expected to recall an article that talks about this kind of military-grade capability.<p>Are there any resources I can look at? Maybe someone here can talk about it from experience.
This article tries to hint that Israel is committing genocide in Gaza, which is not true.<p>I'm not sure what is wrong with this technology. They barely mention what this technology has achieved, and speak only about the bad side.<p>This article tries to make you think, behind the scenes, that Israel is a technologically advanced, strong country, and that Gazans are poor people who did nothing.<p>It didn't even mention the big 7 October massacre, where tens or even hundreds of innocent women were raped because they were Israelis. I'm not sure how this kind of behavior can be accepted in any way, and it makes you think that Hamas is not a legitimate organization, but just barbaric monsters.<p>Be sure that Gazan civilians support the massacre: a survey reports that 72% of Palestinians support it [1]; spoiler: it's much higher.<p>[1] <a href="https://edition.cnn.com/2023/12/21/middleeast/palestinians-back-hamas-survey-intl-cmd/index.html" rel="nofollow">https://edition.cnn.com/2023/12/21/middleeast/palestinians-b...</a>
How do people who work on AI reconcile the fact that the product they're working on is going to be used to kill thousands of people with no recourse?<p>It seems like Israel is already bombing indiscriminately, with 35,000 killed (the majority of whom are women and children). Was AI used for these targets?<p>History is going to show a story similar to IBM's facilitation of the Holocaust: this genocide also has people working on tools that enable it, people "just doing their job."<p>Did AI target World Central Kitchen or the 200+ humanitarians, journalists, hostages and medics? This is just one aspect of Apartheid Israel's war crimes.<p>Apartheid Israel seems to be a pariah state: if it's not their hacking or the bombing of consulates, it's the military-industrial-complex relationship with the US. Do they think their actions are conducive to their well-being?
> “This is unparalleled, in my memory,” said one intelligence officer who used Lavender, adding that they had more faith in a “statistical mechanism” than a grieving soldier. “Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.”
The US supporting Ukraine made sense; Russia was the clear aggressor.<p>The US supporting Israel makes very little sense.<p>That being said, Trump signed an order removing reporting of drone strikes by the US military, and he approved more strikes than Obama.<p>So the US likely has systems even more amplified than Lavender and Gospel. We'd have no idea.<p>This season of The Daily Show about AI comes to mind: <a href="https://www.youtube.com/watch?v=20TAkcy3aBY" rel="nofollow">https://www.youtube.com/watch?v=20TAkcy3aBY</a><p>Everyone claiming AI is going to do great good, solve climate change, yada yada, is deeply deluded.<p>AI will only amplify what corporations and state powers already do.
A red flag for me is the part where they say it was left to a human to decide whether the AI had generated a correct target or a false positive, based on voice recognition performed by that human:<p><pre><code> (...) at some point we relied on the automatic system, and we only checked that [the target] was a man — that was enough. It doesn’t take a long time to tell if someone has a male or a female voice (...)
</code></pre>
...sounds fake as shit. Any dumb system can make a male/female determination automatically; there's no fucking way a human needs to verify it by listening to recordings while a sophisticated AI system is doing the filtering.<p>Why would half a dozen active military officers brag about careless use of tech and about bombing families with children while they sleep, risking accusations of treason?<p>Feels like well-done propaganda more than anything else to me.<p>It's plausible they use AI. It's also plausible they don't use it that much.<p>It's plausible it has a high false-positive rate. It's also plausible it has multiple layers of cross-checks and very high accuracy - better than human personnel.<p>It's plausible it's used in a rush without any double-checks at all. It's also plausible it's used alongside or after other intelligence. It's plausible it's used as final verification only.<p>It's plausible that targets are easier to locate at home. It's plausible they're not, i.e. it may be easier to locate them around listed, known operational buildings, tracked vehicles, or while a known, tracked mobile phone is in use, etc.<p>It's plausible that half a dozen active officers want to share this information. It's also plausible that only a narrow group of people has access to it. It's plausible they would not engage in activity that could be classified as treason. It's also plausible most personnel simply don't know the origin of orders up the chain, only the immediate one.<p>It's plausible it's real information. It's also plausible it's fake, or even AI-generated - a good-quality, possibly intelligence-produced fake.<p>Frankly, looking at AI advances, I'd be surprised if propaganda quality lagged behind operational, on-the-ground use.
I can't read the news, or discuss it with my family, because it's so upsetting to watch the world allow a naked genocide. The 7 October terrorist attack was disgusting, and since then Israel has proved to the entire world, beyond anyone's remaining doubt, that they are a disgusting nation.
So it's a sociopathic AI, I guess, as it kills predominantly children, women, and the elderly. Great job, Israel! The emperor has no clothes - the whole world now knows that Israel is a terrorist and apartheid state!
The most disturbing part for me (going beyond the Israel/Palestine conflict) is that modern war is scary:<p>- Weaponized financial trojan horses like crypto<p>- Weaponized chemical warfare through addictions<p>- Drone swarm attacks in Ukraine<p>- AI social-media-engineered outrage to change the public's perception<p>- Partial, jingoistic mainstream war propaganda<p>- Censorship and manipulation of neutral views as immoral<p>- Weaponized AI software<p>Looks like a major escalation towards a total war of sorts.
Israel's evil keeps taking me by surprise. I guess when people go down the path of dehumanization there are truly no limits to what they are ready to do.<p>But what is even sadder is that the supposedly morally superior western world is entirely bribed and blackmailed to stand behind Israel. And then you have countries like Germany where you get thrown in jail for being upset at Israel.
> “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.<p>That is appalling.
I’m really not sure why this got flagged. It seemed like a well sourced and technology-focused article. Independent of this particular conflict, such automated decision making has long been viewed as inevitable. If even a small fraction of what is being reported is accurate it is extraordinarily disturbing.
> “You don’t want to waste expensive bombs on unimportant people — it’s very expensive for the country and there’s a shortage [of those bombs]”<p>At that point I had to scroll back up to check whether this was just a really twisted April Fools' joke.
"Lavender learns to identify characteristics of known Hamas and PIJ operatives, whose information was fed to the machine as training data, and then to locate these same characteristics — also called “features” — among the general population, the sources explained. An individual found to have several different incriminating features will reach a high rating, and thus automatically becomes a potential target for assassination."<p>Hamas combatants like fried chicken, beer, and women. I also like these things. I can't possibly see anything wrong with this system...
Why is this flagged?<p>Our premier AI geniuses were all squawking to Congress about the dangers of AI, and here we see that "they essentially treated the outputs of the AI machine “as if it were a human decision.”<p>Sounds like you want to censor information that could hurt your bottom line.
HN has a serious problem if factual technology stories cannot exist here because some people don't like the truth.<p>This should be advertised. The true price of AI is people using computers to make decisions no decent person would. It's not a feature, it's a war crime.
On October 7th, by murdering, raping, and abducting 1,200 Israeli civilians, Hamas - the acting sovereign of Gaza - chose total war. I hope this serves as a lesson to all those in Iran, Iraq, Syria, and especially Lebanon who think about repeating it.
This practice is akin to physically and mentally abusing a puppy, letting it grow into a fearful and aggressive dog, and then saying: "What an aggressive dog! It needs to be euthanized."
In war, the first casualty is the truth.<p>We have no idea whether this story itself is relaying anything of value. For all we know, stories like this could be a part of the war effort.
I wonder if the name the Israelis gave the system betrays their intent. I noticed that in Portuguese, our word for lavender, "lavanda", sounds similar to the verb meaning to wash, "lavar". According to Wikipedia[1] it goes back to old Latin roots: "The English word lavender came into use in the 13th century, and is generally thought to derive from Old French lavandre, ultimately from Latin lavare from lavo (to wash), referring to the use of blue infusions of the plants." I believe it is the same root behind English words like laundry or laundering. So naming it 'Lavender' appears to give a clue to its true purpose: laundering, or whitewashing, the mass-scale killing of civilians as collateral damage from computer-targeted strikes against militants, automating and streamlining the creation of plausible-sounding excuses to provide cover for the mass commission of criminal acts.<p>[1]. <a href="https://en.wikipedia.org/wiki/Lavandula" rel="nofollow">https://en.wikipedia.org/wiki/Lavandula</a>
I expected more comments on the source’s biases, given the contentious and sensitive topic; journalist Liel Leibovitz writes this about +972 Magazine (1):<p>> Underlining everything +972 does is a dedication to promoting a progressive worldview of Israeli politics, advocating an end to the Israeli occupation of the West Bank, and protecting human and civil rights in Israel and Palestine.<p>> And while the magazine’s reported pieces—roughly half of its content—adhere to sound journalistic practices of news gathering and unbiased reporting, its op-eds and critical essays support specific causes and are aimed at social and political change.<p>1: <a href="https://www.tabletmag.com/sections/israel-middle-east/articles/wake-up-call" rel="nofollow">https://www.tabletmag.com/sections/israel-middle-east/articl...</a>