
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

'Lavender': The AI machine directing Israel's bombing in Gaza

1418 points | by contemporary343 | about 1 year ago

133 comments

Quanttek | about 1 year ago
Years ago, scholars (such as Didier Bigo) raised concerns about targeting individuals merely on the basis of (indirect) association with a "terrorist" or "criminal". Originally used in the context of surveillance (see the Snowden revelations), such systems would target anyone who was, e.g., fewer than three steps removed from an identified individual, thereby removing any sense of due process or targeted surveillance. Now such AI systems are being used to actually kill people, not just surveil them.

IHL actually prohibits the killing of persons who are not combatants or "fighters" of an armed group. Only those who have the "continuous function" to "directly participate in hostilities" [1] may be targeted for attack at any time. Everyone else is a civilian who can only be directly targeted when, and for as long as, they directly participate in hostilities, such as by taking up arms, planning military operations, laying mines, etc.

That is, only members of the armed wing of Hamas (not recruiters, weapon manufacturers, propagandists, financiers, …) can be targeted for attack; all the others must be arrested and/or tried. Otherwise the permitted list of civilian targets gets so wide that in any regular war pretty much any civilian could be targeted, such as the bank employee whose company has provided loans to the armed forces.

Lavender is so scary because it enables Israel's mass targeting of people who are protected against attack by international law, providing a flimsy (political but not legal) justification via their association with terrorists.

[1]: https://www.icrc.org/en/doc/assets/files/other/icrc-002-0990.pdf
NomDePlum | about 1 year ago
Never thought I'd even consider this, but is this a case where those involved in producing and developing this software should be tried for murder/crimes against humanity?

My understanding is that AI in its current form is not an applicable technology to be anywhere near this type of use.

Again, my understanding: inference models by their very nature are largely non-deterministic, in terms of being able to evaluate accurately against specific desired outcomes. They need large-scale training data to provide even low levels of accuracy. That type of training data just isn't available; it's all likely to be based on one big hallucination, is my take. I'd be surprised if this AI model was even 10% accurate. It wouldn't surprise me if it was less than 1% accurate. Not that accuracy appears to be critical, from what I've read.

The Guardian article, https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes, makes me wonder whether AI development should be allowed at all. Didn't even have that thought before today.

This specific application and the claimed rationale are as close as I have come to seeing what I consider a true and deliberate "evil application" of technology out in the open.

Is this a naive take?
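The accuracy worry above is essentially a base-rate argument, and it can be made concrete with a short sketch. All numbers below are purely illustrative assumptions, not figures from the article: even a classifier that looks "90% accurate" flags mostly innocents when the class it hunts for is rare.

```python
# Illustrative base-rate sketch; every number here is hypothetical.
def positive_predictive_value(base_rate: float, sensitivity: float,
                              false_positive_rate: float) -> float:
    """Bayes' rule: fraction of flagged people who are true positives."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Assume 1% of the scanned population truly belongs to the target class,
# and the classifier catches 90% of them with a 10% false-positive rate.
ppv = positive_predictive_value(base_rate=0.01, sensitivity=0.90,
                                false_positive_rate=0.10)
print(f"{ppv:.1%} of flagged individuals are true positives")  # about 8.3%
```

Under these assumed numbers, over 90% of the people the system flags are misidentified, which is why a headline "accuracy" figure says little on its own.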
rowanseymour | about 1 year ago
As bad as this story makes the Israelis sound, it still reads like ass-covering to make it sound like they were at least trying to kill militants. It's been clear from the start that they've been targeting journalists, medical staff and anyone involved in aid distribution, with the goal of rendering life in Gaza impossible.
sequoia | about 1 year ago
I'm disturbed by the idea that an AI could be used to make decisions that could proactively kill someone. (Presumably computers already make decisions that passively kill people by, for example, navigating a self-driving car.) Though there was a human sign-off in this case, it seems one step away from people being killed by robots with zero human intervention, which is about one step away from the plot of Terminator.

I wonder what the alternative is in a case like this. I know very little about military strategy: without the AI, would Israel have been picking targets less, or more, haphazardly? I think there may be some misreading of this article where people imagine that if Israel weren't using an AI they wouldn't drop any bombs at all; that's clearly unlikely given that there's a war on. Obviously people, including innocents, are killed in war, which is why we all loathe war and pray for the current one to end as quickly as possible.
smt88 | about 1 year ago
I know many people won't read past the headline, but please try to.

This is the second paragraph:

"In addition to talking about their use of the AI system, called Lavender, the intelligence sources claim that Israeli military officials permitted large numbers of Palestinian civilians to be killed, particularly during the early weeks and months of the conflict."
shmatt | about 1 year ago
I suggest everyone listen to the current season of the Serial podcast.

> processing masses of data to rapidly identify potential “junior” operatives to target. Four of the sources said that, at one stage early in the war, Lavender listed as many as 37,000 Palestinian men who had been linked by the AI system to Hamas or PIJ.

This is really no different from how the world was working in 2001, choosing whom to send to Gitmo and other more secretive prisons, or bombing their location.

More than anything else, it feels like, just as in the corporate world, the engineers in the army are overselling the AI buzzword to do exactly what they were doing before it existed.

If you use your PayPal account to send money to an account identified as ISIS, you're going to get a visit from a three-letter organization really quick. This sounds exactly like that, from what the users are testifying to. Any decision to bomb or not bomb a location wasn't up to the AI, but to humans.
arminiusreturns | about 1 year ago
> “We were not interested in killing [Hamas] operatives only when they were in a military building or engaged in a military activity,” A., an intelligence officer, told +972 and Local Call. “On the contrary, the IDF bombed them in homes without hesitation, as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”
randysalami | about 1 year ago
I wonder how accurate this technology really is, or whether they care little for the results and more for the optics of being seen as advanced. On one hand, it's scary to think this technology exists; on the other, it might just be a pile of junk, since the output is so biased. What's even scarier is that it's proof that people in power don't care about being "correct"; they care about having a justification to confirm their biases. It's always been the case, but it's even more damning that this extends to AI. Previously, you were limited by how many humans can lie; now you're limited by how fast your magic black box runs.
mleonhard | about 1 year ago
In 2018, Google CEO Sundar Pichai, SVP Diane Greene, SVP Urs Hölzle, and top engineer Jeff Dean built a system like Lavender for the US military (Project Maven). The US military planned to use it to analyze mass-surveillance drone footage to pick suspects in Pakistan for assassination. They had already dropped bombs on hundreds of houses and vehicles, murdering thousands of suspects and their families and friends [0].

I was working in Urs's Google Technical Infrastructure division. I read about the project in the news. Urs had a meeting about it where he lied to us, saying the contract was only $9M. It had already been expanded to $18M and was on track for $270M. He and Jeff Dean tried to downplay the impact of their work. Jeff Dean blinked constantly (lying?) while downplaying the impact. He suddenly stopped blinking when he began to talk about the technical aspects. I instantly lost all respect for him and the company's leadership.

Strong abilities in engineering and business often do not come with well-developed morals. Sadly, our society is not structured to ensure that leaders have the necessary moral education, or to remove them when they fail so completely at moral decisions.

[0] https://en.wikipedia.org/wiki/Drone_strikes_in_Pakistan
skilled | about 1 year ago
The Guardian has this story on the front page also; they were given details about it pre-publishing:

https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

And, personally, I think that stories like this are of public interest. While I won't ask for it directly, I hope the flag is removed and the discussion can happen.
photochemsyn | about 1 year ago
The difference between the previously revealed 'Gospel' and this 'Lavender' is explained here:

> "The Lavender machine joins another AI system, “The Gospel,” about which information was revealed in a previous investigation by +972 and Local Call in November 2023, as well as in the Israeli military’s own publications. A fundamental difference between the two systems is in the definition of the target: whereas The Gospel marks buildings and structures that the army claims militants operate from, Lavender marks people — and puts them on a kill list."

It's one thing to use these systems to mine data on human populations for who might be in the market for a new laptop, so they can be targeted with advertisements; it's quite different to target people with bombs and drones based on this technology.
tmnvix | about 1 year ago
Given the total failure to achieve any of its stated objectives, has this use of AI benefited the IDF at all?

I would argue that the only outcome it has had that directly relates to IDF objectives has probably been negative (i.e. the unintended killing of hostages).

Sadly, I think that the continued use of this AI is supported because it is helping to provide cover for individuals involved in war crimes. I wouldn't be surprised if the AI really weren't very sophisticated at all; for the purpose of providing cover, that doesn't matter.
dw_arthur | about 1 year ago
"Two sources said that during the early weeks of the war they were permitted to kill 15 or 20 civilians during airstrikes on low-ranking militants. Attacks on such targets were typically carried out using unguided munitions known as “dumb bombs”, the sources said, destroying entire homes and killing all their occupants."

The world should not forget this.
wantlotsofcurry | about 1 year ago
Upsetting how quickly the other thread was flagged and downranked.
me_again | about 1 year ago
The "zero-error policy" described here is a remarkable euphemism. You might hope that the policy is not to make any errors. In fact, the policy is not to acknowledge that errors can occur!
hunglee2 | about 1 year ago
AI-generated kill lists are sadly inevitable. I had hoped we'd get a few more years before we'd actually see one deployed. Lots to think about here.
leke | about 1 year ago
Kind of speculation at this point, but I wonder if Lavender was involved in the recent killing of the World Central Kitchen aid workers.
barbazoo | about 1 year ago
Getting all these reports about atrocities, I wonder if the conflict in the area has grown more brutal over the decades or if this is just business as usual. I'm in my late 30s; growing up in the EU, the conflict in the region was always present. I don't remember hearing the kind of stories that come to light these days though: indiscriminate killings, food and water being targeted, aid workers being killed. I get that it's hard to know what's real and what's not, and that we live in the age of information, but I'm curious how, on a high level, the conflict is developing. Does anyone have a good source that deals with that?
supposemaybe | about 1 year ago
My question is:

How far does the AI system go? Is it behind the decision to starve the population of Gaza?

And if it is behind the strategy of starvation as a tool of war, is it also behind the decision to kill the aid workers who are trying to feed the starving?

How far does the AI system go?

Also, can an AI commit a war crime? Is it any defence to say, “The computer did it!” or “I was just following the AI's orders!”?

There's so much about this death-machine AI I would like to know.
malfist | about 1 year ago
There is no justification for killing noncombatants, even if AI told you you could.
Mgtyalx | about 1 year ago
@dang Please consider that this is an important and well-sourced article regarding military use of AI and machine learning, and it shouldn't disappear because some users find it upsetting.
jijji | about 1 year ago
That would explain the news today of how Israel killed seven aid workers in Gaza [0].

[0] https://www.reuters.com/world/middle-east/what-we-know-so-far-about-seven-aid-workers-killed-gaza-by-israel-2024-04-03/
koutetsu | about 1 year ago
As someone working in the AI field, I find this use of AI truly terrifying. Today it may be used to target Hamas and accept a relatively large number of civilian deaths as permissible collateral damage, but nothing guarantees that it won't be exported and used somewhere else. On top of that, I don't think anything is done to alleviate biases in the data (if you're used to targeting people from a certain group, then your AI system will still target people from that group) or to validate the predictions after a "target" is bombed. I wish there were more regulation for these use cases. Too bad the EU AI Act doesn't address military uses at all.
fhd2 | about 1 year ago
> One source stated that human personnel often served only as a “rubber stamp” for the machine’s decisions, adding that, normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing [...]

Brings the Ironies of Automation paper to mind: https://en.m.wikipedia.org/wiki/Ironies_of_Automation

Specifically: if most of a task is automated, human oversight becomes near useless. People get bored, are under time pressure, don't find enough mistakes, etc., and just don't do the review job they're supposed to do anymore.

A dystopian travesty.
irobeth | about 1 year ago
I'm reminded of a recent Palantir promotional video [1].

[1] https://www.youtube.com/watch?v=XEM5qz__HOU
diyseguy | about 1 year ago
The new political excuse for genocide: wasn't me, the AI did it.
supposemaybe | about 1 year ago
Is the AI the one deciding to let all the children of Gaza starve? I’d like to know how far this death machine goes?
nsguy | about 1 year ago
Is the same system used to direct bombing in Lebanon against Hezbollah?

If so, it's worth noting that we have much better data on that campaign. We know exactly how many Hezbollah members have died, because that organization actually releases that information, and we have good numbers on civilian casualties. Naturally there are many different factors, but I think Israel has done a much better job over there in terms of minimizing civilian casualties. There have been some notable incidents, like (IIRC) journalists getting hit, but I think the overall numbers are significantly weighted towards military targets.
beloch | about 1 year ago
The capacity of computers to make errors has now far exceeded that of tequila and handguns.

I'm sorry. This is so terrible that humor is the only recourse left to me. We were once afraid of AI drones with guns murdering the wrong people, but now we have an AI that is being used to plan a systematic bombing campaign. Human pilots and all the associated support personnel are its tools, and liberal quotas have been set on how many of the wrong people each strike is permitted to hit. Yet again, reality has surpassed the science fiction nightmare.
hindsightbias | about 1 year ago
"Because of the system, the targets never end."

The future is now.
gerash | about 1 year ago
Is there a list of congress people who support sending our tax money to Israel?
fullstick | about 1 year ago
The name Lavender makes this feel so surreal to me for some reason. I'm of the opinion that algorithms shouldn't determine who lives and dies, but it's so common, even outside of war.
bananapub | about 1 year ago
A perhaps apocryphal quote from IBM:

"A COMPUTER CAN NEVER BE HELD ACCOUNTABLE, THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION"

It's sort of irrelevant that some shitty computer system is doing the killing; the people who need to be arrested are the people who allowed the shitty computer system to do it. We obviously cannot allow "oh, not my fault, I chose to allow a computer to kill people" to be an excuse or a defence for murder or manslaughter or... anything.
seeminglee | about 1 year ago
I don't want to talk about the war; mostly, I don't know enough about the history to discuss it. But I want to talk about the use of technology with the intention to exterminate life. AI shows great promise for humanity, but it can also extinguish humanity if misused.

Over a thousand years ago, gunpowder was invented. This technology enabled humans to finally break through mountains and build tunnels. It enabled the beautiful display of fireworks. But the misuse of this technology ultimately led to the destruction of cultures and civilizations.

This latest development with AI, as implemented in Lavender, is exceptionally dangerous. This latest misuse of technology should concern us all.

We must not allow the proliferation of this brilliant technology for the purpose of destruction. It concerns me greatly.

I hope that we can resolve conflicts and differences in ways that are civil.
asmallcat | about 1 year ago
The sad and simple truth (trying not to sound political, but it's pretty damned hard given the context) is that, not so long ago, lists and very flimsy justifications were at the root of a lot of pain and suffering for the very people perpetrating the same now.
scotty79 | about 1 year ago
Apart from all the horribleness of knowingly murdering civilians, the idea of a 9-to-5 soldier who performs military activity, then goes home to his family, well within range of the enemy's weapons and intelligence, expecting that he and his family will be safe there while he sleeps, is a bit insane. I can't imagine any army hellbent on winning fast passing up that opportunity.

The USA didn't exactly have much stricter conditions or much better accuracy in its intelligence. It did nothing qualitatively different; it just labeled anyone in the blast radius as unknown enemy combatants in the reports. And the USA never had to operate at this volume. I guess that's just how modern war looks from the position of superior firepower.
Sporktacular | about 1 year ago
Monstrous. From some of the quotes alone, let alone the numbers, it&#x27;s clear that Palestinian lives matter about as much to the Israeli government as they do to the machines. If this is the future of warfare we&#x27;ve taken a dark new path.
bythreads | about 1 year ago
That seems like a very, very political site judging from the other articles, and half of it looks AI-generated. Are we sure this holds up?
asadalt | about 1 year ago
Lavender: this generation's gas chamber.
yboris | about 1 year ago
PSA: https://www.stopkillerrobots.org/
BoggleFiend | about 1 year ago
https://www.cfr.org/article/us-aid-israel-four-charts

Wild how much money we (US taxpayers) give them.
Stevvo | about 1 year ago
First time I&#x27;ve really felt like I&#x27;m living in a dystopian science fiction.
scotty79 | about 1 year ago
These descriptions are chilling. The mechanistic theme of efficiency is reminiscent of death camps.

We can kill more. Feed us targets. We can do it cheaply and fast. 10-20 civilians per one speculative target is acceptable to us.
NickC25 | about 1 year ago
This shouldn&#x27;t be flagged.
slim | about 1 year ago
> Additional automated systems, including one called “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.

This means they are actually targeting the children's phones at night, presupposing their father is in their proximity. They are doing this because Hamas operatives probably don't take their own phones to their houses.
thread_id | about 1 year ago
It is interesting to see how cell phone data was used as features and inputs to the model (along with other surveillance data), and how the model's parameters were adjusted to achieve high levels of correlation. Human behavior around sharing cell phones appeared to create a false-positive bias. It's too late now, but the first thing the entire Palestinian population should have done was to smash all their phones and go completely dark.
neuronic | about 1 year ago
The next step is to automate this entire chain. We are not far from some military deploying fully autonomous identify, target & kill systems. The pieces are all in place, and human rights and oversight are not the first priority in all militaries.

The AI system says person X in location Y needs to be taken out due to "terrorist association". Check whether location Y is cleared for operations; command has given general authority for operations in this region.

An autonomous drone is deployed, like a Patriot missile shooting out from some array into the night sky, quietly flies to location Y, identifies precise GPS coordinates, and sends itself, sizeable warhead included, into the target. Later, some office dude sits down at his desk at 8:30am and opens some reporting program.

"Ah, 36 kills last night." Takes a sip of coffee.
abvdasker | about 1 year ago
Accepting technological barbarism is a choice. Among engineers there should be a broad refusal to work on such systems and a blacklist for those who do.
nickdothutton | about 1 year ago
I am reminded of Poindexter's [1] Total Information Awareness project, which I thought at the time was too interesting to wholly disappear. I must admit this knowledge influenced one or two of my own blog postings on what I call "Strategic Software" [2].

[1]: https://en.wikipedia.org/wiki/Total_Information_Awareness
[2]: https://blog.eutopian.io/tags/strategic-software/
pbsladek | about 1 year ago
https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

https://www.theguardian.com/world/2024/apr/03/israel-defence-forces-response-to-claims-about-use-of-lavender-ai-database-in-gaza
ChrisArchitect | about 1 year ago
Related, from earlier:

"Israel used AI to identify 37,000 Hamas targets"

https://news.ycombinator.com/item?id=39917727
7373737373 | about 1 year ago
So much technological power, and still no approach to preventing violence and imprisoning aggressors and murderers instead of killing them.

In the past there was all this talk of non-lethal weaponry, but nowadays it seems to be used at best "in the small", by police rather than the military.

Killing will only ever get easier and faster and more remote from human action, oversight, and consequence for the perpetrator. Too fast for humans to understand, too remote to feel.
dist-epoch | about 1 year ago
Meanwhile, China is working on automated facilities that can build 1,000 cruise missiles per day:

https://twitter.com/Aryan_warlord/status/1774859594747273711

A perfect match for a targeting AI; the AI could even customize each missile as it's being built, according to the target it selected.
amai | about 1 year ago
And how did Hamas decide whom to target on October 7th? Probably not with an AI. But was the result therefore more "human"?
rich_sasha | about 1 year ago
I don't like anything about this war, but in a way I think concerns about AI in warfare are, at this stage, overblown. I'm more concerned about the humans doing the shooting.

Let's face it: in any war, civilians are really screwed. It's true here, and it was true in Afghanistan, Vietnam, and WWII. They get shot at, they get bombed, by accident or not, and they get displaced. Milosevic in Serbia didn't need an AI to commit genocide.

The real issue to me is what the belligerents are OK with. If they are OK with killing people on flimsy intelligence, I don't see much difference between perfunctory human analysis and a crappy AI. Are we saying that Hamas somehow gets brownie points for not using an AI?
rightbyte | about 1 year ago
How does this system get its input? Are Palestinians using IDF-tapped cell towers? Or is it possible to use roaming towers for this? Is e.g. Google or Facebook involved at the mobile OS or app level? Maybe backdoors local to the area?

It seems like the whole cell phone infrastructure needs to be torn down.
screye | about 1 year ago
Technology like this raises a moral conundrum.

Minimizing deaths is the humane approach to war. So we move away from broad killing mechanisms (shelling, crude explosives, carpet bombing) in favor of precise killing machines. Drones, targeted missiles, and now AI allow you to be ruthlessly efficient in killing an enemy.

The question is: how cold and non-human-like can these methods be, if they are in fact reducing overall deaths?

I won't pretend an answer is obvious.

The West hasn't seen a real war in a long time. Its impression of war is either WW1-style mass deaths on both sides or overnight annihilation like America's campaigns in the Middle East. So our vocabulary limits us to words like genocide, overthrow, insurgency, etc. This is war. It might not map onto our intuitions from recent memory, but this is exactly what it looks like.

When you're in a long, drawn-out war with a technological upper hand, you leverage all technology to help you win. At the same time, once Pandora's box is open, it tends to stay open for your adversaries as well. We did well to maintain global consensus on chemical and nuclear warfare. I don't see any such consensus coming out of the AI era just yet.

All I'll say is that I won't be quick to make judgements on the morality of such tech in war. What do you think happened to the spies who were caught thanks to the decoding of Enigma?
tokai | about 1 year ago
> While humans select these features at first, the commander continues, over time the machine will come to identify features on its own. This, he says, can enable militaries to create “tens of thousands of targets,”

So overfitting or hallucinations as a feature. Scary.
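The failure mode this comment names, a model left to "identify features on its own" manufacturing matches by fitting noise, is classic overfitting. A minimal, purely illustrative sketch on synthetic data (nothing here models the actual system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy observations of a simple underlying signal.
x = np.linspace(0.0, 1.0, 10)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, size=x.shape)

# A degree-9 polynomial has one coefficient per data point, so it can
# interpolate the noise exactly: near-perfect "performance" on seen data.
coeffs = np.polyfit(x, y, deg=9)
train_error = float(np.max(np.abs(np.polyval(coeffs, x) - y)))

# Fresh draws from the same process expose the failure to generalize.
x_new = np.linspace(0.05, 0.95, 10)
y_new = np.sin(2 * np.pi * x_new) + rng.normal(0.0, 0.3, size=x_new.shape)
test_error = float(np.max(np.abs(np.polyval(coeffs, x_new) - y_new)))

print(f"max train error: {train_error:.4f}")
print(f"max test error:  {test_error:.4f}")
```

The near-zero training error is the trap: a model can look flawless on the data it was tuned on while being badly wrong on anything it has not seen.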
mistermann | about 1 year ago
"This will get flagged to death in minutes, as happens to all mentions of Israeli atrocities here" (now dead)

It may be worth noting that there is at least one notification service out there to draw attention to such posts. Joel Spolsky even mentioned such a service existing back when Stack Overflow was first being built.

Human coordination is arguably the most powerful force in existence, especially when coordinating to do certain things.

Also interesting: it would seem(!) that once an article is flagged, it isn't taken down but simply disappears from the articles list. This is quite interesting in a wide variety of ways if you think about it from a global cause-and-effect perspective, among other perspectives [1]!

Luckily, we can rest assured that all is probably well.

[1] https://plato.stanford.edu/entries/perception-problem/
wizardforhireabout 1 year ago
It’s dark but so obligatory…<p><a href="https:&#x2F;&#x2F;youtube.com&#x2F;watch?v=dub8fBuXK_w&amp;pp=ygUZaXRzIGxhdmVuZGVyIG5vdCBsYXZlbmRlcg%3D%3D" rel="nofollow">https:&#x2F;&#x2F;youtube.com&#x2F;watch?v=dub8fBuXK_w&amp;pp=ygUZaXRzIGxhdmVuZ...</a>
amaiabout 1 year ago
Israel should hand this technology to Ukraine. They need this more than Israel.
Ancapistaniabout 1 year ago
&gt; the system makes what are regarded as “errors” in approximately 10 percent of cases<p>This statement means little without knowing the accuracy of a human doing the same job.<p>Without that information this is an indictment of military operational procedures, not of AI.
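For scale, a quick back-of-envelope sketch of what a flat 10 percent error rate implies. The 37,000 figure below is a hypothetical stand-in for the "tens of thousands of targets" described in the thread, and a uniform error rate is an assumption; the real error distribution is unknown:

```python
# Back-of-envelope: what an error rate implies at scale.
# Both inputs are illustrative assumptions, not figures from the article.
def expected_misidentifications(n_marked: int, error_rate: float) -> int:
    """Expected number of wrongly marked people under a flat error rate."""
    return round(n_marked * error_rate)

n_marked = 37_000   # hypothetical count of marked targets
error_rate = 0.10   # "approximately 10 percent of cases"

print(expected_misidentifications(n_marked, error_rate))  # → 3700
```

Even a rate that sounds modest yields thousands of misidentifications at this scale, which is why an error rate alone, whether human or machine, says little without the denominator attached.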
carabinerabout 1 year ago
It is so weird that the US is sending aid to help people harmed by US weapons.
victor22about 1 year ago
The lying media cartel put the blame on AI and you all just believe it…
slimabout 1 year ago
“One day, totally of my own accord, I added something like 1,200 new targets to the [tracking] system, because the number of attacks [we were conducting] decreased,” the source said.<p>So they were having daily quotas for killings. Literally a killing machine with an input capacity of 1,200 targets per day that has to be fed. Just like the Nazis during WW2
pvaldesabout 1 year ago
Very good name choice. An accurate combination of law and ender
surumeabout 1 year ago
+972 magazine is EXTREMELY anti-Israel and anti-semitic, so this article is written through the lens of despising Israel and Jews. Here are some of their other article titles, which you can find on their home page:<p>1. Hebrew University’s Faculty of Repressive Science 2. The spiraling absurdity of Germany’s pro-Israel fanaticism 3. The first step toward disintegrating Israel’s settler machine<p>As such, their view is not at all balanced or even-handed. Objective truth obviously matters very little to them since they exhibit such open bias and loathing towards Israel and the Jewish people.
dartosabout 1 year ago
Don’t militaries use statistical models all the time?<p>Is this any different?
aaomidiabout 1 year ago
I wonder if the WCK assassinations were related to this.
Horffupoldeabout 1 year ago
This is actually quite reasonable and sensible. It’s the future of warfare. People are not willing to fight in trenches anymore.
tivertabout 1 year ago
The VCs promised a utopia of flying cars and abundance, but all we got was more inequality and these AI death machines.
rldjbpinabout 1 year ago
from what i understand, there appears to be little to no oversight on how these models are trained and evaluated.<p>if the markers, a la features, discussed in the article are anything to go by, it is a very disturbing method of classifying a target. if human evaluators use the same approach to target bombings, then there is no defending how this war is being fought.
jmyeetabout 1 year ago
Unfortunately, Big Tech has been very effective in spreading a message that helps Israel maintain the plausible deniability that comes from a system like Lavender.<p>For at least 15 years we&#x27;ve had personalized newsfeeds in social media. For even longer we&#x27;ve had search engine ranking, which is also personalized. Whenever criticism is levelled against Meta or Twitter or Google or whoever for the results on that ranking, it&#x27;s simply blamed on &quot;the algorithm&quot;. That serves the same purpose: to provide moral cover for human actions.<p>We&#x27;ve seen the effects of direct human intervention in cases like Google Panda [1]. We also know that search engines and newsfeeds filter out and&#x2F;or downrank objectionable content. That includes obvious categories (eg CSAM, anything else illegal) but it also includes value-based judgements on perfectly legitimate content (eg [2]).<p>Lavender is Israel saying &quot;the algorithm&quot; decided what to strike.<p>I want to put this in context. In ~20 years of the Vietnam War, 63 journalists were killed or lost (presumed dead) [3]. In the 6 months since October 7, at least 95 journalists have been killed in Gaza [4].
In the years prior there were still a large number killed [5], famously including an American citizen Shireen abu-Akleh [6].<p>None of this is an accident.<p>My point here is that anyone who blames &quot;the algorithm&quot; or deflects to some ML system is purposely deflecting responsibility from the human actions that led to that and for that to continue to exist.<p>[1]: <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Google_Panda" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Google_Panda</a><p>[2]: <a href="https:&#x2F;&#x2F;www.hrw.org&#x2F;report&#x2F;2023&#x2F;12&#x2F;21&#x2F;metas-broken-promises&#x2F;systemic-censorship-palestine-content-instagram-and" rel="nofollow">https:&#x2F;&#x2F;www.hrw.org&#x2F;report&#x2F;2023&#x2F;12&#x2F;21&#x2F;metas-broken-promises&#x2F;...</a><p>[3]: <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;List_of_journalists_killed_and_missing_in_the_Vietnam_War" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;List_of_journalists_killed_and...</a><p>[4]: <a href="https:&#x2F;&#x2F;cpj.org&#x2F;2024&#x2F;04&#x2F;journalist-casualties-in-the-israel-gaza-conflict&#x2F;" rel="nofollow">https:&#x2F;&#x2F;cpj.org&#x2F;2024&#x2F;04&#x2F;journalist-casualties-in-the-israel-...</a><p>[5]: <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;List_of_journalists_killed_during_the_Israeli%E2%80%93Palestinian_conflict" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;List_of_journalists_killed_dur...</a><p>[6]: <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Killing_of_Shireen_Abu_Akleh" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Killing_of_Shireen_Abu_Akleh</a>
kayodelycaonabout 1 year ago
I wonder if this explains why it seems like they are constantly hitting random targets in addition to everything else.
cafabout 1 year ago
<i>“But when it comes to a junior militant, you don’t want to invest manpower and time in it,” he said. “In war, there is no time to incriminate every target. So you’re willing to take the margin of error of using artificial intelligence, risking collateral damage and civilians dying, and risking attacking by mistake, and to live with it.”</i><p>Oh, very noble of you to take on that risk, from that side of the bomb sight.
petrusnoniusabout 1 year ago
Fascinating article.<p>&gt; Second, we reveal the “Where’s Daddy?” system, which tracked these targets and signaled to the army when they entered their family homes.<p>This sounds immoral at first, but if proportionality is taken into consideration, the long term effects of this might be positive, ie fewer deaths long term due to the elimination of Hamas staff. The devil is in the details however, as there is clearly a point beyond which this becomes unacceptable. Sadly collective punishment is unavoidable in war, and one could argue that between future Israeli victims and current Palestinian ones, the IDF has a moral obligation to choose the latter.<p>&gt; Fourth, we explain how the army loosened the permitted number of civilians who could be killed during the bombing of a target.<p>This article below states the civilian to militant death ratio in Gaza is 1:1, and for comparison the usual figure in modern war is 9:1, such as during the Battle of Mosul against ISIS. They may still be within the realm of moral action here, but the fog of war makes it very difficult to assess.<p><a href="https:&#x2F;&#x2F;www.newsweek.com&#x2F;israel-has-created-new-standard-urban-warfare-why-will-no-one-admit-it-opinion-1883286" rel="nofollow">https:&#x2F;&#x2F;www.newsweek.com&#x2F;israel-has-created-new-standard-urb...</a><p>I’m unsure why the UN + Arab Nations don’t take control of the situation, get rid of Hamas, provide peacekeeping, integrate Palestine into Israel, and enforce property rights. All this bloodshed is revolting.
nahuel0xabout 1 year ago
Using the latest advances in technology and computing to plan and execute an ethnic cleansing and genocide? Sounds familiar? If not, check &quot;IBM and the Holocaust&quot;.
kazmer_akabout 1 year ago
Turns out, it, too, was just 1000 dudes in India watching camera footage and clicking things.
scotty79about 1 year ago
It&#x27;s so terrible to be a human shield in a conflict where neither side values your life.
fennecfoxyabout 1 year ago
&quot;before authorizing a bombing — just to make sure the Lavender-marked target is male&quot;<p>Ugh.
ghufran_syedabout 1 year ago
So an article by an organization that is pro-palestinian (“working to oppose occupation and apartheid”) publishes a story relying on multiple anonymous sources - Is there any reason we shouldn’t consider this propaganda? has this magazine ever published a story that goes against their preferred narrative?
hbossyabout 1 year ago
<p><pre><code> if ( contact.image.ocr().find( &#x27;relief&#x27; ) ) contact.bomb()</code></pre>
__lbracket__about 1 year ago
Heartbreaking. I seriously wonder if Hamas expected this level of retaliation.
mzsabout 1 year ago
<i>… normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing — just to make sure the Lavender-marked target is male. …</i>
FerretFredabout 1 year ago
Next step is for similar AI systems to decide when to start a war, or not ...
tombertabout 1 year ago
Had a minor panic; I got to a final stage of an interview for a company called &quot;Lavender AI&quot;. They were doing email automation stuff, but seeing the noun &quot;Lavender&quot; and &quot;AI&quot; in combination with &quot;bombing&quot; made me think that they might have been part of something horrible.<p>ETA:<p>I wonder if this is going to ruin their SEO...it might be worth a rebrand.
firtozabout 1 year ago
&gt; The following investigation is organized according to the six chronological stages of the Israeli army’s highly automated target production in the early weeks of the Gaza war. First, we explain the Lavender machine itself, which marked tens of thousands of Palestinians using AI. Second, we reveal the “Where’s Daddy?” system, which tracked these targets and signaled to the army when they entered their family homes. Third, we describe how “dumb” bombs were chosen to strike these homes.<p>&gt; Fourth, we explain how the army loosened the permitted number of civilians who could be killed during the bombing of a target. Fifth, we note how automated software inaccurately calculated the amount of non-combatants in each household. And sixth, we show how on several occasions, when a home was struck, usually at night, the individual target was sometimes not inside at all, because military officers did not verify the information in real time.<p>Tbh this feels like making a machine that points at a random point on the map by rolling two sets of dice, and then yelling &quot;more blood for the blood god&quot; before throwing a cluster bomb
2devnullabout 1 year ago
Probably going to be flame city in this thread, but I think it’s worth asking: is it possible that even with collateral damage (killing women and children because of hallucinations) that AI based killing technology is actually more ethical and safer than warfare that doesn’t use AI. But AI is really just another name for math, so maybe it’s not a useful conversation. Militaries use advanced tech and that’s nothing new.
mrs6969about 1 year ago
No human being would accept this. If it is happening to Palestinian people, it can happen in any other country in the world. Israel is committing genocide in front of the world. 50 years from now, some people will be sorry while committing another genocide.<p>Be ready to be targeted by AI, from another state, within another war
slimabout 1 year ago
what prevents Lavender from being deployed in EU or US for targeting Hamas operatives abroad ? People would get assassinated randomly and nobody would know why
cpcatabout 1 year ago
What is the next article? AI launched a nuclear missile?
relieferatorabout 1 year ago
What was the code name for the AI that slaughtered 1200 Israelis and took hundreds hostage? What kind of decision making went into Hamas raping dozens of women? What kind of AI chose targets in Israel to rocket? One thing&#x27;s for certain: humans, no matter how &quot;enlightened&quot;, can only take so much before they go absolutely postal. &quot;Humanity&quot; and &quot;rules of war&quot; go right out the window when humans are pushed too far. It was going on before this war and will go on afterwards. What, now that we have &quot;precise&quot; weapons, an all-out war of one country vs another will adhere to some kind of code of ethics? Give me a break. Dresden bombings, Hiroshima, Nanking, etc etc: civilians will ALWAYS get slaughtered 1000 to one in an all-out war.
majikajaabout 1 year ago
<a href="https:&#x2F;&#x2F;www.aa.com.tr&#x2F;en&#x2F;middle-east&#x2F;israeli-tanks-deliberately-ran-over-palestinians-alive-says-euro-med-human-rights-monitor&#x2F;3154452" rel="nofollow">https:&#x2F;&#x2F;www.aa.com.tr&#x2F;en&#x2F;middle-east&#x2F;israeli-tanks-deliberat...</a>
anjelabout 1 year ago
A rather opinionated site with no about page.
lobocinzaabout 1 year ago
Automation of target selection is dangerous and brings ethical concerns, but it isn&#x27;t inherently worse than conventional methods, and the killing of civilians (collateral damage) isn&#x27;t new. I&#x27;d like to see how the Israel-Hamas war compares with other recent wars, especially the Russo-Ukrainian. Is this new process really worse; does it lead to more civilian deaths per legitimate target?<p>972mag is a left-wing media outlet and what they say should be viewed with skepticism because they follow a pro-Palestine narrative.
Zuiiiabout 1 year ago
Given Israel&#x27;s well-documented history and proclivity to commit genocides against the innocent (ironic given what happened in WW2), why is this time in particular so egregious? I don&#x27;t get it. Poor AI accuracy is an accepted reality, not just in civilian systems.<p>One silver lining for those who lost their lives to this particular holocaust: these technologies in particular have a tendency of ending up used against the very people who created them or authorized their use.
platzabout 1 year ago
Despite the horrors portrayed in the article, I&#x27;m disturbed that every critical comment here was flagged and is dead.
d--babout 1 year ago
`public bool isSomehowAssociatedWithHamas() { return true; }`<p><i>AI</i><p>Yeah, yeah guidelines and all.
resource_wasteabout 1 year ago
I&#x27;m probably pro-Israel because I&#x27;m a realpolitik American that wants America&#x27;s best interest. (But I&#x27;m not strongly either way)<p>Just watched someone get their post deleted for criticizing Israel&#x27;s online PR&#x2F;astroturfing.<p>Israel&#x27;s ability to shape online discussion has left a bad taste in my mouth. Trust is insanely low. I think the US should get a real military base in Israel in exchange for our effort. If the US gets nothing for their support, I&#x27;d be disgusted.
submetaabout 1 year ago
There are two dimensions of horror here: One is that we as a tech community are building systems that are able to automatically kill human beings. It’s not only this system. I’ve seen images of drones with sniper guns shooting everyone moving: kids, women, innocent men. Drones flying constantly humming above the heads of Palestinians, always observing them. The feeling that death can come anytime. What a f-ing nightmare. Can we in the west even imagine what such a life is like?<p>The second is this: Why is a western ally allowed to have Apartheid, allowed to kill thousands of women and children with or without AI, besiege (medieval style) 2.3mil civilians, starve and dehydrate them to death, all the while comparing a tiny area without war planes, without a standing military, without statehood to Nazi Germany and Gaza to Dresden to completely level Gaza? To Nazi Germany that had the most advanced technology of their time, threatening the whole world? Dehumanising Palestinians by declaring them all „terrorists“, mocking their dead, mutilated bodies in Telegram groups with 125k Israelis (imagine 4mil US citizens in a group mocking other nations’ dead children). Why do we allow this to happen? Why is a western ally allowed to do this while almost all our western governments fund and support this and silence protest against it?
algemabout 1 year ago
this is a horrific use of ai
skilledabout 1 year ago
I am more curious about the “compute” of an AI system like this. It must be extremely complicated to do real-time video feed auditing and classification of targets, etc.<p>How is this even possible to do without having the system make a lot of mistakes? As much AI talk as there is on HN these days, I would have recalled an article that talks about this kind of military-grade capability.<p>Are there any resources I can look at? Maybe someone here can talk about it from experience.
worldsaviorabout 1 year ago
This article tries to hint that Israel is committing a genocide in Gaza, which is not true.<p>I&#x27;m not sure what is wrong with this technology. They barely mention the achievements this technology has produced, and only speak about the bad side.<p>This article tries to make you think, behind the scenes, that Israel is a technologically advanced, strong country, and Gazans are poor people who did nothing.<p>It didn&#x27;t even mention the big 7 October massacre, where tens or even hundreds of innocent women were raped, because they were Israelis. I&#x27;m not sure when this kind of behavior is accepted in any way, and it makes you think that Hamas is not a legit organization, but just barbaric monsters.<p>Be sure that Gaza civilians support the massacre; a survey reports that 72% of Palestinians support the massacre[1], spoiler: it&#x27;s much higher.<p>[1] <a href="https:&#x2F;&#x2F;edition.cnn.com&#x2F;2023&#x2F;12&#x2F;21&#x2F;middleeast&#x2F;palestinians-back-hamas-survey-intl-cmd&#x2F;index.html" rel="nofollow">https:&#x2F;&#x2F;edition.cnn.com&#x2F;2023&#x2F;12&#x2F;21&#x2F;middleeast&#x2F;palestinians-b...</a>
dhannaabout 1 year ago
The use of these AI systems is the biggest evidence of the genocidal rules of engagement from the Israelis.
zzz999about 1 year ago
90 percent of them ended up being innocent citizens ... And they knew about it
botanicalabout 1 year ago
How do people who work on AI reconcile the fact that the product they&#x27;re working on is going to be used to kill thousands of people with no recourse?<p>It seems like Israel is already bombing indiscriminately, with 35,000 killed (the majority of whom are women and children). Was AI used for these targets?<p>History is going to show a similar story to when IBM helped facilitate the Holocaust: this genocide also has people working on tools that enable it; people &quot;just doing their job.&quot;<p>Did AI target World Central Kitchen or the 200+ humanitarians, journalists, hostages and medics? This is just one aspect of Apartheid Israel&#x27;s war crimes.<p>Apartheid Israel seems to be a pariah state; if it&#x27;s not with their hacking or bombing consulates, it&#x27;s with the military industrial complex relationship with the US. Do they think their actions are conducive to their well-being?
notduncansmithabout 1 year ago
&gt; “This is unparalleled, in my memory,” said one intelligence officer who used Lavender, adding that they had more faith in a “statistical mechanism” than a grieving soldier. “Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.”
nojvekabout 1 year ago
US supporting Ukraine made sense; Russia was the clear aggressor.<p>US supporting Israel makes very little sense.<p>That being said, Trump signed a bill to remove reporting of drone strikes by the US military, and he approved more strikes than Obama.<p>So the US likely has amplified systems compared to Lavender and Gospel. We&#x27;d have no idea.<p>This season of the Daily Show about AI comes to mind: <a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=20TAkcy3aBY" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=20TAkcy3aBY</a><p>Everyone claiming AI is going to do great good, solve climate change, yada yada is deeply in an illusion.<p>AI will only amplify what corporations and state powers already do.
cthaehabout 1 year ago
Annd it&#x27;s gone. This post is deleted from the front page after being there for ~20 minutes.<p>Every. Single. Time.
mirekrusinabout 1 year ago
Red flag for me is the part where they say it was left for a human to decide if the AI generated a correct target or a false positive, based on voice recognition performed by a human:<p><pre><code> (...) at some point we relied on the automatic system, and we only checked that [the target] was a man — that was enough. It doesn’t take a long time to tell if someone has a male or a female voice (...) </code></pre> ...sounds fake as shit. Any dumb system can make a male&#x2F;female decision automatically; no fucking way a human needs to verify it by listening to recordings while a sophisticated AI system is involved in filtering.<p>Why would half a dozen active military officers brag about careless use of tech and bombing families with children while they sleep, risking accusation of treason?<p>Feels like well done propaganda more than anything else to me.<p>It&#x27;s plausible they use AI. It&#x27;s also plausible they don&#x27;t that much.<p>It&#x27;s plausible it has a high false positive rate. It&#x27;s also plausible it has multiple layers of crosschecks and very high accuracy - better than human personnel.<p>It&#x27;s plausible it is used in a rush without any double-checks at all. It&#x27;s also plausible it&#x27;s used with or after other intelligence. It&#x27;s plausible it&#x27;s used as final verification only.<p>It&#x27;s plausible that targets are easier to locate at home. It&#x27;s plausible they&#x27;re not, i.e. it may be easier to locate them around listed, known operation buildings, tracked vehicles, while a known, tracked mobile phone is used, etc.<p>It&#x27;s plausible that half a dozen active officers want to share this information. It&#x27;s also plausible that only a narrow group of people have access to this information. It&#x27;s plausible they would not engage in activity that could be classified as treason. It&#x27;s also plausible most personnel simply don&#x27;t know the origin of orders up the chain, just the immediate one.<p>It&#x27;s plausible it&#x27;s real information. It&#x27;s also plausible it&#x27;s fake, or even a good-quality, possibly intelligence-produced, AI-generated fake.<p>Frankly, looking at AI advances, I&#x27;d be surprised if propaganda quality would lag behind operational, on-the-ground use.
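For what it's worth, the claim above that male/female voice classification is trivially automatable holds up: a crude heuristic needs nothing more than an estimated fundamental frequency. The sketch below is illustrative only; the 165 Hz cutoff and the typical pitch ranges are rough textbook values, not anything from the article:

```python
# Crude voice-sex heuristic from an estimated fundamental frequency (f0).
# Typical adult male speech sits roughly around 85-155 Hz and adult female
# speech roughly around 165-255 Hz; a single threshold is a simplification.
def likely_male(f0_hz: float, threshold_hz: float = 165.0) -> bool:
    """Classify a voice as likely male if its pitch falls below the cutoff."""
    return f0_hz < threshold_hz

print(likely_male(120.0))  # typical male pitch -> True
print(likely_male(210.0))  # typical female pitch -> False
```

A real system would first estimate f0 from audio (e.g. by autocorrelation) and use a learned model rather than one threshold, but the point stands: this particular check does not need a human in the loop.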
Gudabout 1 year ago
Holy shit if this is true. Who are +972mag and how reliable are they?
da39a3eeabout 1 year ago
I can&#x27;t read the news because it&#x27;s so upsetting to watch the world allow a naked genocide, or discuss it with my family. The 7 Oct terrorist attack was disgusting, and since then Israel has proved to the entire world, beyond anyone&#x27;s remaining doubt, that they are a disgusting nation.
nikolayabout 1 year ago
So, it&#x27;s a sociopathic AI, I guess, as it kills predominantly children, women, and elderly. Great job, Israel! The king has no clothes - the whole world now knows that Israel is a terrorist and apartheid state!
spxneoabout 1 year ago
The most disturbing part for me (going beyond the Israel&#x2F;Palestine conflict) is that modern war is scary:<p>- Weaponized financial trojan horses like crypto<p>- Weaponized chemical warfare through addictions<p>- Drone swarm attacks in Ukraine<p>- AI social-media engineered outrage to change the public&#x27;s perception<p>- Partisan, jingoistic mainstream war propaganda<p>- Censorship and manipulation of neutral views as immoral<p>- Weaponized AI software<p>Looks like a major escalation towards a total war of sorts.
goethes_kindabout 1 year ago
Israel&#x27;s evil keeps taking me by surprise. I guess when people go down the path of dehumanization there are truly no limits to what they are ready to do.<p>But what is even sadder is that the supposedly morally superior western world is entirely bribed and blackmailed to stand behind Israel. And then you have countries like Germany where you get thrown in jail for being upset at Israel.
FridgeSealabout 1 year ago
&gt; “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.<p>That is appalling.
contemporary343about 1 year ago
I’m really not sure why this got flagged. It seemed like a well sourced and technology-focused article. Independent of this particular conflict, such automated decision making has long been viewed as inevitable. If even a small fraction of what is being reported is accurate it is extraordinarily disturbing.
nerfbatplzabout 1 year ago
Already deleted, that was quick.<p>If we can’t trust AI to drive a car, how the hell can we trust it to pick who lives and who dies?
mckirkabout 1 year ago
&gt; “You don’t want to waste expensive bombs on unimportant people — it’s very expensive for the country and there’s a shortage [of those bombs]”<p>At that point I had to scroll back up to check whether this was just a really twisted April&#x27;s Fools joke.
giantg2about 1 year ago
&quot;Lavender learns to identify characteristics of known Hamas and PIJ operatives, whose information was fed to the machine as training data, and then to locate these same characteristics — also called “features” — among the general population, the sources explained. An individual found to have several different incriminating features will reach a high rating, and thus automatically becomes a potential target for assassination.&quot;<p>Hamas combatants like fried chicken, beer, and women. I also like these things. I can&#x27;t possibly see anything wrong with this system...
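The worry in this (sarcastic) comment can be made concrete with a toy scoring sketch. Every feature name, weight, and threshold below is invented purely for illustration; nothing is known about the real model's inputs:

```python
# Toy feature-based risk scorer: each "incriminating" feature adds weight,
# and anyone over a threshold becomes a candidate. Features that are common
# in the general population inflate false positives. Entirely hypothetical.
WEIGHTS = {
    "in_flagged_whatsapp_group": 0.4,
    "changed_phone_recently": 0.2,
    "changed_address_recently": 0.2,
    "male": 0.2,
}
THRESHOLD = 0.6

def score(features: set) -> float:
    """Sum the weights of whichever 'incriminating' features are present."""
    return sum(w for f, w in WEIGHTS.items() if f in features)

def is_marked(features: set) -> bool:
    return score(features) >= THRESHOLD

# A civilian who merely moved house and replaced a broken phone already
# crosses the threshold:
civilian = {"male", "changed_phone_recently", "changed_address_recently"}
print(is_marked(civilian))  # → True
```

The failure mode this illustrates: when features are shared with the general population, the false-positive count scales with the size of that population, not with the number of actual combatants.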
throw7about 1 year ago
Why is this flagged?<p>Our premier AI geniuses were all squawking to congress about the dangers of AI and here we see that &quot;they essentially treated the outputs of the AI machine “as if it were a human decision.”<p>Sounds like you want to censor information that could hurt your bottom line.
rvcdbnabout 1 year ago
Anyone who knowingly developed this should be tried and held personally responsible.
jarenmfabout 1 year ago
Damn, some people really don&#x27;t want anyone to see this
oliwarnerabout 1 year ago
HN has a serious problem if factual technology stories cannot exist here because some people don&#x27;t like the truth.<p>This should be advertised. The true price of AI is people using computers to make decisions no decent person would. It&#x27;s not a feature, it&#x27;s a war crime.
supposemaybeabout 1 year ago
Lavender: One person’s flower, another person’s AI death machine.
realoabout 1 year ago
How is this not a genocide?<p>How are those &quot;acceptable&quot; collateral deaths not war crimes?
factorialboyabout 1 year ago
Can we please discuss the merits of this article — role of AI in future conflicts — without taking sides on any of the ongoing wars?
majikajaabout 1 year ago
Will America fight at Israel&#x27;s bidding if it starts a war with Iran? Thus opening a new front on top of the war against Russia
flyinglizardabout 1 year ago
On October 7th, by murdering, raping and abducting 1200 Israeli civilians, Hamas - the acting sovereign of Gaza - chose total war. I hope this serves as a lesson to all those in Iran, Iraq, Syria and especially Lebanon who think about repeating this.
gerashabout 1 year ago
This practice is akin to physically and mentally abusing a puppy, letting it grow into a fearful and aggressive dog, then saying: &quot;what an aggressive dog! It needs to be euthanized&quot;
verisimiabout 1 year ago
In war, the first casualty is the truth.<p>We have no idea whether this story itself is relaying anything of value. For all we know, stories like this could be a part of the war effort.
Qemabout 1 year ago
I wonder if the name Israelis gave the system betrays their intent. I noticed in Portuguese, our word for lavender, &quot;lavanda&quot;, sounds similar to the verb meaning to wash, &quot;lavar&quot;. According to Wikipedia[1] it goes back to old Latin roots: &quot;The English word lavender came into use in the 13th century, and is generally thought to derive from Old French lavandre, ultimately from Latin lavare from lavo (to wash), referring to the use of blue infusions of the plants.&quot; I believe it is the same root behind English words like laundry or laundering. So, naming it &#x27;Lavender&#x27; appears to give a clue to its true purpose: laundering, or whitewashing, the mass-scale killing of civilians as collateral damage from computer-targeted strikes against militants, automating and streamlining the creation of plausible-sounding excuses to provide cover for the mass commitment of criminal acts.<p>[1]. <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Lavandula" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Lavandula</a>
0x38Babout 1 year ago
I expected more comments on the source’s biases, given the contentious and sensitive topic; journalist Liel Leibovitz writes this about +972 Magazine (1):<p>&gt; Underlining everything +972 does is a dedication to promoting a progressive worldview of Israeli politics, advocating an end to the Israeli occupation of the West Bank, and protecting human and civil rights in Israel and Palestine.<p>&gt; And while the magazine’s reported pieces—roughly half of its content—adhere to sound journalistic practices of news gathering and unbiased reporting, its op-eds and critical essays support specific causes and are aimed at social and political change.<p>1: <a href="https:&#x2F;&#x2F;www.tabletmag.com&#x2F;sections&#x2F;israel-middle-east&#x2F;articles&#x2F;wake-up-call" rel="nofollow">https:&#x2F;&#x2F;www.tabletmag.com&#x2F;sections&#x2F;israel-middle-east&#x2F;articl...</a>