> The embarrassing security lapse is linked to a book he published on Amazon, which left a digital trail to a private Google account created in his name, along with his unique ID and links to the account’s maps and calendar profiles.<p>I'm interested to know more about the nature of the lapse here. Is there a bug in Amazon where self-publishers expose their account email addresses? Or was it more that he shared Google Maps and Google Calendar content in the book, not realizing that those features exposed his real email?
I see no lapse here.<p>Some people publish under a pseudonym to disguise their true identity, others to separate their public persona from their private one. This is pretty clearly the latter case.<p>He published the book under his initials combined with a hint at his job. That he made no serious effort to hide his email simply follows from that.<p>This is about as exciting as if they had reported finding out the real name of JJ Lehto.<p>EDIT: Thinking about it, it is simultaneously a <i>nom de guerre</i> and a <i>nom de plume</i>.
Opsec is hard.<p>Also. No matter how bullish you are on the potential of AI, it's horrifying and categorically unethical to use it to decide who to kill. It creates the possibility of atrocity with no accountability.<p>Although the fact that this guy was anonymous to start with definitely indicates that accountability isn't the goal here.
It doesn't really matter if his identity gets leaked: when you are a director, you become more visible no matter what you do. You cannot hide if you are a director.<p>In military matters, high-ranking officers don't protect their identity, because they represent the military.<p>Lower- and medium-ranking soldiers and personnel, on the other hand, ALWAYS need to protect their identity, because they are the ones who carry out orders and take the real risks.<p>Of course it would have been better for him not to have his identity disclosed, since it exposes him to assassins if he travels, but honestly this is a non-story; it will just generate hate comments among anti-Israel crowds.
Here's the book:<p><a href="https://www.amazon.com/Human-Machine-Team-Artificial-Intelligence-Revolutionize-ebook/dp/B09472K2YZ/" rel="nofollow">https://www.amazon.com/Human-Machine-Team-Artificial-Intelli...</a>
Personally, I don't care about his identity, but this article also discusses the contents of the book he published, which is <i>much</i> more interesting.
Brilliant 4D chess move by Yossi Sariel: he's cleverly feigning a mistake, aiming to exit his role without formally resigning. This "mistake" provides his superiors with a pretext for his departure, all while sidestepping the events of October 7.
> Sariel’s critics, the report said, believe Unit 8200’s prioritisation of “addictive and exciting” technology over more old-fashioned intelligence methods had led to the disaster.<p>Do they also use Kubernetes and the latest frontend frameworks?
> An electronic version of the book included an anonymous email address that can easily be traced to Sariel’s name and Google account. Contacted by the Guardian, an IDF spokesperson said the email address was not Sariel’s personal one, but “dedicated specifically for issues to do with the book itself”.<p>So it doesn't sound like his "real" email was leaked through the book publishing workflow; more like there was a contact email listed for the book (maybe even printed in the book itself) and it was not sufficiently private. It could be as simple as a few letters or a profile picture leaking through the password reset workflow.
> The security blunder is likely to place further pressure on Sariel, who is said to “live and breathe” intelligence but whose tenure running the IDF’s elite cyber intelligence division has become mired in controversy.<p>It seems weird to me that the news outlet that outed Sariel is foreshadowing that the disclosure could put pressure on him. If they say "we are outing Sariel" and also "outing Sariel will put pressure on him", then by the transitive property I read it as "we are trying to put pressure on him." And that seems like politics rather than journalism.
Email account aside, he<p>- Published a book using his real initials<p>- Was one of four recipients of an IDF prize in 2018<p>- Did a master's degree at the National Defense University in Washington while writing the book<p>This is either a blunder like no other, or there is no Yossi Sariel. Given that his last name is an anagram of Israel and that there are zero traces online of someone who has been in the IDF his entire life, the latter seems like the most likely explanation.<p>EDIT: "Sariel" is actually the name of an angel from Judaic tradition <a href="https://en.wikipedia.org/wiki/Sariel" rel="nofollow">https://en.wikipedia.org/wiki/Sariel</a>
>"One section of the book heralds the concept of an AI-powered “targets machine”"<p><a href="https://www.youtube.com/watch?v=nSQ5EsbT4cE" rel="nofollow">https://www.youtube.com/watch?v=nSQ5EsbT4cE</a><p><a href="https://en.wikipedia.org/wiki/Brazil_(1985_film)" rel="nofollow">https://en.wikipedia.org/wiki/Brazil_(1985_film)</a>
Having been adjacent for some time to those participating in the secret squirrel pantomime, I find this all terribly tiresome.<p>The veil is drawn under the pretense that revealing what's behind it would harm the country; the reality is that those participating are engaged in rather mundane drudgery and would simply prefer not to be exposed to uninformed speculation from the peanut gallery.
I don't run any secret military units (that I'm willing to admit to), but I do leave occasional product reviews on Amazon under a pen name.<p>I discovered that I had doxxed myself by creating a Goodreads account from the same email address, now that Amazon owns Goodreads.
In the 1950s or 1960s, the American journalist Stewart Alsop printed the name of the head of the British MI-5 (or maybe MI-6). He told someone that the man (Menzies) was somewhat placated because Alsop had referred to him as "legendary".
If the chief spy of a country doesn't understand the implications of publishing books from his personal account, then you cannot trust him whatsoever... The stupid are controlling the whole thing.
Ok but why did his identity need to be secret in the first place? <a href="https://en.m.wikipedia.org/wiki/Director_of_the_National_Security_Agency" rel="nofollow">https://en.m.wikipedia.org/wiki/Director_of_the_National_Sec...</a> Like hey look here’s the director of the NSA! Who cares?
Human: There are hundreds of Hamas hiding in Al Shifa hospital.<p>AI: Go in and kill them<p>--
Is that why the picture of "destruction near Al Shifa Hospital" is in there, to lead us to believe it was the heartless software that led to the raid?<p>(<a href="https://edition.cnn.com/middleeast/live-news/israel-hamas-war-gaza-news-03-26-24/h_106e282fb5a83867a568ec11c9d720f0" rel="nofollow">https://edition.cnn.com/middleeast/live-news/israel-hamas-wa...</a>, as if you can believe anything coming out of there)
Given how unreliable LLMs are in general usage, with their tendency to "hallucinate" -- a grossly anthropomorphic term, when it's just an algorithm with no sense of what's "actual" or "fictional" -- to use this in government and law is irresponsible and criminally negligent.<p>To use it in war is abhorrent.
In the subject article there is a link to a Maariv article (in Hebrew): <a href="https://www.maariv.co.il/journalists/Article-1078519" rel="nofollow">https://www.maariv.co.il/journalists/Article-1078519</a>.<p>Basically the guy drank the "Startup Nation" kool-aid and thought he was running a startup. 8200 built this data lake with billions of data points and started to rely on it, decreasing the importance of intelligence analysts. So they were getting signals but couldn't act upon them, something senior analysts would have helped with. That's why they ignored the alarms from the senior analyst "V" and called her delusional. And on that damned early morning, when they started to get "low significance" signals from the system, the chief of staff, Mossad and the intelligence head decided not to do anything significant. At the same time, an analyst could have connected the dots.<p>This guy will be out, and the rest as well.
> However, it has been criticised over its failure to foresee and prevent Hamas’s deadly 7 October assault last year on southern Israel, in which...<p>Does anyone believe that they didn't know about it?
"Making an entire book about my AI death machine is the best idea I ever had!" is the intelligence equivalent of The Simpsons "Videotaping this crime spree was the best idea we ever had!"<p>edit: Since this thread is getting turfed so hard that my other, serious comment is already gone. I would ask you to read this thread about said AI death machine and make your own thoughts about what is currently happening.<p><a href="https://news.ycombinator.com/item?id=39918245">https://news.ycombinator.com/item?id=39918245</a>
The major challenge in fighting terrorists is distinguishing them from the general population. Killing innocent people is against international law and harms the country doing it, while killing actual terrorists is acceptable and even desirable to many people.<p>If a targeting system is more intelligent, it should be better at reducing the loss of innocent life while still delivering the same kill rate of actual terrorists.<p>Israel’s system, based on reporting, does the opposite.<p>So their use of an AI system to do this targeting reflects an insight not into the operational benefits of AI, but the cultural benefits.<p>By spending tons of money and time and calling it “artificial intelligence,” the targeting team has developed a far more flexible pretext for violence. “This incredibly advanced AI system said he’s a target” is now enough to go drop a bomb… even if the incredible AI system is simply set to have its filters wide open.<p>This is what YS means by “A team consisting of machines and investigators can blast the bottleneck wide open.” The humans build the machines to be extremely credulous, and then tell everyone else that the machine ID’d each target as a legitimate target.
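To make the "filters wide open" point concrete: in any scoring system of this kind, selectivity comes down to a threshold on a score, and whoever sets that threshold decides how much gets flagged. A toy sketch of that single idea (purely illustrative, with made-up random scores; nothing here is from the book or the reported system):

```python
# Minimal illustration: the "filter" is just a threshold, and the threshold
# is a human-chosen parameter. All scores below are random and made up.
import random

random.seed(0)

# Pretend each record received a score in [0, 1] from some model.
scores = [random.random() for _ in range(100_000)]

def flagged(scores, threshold):
    """Count how many records would be flagged at a given threshold."""
    return sum(s >= threshold for s in scores)

for threshold in (0.99, 0.9, 0.5, 0.05):
    print(f"threshold={threshold:>4}: {flagged(scores, threshold):>6} flagged")

# With the threshold near zero ("filters wide open"), almost everything is
# flagged. The output looks authoritative, but the selectivity came from a
# single human-chosen parameter, not from the model being "intelligent".
```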
> In one chapter of the book, he provides a template for how to construct an effective targets machine drawing on “big data” that a human brain could not process. “The machine needs enough data regarding the battlefield, the population, visual information, cellular data, social media connections, pictures, cellphone contacts,” he writes. “The more data and the more varied it is, the better.”<p>They should hold more strategy-oriented courses at the Israel Military Academy, that is, if they have a military academy worthy of that name and if this intelligence guy is a graduate of one. Because it looks like he can look no further than the tactical level ("how do we kill leader X of enemy group Y at moment Z?"), which is a suicidal thing to have in your top military commanders over the medium to long term (because, no, killing enemy leaders X1...Xn won't win you any wars in the medium to long term).
Statistical models have been used for at least 80 years to calculate the size of bombs needed to hit targets and to factor in civilian casualties. Today's statistical models have much better PR, but they are not much different.<p>Also, war is bad, but sometimes it is a necessity when the alternative is worse. It is always quite easy to cast yourself as a neutral moral judge on the sidelines, but we all saw what happened to the USA in the early 2000s when faced with a lesser threat than Israel.<p>With the way the world is heading, what we are hearing are simply echoes of the pre-WW2 appeasement attitudes, which of course were a great way to prevent future deaths and war.
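For what it's worth, one classic example of the kind of decades-old statistical model meant here is the circular error probable (CEP) relation: assuming impact points are circularly normally distributed, the probability that a weapon lands within radius R of the aim point is 1 - 0.5^((R/CEP)^2). A quick sketch of that textbook formula (the CEP values below are illustrative only, not from the article):

```python
# Probability that a weapon with a given CEP (the radius containing 50% of
# impacts) lands within R metres of the aim point, assuming a circular
# normal distribution of impact points. Standard textbook relation.
def p_within(radius_m: float, cep_m: float) -> float:
    return 1 - 0.5 ** ((radius_m / cep_m) ** 2)

# Illustrative CEP values only: a very imprecise weapon vs. a precise one.
for cep in (200, 50, 10):
    print(f"CEP {cep:>3} m: P(within 30 m) = {p_within(30, cep):.2f}")
```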