Everyone involved in this article has an incentive to overstate the system's abilities: the creators will say it is better than it is to get more work, the defense establishment to create a deterrent, and journalists to get clicks.

Given that the baseline for government understanding of AI is poor, my prior on how impressive this thing really is (as some sort of AGI-pathway breakthrough) is pretty low.

I would bet they have a large database and some good, but mostly conventional, IT around it.
It sounds like a new version of SIOP, the Cold War program to produce a plan for nuclear war based on computer simulations incorporating data on assets, capabilities, etc. of both sides. It was the inspiration for WOPR from *WarGames*, though the real plan required humans to actually run the simulations and make command decisions.

That name is stupid. It's like aspirational marketing. It reminds me of the artificial intelligence from *Team America: World Police*, called INTELLIGENCE.
Reminder: Way back when the AT&T 'secret rooms' that enabled the NSA to tap all traffic were revealed, we learned that the NSA relies heavily on an extremely questionable legal opinion, written by their own lawyers, which says that communications do not count as 'collected' or 'intercepted' until a HUMAN operator reads the plaintext. That means no amount of automated processing, machine learning, statistical analysis, filtering, or profiling run on your communications or their metadata amounts to your communications being 'intercepted' as far as they are concerned. They know this legal opinion is dicey, and they will do absolutely anything, including dropping cases entirely, to avoid having it tested in court.
Everyone in this thread is talking about how it's fake, but this does have an actual implication for the dangers of AGI. If every world government is scrambling to pretend to have it, nobody will be able to tell when someone actually invents it.
My understanding is that the world's preeminent ML researchers are refusing to contribute to the defense industry, are extremely well compensated in the private sector, and prefer to publish their work over hiding it as a trade or national secret. Given this, how are we to believe that the NRO, using at best second-rate talent, has developed the world's most capable ML system, somehow years ahead of what is already perhaps the most dynamic field of research in science? Mark me a skeptic.
I had assumed from the title it was about AI rather than a proxy brain, but as someone with a damaged meat bag and miserable QOL as a result, I long for a transfer into an artificial body/brain where one can repair/replace parts and be as good as new. I don't imagine it's possible at all, really, let alone in the short time I have left, but one can dream.
tl;dr It's an automated image classification system ("Tank division identified"), with some ability to identify and predict the movement of said objects ("Tank division likely moving east"). Not sure how the Verge author jumped from predictive defense analytics to "it's a brain!"

I'm also highly skeptical of this system's predictive abilities. I recall a similar system (also described as "modeled after the human brain," whatever that means) from my time at a major defense contractor. It tried to predict the movements of the enemy and the feelings of the non-combatant population by scraping news sites, social media, and other online sources. Never mind that the target battlefield was Afghanistan, where Internet adoption isn't quite 100%.
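For what it's worth, the "classification plus movement prediction" pipeline being described is mundane to sketch. Here is a minimal, hypothetical Python illustration, not the actual system; the names, the canned detection, and the constant-velocity extrapolation are all assumptions standing in for a real detection model and a real tracker:

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Detection:
    label: str                      # e.g. "tank_division"
    confidence: float               # classifier score in [0, 1]
    position: Tuple[float, float]   # (lat, lon) in degrees
    timestamp: float                # seconds since some epoch


def classify_image(image_bytes: bytes) -> List[Detection]:
    """Stand-in for a trained object-detection model.

    A real system would run a detector over the imagery; here we
    return a canned result so the sketch is runnable."""
    return [Detection("tank_division", 0.91, (34.50, 69.20), 0.0)]


def predict_next_position(track: List[Detection]) -> Tuple[float, float]:
    """Constant-velocity extrapolation from the last two position fixes."""
    if len(track) < 2:
        return track[-1].position
    lat1, lon1 = track[-2].position
    lat2, lon2 = track[-1].position
    # Project one observation interval ahead along the same heading.
    return (2 * lat2 - lat1, 2 * lon2 - lon1)


if __name__ == "__main__":
    track = classify_image(b"frame-1")  # first pass over the imagery
    track.append(Detection("tank_division", 0.93, (34.50, 69.35), 3600.0))
    print("Predicted next fix:", predict_next_position(track))
    # Longitude keeps increasing -> "tank division likely moving east"
```

A real tracker would fuse many fixes with something like a Kalman filter, but even this dumb two-point extrapolation produces "likely moving east"-style output, which is a long way from a brain.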
Keep in mind that America typically does not reveal such things until she has already invented the next generation. I would assume that since this is out, they've already got the next version up and running.
If I were the editor of this publication, I'd argue that, while titles like this:

"IT’S SENTIENT Meet the classified artificial brain being developed by US intelligence programs"

might get short-term attention, they are bad for the long-term credibility of the publication. "SENTIENT" is a clever code name for the project, but I didn't see anything to suggest that the program in question had anything to do with sentience. It seems to be a program to synthesize and present data from a wide array of sources, and that's neat, but why try to pitch it as something it's clearly not?

Warren Buffett once said you can have a ballet and that's fine, and you can have a rock concert and that's fine, but don't have a ballet and market it as a rock concert. If you want to write a story about software that presents data insights, please label it as such and I'll be interested in reading it. Don't label it as a story about artificial brain sentience, because then I'll complain in the comments.
>Until now, Sentient has been treated as a government secret, except for vague allusions in a few speeches and presentations.

So we can be certain the Chinese have a complete copy of all the relevant software and files.