I read the paper, and read up on the techniques used (because the paper is very light on details). I came back completely underwhelmed.<p>This makes (clever) use of hundreds, if not thousands, of man-hours of painstakingly entered expert rules of the form IF <some input value is above or below some threshold> THEN <put some output value in the so-and-so range>.<p>The mathematical model of Fuzzy Trees is nice, but this is completely ad hoc to the specific modeling of the problem, and will fail to generalize to any other problem space.<p>This kind of technique has some nice properties (its "reasonings" are understandable and thus kind of debuggable and kind of provable, it smooths logic rules that would otherwise naively lead to non-smooth control, etc.), but despite the advances presented here that seem to make the computation of the model tractable, I don't see how they could make the actual definition of the model anywhere near tractable.<p>Also, I dislike having to wade through multiple pages of advertising before I can find the (very light) scientific content.<p>--
Edit: I realize I am very negative here. I do not mean to disparage the work done by the authors. It's just that the way it is presented makes it sound way more impressive than it is. It's still interesting and innovative work.
For those who read this piece of news and don't understand why there is no mention of machine learning, neural networks and deep learning, that's because the system described is a typical fuzzy logic Expert System, a mainstay of Good, Old-Fashioned AI.<p>In short, it's a hand-crafted database of rules in a format similar to "IF Condition THEN Action" coupled to an inference procedure (or a few different ones).<p>That sort of thing is called an "expert system" because it's meant to encode the knowledge of experts. Some machine learning algorithms, particularly Decision Tree learners, were proposed as a way to automate this process of elicitation of expert knowledge and the construction of rules from it.<p>As to the "fuzzy logic" bit, that's a kind of logic where a fact is true or false by degrees. When a threshold is crossed, a fact becomes true (or false) or a rule "fires" and the system changes state, ish.<p>It all may sound a bit hairy but it's actually a pretty natural way of constructing knowledge-based systems that must implement complex rules. In fact, any programmer who has ever had to code complex business logic into a program has created a de facto expert system, even if they didn't call it that.<p>For those with a bit of time on their hands, this is a nice intro:<p><a href="http://www.inf.fu-berlin.de/lehre/SS09/KI/folien/merritt.pdf" rel="nofollow">http://www.inf.fu-berlin.de/lehre/SS09/KI/folien/merritt.pdf</a>
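To make the fuzzy-rule idea concrete, here's a toy version in Python. Everything in it (the thresholds, the membership functions, the rule itself) is invented for illustration; it shows the general shape of a fuzzy rule, not anything from the Alpha paper:

    # A toy fuzzy rule. All thresholds and membership functions are
    # made up for illustration, not taken from the paper.

    def mu_close(distance_km):
        # Degree to which "target is close" is true: 1 at 5 km or less,
        # ramping down to 0 at 20 km.
        if distance_km <= 5:
            return 1.0
        if distance_km >= 20:
            return 0.0
        return (20 - distance_km) / 15

    def mu_hot_aspect(angle_deg):
        # Degree to which we are pointed at the target (0 deg = dead ahead).
        return max(0.0, 1.0 - abs(angle_deg) / 60)

    # Rule: IF target is close AND aspect is hot THEN fire.
    # The rule fires by degree: the min of the two memberships.
    def fire_degree(distance_km, angle_deg):
        return min(mu_close(distance_km), mu_hot_aspect(angle_deg))

    print(fire_degree(8.0, 10.0))  # partially true -> 0.8

The point is that instead of a hard IF/THEN, the rule is true by degree, so the output varies smoothly as the inputs cross their thresholds.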
AI Fighter Pilots have been killing me in Flight Simulations for at least 30 years now using similar systems. From the paper, they basically use an expert system using something they call a Genetic Fuzzy Tree (GFT), which seems suspiciously like a Behavior Tree where the nodes are trained. They trained the GFT then had it go up against itself, where Red team was the 'enhanced' AI and Blue was supposed to be the human (this part was odd to me).<p>After they completed the training they put it up against real veteran pilots and the AI basically did a few things. It would take evasive maneuvers when fired upon and fire when in optimal range. That's pretty much it. And you know what? That's really all modern pilots need to do. It's amazing what they did with Top Gun, making this stuff not look boring. At the end of the day it's just waiting for some computer to tell you that you have target lock and pressing a button. If attacked, take evasive maneuvers and pray. Takeoff and landing on a carrier is the scariest part.<p>I'm quite curious how this system would perform in WWII-era dogfights where you had to worry about the stress on your plane, deal with engines that failed and stalled all the time, and fly maneuvers that were much slower and closer to the enemy (plus no missiles).<p>Even so, I enjoyed reading the paper (not the article) so would recommend it if you're into Game AI at all.
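For anyone who hasn't run into behavior trees: here's a minimal Python sketch of that "evade when fired upon, fire when in optimal range" policy, structured as a priority selector. The structure and numbers are my guess at the general pattern, not what the paper actually implements:

    # Minimal behavior-tree sketch of the policy described above.
    # Node functions return True on success (action chosen), False otherwise.

    def evade_if_threatened(state):
        if state["missile_inbound"]:
            state["action"] = "evade"
            return True  # node succeeded, selector stops here
        return False

    def fire_if_in_range(state):
        if state["target_range_km"] < 15:  # made-up "optimal range"
            state["action"] = "fire"
            return True
        return False

    def pursue(state):
        state["action"] = "pursue"
        return True

    # A "selector" node: try children in priority order, take the first success.
    def tick(state, children=(evade_if_threatened, fire_if_in_range, pursue)):
        for child in children:
            if child(state):
                return state["action"]

    print(tick({"missile_inbound": False, "target_range_km": 12}))  # -> fire

In a GFT, as I understand it, the hard thresholds inside nodes like these would be fuzzy memberships, with the parameters tuned by a genetic algorithm rather than hand-picked.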
If we assume the wars of the future to be fought by AI-driven warmachines, can we abstract the matter further and have virtual wars? Our AI versus your AI fighting on computational resources provided by, erm, Switzerland. Nobody gets hurt and no money is spent building and destroying warplanes. Everybody wins. And have a prize pot, so actual invasion of territory is not necessary. Bulletproof solution, may I say. What do you mean it won't work?
They did only one simulation? Strange to report the details of a single simulation when more would make sense.<p>Why not run hundreds of simulations, with different numbers of attacking and defending jets? Sounds like fun; it shouldn't be a problem to find pilots who want to do this simulation, it's merely hundreds of hours of gameplay :).<p>Or was it like, they did hundreds, but this is the only one where the AI won, and it had 4 planes while the humans had only 2?
The Pentagon is betting on human-AI teaming, called 'Centaurs'. The foundational story is this:<p>Back in the late 1990s, Deep Blue beat the best human chess player, a demonstration of the power of AI.<p>Around ten years later, a tournament open to individual grandmasters and individual AIs was won by ... some amateur chess players teamed with AIs.<p>AIs aren't good at dealing with novel situations; humans are; they complement each other. (And I'll add: unlike most other endeavors, in war the environment (the enemy) is desperately striving to confuse you and do the unexpected. Your self-parking car would have more trouble if someone were trying everything they could think of to stop it, as if their survival were at stake.) Also, we strongly prefer that humans make life-and-death decisions; hopefully that turns out to be realistic.
Huh, couple that with an aircraft not bound by human limits (no life support, much faster maneuvering with no loss in decision making) and it should be awesome. And terrifying.
Was this Raspberry Pi powered? This story makes that claim: <a href="http://www.newsweek.com/artificial-intelligence-raspberry-pi-pilot-ai-475291" rel="nofollow">http://www.newsweek.com/artificial-intelligence-raspberry-pi...</a><p>If that is true, it puts this achievement in a totally different class.
I imagine an AI pilot always has a path to victory since it isn't subject to redout/blackout and can thus pull crazier maneuvers than its human counterparts.
The Alpha paper, "Genetic Fuzzy based Artificial Intelligence for Unmanned Combat Aerial Vehicle Control in Simulated Air Combat Missions", is open access and available online:<p><a href="http://www.omicsgroup.org/journals/genetic-fuzzy-based-artificial-intelligence-for-unmanned-combat-aerialvehicle-control-in-simulated-air-combat-missions-2167-0374-1000144.pdf" rel="nofollow">http://www.omicsgroup.org/journals/genetic-fuzzy-based-artif...</a>
What form of combat was this? It sounds as if they were dogfighting, something that is more myth than reality these days. Fighters fight, but they don't engage on equal terms in the duel we see in films. What were the BVR (beyond visual range) conditions? Was this a missile fight or one with cannons?<p>The concept of two flights approaching each other, seeing each other, and not engaging until they are in dogfighting range is silly. To get two modern fighters close enough for a proper turning fight, at least one side will have to be taken by surprise. Otherwise, the long-range missile fight will either decide the matter or place one side in such a poor position that they will withdraw. (Either they are down, or they will have so reduced their energy that a turning fight isn't an option.)
By every consideration, an AI pilot has all the advantages in physical combat: no G-force limit, precise maneuvers, instant reactions, full-time awareness. The only question is, will the rules of war allow an AI to kill a human? Or how can a human decision be inserted into the loop?
MAD is the future. And righteousness is the enemy. Don't mess with us. Don't mess with them.<p>Also, do the world a favor, and don't innovate new weapons. They leave an indelible mark on the collective mind.
> Because a simulated fighter jet produces so much data for interpretation, it is not always obvious which manoeuvre is most advantageous or, indeed, at what point a weapon should be fired.<p>This is changing very rapidly with the hardware-accelerated RNN chips being researched by Google and Facebook.<p>I wonder about communication, though. All the enemy needs to do is jam any signals the jets use to communicate. I wonder if they could rely on laser/line-of-sight communication instead of RF.
They made a movie about this in 2005 (Stealth); looks like it's only taken 10 years for the first half of the plot to unfold.<p>Now we just need the AI to go rogue and target its master ;)
ONE simulation? This is hardly news. It'd be more interesting if they ran hundreds or thousands of simulations. One data point means nothing statistically.
I'd like to know how this system compares to TacAir-Soar: <a href="http://ai.eecs.umich.edu/people/laird/papers/AIMag99.html" rel="nofollow">http://ai.eecs.umich.edu/people/laird/papers/AIMag99.html</a>
I've been losing to the AI fighter pilots in DCS:World[0] for years.<p>[0]: <a href="https://en.wikipedia.org/wiki/Digital_Combat_Simulator" rel="nofollow">https://en.wikipedia.org/wiki/Digital_Combat_Simulator</a>
Fighter jets feel like something that could be effectively tackled using genetic algorithms. Algorithms that get shot down are weeded out. Algorithms that shoot down enemies are promoted. Yeah?
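A toy version of that selection loop in Python, just to show the shape of it. The "dogfight" is stubbed out with a made-up scoring function, so this is only the skeleton, not a combat simulator:

    import random

    # Each "pilot" is just a vector of control parameters. Fitness would
    # come from a combat simulation; here it's a made-up stand-in that
    # rewards genomes near an arbitrary optimum.
    def dogfight_score(genome):
        return -sum((g - 0.5) ** 2 for g in genome)

    def evolve(pop_size=50, genes=8, generations=100, mutation=0.1):
        pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=dogfight_score, reverse=True)
            survivors = pop[: pop_size // 2]  # the shot-down half is weeded out
            children = []
            while len(children) < pop_size - len(survivors):
                a, b = random.sample(survivors, 2)
                child = [random.choice(pair) for pair in zip(a, b)]     # crossover
                child = [g + random.gauss(0, mutation) for g in child]  # mutation
                children.append(child)
            pop = survivors + children
        return max(pop, key=dogfight_score)

    best = evolve()
    print(dogfight_score(best))

That's essentially what the paper's "genetic" part does, as far as I can tell, except the genome encodes fuzzy-rule parameters and the fitness comes from simulated engagements.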
For many years John Boyd and the "Fighter Mafia" helped to plan, build, test, and then manufacture fighters that had optimal "performance envelopes" that enabled them to maintain dominance in the sky. Perhaps this concept means that the new "performance envelope" is going to be one of software. This argument is fleshed out here: <a href="http://warontherocks.com/2016/02/imagine-the-starling-peak-fighter-the-swarm-and-the-future-of-air-combat/" rel="nofollow">http://warontherocks.com/2016/02/imagine-the-starling-peak-f...</a>
I imagine that in real-life conditions adversaries would then focus on attacking the sensors?<p>Are there sensors that are immune to jamming and bad data?
News at 11. One robot pilot beats another robot pilot.<p>"The AI, known as Alpha, used four virtual jets to successfully defend a coastline against two attacking aircraft - and did not suffer any losses."<p>"Alpha, which was developed by a US team, also triumphed in simulation against a retired human fighter pilot."<p>Key words here are "also", "simulation", and "retired".<p>Clickbait much?
In the clip below, one of mankind's last manned aircraft pilots--flying his fighter with a mind interface--attempts to destroy his AI-controlled fighter replacement:<p><a href="https://www.youtube.com/watch?v=5hJepWBUqZk#t=0m20s" rel="nofollow">https://www.youtube.com/watch?v=5hJepWBUqZk#t=0m20s</a><p>Perhaps honor can't be programmed.
Does it go without saying that actually running a simulation is super easy? At times I feel locked in by my operating system, so I wonder how these guys did it.
It can be deadly, but if it's predictable, it can be controlled.
For example, a gator: a gator is deadly, but it can be manipulated because of its predictability.