The AI can minimize loss / maximize fitness either by moving to look for additional resources or by firing a laser.<p>It turns out that when resources are scarce, the optimal move is to knock the opponent away. I think this tells us more about the problem space than about the AI itself; it's just optimizing for the specific problem.
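A toy back-of-the-envelope sketch of that trade-off (my own made-up numbers and parameter names, not DeepMind's actual Gathering setup): an agent can only pick up so many apples per step, so when apples respawn quickly there are enough for both players and removing the opponent buys almost nothing, while under scarcity the opponent's share is the real bottleneck.<p><pre><code>
# Hypothetical constants for illustration only; not taken from the paper.
EAT_CAPACITY = 0.3   # max apples an agent can physically collect per step
TIMEOUT = 25         # steps a zapped opponent is removed from the field
AIM_STEPS = 5        # steps spent aiming/firing instead of gathering

def intake(density, opponents):
    # Apples per step, limited by both the shared supply and pickup capacity.
    return min(EAT_CAPACITY, density / (1 + opponents))

def zap_advantage(density):
    gain = (intake(density, 0) - intake(density, 1)) * TIMEOUT
    cost = intake(density, 1) * AIM_STEPS  # gathering foregone while aiming
    return gain - cost

for density in (1.0, 0.6, 0.3, 0.1):
    adv = zap_advantage(density)
    print(f"apple density {density:.1f}: zap advantage {adv:+.2f} -> "
          f"{'zap' if adv > 0 else 'coexist'}")
</code></pre>
Under these assumptions the sign flips exactly as the article describes: zapping only pays once apples, rather than pickup speed, become the limiting factor.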
I'm rather worried about the wording used, and about AI being created in that context. Do we really not realize what we're doing? AI is not magic, it's not free from fundamental math, it's not free from corruption. It's just going to multiply that corruption that much more.<p>Any AI that has been programmed to highly value winning is not going to be very cooperative. For it to be cooperative, especially in situations that simulate survival, it needs to have higher ideals than winning, just like humans. It needs to be able to see and be aware of the big picture. You don't need to look at AI for that; you can just look at the world.<p>Development of AIs of this nature will just lead to a super-powered Moloch. Cooperative ethics is a highly advanced concept; it's not going to show up on its own from mere game theory without a lot of time.
Not entirely spawned by this article, but by the whole genre and some other comments on HN by other users: I wonder if part of the "mystery" of cooperation in these simulations is that these people keep investigating the question of cooperation using simulations too simplistic to model any form of trade. A fundamental of economics 101 is that valuations for things differ for different agents. Trade ceases to exist in a world where everybody values everything exactly the same, because the only trade that makes any sense is to trade two things of equal value, and even then, since the outcome is a wash and neither side obtains any value from it, why bother? (There's a toy example of this below.) I'm not sure the simulation hasn't been simplified to the point that the phenomena we're trying to use the simulation to explain are not capable of manifesting within the simulation.<p>I'm not saying that Trade Is The Answer. I would be somewhat surprised if it doesn't form some of the solution eventually, but that's not the argument I'm making today. The argument I'm making is that if the simulation can't simulate trade at all, that's a sign that it may have been too simplified to be useful. There are probably other things you could say that about, "communication" being another one. That the only mechanism for communication is the result of iteration is questionable too, for instance. Obviously in the real world most cooperation doesn't involve human speech, but a lot of ecology can be seen to involve communication, if for no other reason than you can't have the very popular strategy of "deception" if you don't have "communication" with which to deceive.<p>Which may also explain the in-my-opinion overpopular and excessively studied "Prisoner's Dilemma", since it has the convenient characteristic of explicitly writing communication out of the problem. I fear its popularity may blind us to the fact that it wasn't ever really meant to be the focus of study of social science, but rather a simplified word problem for game theory. Studying a word problem over and over may be like trying to understand the real world of train transportation systems by repeatedly studying "A train leaves from Albuquerque headed towards Boston at 1pm on Tuesday and a train leaves from Boston headed towards Albuquerque at 3pm on Wednesday; when do they pass each other?" over and over again.<p>(Or to put it <i>really</i> simply in machine learning terms, what's the point of trying to study cooperation in systems whose bias does not encompass cooperation behaviors in the first place?)
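To make the valuation point concrete, here's a minimal sketch with hypothetical agents and made-up numbers: a swap only creates value when the two sides value the goods differently, and with identical valuations every swap is a wash.<p><pre><code>
def surplus_from_swap(val_a, val_b, a_gives, b_gives):
    # Utility change for each agent if A hands over a_gives and receives b_gives.
    gain_a = val_a[b_gives] - val_a[a_gives]
    gain_b = val_b[a_gives] - val_b[b_gives]
    return gain_a, gain_b

# Different valuations: the swap is a strict win for both sides.
print(surplus_from_swap({"apple": 1, "berry": 3},
                        {"apple": 3, "berry": 1},
                        a_gives="apple", b_gives="berry"))   # -> (2, 2)

# Identical valuations: the swap is a wash, so neither side bothers.
print(surplus_from_swap({"apple": 2, "berry": 2},
                        {"apple": 2, "berry": 2},
                        a_gives="apple", b_gives="berry"))   # -> (0, 0)
</code></pre>
A simulation whose agents can't differ in this way leaves no room for the first case to arise at all.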
The folks at DeepMind continue to produce clever original work at an astounding pace, with no signs of slowing down.<p>Whenever I think I've finally gotten a handle on the state of the art in AI research, they come up with something new that looks really interesting.<p>They're now training deep-reinforcement-learning agents to co-evolve in increasingly complex settings, to see if, how, and when the agents learn to cooperate (or not). Should they find that agents learn to behave in ways that, say, contradict widely accepted economic theory, this line of work could easily lead to a Nobel prize in Economics.<p>Very cool.
You know, rather than being scared by this, I see it as an excellent opportunity to learn how and when aggression evolves, and maybe how we can set up systems that nudge people to collaborate, perhaps even when resources are scarce.
The article at first suggests that more intelligent versions of the AI led to greed and sabotage.<p>But I do wonder if an even more intelligent AI (perhaps in a more complex environment) would take the long view instead and find a reason to cohabitate.<p>It's kind of like rock, paper, scissors - when you attempt to think several levels deeper than your opponent and guess which level <i>they</i> stopped at. At some intelligence level for AI, cohabitation seems optimal - at the next level, not so much, and so on.<p>We're probably going to end up building something so complex that we don't quite understand it and end up hurting somebody.
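A toy sketch of that level-jumping, using standard level-k reasoning in rock, paper, scissors rather than anything from the article (the level-0 choice of "rock" is an arbitrary assumption): each level best-responds to an opponent assumed to reason one level shallower, and the "optimal" move keeps flipping as you add depth.<p><pre><code>
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def level_k_move(k, level0="rock"):
    # Move of a player who reasons k levels deep against an assumed level-0 opponent.
    move = level0
    for _ in range(k):
        move = BEATS[move]   # best response to the level just below
    return move

for k in range(6):
    print(f"level {k} plays {level_k_move(k)}")
</code></pre>
Same idea with cohabitation: the conclusion depends entirely on how deep you assume the other side is reasoning.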
Why is this done at such a small scale? I would have thought that, with the systems now in place, evolutionary game theory could be simulated at a much larger scale (say 7bn+ agents) ... if anything, AI systems should be able to determine whether certain strategies work (such as blocking resources, as in geopolitical theory) and see what cooperation occurs at that level. Still amazing work, but it should be applied at a larger scale for real meaning. More than anything, I'm eager to see how RL applied to RTS games will explore and develop strategies.
"Scarce resources cause competition" and "Scarce but close to impossible to catch on own resources cause cooperation".
Is that really a discovery worth publishing?
>Self-interested people often work together to achieve great things. Why should this be the case, when it is in their best interest to just care about their own wellbeing and disregard that of others?<p>I think this is a kind of strong statement to take as a given, especially as an opening. This is taking social darwinism as law, and could use more scrutiny.
Is it just me, or is this article extremely light on content? The core of it seems to be<p><pre><code> > sequential social dilemmas, and us[ing] artificial agents trained by deep multi-agent reinforcement learning to study [them]
</code></pre>
But I didn't find out how to recognise a sequential social dilemma, nor their training method.
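For what it's worth, the matrix-game "social dilemma" conditions used in this literature (and, as far as I can tell, in the underlying DeepMind paper) can be checked in a few lines; this is a sketch with the usual R/S/T/P payoff names, not necessarily their exact formulation.<p><pre><code>
def is_social_dilemma(R, S, T, P):
    # R = mutual cooperation, S = sucker's payoff, T = temptation, P = mutual defection.
    return (
        R > P and             # mutual cooperation beats mutual defection
        R > S and             # cooperating beats being exploited
        2 * R > T + S and     # mutual cooperation beats taking turns exploiting
        (T > R or P > S)      # greed and/or fear gives a reason to defect
    )

print(is_social_dilemma(R=3, S=0, T=5, P=1))  # classic Prisoner's Dilemma -> True
print(is_social_dilemma(R=3, S=2, T=1, P=0))  # no greed, no fear -> False
</code></pre>
As I understand it, the "sequential" part is then just that cooperation and defection are extended policies in a gridworld rather than one-shot matrix moves.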
Mmmh, the problem of modeling social behavior is in defining the reward function, not in implementing optimal strategies to maximize the reward.<p>In a game where you are given the choice of killing 10,000 people or being killed yourself, which is the most rewarding outcome?
What an awful headline. "AI learns to compete in competitive situations" should be the precis.<p>Basically, it learned that it didn't need to fight until there was resource scarcity in a simulation.
This reads as click-bait; here's the original blog post and research paper by DeepMind:<p>"Understanding Agent Cooperation"
<a href="https://news.ycombinator.com/edit?id=13635218" rel="nofollow">https://news.ycombinator.com/edit?id=13635218</a>