Understanding Agent Cooperation

126 points by piokuc over 8 years ago

18 comments

JamilD over 8 years ago
The AI can minimize loss / maximize fitness by either moving to look for additional resources or firing a laser.

Turns out that when resources are scarce, the optimal move is to knock the opponent away. I think this tells us more about the problem space than about the AI itself; it's just optimizing for the specific problem.
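A toy sketch of that trade-off (my own illustrative numbers and function names, not DeepMind's reward model): compare the expected one-step reward of gathering versus zapping as apples get scarce.

    # Toy model of the Gathering game's action choice (illustrative only).
    def expected_reward(action: str, apple_density: float) -> float:
        if action == "gather":
            # Chance of actually reaching an apple scales with abundance.
            return apple_density * 1.0              # +1 per apple collected
        if action == "zap":
            # Zapping pays off indirectly: a frozen rival contests fewer
            # apples, and contention rises as apples get scarce.
            return (1.0 - apple_density) * 0.8      # assumed value of removing the rival
        raise ValueError(action)

    def best_action(apple_density: float) -> str:
        return max(("gather", "zap"), key=lambda a: expected_reward(a, apple_density))

    for density in (0.9, 0.5, 0.2):
        print(density, best_action(density))        # gather, gather, zap

Abundance favours gathering; scarcity flips the optimum to aggression, with no malice anywhere in the model.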
projektir over 8 years ago
I'm rather worried about the wording used, and about AI being created in that context. Do we really not realize what we're doing? AI is not magic; it's not free from fundamental math, and it's not free from corruption. It's just going to multiply that corruption that much more.

Any AI that has been programmed to highly value winning is not going to be very cooperative. For it to be cooperative, especially in situations that simulate survival, it needs to have higher ideals than winning, just like humans. It needs to be able to see and be aware of the big picture. You don't need to look at AI for that; you can just look at the world.

Developing AIs of this nature will just lead to a super-powered Moloch. Cooperative ethics is a highly advanced concept; it's not going to emerge on its own from mere game theory without a lot of time.
jerf over 8 years ago
Not entirely spawned by this article, but by the whole genre and some other comments on HN: I wonder if part of the "mystery" of cooperation in these simulations is that people keep investigating cooperation using simulations too simplistic to model any form of trade. A fundamental of economics 101 is that valuations differ between agents. Trade ceases to exist in a world where everybody values everything exactly the same, because the only trade that makes any sense is to trade two things of equal value, and even then, since the outcome is a wash and neither side obtains any value from it, why bother? I'm not sure the simulation hasn't been simplified to the point that the phenomena we're trying to explain are not capable of manifesting within it.

I'm not saying that Trade Is The Answer. I would be somewhat surprised if it doesn't form some of the solution eventually, but that's not the argument I'm making today. The argument I'm making is that if the simulation can't simulate trade at all, that's a sign it may have been simplified past the point of usefulness. There are probably other things you could say that about, "communication" being another. The only mechanism for communication being the result of iteration is questionable too, for instance. Obviously, in the real world most cooperation doesn't involve human speech, but a lot of ecology can be seen to involve communication, if for no other reason than you can't have the very popular strategy of "deception" without "communication" to deceive with.

Which may also explain the in-my-opinion overpopular and excessively studied "Prisoner's Dilemma", since it has the convenient characteristic of explicitly writing communication out of the game. I fear its popularity may blind us to the fact that it was never really meant to be the focus of social-science study, but rather a simplified word problem for game theory. Studying a word problem over and over may be like trying to understand real train transportation systems by repeatedly studying "A train leaves Albuquerque for Boston at 1pm on Tuesday and a train leaves Boston for Albuquerque at 3pm on Wednesday; when do they pass each other?"

(Or, to put it *really* simply in machine learning terms: what's the point of studying cooperation in systems whose bias does not encompass cooperative behaviors in the first place?)
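A minimal illustration of the differing-valuations point (my own toy numbers, nothing from the article): when two agents value goods differently, a swap leaves both strictly better off; with identical valuations, every swap is a wash.

    # Toy gains-from-trade example (illustrative values only).
    valuations = {
        "alice": {"apples": 3.0, "tools": 1.0},   # Alice prefers apples
        "bob":   {"apples": 1.0, "tools": 3.0},   # Bob prefers tools
    }

    def utility(agent: str, bundle: dict) -> float:
        return sum(valuations[agent][good] * qty for good, qty in bundle.items())

    before = {"alice": {"apples": 1, "tools": 1}, "bob": {"apples": 1, "tools": 1}}
    # Alice trades her tool for Bob's apple.
    after  = {"alice": {"apples": 2, "tools": 0}, "bob": {"apples": 0, "tools": 2}}

    for agent in ("alice", "bob"):
        print(agent, utility(agent, before[agent]), "->", utility(agent, after[agent]))
    # Both go from 4.0 to 6.0. Make the valuations identical and the
    # trade leaves both sides exactly where they started.

A simulation whose agents all share one scalar resource value can't even represent this, which is the simplification being objected to.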
cs702 over 8 years ago
The folks at DeepMind continue to produce clever, original work at an astounding pace, with no signs of slowing down.

Whenever I think I've finally gotten a handle on the state of the art in AI research, they come up with something new that looks really interesting.

They're now training deep-reinforcement-learning agents to co-evolve in increasingly complex settings, to see if, how, and when the agents learn to cooperate (or not). Should they find that agents learn to behave in ways that, say, contradict widely accepted economic theory, this line of work could easily lead to a Nobel prize in Economics.

Very cool.
bitwize over 8 years ago
Oh great.

It's just a matter of time before it floods the Enrichment Center with deadly neurotoxin.
vanderZwan over 8 years ago
You know, rather than being scared by this, I think it's an excellent opportunity to learn how and when aggression evolves, and maybe learn how we can set up systems that nudge people to collaborate, perhaps even when resources are scarce.
katzgrau over 8 years ago
The article at first suggests that more intelligent versions of the AI led to greed and sabotage.

But I do wonder if an even more intelligent AI (perhaps in a more complex environment) would take the long view instead and find a reason to cohabitate.

It's kind of like rock, paper, scissors: you attempt to think several levels deeper than your opponent and guess which level *they* stopped at. At some intelligence level for the AI, cohabitation seems optimal; at the next level, not so much, and so on.

We're probably going to end up building something so complex that we don't quite understand it and end up hurting somebody.
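A quick sketch of that levels-of-reasoning idea (my own toy code, using rock-paper-scissors rather than the paper's games): a level-k player best-responds to an assumed level-(k-1) opponent, and the "right" move never stabilizes as k grows.

    # Level-k reasoning in rock-paper-scissors (illustrative only).
    BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

    def level_k_move(k: int, level0_move: str = "rock") -> str:
        """A level-k player plays the best response to a level-(k-1) player."""
        move = level0_move
        for _ in range(k):
            move = BEATS[move]   # best response to the previous level's move
        return move

    for k in range(6):
        print(k, level_k_move(k))
    # rock, paper, scissors, rock, paper, scissors: each extra level of
    # "thinking deeper" flips the answer, much like cooperation seeming
    # optimal at one assumed depth and not at the next.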
jonbaer over 8 years ago
Why is this done at such a small scale? I would have thought that, with the systems now in place, evolutionary game theory could be simulated at a much larger scale (say 7bn+ agents). If anything, AI systems should be able to determine whether certain strategies work (like blocking resources, as in geopolitical theory) and see what cooperation occurs at that level. Still amazing work, but it should be applied at a larger scale for real meaning. More than anything, I'm eager to see how RL applied to RTS games will explore and develop strategies.
george_ciobanu over 8 years ago
"Scarce resources cause competition" and "scarce resources that are close to impossible to catch on one's own cause cooperation". Is that really a discovery worth publishing?
tawpKek over 8 years ago
> Self-interested people often work together to achieve great things. Why should this be the case, when it is in their best interest to just care about their own wellbeing and disregard that of others?

I think this is a rather strong statement to take as a given, especially as an opening. It takes social Darwinism as law, and could use more scrutiny.
falsedan over 8 years ago
Is it just me, or is this article extremely light on content? The core of it seems to be

    > sequential social dilemmas, and us[ing] artificial agents trained
    > by deep multi-agent reinforcement learning to study [them]

But I didn't find out how to recognise a sequential social dilemma, nor their training method.
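For what it's worth, the underlying paper (Leibo et al., 2017, "Multi-agent Reinforcement Learning in Sequential Social Dilemmas") leans on the standard matrix-game definition. Roughly: with R the reward for mutual cooperation, P the punishment for mutual defection, S the sucker payoff, and T the temptation payoff, a game is a social dilemma when something like the following holds:

    \begin{align*}
    R &> P      && \text{mutual cooperation beats mutual defection} \\
    R &> S      && \text{mutual cooperation beats being exploited} \\
    2R &> T + S && \text{\dots and beats splitting exploiter/exploited roles} \\
    T &> R \ \text{(greed)} && \text{or}\quad P > S \ \text{(fear)}
    \end{align*}

A sequential social dilemma then embeds these payoffs in a temporally extended Markov game, where "cooperate" and "defect" are whole learned policies rather than single moves.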
d--b over 8 years ago
Mmmh, the problem of modeling social behavior lies in defining the reward function, not in implementing optimal strategies to maximize the reward.

In a game where you are given the choice of killing 10,000 people or being killed yourself, which is the more rewarding outcome?
tmcpro over 8 years ago
I wonder how DeepMind will simulate game theory as it advances.
c3534l over 8 years ago
I know what I'm writing my systems science paper on.
bencollier49 over 8 years ago
What an awful headline. "AI learns to compete in competitive situations" should be the précis.

Basically, it learned that it didn't need to fight until there was resource scarcity in the simulation.
saycheese over 8 years ago
This reads as click-bait. Here's the original blog post and research paper by DeepMind:

"Understanding Agent Cooperation": https://news.ycombinator.com/edit?id=13635218
doener over 8 years ago
https://news.ycombinator.com/item?id=13620518
creo over 8 years ago
Bait.