"In the short term, this could mean research into the economic effects of AI to stop smart systems putting millions of people out of work."<p>This seems unfortunate and somewhat challenging to do. Our current economic model encourages improving efficiency of systems. This seems like a good thing. Its really too bad that people "need" jobs. Jobs should be creating value or they shouldn't exist. Artificially "creating" jobs to prop up the systems feels like fighting against reality and a bad long term plan.
Why not do the same with all of technology?

Why do shareholders of big corporations profit from science in a grossly disproportionate way, while more than 50% of the world's population has to live on under $2 a day?

It is time for the world's greatest minds to start thinking about how to fix capitalism, because it seems to be seriously broken.

And we need that fixed more than we need, say, iPhone 7.0 or Google Adwords 2.0.
I wonder: why should we care what Musk and Hawking think about AI? This article doesn't mention much bad stuff, but they have said before that we should be afraid of AI/the singularity.

I'm doing my thesis in AI now, so I probably know far more about this than those two do. And we're sooo far away from AI being a superforce that destroys mankind.
Properly implemented, AI would, and should, succeed human intelligence. We are nowhere near even understanding it as a problem, much less solving it. I attended an AGI conference a couple of years ago (summer of 2012, IIRC). The general feeling was that we are still a lifetime away from a solution.
This was discussed two days ago: https://news.ycombinator.com/item?id=8870456
I don't know much about the philosophy of AI, and I'm only familiar at a basic level with modern AI algorithms. From what I have been exposed to, I don't see any reason to think AI is any more than a set of statistical frameworks. Is there any reason to believe that these statistical frameworks are comparable to biological intelligence?

Am I thinking about this the wrong way?
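To make the question concrete, here is a rough sketch of the kind of "statistical framework" I mean: a toy logistic-regression classifier fit by gradient descent. The data and numbers are made up and purely illustrative.

```python
# A minimal, illustrative sketch of the statistical machinery behind much of
# modern AI: logistic regression fit by gradient descent. It estimates
# P(y = 1 | x) from labeled examples and nothing more.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian clusters standing in for labeled examples.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(+1, 1, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)         # gradient of the log-loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

Machinery like this, scaled up enormously, is behind a lot of what gets called AI today; whether that kind of curve-fitting is comparable to biological intelligence is exactly what I'm asking.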
> <i>Research into AI, using a variety of approaches, had brought about great progress on speech recognition, image analysis, driverless cars, translation and robot motion, it said.</i><p>How much of this progress required training data generated by working humans? What would feed future statistical algorithms if this source of training data was greatly reduced?
So long as we can pull the plug or disconnect the interfaces, we'll be OK with AI. Once we can't, then we have a problem.

In effect, the scariest AI is distributed, self-propagating, and can't be unpowered. Effectively a virus. I have yet to see a meaningful distributed AI, even in concept.
This sounds like a publicity stunt...

My real concern with all of this is the uncontrolled ecosystem of steadily evolving viruses and malware. We will never have control of that... and there is no telling what it can become in the future.

I think it will be a simple error induced by some random mutation in one of these malicious programs, not some vast artificial intelligence, that causes us problems in this arena first.