Regulation comes up later in the film. Ross Anderson, a professor at Cambridge University, recently wrote: <a href="https://m.cacm.acm.org/magazines/2018/3/225467-making-security-sustainable/fulltext" rel="nofollow">https://m.cacm.acm.org/magazines/2018/3/225467-making-securi...</a><p><i>"Once software becomes pervasive in devices that surround us, that are online, and that can kill us, the software industry will have to come of age. As security becomes ever more about safety rather than just privacy, we will have sharper policy debates about surveillance, competition, and consumer protection. The notion that software engineers are not responsible for things that go wrong will be put to rest for good, and we will have to work out how to develop and maintain code that will go on working dependably for decades in environments that change and evolve."</i>
I enjoy the... irony? of visiting the site with a set of ad block and privacy related extensions and seeing a set of 'Sorry' messages that I can't see the trailer because of my privacy settings.<p><a href="https://imgur.com/a/4IEsx" rel="nofollow">https://imgur.com/a/4IEsx</a>
I was hoping for a bit more in-depth material - letting the experts explore their topics further and perhaps discuss possible solutions. Where is the call to action for the viewer? What now?<p>It's good for non-technical folks to watch, but there's nothing really new here since the 'Humans Need Not Apply' 15-minute documentary [0]<p>[0] <a href="https://www.youtube.com/watch?v=7Pq-S557XQU" rel="nofollow">https://www.youtube.com/watch?v=7Pq-S557XQU</a> (2014)<p>Edit: added link to Humans Need Not Apply
I watched the first 20 minutes or so. What I really miss here is some narration. A documentary tied together by interview quotes and flashy stock footage is hard for me to follow.
Very strongly recommended. The interviewed subjects are largely experts in AI, and many are concerned.<p>The film is likely a bit long to trigger much discussion here, though that's not always bad.<p>Word is that the free play / download is this weekend only. Grab a copy via yt-download if you can't watch immediately.
Honestly, I don't believe that regulation would solve or diminish the potential issues related to AI. We will be constantly challenged by people/machines trying to dominate others, and for that reason, I don't see regulation as the solution, but education instead. People should better understand the trade-offs that AI can bring to our lives and act based on that. Therefore, democratizing AI and educating people about it should be a good starting point for this problem.<p>Also, I tend to agree with Mark Cuban[1] about the importance of a philosophy degree in the near future. There will be so many issues to be assessed that such a degree would bring much value to society.<p>[1] <a href="https://www.cnbc.com/2018/02/20/mark-cuban-philosophy-degree-will-be-worth-more-than-computer-science.html" rel="nofollow">https://www.cnbc.com/2018/02/20/mark-cuban-philosophy-degree...</a>
I don't really get the argument for regulation. This is not nuclear material (something relatively easy to control). This is computers, something most 8-year-olds, in the Western world at least, have access to. If you regulate it in the USA, or the USA plus Europe, will it be regulated in Russia? In China? Can you even regulate it in the West if you want to, short of confiscating all computers?<p>I'm not saying we shouldn't try to make friendly AI (one of Musk's initiatives); I'm just saying I don't see how it's remotely possible to regulate this.
It looks good, with editing, graphics, stock footage and such, but it's not really for the HN demographic. Some of the AI commentary is also very exaggerated in places.
Don't waste your time. It's fear mongering. That would be fine if it had any suggestions at all about how to avoid the horrors it imagines, but its argument boils down to completely unspecified 'regulation.'