Sundar Pichai's call for regulating artificial intelligence is a slap in the face to anyone working in the field of AI. There are obvious existential risks that he seems happy to skip over. His choice of AI is pure marketing-driven FUD, mixed with a healthy dose of self-promotion.<p>Don't wait for him to put up any cash to rein in nuclear weapons, global pandemics, carbon emissions, bioterrorism, etc. Pichai will dismount this soapbox as soon as a new buzzword hits Twitter. <a href="https://killedbygoogle.com/" rel="nofollow">https://killedbygoogle.com/</a>
And then a different take: <a href="https://thenextweb.com/artificial-intelligence/2020/01/20/sundar-pichai-offers-a-cryptic-warning-against-over-regulating-ai/" rel="nofollow">https://thenextweb.com/artificial-intelligence/2020/01/20/su...</a>
How can we seriously begin to regulate anything tech-related until we actually have people who have worked in tech, or who understand tech, in government roles in some capacity, either consulting or full time?
The thread is already filled with the usual "CEOs only call for regulation when they're ahead" posts.<p>I find this to be such a shallow, generic libertarian take on tech. Not only does this always seem to come from people who eat up everything tech executives say <i>on every other topic except regulation</i>, it also treats Pichai's arguments entirely in bad faith.<p>Here's another take. Pichai is just a smart guy who is genuinely worried about abuse of the technology, because there is indeed a lot of potential for state actors and others to abuse it in privacy-damaging ways. If you want to scare Google with regulation, try antitrust, not AI and privacy rules.<p>Nobody is competing with Google or the other large players on AI anyway, regulation or not. Their advantage is in data, scale, and talent. If anything, higher privacy standards might create ecosystems of privacy-focused companies in the space.
AI's intrinsic socialist nature is showing in comments like these. It's so weird to hear companies ask for regulation while the government is saying don't worry about it. Either Google knows something mainstream AI research doesn't, or it's some kind of weird deferential posturing about competencies.
Sure he does. They actually want global regulation so they don't have to worry about pesky little things like national entities regulating their products and perhaps fining them some billions down the road.<p>If only there were a global government to lobby, it would make things so much easier legally speaking. /s<p>Regulation is also great for killing would-be competitors. Classic pull-up-the-ladder move.<p>The government should start <i>taxing</i> data. Just like any asset, data should be listed in companies' financial statements by type and quantity, and taxed. Derived models would be taxed based on the input data.
Is it a case of either us regulating AI, or AI ultimately regulating us? Such regulation would be tricky to implement; it would need to be a global effort, like nuclear non-proliferation. Otherwise certain nations might pursue unregulated AI in hopes of some advantage, to the detriment of all.<p>What times we live in.<p>EDIT: updated text to clarify position of curiosity.