The Coming AI Hackers (2021)

70 points by josefslerka over 2 years ago

8 comments

nonrandomstring over 2 years ago
Normally I finish Bruce's essays with a sense of clarity and having read a sharp analysis. Not saying there's anything technically amiss with this one, it's all terrifyingly clear, but I think he bit off a little more than he could chew. The result is indigestible. I personally would have split this into several shorter pieces.

When it's all put together this way, though, I can't help thinking we've shot ourselves in the foot. As computer scientists and developers we've probably already lost control.

We owe a debt of honesty to the world to say so now and stop pretending otherwise. Then we can, as a society, revisit Postman's Seven Questions:

1. What is the problem that this new technology solves?

2. Whose problem is it?

3. What new problems do we create by solving this problem?

4. Which people and institutions will be most impacted by a technological solution?

5. What changes in language occur as the result of technological change?

6. Which shifts in economic and political power might result when this technology is adopted?

7. What alternative (and unintended) uses might be made of this technology?
jillesvangurp over 2 years ago
Interesting, but long read. To summarize: people are easy to manipulate and AIs are learning how to do this at scale and with intent. AIs are still tools though. It's easy to anthropomorphize AIs and think in terms of us and them. It's how we are wired to think. But the reality is that they are expensive tools that are owned and wielded by other people.

So, you get all the unimaginative dystopian thinking. Which is of course largely the product of science fiction written decades ago. It's hard to be innovative in this space. We already imagined all the bad outcomes a long time ago. But that doesn't mean it will play out like that. Precisely because we've imagined all the bad stuff, that's likely to not happen.

The short term reality is actually an arms race between companies and countries. And like most arms races, the people with the best toys and the most resources end up on top. The question is not what AIs will do but what the people wielding them will do with them. And how we can hack their purposes. The system these companies and countries operate in is of course hackable. Democracies are a power hack. We once had these all-powerful kings, emperors, and dictators. And then the people that gave them power got smart and organized. Democracies basically curb that power in the interest of self-preservation.

Like with most technology, the answer to what will happen with AIs will be mostly harmless and beneficial stuff. That's where the money is. But with some intentionally harmful stuff and maybe a little bit of unintentionally disastrous stuff. So, we as a society need to get better at countering/preventing/disincentivizing the bad stuff and preventing the unintentionally harmful stuff. Rejecting technology is not the answer. It just makes us more vulnerable. The more of us get smarter, the better it is for all of us.
machina_ex_deus over 2 years ago
This article is written as if Schneier is going to make collective decisions on behalf of everyone, or as if someone making collective decisions for everyone is going to stop everything "because it's dangerous".

What's actually going to happen is that regardless of the so-called possible dangers, there will be talented humans who still view this as a net positive thing and they will continue to develop AI, wherever it goes.

Unless there's a global traumatic event, I don't see anything stopping.

AI developers have an interest in faking moral problems to better capitalize on their product (we don't support you running models at home, it's too dangerous), and that's exactly where their ethics concerns will end.

We can't deal with any of these things by telling people to stop doing things they obviously won't stop doing.

I'd rather we deal with this transparently. How about requiring deployed AI models to be really open? Then at least humanity will always have an advantage over AI: AIs are whitebox while humans are blackbox.

And if some AI does some hack to benefit its creators, everyone else will have a chance to understand it's happening.

It shouldn't be "this is dangerous, make it closed so only we can abuse it"; it should be "this is dangerous, so we're doing this as transparently as possible".

I think in a theoretical war between Turing machines, the Turing machine which is given as input the code of the other Turing machine should always be the winner.

Think about the halting problem and diagonalization: the diagonalizing counterexample wins by having the source code of the program which supposedly solves the halting problem.
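To sketch that last point (a purely illustrative Python version; halts() is the hypothetical oracle, not an existing function):

```python
def diagonal(program_source):
    # Do the opposite of whatever the oracle predicts about a
    # program run on its own source code.
    if halts(program_source, program_source):  # hypothetical oracle
        while True:       # predicted to halt -> loop forever
            pass
    else:
        return            # predicted to loop -> halt immediately

# Feeding diagonal its own source makes halts() wrong either way,
# which is the classic proof that no such oracle exists. The side
# that gets to read the other's source code is the side that wins.
```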
amelius over 2 years ago
> But some bugs in the tax code are also vulnerabilities. For example, there’s a corporate tax trick called the “Double Irish with a Dutch Sandwich.”

So how come financial hackers exploiting vulnerabilities in the tax code and making these sandwiches never do jail time, while computer hackers regularly do?
williamcotton over 2 years ago
It seems inevitable that we will need some kind of PKE in order to verify identities. My ideal system is using the CAC protocol from the DoD at each of the 50 DMVs in the US to issue identification.

I don’t think this form of state-issued PKE should be required for getting online, but I would prefer interacting with people in environments where it is required to participate. Of course anonymous forums should be allowed, but I don’t want anonymity in every interaction…

…especially over the next few years, as an increasing number of actors will be non-human…
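As a rough illustration of the verification step such a scheme implies: a minimal sketch assuming an X.509-style PKI (the kind CAC uses) and Python's cryptography package. The file names and the RSA/PKCS#1 v1.5 signature are assumptions for illustration, not a spec.

```python
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

# Hypothetical files: the issuing authority's certificate (e.g. a DMV CA)
# and an individual's identity certificate.
with open("dmv_ca.pem", "rb") as f:
    issuer = x509.load_pem_x509_certificate(f.read())
with open("citizen.pem", "rb") as f:
    citizen = x509.load_pem_x509_certificate(f.read())

# Check that the citizen's certificate was signed by the issuer's key.
# A real verifier would also check validity periods, revocation, and the
# full chain, not just this one signature.
issuer.public_key().verify(
    citizen.signature,
    citizen.tbs_certificate_bytes,
    padding.PKCS1v15(),               # assumes an RSA-signed certificate
    citizen.signature_hash_algorithm,
)
print("certificate chains to the issuing authority")
```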
roxgib over 2 years ago
> First, participants interacted with the robot in a normal setting to experience its performance, which was deliberately poor. Then, they had to decide whether or not to follow the robot’s commands in a simulated emergency. In the latter situation, all twenty-six participants obeyed the robot, despite having observed just moments before that the robot had lousy navigational skills. The degree of trust they placed in this machine was striking: when the robot pointed to a dark room with no clear exit, the majority of people obeyed it, rather than safely exiting by the door through which they had entered.

I see this all the time - people putting their faith in systems and rules and computer programs despite knowing that they're more than likely wrong in the given situation. It's bizarre.
nocsi over 2 years ago
DARPA had a program some years back to train AI on actual hackers to model their behaviors. Anyways, I still doubt AI's ability to cause significant harm beyond automating spear-phishing, social engineering, mass-scanning codebases/binaries, and exploiting low-hanging fruit. Modern-day exploits are still outside the realm of what AI can do - and I don't even believe AI will ever exceed that realm. But I think society will do a better job w/ security once AI automates and annoys everyone.
YeGoblynQueenne over 2 years ago
>> This hacking will come naturally, as AIs become more advanced at learning, understanding, and problem-solving.

Everyone is so sure "AI"s will continue improving (not to say, everyone is so sure they *have* improved). It's going to be a little embarrassing if the foretold continuous AI improvement does not come to pass.

:grabs popcorn: