Normally I finish Bruce's essays with a sense of clarity and having read a sharp analysis. Not saying there's anything technically amiss with this one, it's all terrifyingly clear, but I think he bit off a little more than he could chew. The result is indigestible. I personally would have split this into several shorter pieces.

When it's all put together this way, though, I can't help but think we've shot ourselves in the foot. As computer scientists and developers we've probably already lost control.

We owe a debt of honesty to the world to say so now and stop pretending otherwise. Then we can, as a society, revisit Postman's Seven Questions:

1. What is the problem that this new technology solves?
2. Whose problem is it?
3. What new problems do we create by solving this problem?
4. Which people and institutions will be most impacted by a technological solution?
5. What changes in language occur as the result of technological change?
6. Which shifts in economic and political power might result when this technology is adopted?
7. What alternative (and unintended) uses might be made of this technology?
Interesting, but long read. To summarize: people are easy to manipulate, and AIs are learning how to do this at scale and with intent. AIs are still tools, though. It's easy to anthropomorphize AIs and think in terms of us and them; it's how we are wired to think. But the reality is that they are expensive tools that are owned and wielded by other people.

So you get all the unimaginative dystopian thinking, which is of course largely the product of science fiction written decades ago. It's hard to be innovative in this space; we already imagined all the bad outcomes a long time ago. But that doesn't mean it will play out like that. Precisely because we've imagined all the bad stuff, it's likely not to happen.

The short-term reality is actually an arms race between companies and countries. And like most arms races, the people with the best toys and the most resources end up on top. The question is not what AIs will do but what the people wielding them will do with them, and how we can hack their purposes. The system these companies and countries operate in is of course hackable. Democracies are a power hack: we once had all-powerful kings, emperors, and dictators, and then the people who gave them power got smart and organized. Democracies basically curb that power in the interest of self-preservation.

As with most technology, the answer to what will happen with AIs is mostly harmless and beneficial stuff; that's where the money is. But there will be some intentionally harmful stuff and maybe a little bit of unintentionally disastrous stuff. So we as a society need to get better at countering, preventing, and disincentivizing the bad stuff and heading off the unintentionally harmful stuff. Rejecting technology is not the answer; it just makes us more vulnerable. The more of us who get smarter, the better it is for all of us.
This article is written as if Schneier is going to make collective decisions on behalf of everyone, or as if someone making collective decisions for everyone is going to stop everything "because it's dangerous".

What's actually going to happen is that, regardless of the so-called possible dangers, there will be talented humans who still view this as a net positive, and they will continue to develop AI, wherever it goes.

Unless there's a global traumatic event, I don't see anything stopping.

AI developers have an interest in faking moral problems to better capitalize on their product ("we don't support you running models at home, it's too dangerous"), and that's exactly where their ethics concerns will end.

We can't deal with any of these things by telling people to stop doing things they obviously won't stop doing.

I'd rather we deal with this transparently. How about requiring deployed AI models to be truly open? Then at least humanity will always have an advantage over AI: the AIs are white-box while humans are black-box.

And if some AI does pull off a hack to benefit its creators, everyone else will have a chance to understand that it's happening.

It shouldn't be "this is dangerous, make it closed so only we can abuse it"; it should be "this is dangerous, so we're doing this as transparently as possible".

I think in a theoretical war between Turing machines, the Turing machine which is given the code of the other Turing machine as input should always be the winner.

Think about the halting problem and diagonalization: the diagonalizing counterexample wins by having the source code of the program which supposedly solves the halting problem.
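To make that diagonalization point concrete, here is a minimal sketch in Python (the names `claims_to_halt` and `adversary` are just illustrative, not from the article): given any purported halting decider, an adversary that can read its opponent's code can always defeat it by doing the opposite of whatever the decider predicts.

```python
def adversary(claims_to_halt):
    """Given a purported halting decider claims_to_halt(prog, inp) -> bool,
    build a program that defeats it by contradicting its prediction
    about that very program."""
    def diag():
        # Ask the decider what it predicts about us, then do the opposite.
        if claims_to_halt(diag, None):
            while True:       # predicted to halt -> loop forever
                pass
        else:
            return            # predicted to loop -> halt immediately
    return diag

# Whatever claims_to_halt answers about adversary(claims_to_halt),
# the answer is wrong: the side holding the opponent's code wins.
```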
> But some bugs in the tax code are also vulnerabilities. For example, there’s a corporate tax trick called the “Double Irish with a Dutch Sandwich.”

So how come financial hackers exploiting vulnerabilities in the tax code and making these sandwiches never do jail time, while computer hackers regularly do?
It seems inevitable that we will need some kind of PKE in order to verify identities. My ideal system is using the CAC protocol from the DoD at each of the 50 DMVs in the US to issue identification.

I don’t think this form of state-issued PKE should be required for getting online, but I would prefer interacting with people in an environment where such credentials were needed to participate. Of course anonymous forums should be allowed, but I don’t want anonymity in every interaction…

…especially over the next few years, as an increasing number of actors will be non-human…
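As a rough illustration of the mechanics (this is a minimal sketch, not the actual CAC/PIV protocol; the "DMV" issuer and the assertion fields are assumptions for the example), the core operation is just a signature over an identity assertion that any relying party can check against the issuer's public key:

```python
# Minimal sketch of signature-based identity attestation.
# Requires: pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The issuer (e.g. a state DMV) holds a long-term signing key.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

# The issuer signs an identity assertion for a person.
assertion = b"subject=Jane Doe;issuer=Example DMV;expires=2026-01-01"
signature = issuer_key.sign(assertion)

# Any relying party holding the issuer's public key can verify it.
try:
    issuer_pub.verify(signature, assertion)
    print("assertion verified: issued by the trusted issuer")
except InvalidSignature:
    print("assertion rejected")
```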
> First, participants interacted with the robot in a normal setting to experience its performance, which was deliberately poor. Then, they had to decide whether or not to follow the robot’s commands in a simulated emergency. In the latter situation, all twenty-six participants obeyed the robot, despite having observed just moments before that the robot had lousy navigational skills. The degree of trust they placed in this machine was striking: when the robot pointed to a dark room with no clear exit, the majority of people obeyed it, rather than safely exiting by the door through which they had entered.

I see this all the time - people putting their faith in systems and rules and computer programs despite knowing that they're more than likely wrong in the given situation. It's bizarre.
DARPA had a program some years back to train AI on actual hackers to model their behaviors. Anyway, I still doubt AI's ability to cause significant harm beyond automating spear-phishing, social engineering, mass-scanning codebases/binaries, and exploiting low-hanging fruit. Modern-day exploits are still outside the realm of what AI can do, and I don't even believe AI will ever exceed that realm. But I think society will do a better job with security once AI automates these attacks and annoys everyone.
>> This hacking will come naturally, as AIs become more advanced at learning, understanding, and problem-solving.

Everyone is so sure "AI"s will continue improving (not to say, everyone is so sure they *have* improved). It's going to be a little embarrassing if the foretold continuous AI improvement does not come to pass.

:grabs popcorn: