> After decades of investment, oversight, and standards development, we are not closer to total situational awareness through a computerized brain than we were in the 1970s.<p>Hard to see how that could be true. In just about any field, computers today provide much better situational awareness than was possible in 1970.<p>The article makes the usual complaints about self-driving cars:<p>> Despite $16 billion in investment from the heavy hitters of Silicon Valley, we are decades away from self-driving cars.<p>Yet cars are much more intelligent today than they were in the 1970s. And we are not decades away from self-driving cars - Waymo runs self-driving cars today in very specific locations.<p>Wondering if this article was written by GPT-3.
This author makes broad, sweeping claims, supporting them with numerous references that (in all instances that I checked) actually counter their argument. I'm not even sure the author knows _what subject they want to talk about_, never mind what argument to present.
Hmmm...<p>>> Jeff Bezos’s Amazon operated on extremely tight margins and was not profitable<p><a href="https://www.sec.gov/Archives/edgar/data/1018724/000119312509014406/d10k.htm#tx74114_24" rel="nofollow">https://www.sec.gov/Archives/edgar/data/1018724/000119312509...</a><p>Amazon made $645 million in net profit in 2008, $476 million in 2007, and $190 million in 2006.<p>Where did this myth of "Amazon doesn't make profits" come from? Why are people seemingly unable to check the publicly available historical 10-Ks and fact-check themselves before making statements like this?
Super confusing article.<p>Title aside (which is silly, since AI is a toolset for solving a variety of problems), it is so poorly written that it's not until more than halfway through that I think I see its main points: that present-day A.I. systems are too dependent on 'clean' data, plus some nebulous discussion of how AI contributes to decision making in organizations. The main point about data quality is rather silly in itself, because plenty of research is done on learning techniques that account for adversaries or bad data. And the discussion of how AI should be used to improve decision making is so vague it makes it seem like the author has little understanding of what AI is and how it is actually used.
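For example, here is a toy robust-regression sketch (purely illustrative, not from the article or any specific paper it cites) of one standard trick for coping with bad data: cap how hard each residual can pull on the fit, Huber-style, so a handful of corrupted points can't dominate the line.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 50)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=x.size)  # true line: y = 2x + 1
    y[::10] += 25.0  # corrupt every tenth point: the "dirty" data

    def fit(x, y, delta=None, steps=3000, lr=0.02):
        """Gradient-descent line fit; delta=None uses squared loss, otherwise a Huber-style loss."""
        w, b = 0.0, 0.0
        for _ in range(steps):
            r = (w * x + b) - y
            g = r if delta is None else np.clip(r, -delta, delta)  # cap each point's pull
            w -= lr * np.mean(g * x)
            b -= lr * np.mean(g)
        return w, b

    print("squared loss:", fit(x, y))            # noticeably pulled off the true line
    print("robust loss:", fit(x, y, delta=1.0))  # much closer to slope 2, intercept 1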
It is interesting that the author assumes that the intent of industrially applied AI is to make better decisions - in my experience, in the vast majority of cases companies apply various techniques (both AI/ML and hard-coded heuristics) with the explicit intent of getting <i>cheaper</i> decisions, knowing very well that they aren't going to be as good as the ones a dedicated, caring human could make.<p>The goal is either business process automation (do the same thing with fewer people) or enabling processing at a scale where doing it manually is impractical. For example, nobody would assert that an automated email spam filtering system is going to be better than a human filtering my email, but an automated filter is quite useful since most of us can't afford a personal secretary. The bar for "good enough to be useful" is often lower than "human equivalent".
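A spam filter makes the point concrete. A toy word-count scorer like the one below (my illustration, not any real product's code) is obviously worse than a human assistant, but it costs nothing per message, which is the whole appeal.

    from collections import Counter
    import math

    spam = ["win money now", "free money offer", "claim your free prize"]
    ham = ["meeting moved to noon", "lunch tomorrow?", "project status update"]

    def word_counts(msgs):
        return Counter(w for m in msgs for w in m.lower().split())

    spam_counts, ham_counts = word_counts(spam), word_counts(ham)
    spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())

    def spam_score(msg):
        """Sum of per-word log-likelihood ratios, with simple smoothing."""
        score = 0.0
        for w in msg.lower().split():
            p_spam = (spam_counts[w] + 1) / (spam_total + 2)
            p_ham = (ham_counts[w] + 1) / (ham_total + 2)
            score += math.log(p_spam / p_ham)
        return score

    print(spam_score("free money"))      # positive -> looks spammy
    print(spam_score("status meeting"))  # negative -> looks like normal mail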
>People don’t make better decisions when given more data, so why do we assume A.I. will?<p>Because humans aren't computers. Computers are much better at processing large amounts of data than humans are.<p>>we are decades away from self-driving cars<p>Self-driving cars already exist. In college I had a lab where everyone had to program what was essentially a miniature car with sensors on it to drive around by itself. Making a car drive by itself is not a hard thing to accomplish.<p>>the largest social media companies still rely heavily on armies of human beings to scrub the most horrific content off their platforms.<p>This content is often subjective. It's impossible for a computer to always make the correct subjective call, so humans will always be necessary.
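To be clear about what that lab-scale version amounts to: read a couple of distance sensors, steer away from the nearer obstacle, repeat. A toy sketch of that loop (illustrative only, with simulated sensors standing in for real hardware; scaling this up to public roads is of course a different problem):

    import random

    def read_sensors():
        # Stand-in for real ultrasonic/IR readings: distance to an obstacle in cm.
        return {"left": random.uniform(5, 100), "right": random.uniform(5, 100)}

    def steering_command(sensors, too_close=30):
        left, right = sensors["left"], sensors["right"]
        if min(left, right) > too_close:
            return "straight"
        return "turn_right" if left < right else "turn_left"

    for step in range(5):
        s = read_sensors()
        print(step, s, "->", steering_command(s))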
I read the whole article and thought it was worth my time. I liked the broad strokes of the goal of anti-fragile AI.<p>I have been thinking about hybrid AI systems since I retired from managing a deep learning team a few years ago. My intuition is that hybrid AI systems will be much more expensive to build but should in general be more resilient - kind of like old-fashioned multi-agent systems with a control mechanism to decide which agent to use.
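A minimal sketch of what I mean by a control mechanism choosing among agents (the agent names and the confidence rule below are just illustrative):

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Agent:
        name: str
        confidence: Callable[[str], float]  # how applicable this agent thinks it is
        answer: Callable[[str], str]

    agents = [
        Agent("arithmetic_rules",  # hard-coded heuristic agent
              confidence=lambda q: 0.9 if any(c in q for c in "+-*/") else 0.0,
              answer=lambda q: str(eval(q))),  # toy only; don't eval untrusted input
        Agent("fallback_model",    # stand-in for a learned model
              confidence=lambda q: 0.3,
              answer=lambda q: f"[model guess for: {q}]"),
    ]

    def controller(query: str) -> str:
        """The control mechanism: route each query to the most confident agent."""
        best = max(agents, key=lambda a: a.confidence(query))
        return f"{best.name}: {best.answer(query)}"

    print(controller("2+2"))
    print(controller("summarise this incident report"))

The expense shows up in exactly that controller: someone has to design, maintain, and test the routing on top of the individual agents.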
>People don’t make better decisions when given more data, so why do we assume A.I. will?<p>How much of that is just because the extra data says something people do not want to hear, or because there are incentives not to consider it?
> On a warm day in 2008, Silicon Valley’s titans-in-the-making found themselves packed around a bulky, blond-wood conference room table.<p>The author has read The New Yorker a lot. Some captivating details, made irrelevant at the end of the paragraph.
> more data<p><a href="https://www.youtube.com/watch?v=sTWD0j4tec4" rel="nofollow">https://www.youtube.com/watch?v=sTWD0j4tec4</a><p>Negativland has never been more on-topic.
Unless science studies:<p>* analysis of trained neural networks, so they're not just black boxes;<p>* the arrangement of real neurons in the actual brains of ants, mice, flies, and other small animals;<p>* some philosophical questioning of how consciousness, intelligence, and awareness emerge, including a good definition and an account of how the brain is able to distinguish causality from correlation;<p>* some actual collaboration between psychology AND neurology to connect the dots between cognition and how an actual brain achieves it.<p>Unless there are more efforts towards those things, machine learning will just be "advanced statistical methods", and programming experts will keep over-selling their tools. Mimicking neural networks is just fancy advertising for a simple graph algorithm.
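On that last point, a forward pass really is just values flowing along weighted edges and getting squashed at each node - the weights and topology below are arbitrary, but the mechanics are the whole story:

    import math

    # edges: (source node, target node, weight)
    edges = [("x1", "h1", 0.5), ("x2", "h1", -1.0),
             ("x1", "h2", 0.8), ("x2", "h2", 0.2),
             ("h1", "y", 1.5), ("h2", "y", -0.7)]

    def forward(inputs, layers=(["h1", "h2"], ["y"])):
        values = dict(inputs)  # e.g. {"x1": 1.0, "x2": 0.0}
        for layer in layers:
            for node in layer:
                total = sum(w * values[src] for src, dst, w in edges if dst == node)
                values[node] = 1.0 / (1.0 + math.exp(-total))  # sigmoid activation
        return values["y"]

    print(forward({"x1": 1.0, "x2": 0.0}))

All of the "learning" is in how those edge weights get chosen, which is where the statistics come in.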