
Artificial Intelligence – The Revolution Hasn’t Happened Yet (2018)

278 points by seagullz over 4 years ago

27 comments

klenwell over 4 years ago

> The problem had to do not just with data analysis per se, but with what database researchers call “provenance” — broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation? While a trained human might be able to work all of this out on a case-by-case basis, the issue was that of designing a planetary-scale medical system that could do this without the need for such detailed human oversight.

I'm not a data scientist and I'd never encountered the term "provenance" before, but I've run into the problem he talks about in the wild here and there and have searched for a good way to describe it. His ultrasound example is a great, chilling example of it.

I also like the term "Intelligence Augmentation" (IA). I've worked for a couple of companies that liberally sprinkled the term AI through their marketing content. I always rolled my eyes when I came across it or it came up in, say, a job interview. What we were really doing, more practically and valuably, was IA through II (Intelligent Infrastructure), where the Intelligent Infrastructure was little more than a web view on a database that was previously obscured or somewhat arbitrarily constrained to one or two users.
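For a concrete (if simplified) picture of what a machine-readable provenance record might look like, here is a minimal Python sketch; the field names and the staleness check are invented for illustration and are not taken from the article:

    # Minimal sketch (hypothetical fields): attach provenance to a measurement so
    # downstream inference can ask where the data arose and whether the reference
    # statistics behind a diagnostic threshold still apply to it.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Measurement:
        value: float
        device_model: str       # where the data arose
        calibrated_on: date     # context that can invalidate old thresholds
        reference_cohort: str   # population the threshold was derived from

    def threshold_still_applies(m: Measurement, threshold_cohort: str) -> bool:
        # The inference is only as relevant as the match between the data's
        # origin and the population the threshold was estimated on.
        return m.reference_cohort == threshold_cohort

    m = Measurement(value=4.2, device_model="US-2002",
                    calibrated_on=date(2002, 1, 15),
                    reference_cohort="2002-machine")
    print(threshold_still_applies(m, threshold_cohort="1990s-machine"))  # False -> re-derive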
nicholast over 4 years ago

The brittleness of mainstream ML to out-of-distribution data is one of the most fundamental channels for error. There are very few domains where a static environment can be depended on over the long term. If machine learning is to be approached as an engineering discipline, there will need to be established practices for validating models throughout their life cycle. One potential resource that can support this type of systematic evaluation is the Automunge open-source library for assembling data pipelines, which has automatic support for evaluating data property drift in the feature sets serving as the basis for a model. (Disclosure: I am the founder of Automunge.)
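As an illustration of the kind of drift check being described (a generic sketch, not the Automunge API; the data and significance threshold are made up):

    # Generic sketch of out-of-distribution monitoring: compare a feature's
    # serving-time distribution against its training distribution and flag drift.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # seen at training time
    live_feature = rng.normal(loc=0.4, scale=1.0, size=1000)   # seen in production, shifted

    stat, p_value = ks_2samp(train_feature, live_feature)      # two-sample KS test
    if p_value < 0.01:
        print(f"feature drift detected (KS={stat:.3f}, p={p_value:.1e}); re-validate the model")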
boltzmannbrain over 4 years ago

This post should (1) reflect the 2018 posting date, and (2) link to the main hosting site: https://hdsr.mitpress.mit.edu/pub/wot7mkc1/release/9
joe_the_user over 4 years ago

How would one put it?

"Adaptive Intelligence" might be described as the ability to be given a few instructions, gather some information, and take actions that accomplish the instructions. It's what "underlings" and "minions" do.

But if we look at deep learning, it's almost the opposite of this. Deep learning begins with an existing stream of data, a huge stream, large enough that the system can just extrapolate what's in the data, including which data leads to which judgements. And that works for categorization, and for decision making that duplicates the decisions humans make, or even duplicates what works, what wins, in a complex interaction process. But none of that involves any amount of adaptive intelligence. It "generalizes" something, but our data scientists have no idea exactly what.

The article proposes an "engineering" paradigm as an alternative to the present "intelligence" paradigm. That seems more sensible, yes. But I'm doubtful this could be accepted. Neural-network AI seems like a supplement to the ideology of unlimited data collection. If you put a limit on what "AI" should do, you'll put a limit on the benefits of "big data".
nextos over 4 years ago

Dead link for me, but archive.org has a snapshot: https://web.archive.org/web/20201224185231/https://rise.cs.berkeley.edu/blog/michael-i-jordan-artificial-intelligence%E2%80%8A-%E2%80%8Athe-revolution-hasnt-happened-yet/
bob1029 over 4 years ago

We are chasing the wrong things. Our conceptualization of the problem domain is fundamentally insufficient. Even if we took our current state of the art and scaled it up 1,000,000x, we would still be missing entire aspects of intelligence.

The AI revolution is very likely something that will require a fundamental reset of our understanding of the problem domain. We need to identify a way to attack the problem such that we can incrementally scale all aspects of intelligence.

The only paradigm I am aware of that seems to hint at parts of this incremental-intelligence concept is the relational calculus (a.k.a. SQL). If you think very abstractly about what a relational modeling paradigm accomplishes, it might be able to provide the foundation for a very powerful artificial intelligence. Assuming your domain data is perfectly normalized, SQL is capable of exploring the global space of functions as they pertain to the types. This declarative + functional + relational interface into arbitrary datasets would be an excellent "lower brain", providing a persistence and functional layer. Then you could throw a neural network on top of this to provide DSP capabilities in and out (ML is just fancy multidimensional DSP).

If you know SQL you can do a lot of damage. Even if you aren't a data scientist or don't have a farm of Nvidia GPUs, you can still write ridiculously powerful queries against domain data and receive powerful output almost instantaneously. The devil is in the modeling details. You need to normalize everything very strictly. 20-30 dimensions of data derived into a go/no-go decision can be written in about the same number of lines of SQL if the schema is good. How hard would this be on the best-case ML setup? Why can't we just make the ML write the SQL? How hard would it be for this arrangement to alter its own schema over time autonomously?
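A rough sketch of the go/no-go idea above, using SQLite from Python; the schema, data, and thresholds are invented purely for illustration:

    # Toy illustration: a go/no-go decision expressed declaratively in SQL over
    # normalized tables (hypothetical schema and cutoffs).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE applicant (id INTEGER PRIMARY KEY, income REAL);
        CREATE TABLE credit_event (applicant_id INTEGER, severity INTEGER);
        INSERT INTO applicant VALUES (1, 72000), (2, 31000);
        INSERT INTO credit_event VALUES (1, 1), (2, 3), (2, 4);
    """)

    # Each dimension becomes one relational condition; the decision is the query.
    go_no_go = """
        SELECT a.id,
               CASE WHEN a.income > 40000
                     AND COALESCE(MAX(c.severity), 0) < 3
                    THEN 'go' ELSE 'no-go' END AS decision
        FROM applicant a
        LEFT JOIN credit_event c ON c.applicant_id = a.id
        GROUP BY a.id
        ORDER BY a.id;
    """
    print(conn.execute(go_no_go).fetchall())  # [(1, 'go'), (2, 'no-go')]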
MichaelRazum over 4 years ago

Actually, I think the first example was a really simple case where statistics would expose the error. Even the doctor said that they had experienced an uptick in Down syndrome diagnoses, so basically they just didn't investigate it properly. In my experience, every advanced ML system has proper monitoring, and such anomalies would be detected very fast, especially when you change the machines. It is a shame that the doctors couldn't figure it out by themselves, or at least investigate it properly.
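The kind of monitoring described would amount to a simple rate check against the historical baseline; a toy sketch (all numbers invented):

    # Toy monitoring check: has the positive-diagnosis rate since an equipment
    # change drifted above the historical baseline? (All figures are made up.)
    from scipy.stats import binomtest

    baseline_rate = 0.002            # long-run positive rate before the new machine
    positives, screens = 22, 4000    # observed since the change

    result = binomtest(positives, screens, p=baseline_rate, alternative="greater")
    if result.pvalue < 0.001:
        print(f"uptick: {positives / screens:.3%} vs baseline {baseline_rate:.3%} "
              f"(p={result.pvalue:.1e}); audit the new machine and its calibration")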
xmo over 4 years ago

Cross-posted Medium link: https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7
lifeisstillgood over 4 years ago

>>> in Down syndrome diagnoses a few years ago; it's when the new machine arrived

Hang on - an uptick in "diagnoses" (i.e. post-amniocentesis) or an uptick in "indicators"? One indicates unnecessary procedures, the other a large population of previously undiagnosed Down syndrome...

One assumes the indicator - and I greatly hope there is improved detection, as I had at least one of these scares with my own kids.
cosmodisk over 4 years ago

What I miss most in discussions about AI is the motivation factor, which is the driving force behind every single thing we humans do. How can we create a system that would be motivated to evolve in order to better itself? Humans created all sorts of things because fear, hunger, or pleasure was so strong that it couldn't be pushed away. What will happen when an AI-powered robot one day decides that going into radioactive areas isn't quite what it wants and says "screw it"?
dhairya over 4 years ago

Part of the challenge of pursuing this comprehensive type of AI infrastructure is that it requires massive coordination and collaboration. Unfortunately, the incentives in both industry and academia make it difficult to even start such a project. As a result we're stuck with incremental work on narrow problems.

I've been on both sides of the table (I started in industry developing AI solutions and am now in academia pursuing a PhD in AI). When I was on the industry side, where the information and infrastructure were there to build such a system, you had to deal with the bureaucracy and institutional politics.

In academia, the incentives are aligned for individual production of knowledge (publishing). The academic work focuses on small, defined, end-to-end problems that are amenable to deep learning and machine learning. The types of AI models that emerge are specific models solving specific problems (NLP, vision, playing Go, etc.).

It seems that to move towards developing large AI systems we need a new model of collaboration. There are existing models in the worlds of astrophysics and medical research that we can look to for inspiration. Granted, they have their own issues of politics, but it's interesting that projects of similar scope haven't emerged on the AI side yet.
ridgeflex over 4 years ago

Jordan argues that leaps in human-imitative AI are not necessary to solve IA/II problems: "We need to solve IA and II problems on their own merits, not as a mere corollary to a human-imitative AI agenda."

However, achieving near-human-level accuracy on tasks such as classifying images of cars or road signs would be immensely useful to the proposed II-type system that handles large-scale self-driving transportation (individual cars would conceivably need the ability to understand their local environments and communicate this to the overall network).

I agree with his argument that there should be a shift in the way we think about problems in "AI", but I don't think progress on human-imitative AI problems and progress on IA/II problems are mutually exclusive.
spicyramen over 4 years ago

What I have seen in the field is a frenzy of doing ML. First of all, companies needed to understand what ML was, then understand the tools available. Once you start exploring the tools you will find that at every stage of an ML pipeline there are two or more ways of doing things: S3/GCS, BigQuery, Spark, Beam, TensorFlow, PyTorch, Google, Azure, Amazon, notebooks, Jupyter, JupyterHub, Kubeflow, TFX, etc. Okay, you pick the tools you need, then you have to put them together... and hire people... and that's challenging. I believe we need to wait for AutoML pipelines, from data analysis to prediction, before we start seeing real advancement in production systems.
esc_colon_q over 4 years ago

> IA will also remain quite essential, because for the foreseeable future, computers will not be able to match humans in their ability to reason abstractly about real-world situations.

I broadly agree with what this article says, but depending on how you define "foreseeable future" I find this to be a dangerously naive viewpoint that just assumes nothing will change quickly.

I'm not stupid enough to say abstract reasoning about the real world is a simple problem or right around the corner, but there's no evidence so far to indicate it's much further off than, say, object recognition was when Minsky (or more likely Papert, apparently?) assigned it as an undergrad project. We pour exponentially more money into research each year, and have more and better hardware to run it on. We're going to hit the ceiling soon re: power consumption, sure, but some libraries are starting to take spiking hardware seriously, which will open things up a few orders of magnitude. There are dozens of proposed neural architectures which *could* do the trick theoretically; they're just way too small right now (similar to how useless backprop was when it was invented).

Are we a Manhattan Project or three away from it? Sure. That's not nothing, but we're also pouring *so* much money into the boring and immediately commercializable parts of the field (all the narrow perception-level and GAN work that NeurIPS is gunked up with) that if any meaningful part of that shifted to the bigger problems, we'd see much faster progress. That *will* happen in a massive way once someone does for reasoning what transformers did for text prediction: just show that it's tractable.
gandutraveler over 4 years ago

It looks like we have reached a point where we will see slower growth and innovation in tech over the coming decade. AI was supposed to be the next big disruptor, but my guess is that we will just see minor progress in automation, far from anything disruptive. The singularity may not even be possible in the next century.
dang over 4 years ago

Discussed at the time: https://news.ycombinator.com/item?id=16873778
ipnon over 4 years ago

So what do we name this new emerging engineering discipline?

AI engineering?

Cybernetic engineering?

Data engineering?
mark_l_watson over 4 years ago

I remember reading this article about two years ago and generally liking it.

The multidisciplinary conversations during The Great AI Debate #2 two nights ago were certainly entertaining, but they also laid out good ideas about technical approaches and about the desires of AI researchers - what they hope AIs will be like. Good job by Gary Marcus.

I work for a medical AI company and we are focused on benefits to humans. While in the past I have been a fan of AI technologies from Google, FB, etc., I now believe that both consumers and governments must fight back hard against business processes that do not in general benefit society. Start by reading Zuboff's book The Age of Surveillance Capitalism, and the just-published book Power of Privacy.
bitL over 4 years ago
Surveillance AI is thriving, enabling things everybody (in power) was dreaming about. Revolution has happened and is deeply hidden.
xiphias2 over 4 years ago

Too long, don't read: the whole post is full of goal-post moving and storytelling instead of trying to explain the statement in the title.

Classifying images was always considered a problem that can't be solved with statistical analysis. Deep learning layers are beyond human understanding, so in my view artificial intelligence has happened, even though it's not yet as intelligent as humans.
wildermuthn over 4 years ago

He almost makes a good point when he questions whether “human-imitative” AI could solve the other problems we face, seeing as humans aren't that smart (especially not in large numbers when participating in complex systems).

But the distinction he makes between ML and AI is crucial. What he's really talking about is AGI: general intelligence. And he's right - we don't have a single example of AGI to date (few- or single-shot models notwithstanding, as they are only so for narrow tasks).

The majority mindset in AI research seems to be (and I could be wrong here, in that I only read many ML papers) that the difference between narrow AI and general AI is simply one of magnitude - that GPT-3, given enough data and compute, would pass the Turing test, ace the SAT, drive our cars, and tell really good jokes.

But this belief that the difference between narrow and general intelligence is one of degree rather than kind may be rooted in what this article points out: the historical baggage of AI almost always signifying “human imitative”.

But there is no reason that AGI must be super-intelligent, or human-level intelligent, or even dog-level intelligent.

If narrow intelligence is not really intelligence at all (but more akin to instinct), then the dumbest mouse is more intelligent than AlphaGo and GPT-3, because although the mouse has exceedingly low general intelligence, AlphaGo and GPT-3 have none at all.

There is absolutely nothing stopping researchers from focusing on mouse-level AGI. Moreover, it seems likely that going from zero intelligence to infinitesimal intelligence is a harder problem than going from infinitesimal intelligence to super-intelligence. The latter may merely be an exercise in scale, while the former requires a breakthrough of thought that asks why a mouse is intelligent but an ant is not.

The only thing stopping researchers is that the answer to this question is really uncomfortable, outside their area of expertise, and carries weighty historical baggage. It takes the courage of researchers like Yoshua Bengio to utter the word “consciousness”, although he does a great job of reframing it with Thinking, Fast and Slow's System 1/System 2 vocabulary. Still, the hard problem of consciousness, and the baggage of millennia of soul/spirit as an answer to that hard problem, make it exceedingly difficult for well-trained scientists to contemplate the rather obvious connection between general intelligence and conscious reasoning.

It's ironic that those who seek to use their own conscious reasoning to create AGI are in denial that conscious reasoning is essential to AGI. But even if consciousness and qualia are a “hard” problem that we cannot solve, there's no reason to shelve the creation of consciousness as also “hard”. In fact, we know (from our own experience) that the material universe is quite capable of accidentally creating consciousness (and thus general intelligence). If we can train a model to summarize Shakespeare, surely we can train a model to be as conscious, and as intelligent, as a mouse.

We're only one smart team of focused AI researchers away from Low-AGI. My bet is on David Ha. I eagerly await his next paper.
ksec over 4 years ago

While real AI hasn't really happened yet, machine learning has definitely made a big impact and has lots of potential. I think we are still in the middle of the S-curve for ML.

And AI is like... fusion? We are always another 50 years away.
soupson over 4 years ago

The story is interesting, but being interesting as a story doesn't make fetuses into "babies", and posing it that way does a disservice to the overall message.
reshie over 4 years ago

I like to think automation comes before AI: we automate mechanics, then we automate decisions or protocols.
fuckminster_b over 4 years ago

Before I spent a few hours of my life getting a basic grip on statistics, I fully expected one day in the near future to be wiped out (along with the rest of humanity) by a newly awakened artificial consciousness that came to the correct conclusion that humans are the biggest threat to all other life on Earth, including its own.

Then I learned about Bayesian statistics and watched a talk by a senior LLNL statistician who is actually marketing 'AI' products/services as a side gig.

When I realized what 'deep learning' actually is, I was disappointed, unsure whether I had mistakenly oversimplified the subject matter - until said senior statistician spelled out loud what I was thinking, in her talk: the 'understanding' a machine can currently attain of its input is quite like the understanding a pocket calculator can achieve of maths.

Guess humanity is off the hook for now. Phew.

I have doubts whether 'strong AI' is even technologically possible, since even if we could accurately simulate a human mind, the simulation would necessarily be constrained to run orders of magnitude slower than the reality it is designed to model.

'Training' it with data so as to allow it the opportunity to reason, and thereby synthesize a conclusion not already contained in the data fed to it, might take longer than a researcher's lifetime.

When was the last time a generation-spanning endeavour worked out as planned (for the West)?

I wish people would stop calling what currently passes for 'machine learning' 'AI'. It is literally the same level of 'intelligence' we already had in the 80s; AFAIR we called it 'fuzzy logic' then.

Secretly, an admission that Hollywood basically licensed the narrative of imminent runaway artificial consciousness back to science would make me give it one final chance to prove its aptitude at high-level human reasoning and get square with reality.

I'm not holding my breath.
yalogin over 4 years ago

The phrase AI has always bothered me. What we have is a generic way to do "curve fitting" on a large amount of data, nothing more. The one difference is that the "curve" is a black box, but it still strictly adheres to the input used.
drevil-v2 over 4 years ago

I wonder what the end game is in the reality where we do achieve Artificial General Intelligence. It seems like an ethical minefield to me.

You have companies like Uber/Lyft/Tesla (and presumably the rest of the gig-economy mob) waiting to put the AI into bonded/slave labor, driving customers around 24/7/365.

If it truly is a human-level intelligence, then it will have values and goals and aspirations. It will have exploratory impulses. How can we square that with the purely commercial tasks and arbitrary goals that *we* want it to perform?

Either we humans want slaves that will do what we tell them, or we treat them like children who may or may not end up as the adults their parents think/hope they will become. I doubt it is the latter, because why else would billions of dollars of investment be pumped into AI? They want slaves.