
Predicting where AI is going in 2020

122 points by alfozan over 5 years ago

15 comments

szc over 5 years ago
Honesty, repeatability, numerical analysis. Canonicalization.

Honesty: how many times was the exact same data processed? Was the result cherry-picked and only the best one published? For the sake of integrity, how is it possible to scientifically improve on such a result? (For example, when your AI outputs some life-altering decision.)

Repeatability: in science, if a result can be independently verified, that lends validity to the "conclusion" or result. Most AI results cannot be independently verified. Not being independently verifiable really ought to give the "science" the same status as an 1800s "Dr. Bloobar's miracle AI cure".

Numerical analysis: performing billions or trillions of computations on bit-restricted numerical values will introduce a lot of forward-propagated error (noise). What does that do? Commentary: video cards don't care if a few pixels of your 15-million-pixel display are off by a few least-significant bits; they do that 60 or 120 frames a second and you don't notice. It is an integral part of their design. The issue is: how does this affect AI models? It affects repeatability, and therefore honesty.

If quantized error is a necessary property of "AI learning that converges", there is still an opportunity for canonicalization: a way to map "different" converged models onto each other to explain why they are effectively the "same". This does not seem to be a "thing"; why not?

In my opinion, in 2020, the AI emperor still has no clothes.
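To make the point about forward-propagated error concrete, here is a minimal illustrative sketch (not from the original comment) that accumulates the same values in float16 and float64 with NumPy; the element count and step value are arbitrary choices for the demonstration.

```python
import numpy as np

n = 10_000           # the exact sum of n copies of 0.1 is 1000.0
step16 = np.float16(0.1)
step64 = np.float64(0.1)

acc16 = np.float16(0.0)
acc64 = np.float64(0.0)
for _ in range(n):
    # Each addition rounds the running total to the nearest representable
    # value, and that rounding error is carried into every later addition.
    acc16 = acc16 + step16
    acc64 = acc64 + step64

print(f"float16 accumulator: {float(acc16):.2f}")  # stalls far below 1000.0
print(f"float64 accumulator: {float(acc64):.2f}")  # ~1000.00
```

In float16 the running total stops growing once each addend falls below half the spacing between representable values, which is the kind of silent, forward-propagated error the comment is describing.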
TACIXAT over 5 years ago
> Human babies don't get tagged data sets, yet they manage just fine, and it's important for us to understand how that happens

I don't really understand this. Human babies get a constant stream of labeled information from their parents. Contextualized speech is fed to them for years. Toddlers repeat everything you say. Is this referring to something else that babies can do?
jeffshek over 5 years ago
I love PyTorch, but I'm not convinced the claim that it is the most popular framework is close to true. The cited link, which shows that a lot of new research is in PyTorch, simply doesn't account for the amount of TensorFlow in production.

Sure, a lot of academics may be embracing PyTorch, but almost all production models have been in TensorFlow. Tesla is a huge, notable exception that is using PyTorch at scale.

I do suspect that the split between TensorFlow 1 and 2 comes at one of the worst possible times for TF 2; many teams will likely try out PyTorch instead.

I think both are amazing frameworks, however TF was designed for Google scale... which leads to a lot of difficulties, since 99.9% of teams are not at Google scale.
DagAgren over 5 years ago
Away.

(Or, less snarkily: https://twitter.com/iquilezles/status/1212377355349417986)
jansbor over 5 years ago
Is it just me, or is anyone else wondering why they did not use AI to predict where AI is going in 2020?
graycat over 5 years ago
I believe we will (1) find some basic data structures and algorithms to do *real AI*. (2) At first it will be able to do I/O only via text or simple voice. (3) Due to (1), it will learn very quickly from humans or other sources. (4) Soon it will be genuinely *smart*, enough, say, to discover and prove new theorems in math, to understand physics and propose new research directions, to understand drama and write good screenplays, to understand various styles and cases of music and compose for those, etc.

Broadly, from (1), with the data structures it will be able to represent and store data and then, with the algorithms, manipulate that data, generating more data to be stored, etc.

In particular, it will be able to do well with *thought experiments* and the generation and evaluation of *scenarios*.

Good image understanding will come later, but only a little later; the ideas in (1) may have to be revised to do well on image understanding.
corporateslave5 over 5 years ago
Natural language processing is going to absolutely decimate content on the internet, forcing everyone into walled gardens.
bitL over 5 years ago
- Individual GPUs will hit a plateau at around 25 TFLOPS in FP32 due to Moore's law and thermal dissipation; however, it will be easier than ever to interconnect multiple GPUs into large virtual ones thanks to interconnect improvements and the modularization of GPU processing units.

- Only large companies will be able to train and use SOTA models, with training costs of $10M-$100M per training run, and those models will hit the law of diminishing returns quickly.

- 50% of all white-collar jobs will be automated away, including a significant chunk of CRUD software work. Increased productivity won't be shared back with society; instead, two distinct wealth strata will form worldwide due to scale effects, like in Latin America (<1% owners, >99% fighting for their lives).

- AI will make marketing, ads and behavioral programming much more intrusive and practically unavoidable.
dijksterhuis over 5 years ago
The page forwards itself to a spam Google survey for me? The page history fills with the spam survey and I can't navigate back to the article. iOS Safari with Reader View enabled.
sillysaurusx over 5 years ago
Re: PyTorch TPU support, has anyone checked it out beyond "it works"?

There are many aspects of TPUs that I'm not convinced are easy to port: colocating gradient ops, scoping operations to specific TPU cores, choosing to run operations in a mode that can use all available TPU memory (which is up to 300GB in some cases), and so on.

These aren't small features. If you don't have them, you don't get TPU speed. The reason TPUs are fast is *because* of those features.

I only glanced at PyTorch TPU support, but it seemed like there wasn't a straightforward way to do most of these. If you happen to know how, it would be immensely helpful!

As far as predictions go, AI will probably take the form of "infinite remixing." AI voice will become very important and will begin proliferating through several facets of daily life. One obvious application is to apply the "abridged" formula to old sitcoms. (An "abridged" show is one where you rewrite the original using editing and new dialog, e.g. https://www.youtube.com/watch?v=2nYozPLpJRE. Someone should do Abridged Seinfeld.) AI audio has already made inroads on Twitch, where streamers like Forsen allow donation messages to be read off in the voice of various political figures (and even his own voice). The Pony Preservation Project was recently solved with AI voice (https://twitter.com/gwern/status/1203876674531667969), meaning it's possible to do realistic voice simulations of all the MLP characters with precise control over intonation and aesthetics.

Natural language AI will continue to ramp up, and people will learn how to apply it to increasingly complex situations. For example, AI Dungeon is probably just the beginning. I recently tried GPT-2 chess (https://twitter.com/theshawwn/status/1212272510470959105) and found that it can in fact play a decent game up to move 12 or so. AI Dungeon multiplayer is coming soon, and it seems like applying natural language AI to videogames in general is going to be rather big.

Customer support will also take the form of AI, more so than it already does. It turns out that GPT-2 1.5B was pretty knowledgeable about NordVPN. (Warning: NSFW ending, illustrating some of the problems we still need to iron out before we can deploy this at scale.) https://gist.github.com/shawwn/8a3a088c7546c7a2948e369aee876902

AI will infiltrate the gamedev industry slowly but surely. Facial animation will become increasingly GAN-based, because the results are so clearly superior that there's almost no way traditional toolsets will be able to compete. You'll probably be able to create your own persona in videogames sooner rather than later. With a snippet of your voice and a few selfies, you'll be able to create a fairly realistic representation of yourself as the main hero of, e.g., a Final Fantasy 7-type game.
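For context on what PyTorch TPU support looks like in practice, here is a minimal sketch (not from the comment) of a single training step using the torch_xla bridge; the model, data, and hyperparameters are placeholder assumptions, and nothing here addresses the gradient-colocation or per-core scoping concerns raised above.

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm  # PyTorch/XLA bridge (assumed installed)

# Ask XLA for a device; on a TPU runtime this resolves to a TPU core.
device = xm.xla_device()

# Toy model and batch, purely for illustration.
model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(64, 128, device=device)
targets = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()

# The TPU-specific part of an otherwise ordinary loop: xm.optimizer_step()
# applies the update; barrier=True forces execution of the lazily built
# XLA graph when no parallel data loader is managing steps.
xm.optimizer_step(optimizer, barrier=True)
print(loss.item())
```

This covers the basic single-core path; the multi-core and memory-scoping features the comment asks about are a separate question from whether a plain training loop runs at all.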
drongoking over 5 years ago
I'm generally pessimistic about predictions of the future. In this case I can't help but smile. They're trying to predict how a field (AI), which deals with complex adaptation, will intelligently adapt its adaptive techniques in the coming year, within an environment (us humans) that is itself changing behavior while adapting to AI. That's approximately three meta levels. Good luck, guys!
y1tan over 5 years ago
I predict the broader ML/DL community will keep pumping out iterative papers that push the ball just a little bit forward while maintaining job security: gatekeeping, no one thinking outside of the box, benchmark chasing, just enough for the appearance of progress, and nothing broadly innovative or disruptive. The applications of ML/DL will continue to be gimmicky consumer products that have questionable value and questionable profit potential, add even more to disinformation and misinformation, produce more informational noise, only serve to rebuff a big corp's cloud offerings, and waste people's time. I predict tons more 'bought' articles that hype up AI technology for the typical 'household' names. I predict the same ol' echo chamber of thought and reinforcement of 'gatekept' ideology. I expect a number of more prominent articles critiquing the shortfalls of the technology. I expect a number of young minds steeped in DL/ML coming to the realization that it's not what they expected... that it's a big profit/revenue story for universities and established corporate platforms. I expect a number of them to realize ML/DL is truly not "AI" or anything close to it, that they aren't doing cutting-edge research, and that they are not allowed to think outside of the echo chamber of 'approved' approaches.

I predict more useless chatbots that utter unpredictable word salads. I expect more gimmicky, entertainment-focused uses of it. I expect more assistants being adopted for data collection. I expect more people who aren't busy or doing anything important using assistants and text-to-speech to speed up their tasks so they can waste more of their time on social media, YouTube and entertainment. Samsung Neon is coming out in a few days, making use of that 'Viv' acquisition.

I expect more feverish attempts at attacking low-hanging-fruit jobs with overly complex solutions. I predict failures in a number of startups targeting this. I predict no pronounced progress in self-driving cars, nor any particular grand use for them. I predict several hollow attempts to overlay symbolic systems on ML/DL, or to integrate the two, from prominent AI figures. I predict pronounced failures in this effort, cementing a partial end to the hype of ML/DL.

I predict we will get a pronounced development outside of run-of-the-mill corporate/academic, gatekept, walled-garden ML/DL that will forge a new and higher path for AI. Hinton's words from prior years will have been heeded and the results of a new approach to AI presented. A change of guard, a break from the necessity of a PhD, a break from the echo chamber of names, and a broader and more deeply thought-out vision. Disruption not of low-hanging fruit but disruption directed at the heart of the AI/technology industry... so that we may finally progress from this stalled-out disinformation/misinformation/hype/gatekeeping/cloud/all-your-data-belongs-to-us cycle.

It's 2020 after all, time for a new age.
mark_l_watson over 5 years ago
The linked page threw up a suspicious-looking overlay. I left the site; too bad, since I wrote a blog post with my predictions last night and wanted to compare my AI predictions.
zackmorris over 5 years ago
Some axioms that I'm not seeing talked about much:

* Artificial general intelligence (AGI) is the last problem in computer science, so it should be at least somewhat alarming that it's being funded by internet companies, Wall Street and the military instead of, say, universities, nonprofits or nonmilitary branches of the government.

* Machine learning is conceptually simple enough that most software developers could work on it (my feeling is that the final formula for consciousness will fit on a napkin), but they never will, because of endlessly having to reinvent the wheel to make rent, eventually missing the boat and getting automated out of a job.

* AI and robot labor will create unemployment and underemployment chaos if we don't implement universal basic income (UBI) or, at the very least, reform the tax system so that automation provides for the public good instead of the lion's share of the profit going to a handful of wealthy financiers.

* Children aren't usually exposed to financial responsibility until around the age of 15 or so, so training machine learning for financial use is likely to result in at least some degree of sociopathy, wealth inequality and further entrenchment of the status quo (what we would consider misaligned ethics).

* Humans may not react well when it's discovered that self-awareness is emotion, and that as computers approach sentience they begin to act more like humans trapped in boxes, and that all of this is happening before the world can even provide justice and equality for the "other" (women, minorities, immigrants, oppressed creeds, intersex people, the impoverished, etc.).

My prediction for 2020: nothing. But for 2025: an optimal game-winning strategy is taught in universities. By 2030: the optimal game-winning strategy is combined with experience from quantum computing to create an optimal search-space strategy using exponentially fewer resources than anything today (forming the first limited AGI). By 2035: AGI is found to require some number of execution cycles to evolve, perhaps costing $1 trillion. By 2040: the cost to evolve AGI drops to $10 billion, and most governments and wealthy financiers own what we would consider a sentient agent. By 2045: AGI is everywhere, and humanity is addicted to having any question answered by the AGI oracle, so progress in human-machine merging, immortality and all other problems is predicted to be solved within 5 years. By 2050: all human problems have either been enumerated or solved, and attention turns to nonhuman motives that can't be predicted (the singularity).
juskrey over 5 years ago
Fat tails.