
AI as Normal Technology

239 points by randomwalker about 1 month ago

19 comments

roxolotl about 1 month ago

This is a rare piece on AI that takes a coherent middle-of-the-road viewpoint. Saying both that AI is “normal” and that it will be transformative is a radical statement in today’s discussions about AI.

Looking back on other normal but transformative technologies (steam power, electricity, nuclear physics, the transistor, etc.), you do actually see similarly stratified opinions. Most of those were surrounded by an initial burst of enthusiasm and pessimism and followed a hype cycle.

The reason this piece is compelling is that taking a nuanced middle-of-the-road viewpoint during the initial hype phase is difficult. Maybe AI really is some “next step,” but it is significantly more likely that that belief is propped up by science fiction, and it’s important to keep expectations in line with history.
xpe about 1 month ago

> The statement "AI is normal technology" is three things: a description of current AI, a prediction about the foreseeable future of AI, and a prescription about how we should treat it.

A question for the author(s), at least one of whom is participating in the discussion (thanks!): Why try to lump together description, prediction, and prescription under the "normal" adjective?

Discussing AI is fraught. My claim: conflating those three under the "normal" label seems likely to backfire and lead to unnecessary confusion. Why not instead keep these separate?

My main objection is this: it locks in a narrative that tries to neatly fuse description, prediction, and prescription. I recoil at this; it feels like an unnecessary coupling. Better to remain fluid and not lock in a narrative. The field is changing so fast, making description by itself very challenging. Predictions should update on new information, including how we frame the problem and our evolving values.

A little bit about my POV in case it gives useful context: I've found the authors (Narayanan and Kapoor) to be quite level-headed and sane w.r.t. AI discussions, unlike many others. I'll mention Gary Marcus as one counterexample; I find it hard to pin Marcus down on the actual form of his arguments or concrete predictions. His pieces often feel like rants without a clear underlying logical backbone (at least in the year or so I've read his work).
pluto_modadic about 1 month ago

Burning the planet for a ponzi scheme isn't normal.

The healthiest thing for /actual/ AI development is for the current addiction to LLMs to die off, and for the current bets by OpenAI, Gemini, DeepSeek, etc. to lose steam. Prompts are a distraction, and every single company trying to commodify this is facing an impossible problem in /paying for the electricity/. Currently they're just insisting on building more power plants and more datacenters, which is like trying to do more compute with vacuum relays. They're digging in the wrong place for breakthroughs, and all the current ventures will go bust and be losses for investors. If they start doing computation with photons or something like that, then call me back.
bux93 about 1 month ago

"We view AI as a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions"

If you read the EU AI Act, you'll see it's not really about AI at all, but about quality assurance of business processes that are scaled. (Look at pharma, where GMP rules about QA apply equally to people pipetting and making single-patient doses and to the mass production of ibuprofen; those rules are eerily similar to the quality system prescribed by the AI Act.)

Will a think piece like this be used to argue that regulation is bad, no matter how beneficial to the citizenry, because the regulation has 'AI' in the name, because the policy impedes someone who shouts 'AI' as a buzzword, or just because it was introduced in the present, in which AI exists? Yes.
lubujackson about 1 month ago

I like these "worldview adjustment" takes. I'm reminded of Jeff Bezos' TED Talk (from 18 years ago). I was curious what someone who started Amazon would choose to highlight in his talk, and the topic alone was the most impactful thing for me: the adoption of electricity. https://www.ted.com/talks/jeff_bezos_the_electricity_metaphor_for_the_web_s_future?language=en

He discussed the structural and cultural changes, and the weird and dangerous period when things moved fast and broke badly, and drew the obvious parallels between "electricity is new" and "internet is new" as a core paradigm shift for humanity. AI certainly feels like another similar potential shift.
xpe about 1 month ago

> One important caveat: We explicitly exclude military AI from our analysis, as it involves classified capabilities and unique dynamics that require a deeper analysis, which is beyond the scope of this essay.

Important is an understatement. Recursively self-improving AI with military applications does not mesh with the claim that "arms races are an old problem".

> Again, our message is that this is not a new problem. The tradeoff between innovation and regulation is a recurring dilemma for the regulatory state.

I take the point, but the above statement is scoped to a _state_, not an international dynamic. The AI arms race is international in nature. There are relatively few examples of similar international agreements. The classic examples are bans on chemical weapons and genetic engineering.
sandspar 30 days ago

Interesting ideas but terribly overwritten.

"The normal technology frame is about the relationship between technology and society. It rejects technological determinism, especially the notion of AI itself as an agent in determining its future. It is guided by lessons from past technological revolutions, such as the slow and uncertain nature of technology adoption and diffusion. It also emphasizes continuity between the past and the future trajectory of AI in terms of societal impact and the role of institutions in shaping this trajectory."

Why write it in such an overblown way? You can say the same thing much more cleanly, like: "AI doesn’t shape the future on its own. Society and institutions do, slowly, as with past technologies."
bilsbie about 1 month ago

AI will have the same impact as the internet: it changes everything and changes nothing at the same time.
Nevermark about 1 month ago

What a well reasoned stance!

They note that they don’t expect their view to address challenges without additional material, but one challenge struck me.

Slow diffusion, which is bottlenecked by human beings learning to adapt to significant new technologies, drops considerably if a technology juices startups in areas other than the tech itself.

I.e. existing organizations may not be the bottleneck for change if widely available AI makes disruptive startups (cheaper initially, higher quality eventually) much easier in general to start and to scale.
anonzzzies about 1 month ago

I think seeing the world as it is now, while hoping for more advancements, is the best way. I see what there is now as a tool that is useful; I hope, and sometimes even assume, it will improve, but that does not help me now, so what is the point in thinking about it? I am a programmer, not a philosopher.

And of course there is no viable path at this moment to make AIs actually smart, so hey, we use it and know the issues.
cootsnuck about 1 month ago

Very good read. They've articulated points I keep trying to express to people.

I think their stances and predictions will start to be held by more and more people as the illusion / frenzy / FUD from the current... "fog" created by all the AI hype and mystique subsides. It may take another year or two, but public discourse eventually adapts/tires of repeated notions of "the sky is falling" once enough time has piled up without convincing evidence.
cainxinth about 1 month ago

It already is for me. I've been using LLMs daily for years now. I don't get the people claiming AGI every two minutes any more than I get the people claiming these tools are useless.

LLM reasoning abilities are very fragile and often overfitted to training data. But if you still haven't figured out how to do anything useful with an LLM, warts and all, that says more about you than about LLMs.
iamgopal about 1 month ago

Small, fast (binary?) AI will be as simple as storing data in a database and querying it; in fact, specialised software to do so, guided by a large LLM, will come to market very soon.
j45 about 1 month ago

I find, amid all the hype, that it's important to remember that AI is just software. A remarkable and different kind of software, but software.
dakiol about 1 month ago

AI won’t become “normal technology” until the open source versions are more powerful than the closed ones. Just like Linux is the “best” kernel out there, and that doesn’t prevent other kernels from being proprietary (but that doesn’t matter, because they are not better than Linux).

Imagine for a moment what would happen if one company suddenly “bought” the Linux kernel and you suddenly had to pay for the number of processes you run on your machine. Awful.
gazpacho about 1 month ago

I liked this article. My hot take lately has been that AI is like Excel / Word, but deployed quicker. That can still cause some level of societal collapse if it displaces a large fraction of the workforce before it can retool and adapt; no AGI superintelligence required.
callc about 1 month ago

> The normal technology frame is about the relationship between technology and society.

There is a huge differentiating factor for LLMs that makes them not normal: the blatant disregard for the ownership rights of everyone in the world. What other "normal" technology has so callously stolen everything it can without consequence?

The music industry? Artists getting inspired and too closely imitating other artists? I genuinely want to know. And if there is such a suitable example, how did society react? Is there relevant history we can learn from here?

Putting aside the other problems (a capital-owning class salivating at the prospect of using LLM bots instead of humans, reduced critical thinking and traditional learning, environmental impact, other societal changes), this is my main turn-off for LLMs.

Give me a model trained on a responsible dataset (not something our grandparents would scold us for doing) and that I can run on consumer hardware, and then I can use LLMs guilt free.
jollyllama about 1 month ago
Has the comparison to the domestication of the dog been made yet?
cess11 about 1 month ago

AI is old. It has been everywhere for a long time. Once upon a time, logic-programmed expert systems were AI, and that's how credit evaluation works.

The problem with logical AI is that it can, in some sense, be held accountable. There's right and wrong, and an explainable algorithmic path from input to result. Fuzzy, probabilistic vector spaces remove that inconvenience and make it easier for people with power to shrug and say 'computer says no' when they deprive someone else of freedom or resources.

This is why it is so important to get technicians to accept, and preferably get hooked on, the newfangled AI. Without buy-in from them it would be much harder to disseminate this regime in other parts of society, since they're likely to be the ones doing the actual dissemination. There aren't enough people in power to do it themselves, and they also don't know enough about computer stuff to be able to.

There will be things you like that come out of it, but that's likely incidental, much like dentistry and vaccines and food production in the wake of fossil fuel extraction.