
The Prospect of an AI Winter

144 points | by erwald | about 2 years ago

49 comments

WheelsAtLarge about 2 years ago
I give it a 95% chance that an AI winter is coming, winter in the sense that there won't be any new ways to move forward toward AGI. The current crop of AIs will be very useful, but they won't lead to the scary AGI people predict.

Reasons:

1) We are currently mining just about all the internet data that's available. We are heading toward a limit, and the AIs aren't getting much better.

2) There's a limit to the processing power that can be used to assemble the LLMs, and the more that's used, the more it will cost.

3) People will guard their data more and will be less willing to share it.

4) The basic theory that got us to the current AI crop was defined decades ago, and no new workable theories have been put forth that will move us closer to AGI.

It won't be a huge deal, since we probably have decades of work ahead to sort out what we have now. We need to figure out its impact on society: things like how best to use it and how to limit its harm.

Like they say, "interesting times are ahead."
nuancebydefault about 2 years ago
So. The article starts with "I give it an estimate of 5 per cent chance..." and then explains: what if...

Is this case really worth exploring? Or was the article written by a bored AI?

I find it striking that there are still so many people downplaying the latest developments in AI. We all feel that we are on the verge of the next revolution, on par with or even greater than the emergence of the www, while some people just can't seem to let it sink in.
karmasimida about 2 years ago
Previous AI winters were due to overpromise and no delivery: big promises of what AI could do, with no ability to actually deliver even a prototype in reality.

ChatGPT, at least as of GPT-4, can already be considered what someone coined a "baby AGI". It is already practical and useful, so it HAS to be REAL.

If it is already REAL, there is no need for another winter to ever come and reap the heads of liars. Instead AI will become applied technology, like cars, like chips. It will evolve continuously and never go away.
RandomLensman about 2 years ago
After the massive hype around generative AI, it seems likely there will be an AI winter when the promised transformation in many business areas just doesn't happen as advertised.
codelord about 2 years ago
IMHO the prospect of an AI winter is 0%. As someone who has done research in ML, I think ML technology is moving forward much faster than we anticipated. ChatGPT shouldn't work based on what we knew. It's incredible that it works. It makes you wonder what other things that shouldn't work we should scale up to see whether they do, and then there are the things we think should work. Each new paper or result opens the door for many more ideas. And there are massive opportunities in applying what we already have to all industries.

You can absolutely build high-precision ML models. Using a transformer LM to sum numbers is dumb because the model makes few assumptions about the data by design; you can modify the architecture to optimize for this type of number manipulation, or you can change the problem to generating code for summing values. In fact, Google is using RL to optimize matmul implementations. That's the right way of doing it.
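To make the "generate code for summing values" idea concrete, here is a minimal sketch; call_model is a hypothetical stand-in for whatever LLM API is used (it returns a canned snippet so the example runs on its own) and does not reflect any specific vendor's interface.

```python
# Sketch: instead of trusting token-level arithmetic, have the model emit a
# program and execute that. call_model() is a hypothetical LLM stand-in.

def call_model(prompt: str) -> str:
    # A real system would send `prompt` to a model and get source code back.
    return "result = sum(values)"

def sum_via_generated_code(values: list[int]) -> int:
    prompt = f"Write Python that sums the list `values` into a variable `result`. Values: {values}"
    code = call_model(prompt)
    namespace = {"values": values}
    exec(code, namespace)  # run the generated program rather than the model's arithmetic
    return namespace["result"]

nums = [3, 14, 159, 26]
assert sum_via_generated_code(nums) == sum(nums)
print(sum_via_generated_code(nums))  # 202
```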
numinary1 about 2 years ago
Being old as dirt, my observation is that potential tech revolutions take ten years after the initial exuberance to be realized broadly, or three to five years to fizzle. Of those that fizzle, some were bad ideas and some were good ideas replaced by better ideas. FWIW
anonzzzies about 2 years ago
An AI winter will arrive if we can't get models to stop depending on 'just' more training data to get better. There is no more training data. We need models as good as or better than GPT-4 but trained on roughly what an 18-year-old, >100-IQ human would digest to get to that point, which is vastly less than what GPT-4 gets fed.

If advancing means ever larger and more expensive systems and ever more data, we will enter a cold winter soon.
pixl97 about 2 years ago
> Eden writes, "[Which] areas of the economy can deal with 99% correct solutions? My answer is: ones that don't create/capture most of the value."

And

> Take for example the sorting of randomly generated single-digit integer lists.

These seem like very confused statements to me.

For example, let's take banking. It's actually two (well, far more) different parts. You have calculating things like interest rates, and issues like 'sorting integers' as above; this is very well solved in simple software at extremely low energy cost. If you're having your AI model spend $20 trying to figure out whether 45827 is prime, you're doing it wrong. The other half of banking is figuring out where to invest your money for returns. If you're having your AI read all the information you can feed it for consumer sentiment and passing that to other models, you're probably much closer to doing it right.

And guess what: ask SVB about 99% correct solutions that do/don't capture value. Solutions that have correct answers are quickly commoditized and have little value in themselves.

Really the most important statement is the last one; mostly the article is telling us the reasons why AI could fail, not that those reasons are very likely.

> I still think an AI winter looks really unlikely. At this point I would put only 5% on an AI winter happening by 2030, where AI winter is operationalised as a drawdown in annual global AI investment of ≥50%. This is unfortunate if you think, as I do, that we as a species are completely unprepared for TAI.
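For contrast, the "solved in simple software at extremely low energy cost" half of that comparison looks like this; a throwaway sketch with nothing model-specific assumed.

```python
# Exact answers to "is 45827 prime?" and small-list sorting cost microseconds,
# not dollars of model inference.

import math

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime(45827))                # exact, essentially free
print(sorted([7, 1, 9, 3, 3, 0, 5]))  # same story for single-digit integer lists
```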
kromem about 2 years ago
I increasingly think we're underestimating what's ahead.

Two years ago there was an opinion piece from NIST on the impact optoelectronics would bring specifically to neural networks and AGI. Watching as nearly every major research institution has collectively raised probably half a billion for AI photonics plays, through their VC partnerships or internal resource allocations, on the promise of order-of-magnitude improvements much closer at hand than something like quantum computing, I think we really haven't seen anything yet.

We're probably just at the very beginning of this curve, not approaching its diminishing returns.

And that's both very exciting and terrifying.

After decades in tech (including having published a prediction over a decade ago that the mid-2020s would see roles shift away from programming toward the emergence of specialized roles for knowing how to ask AI to perform work in natural language), I think this is the sort of change so large and so breaking from precedent that we really don't know how to forecast it.
blintz about 2 years ago
I'm not an expert, but I see the main threat to continued improvement as running out of high-quality data. LLMs are a cool thing you can produce only because there is a bunch of freely available high-quality text representing (essentially) the sum of human knowledge. What if GPT-4 (or 5, or 6) is the product of that, and then further improvements are difficult? This seems like the most likely way for improvement to slow or halt; the article cites synthetic data as a fix, but I'm skeptical that that could really work.
endisneigh about 2 years ago
I think the AI winter will come, but not for the reasons the author asserts (quality, reliability, etc.).

I think the current crop of AI is good enough. It will happen because people will actually grow resentful of the things that AI can do.

I anticipate a small yet growing segment of populations worldwide will start minimizing internet usage. This will result in fewer opportunities for AI to be used, and thus a lack of investment and a subsequent winter.
nico about 2 years ago
At the same time this AI revolution is happening, there is also a psychedelic revolution happening.

When this happened in the 60s-70s, the psychedelic revolution was crushed by the government. And we entered an AI winter.

I'm not implying causation, just pointing out a curious correlation between the two things.

I wonder what will happen now.
macrolime about 2 years ago
OpenAI's Whisper was probably made to produce transcripts of videos that will be used to train future AI models.

The text in itself won't be that interesting. The magic happens once you essentially train three different token predictors: one that predicts image tokens (16x16-pixel patches), which you then combine to predict video frames; one that predicts audio tokens; and one that predicts text tokens. Then you use cross-attention between these predictors. To train this model, you first pre-train the text predictor; after that's done, you continue training the text predictor on the transcribed videos while combining it with the video predictor and the audio predictor via cross-attention.

Such a model will understand physics and actions in the real world much better than GPT-4, and combined with all the knowledge from all the text on the internet, it should turn out to be something quite interesting.

I think there probably doesn't exist enough compute yet to train such a model on something like all of YouTube, but I wouldn't be surprised if GPT-5 is a first step in this direction.
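For readers wondering what "cross-attention between these predictors" might look like mechanically, here is a toy PyTorch sketch; the dimensions, token counts, and the choice to simply add the attended context back into the text stream are illustrative assumptions, not a description of any real system.

```python
# Toy sketch: a text stream attending over image-patch and audio tokens.

import torch
import torch.nn as nn

d_model = 256
text  = torch.randn(1, 32, d_model)   # 32 text tokens
image = torch.randn(1, 196, d_model)  # 14x14 = 196 image-patch tokens (e.g. 16x16-pixel patches)
audio = torch.randn(1, 50, d_model)   # 50 audio tokens

cross_img   = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
cross_audio = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)

# Text queries attend over the other modalities; the outputs are folded back
# into the text stream before a next-token prediction head would run.
img_ctx, _   = cross_img(query=text, key=image, value=image)
audio_ctx, _ = cross_audio(query=text, key=audio, value=audio)
fused = text + img_ctx + audio_ctx
print(fused.shape)  # torch.Size([1, 32, 256])
```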
thomastjeffery about 2 years ago
Speculation on the future of inference model tech...

We can do better. All we have to do is be *constructive* when we write narratives about LLMs.

Unfortunately, that's hard work: we basically have to start over. Why? Because every narrative we have today *personifies* LLMs.

It's always a top-down perspective about the results, and never about how the actual thing works from the ground up.

The reality is that *we never left* the AI winter. Inference models don't make decisions or symbolically define subjects.

LLMs infer patterns of tokens, and exhibit those patterns. They don't invent new patterns or new tokens. They rely entirely on the human act of writing: that is the only behavior they exhibit, and that behavior does not belong to the LLM itself.

We should definitely stop calling them AI. That may be the category of *pursuit*, but it misleadingly implies itself to be a *descriptive quality*.

I propose that we even stop calling them LLMs: they model tokens, which are intentionally *misaligned* with words. After tokenization, there is no symbolic categorization, no grammar definitions, just whatever patterns happen to be present between tokens.

That means a pattern that is *not* language will still show up in the model. Such a pattern may be considered by humans *after the fact* to be exciting, like the famous Othello game-board study, or limiting, like prompts that circumvent guardrails. The LLM can't draw any distinction between grammar-aligned, desirable, or undesirable patterns; yet that is exactly what most people expect to be possible after reading about a personified Artificially Intelligent Large Language Model.

I would rather call them "Text Inference Models". Those are the clearest descriptors of what *the thing itself* is and does.
crop_rotation about 2 years ago
I think scaling limits and profitability are the only things that can stop the march of AI. The utility is already there, and even the current GPT-4 utility is revolutionary.
mpsprd about 2 years ago
> "[Which] areas of the economy can deal with 99% correct solutions? My answer is: ones that don't create/capture most of the value."

The entertainment industry disagrees with this.

These systems are transformative for any creative work, and in first-world countries this is no small part of the economy.
stuckinhell about 2 years ago
I strongly disagree. ChatGPT is bleeding into everything. Midjourney is too damn good; see the examples below.

The Avengers, if they had 90s actors, is going viral: https://cosmicbook.news/avengers-90s-actors-ai-art

Also the Avengers as dark sci-fi: https://www.tiktok.com/@aimational/video/7186426442413215022

AI art and generative text are just astounding, and they're only getting better.
gumby about 2 years ago
The "winter" analogy (I remember the AAAI where Marvin made that comment) was to the so-called "nuclear winter" that was widely discussed at the time: a devastating pullback. It did indeed come to pass. I don't see that happening any time soon.

I think the rather breathless posts (which I also remember from the 80s, and which apparently used to be common in the 60s when computers first appeared) will die down as the limits of LLMs become more widely understood and they become ubiquitous where they make sense.
_nalply about 2 years ago
I think this time it is different.

Of course there's a bubble, but after that bubble pops, people will realize that the current models are useful enough, even with their quirks. People all have quirks and mostly they get along, so they will accept quirks from machines. Anthropomorphizing machines will help people accept the models. I know this is dangerous, but I have this mental image: a doll in the form of a baby seal with soft white fur, with a Whisper model, helping lonely handicapped people (note that I myself am a person with a disability, so don't cancel me, please). Or someone who technically is not very adept phoning for support, and a Whisper model helping along and having a lot of time to chit-chat.

And technically, I think something will happen in about five years. A new floating-point number format, the posit (https://spectrum.ieee.org/floating-point-numbers-posits-processor), is too useful to be ignored. It will take years because we need new hardware. Why do I think posits are very useful? Posits could encode weights using very little storage (down to 6 bits per weight). Models would perhaps need to be retrained with these weights because 6 bits are not precise; after all, you have only 64 different values. And I think the new hardware supporting posits will also have more memory for the weights. Cell phones will be able to run large and complex models efficiently. In other words, Moore's law is not dead yet; it just shifted to a different, more efficient computation implementation.

When this happens, immediate feedback could become feasible. With immediate feedback we take another step toward AGI. I could imagine people being delivered a partially trained model and then having a personal companion helping them through life.
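To make the "6 bits per weight, only 64 distinct values" point concrete, here is a rough sketch of plain uniform quantization; it is not actual posit arithmetic (posits are a different encoding), just an illustration of how coarse 64 levels are and why retraining might be needed.

```python
# Uniform 6-bit quantization of a weight vector: at most 2**6 = 64 distinct values.

import numpy as np

def quantize_6bit(weights: np.ndarray) -> np.ndarray:
    lo, hi = weights.min(), weights.max()
    levels = 2 ** 6                    # 64 representable values
    step = (hi - lo) / (levels - 1)
    codes = np.round((weights - lo) / step)
    return lo + codes * step           # dequantized back to float for comparison

w = np.random.randn(1000).astype(np.float32)
wq = quantize_6bit(w)
print("unique values:", len(np.unique(wq)))      # <= 64
print("mean abs error:", np.abs(w - wq).mean())  # the precision given up
```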
greatwave1 about 2 years ago
Can anyone give some color on the extent to which advancements in AI are limited by the availability of compute versus the availability of data?

I was under the impression that the size and quality of the training dataset had a much bigger impact on performance than the sophistication of the model, but I could be mistaken.
skybrian about 2 years ago
Sometimes unreliability can be worked around with human supervision. You wouldn't normally want to write a script that just chooses the first Google result, but that doesn't mean search engines aren't useful. The goal when improving a search engine is to put some good choices on the first page, not perfection, which isn't even well-defined.

The AI generators work similarly. They're like slot machines, literally built on random number generators. If you don't like the result, try again. When you get something you like, you keep it. There are diminishing returns to re-running the same query once you've got a good result, because most results are likely to be worse than what you have.

Randomness can make games more fun at first. But I wonder how much of a grind it will be once the novelty wears off?
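The re-rolling workflow described above can be sketched in a few lines; generate and score below are hypothetical stand-ins for a generative model and a preference judgment, with random numbers keeping the sketch self-contained.

```python
# Best-of-N re-rolling: sample repeatedly, keep the result you like best.

import random

def generate() -> float:
    return random.gauss(0.0, 1.0)  # stand-in for one sample from a model

def score(result: float) -> float:
    return result                  # stand-in for "how much do I like this?"

best = max((generate() for _ in range(10)), key=score)
print(best)

# Diminishing returns: for Gaussian draws the expected maximum of N samples
# grows only about as fast as sqrt(2 * ln N), so going from 10 to 100 tries
# buys far less than going from 1 to 10 did.
```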
nojvek about 2 years ago
There are a few things at play here.

LLMs: OpenAI, Google Brain, Meta FAIR, HuggingFace, and others are now routinely training models on the entire corpus of the internet in a few months. The models are getting larger and more efficient.

Diffusion models: Midjourney, Stable Diffusion, DALL-E, and their ControlNet cousins, trained on terabytes of images, almost the entire corpus of the internet.

Same with voice and other multimodal models.

The transformer algorithm is magical, but we're just getting started.

There are now multiple competing players who can drop millions of dollars on compute and have access to internet-sized datasets.

The compute, the datasets, the algorithms, the human reinforcement loops: all are getting better week over week. Millions of users are interacting with these systems daily. A large subset is even paying for it.

That is the gold rush.
muyuu about 2 years ago
idk if there will be that much of a winter, but i would welcome it

in the late 90s and early 2000s, neural networks had a significant stigma for being dead ends and were where unpromising grads were sent - people didn't want to go there because it was a self-fulfilling prophecy: if you went to research ANNs then you were a loser, and you were seen as such, and in academia that is all you need to be one

but, in real life, they worked

sure, not for everything, because of hardware limitations among other things, but these things worked and they were a useful arrow in your quiver while everybody else just did whatever was fashionable at the time (simulated annealing, SVMs, conditional random fields, you name it)

hype or no hype, if you know what you are doing and the stuff you do works, you will be okay
ChatGTP about 2 years ago
An AI winter will likely come because we've not addressed climate change... instead of blowing billions / trillions on our survival, we're yet again blowing it on moonshots. We collectively already have the brains to solve the problems, should we *want to*; we don't because that's not where "the money" is.

Silicon Valley tech is already promising that AI will be the likely solution to climate change... If there is any more disruption to the economy, it's just going to yet again slow down mitigation steps for climate change, thus having negative effects on the amount of capital available for these projects.

Printing money works, until it doesn't.
beepbooptheory about 2 years ago
All this stuff can't transform anything if you can't afford to keep the computer on. Which is really, to me, the bigger/most convincing point in the thread this article links at the top.

If there *isn't* a winter, will ChatGPT et al. be able to solve the energy crises they might be implicated in? Is there something in their magic text completion that can stop global warming? Coming famines?

Is perhaps the fixation on these LLMs right now, however smart and full of Reason they are, not paying the fullest attention to the existential threats of our meat world, and how they might interfere with whatever speculative dystopia/utopia we can imagine at the moment?
totoglazer about 2 years ago
Another big concern will be regulatory. It seems unlikely that a couple billion people whose livelihoods are significantly impacted will just chill as it happens.

I think it's unlikely, but no less likely than the compute issues mentioned.
mikewarot about 2 years ago
It is entirely possible that Moore's Law gets assassinated by supply-chain destruction as deglobalization continues.

There are too many single-source suppliers in the chain up to EUV lithography. We may in fact be at peak IC.
brucethemoose2 about 2 years ago
I think the hardware/cost factor is also a business one, e.g. how dominant Nvidia stays in the space.

If they effectively shut out other hardware companies, that is going to slow scaling and price/perf reduction.
chess_buster about 2 years ago
Write a counterpoint to the article posted. Your goal is to refute all claims by giving correct facts with references. Cite your sources. Make it 3 paragraphs. As a poem. In Klingon.
HarHarVeryFunny about 2 years ago
Let's not forget that GPT-4 was finished over 6 months ago, with OpenAI now presumably well into 4.5 or 5, and Altman appearing confident about what's to come...

In the meantime we've got LangChain showing what's possible when you give systems like this a chance to think more than one step ahead...

I don't see an AI winter coming anytime soon. This seems more like an industry-changing iPhone or AlexNet moment, or maybe something more. ChatGPT may be the ENIAC of the AI age we are entering.
Hizonner about 2 years ago
Well, I'm HOPING for that, but not RELYING on it...
collaborative about 2 years ago
> cheap text data has been abundant

The winter before the AI winter will consist of all the cheap data disappearing. What fun will it be to write a blog post so that it can be scraped by a bot and regurgitated without attribution? Ditto for code.

Or, how will info sites survive without ad revenue? Last I checked, bots don't consume ads.

When the internet winter comes, all remaining sites will be behind login screens and a strict ToS popup.
stephc_int13 about 2 years ago
The current expectations around AI are extremely high, and frankly quite a few of them are bordering on speculative territory.

That said, I don't think we're going to see a new AI winter anytime soon; what we're seeing is already useful and potentially transformative, given a few iterative improvements and some infrastructure.
javaunsafe2019 about 2 years ago
I don't even understand why we call models that predict text output to a question AI.

For sure we will get a lot of stuff automated with it in the near future, but this is far away from anything really intelligent.

It just doesn't really understand or feel things. It's dead, because it just outputs data based on its model.

Intelligence contains a will and chaos.
superb-owl about 2 years ago
We're only just seeing expectations for the tech inflate now. VCs will probably pump money into LLM-related companies for at least a couple of years, and it'll be a couple of years after that before things really start to decline.

It's late spring right now, a strange time to start forecasting winter.
happycube about 2 years ago
There'll be an AI winter from a VC standpoint... but even in the '90s there was some (GOF)AI work still going on after the last one.

There are too many *actually useful* things coming out of this for a true winter. And for there *not* to be a bubble.
karmasimida about 2 years ago
There will never be another winter moving forward.

ChatGPT, as is, is already transformative. It CAN do human-level reasoning really well.

The only winter I can see is one where the AI gets so good that there is little incentive to improve upon it.
zitterbewegung about 2 years ago
I think many technologies go through spring, summer, and eventually winter. The last cycle focused on good old-fashioned AI. The next one was big data with ML, and this one is large language models.
ahofmann about 2 years ago
I think the assumption that companies are willing to spend 10 billion dollars on AI training is unrealistic. Even the biggest companies would find such an investment to be a financial burden.
boringuser1 about 2 years ago
These sorts of claims are baseless for one key reason: even if AI tech stopped progressing right now, it's already a game-changer once companies adopt it.
Havoc about 2 years ago
> reliability

Humans are unreliable AF and we employ them just fine. Better reliability would certainly be nice, but I don't think it is, strictly speaking, necessary.
christkv about 2 years ago
We are already using GPT-4 for a ton of BS documents we have to write for our planning permission and other semi-legal paperwork.

My lawyer has been doing pretty much every public filing for civil cases and licenses assisted by GPT. So much bureaucracy could probably be removed by just having GPT validate permissions and manage the correctness of submissions, leaving a human to rubber-stamp the final result, if that.
andsoitis about 2 years ago
If LLMs are so great at learning by themselves, why does OpenAI need to resort to the plug-in model? Is it because they can't actually do logic (hence the Wolfram Alpha plugin)? Or is it because that's the way they get access to more data? Or both?

https://openai.com/blog/chatgpt-plugins
arisAlexis about 2 years ago
What? This is an article where the author puts a 5% chance on it happening, and others are commenting on his opinion?
atleastoptimal about 2 years ago
More so that there's a 5% chance there won't be a human winter in the next 5 years.
virtual_nikola about 2 years ago
Put the other way: there is a gold rush coming.
macawfish about 2 years ago
People love talking about AI winter.
thelazydogsback about 2 years ago
> I put 5% on an AI winter happening by 2030

lol. 5%? That's really laying it on the line.
HervalFreire about 2 years ago
It's different this time, because this time AI is hugely more popular in the public and corporate spheres. The previous AI winters were more academic winters, with few people pushing the envelope.

I don't think compute is the issue; it's an issue with LLMs. Current LLMs are just a stepping stone toward true AGI. I think there's enough momentum right now that we can avoid a winter and find something better through sheer innovation.