
Ask HN: Why is it taken for granted that LLM models will keep improving?

49 points · by slalomskiing · over 1 year ago
Whenever I see discussion of stuff like ChatGPT it seems like there is this common assumption that it will get better every year.

And in 10-20 years it’ll be capable of some crazy stuff.

I might be ignorant of the field, but why do we assume this?

How do we know it won’t just plateau in performance at some point?

Or that, say, the compute requirements become impractically high?

23 comments

benlivengood · over 1 year ago
The scaling laws (the original Kaplan paper, Chinchilla, and OpenAI's very opaque scaling graphs for GPT-4) suggest indefinite improvement for the current style of transformers with additional pre-training data and parameters.

No one has hit a model/dataset size where the curves break down, and they're fairly smooth. Simple models that accurately predict performance usually work pretty well near existing performance, so I expect trillion- or 10-trillion-parameter models to be on the same curve.

What we haven't seen yet (that I'm aware of) is whether the specializations to existing models (LoRA, RLHF, different attention methods, etc.) follow similar scaling laws, since most of those efforts have been focused on achieving similar performance with smaller/sparser models rather than investing large amounts of money into huge experiments. It will be interesting to see what DeepMind's Gemini reveals.
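For readers who want to see what "scaling laws" means concretely, here is a minimal sketch of the Chinchilla-style parametric loss fit, using the approximate coefficients reported by Hoffmann et al.; the numbers and the 20-tokens-per-parameter heuristic are illustrative only, not a statement about any particular production model.

```python
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Parametric scaling-law fit L(N, D) = E + A / N**alpha + B / D**beta.

    Coefficients are the approximate values reported by Hoffmann et al. (2022);
    treat them as illustrative rather than authoritative.
    """
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Compare a 70B-parameter model with a hypothetical 700B one, both trained
# at roughly 20 tokens per parameter (the Chinchilla compute-optimal heuristic).
for n in (70e9, 700e9):
    print(f"{n:.0e} params -> predicted loss {chinchilla_loss(n, 20 * n):.3f}")
```

The point of fits like this is exactly the one made above: nothing in the fitted curve itself predicts where, or whether, it stops holding.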
imranq · over 1 year ago
LLMs are comprised of just three elements: data, compute, and algorithms. All three are just scratching the surface of what is possible.

Data: What has been scraped off the internet is just <0.001% of human knowledge, as most platforms cannot be scraped so easily, or are in formats that are not text, like video, audio, or just plain old undigitized pieces of paper. Finally, there are probably techniques to increase data through synthetic means, which is purportedly OpenAI's secret sauce behind GPT-4's quality.

Compute: While 3nm processes are approaching an atomic limit (0.21nm for Si), there is still room to explore more densely packed transistors or other materials like gallium nitride, or optical computing. Not only that, but there is a lot of room in hardware architecture to allow more parallelism and 3-D stacked transistors.

Algorithms: The transformer and other attention mechanisms have several sub-optimal components, like how arbitrary the Transformer's design decisions are, and the quadratic time complexity of attention. There also seems to be a large space of LLM augmentations, like RLHF for instruction following, and improvements in factuality and other mechanisms.

And these ideas are just from my own limited experience, so I think it's fair to say that LLMs have plenty of room to improve.
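To make the "quadratic time complexity of attention" point concrete, here is a back-of-envelope sketch of how the raw attention score matrix grows with context length; the head count and 16-bit values are assumptions, and real kernels such as FlashAttention avoid ever materializing this matrix in full.

```python
def attention_score_bytes(seq_len: int, n_heads: int = 32, bytes_per_val: int = 2) -> int:
    """Memory needed to materialize one layer's raw attention score matrices:
    n_heads * seq_len**2 values at 16 bits each (assumed). Illustrative only;
    fused attention kernels avoid storing this in full."""
    return n_heads * seq_len**2 * bytes_per_val

for ctx in (2_048, 32_768, 131_072):
    print(f"{ctx:>7} tokens -> {attention_score_bytes(ctx) / 1e9:8.1f} GB per layer")
```

The quadratic blow-up with sequence length is why sub-quadratic attention variants are such an active research direction.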
fragmede · over 1 year ago
Because a lot of smart people are spending a lot of time, money, and effort on this. It's as simple as that. We could go into *all* sorts of details, like how increases in GPU capabilities will improve training capabilities, both in size and speed, or how GPU (or TPU) capabilities will improve, or how better techniques will make training on the same data set result in better models, or where other improvements will make better use of existing models or make them better, or where we're seeing additions to training data sets and how that will improve models using existing techniques. But it really all boils down to a lot of smart people, some with a lot of money, who are personally invested (with time and money) in making them better.

That doesn't mean there isn't possibly a plateau somewhere, but it's somewhere way off in the distance.
ironlake · over 1 year ago
LLMs might hit a wall. Any technology could hit a wall. ChatGPT could be the next Segway. But, like the Segway, LLMs are useful now. I think the impact of "stuff like ChatGPT" on software engineering will equal the impact of the compiler, in that eventually no one will consider writing software without "stuff like ChatGPT" in the tool chain, in the same way that no one works without a compiler now. LLMs are useful now, and they've only existed for a few years.

But that's just my opinion, and no one knows the future. If you read papers on arxiv.org, progress is being made. Papers are being written, low-hanging fruit consumed. So we're going to try, because PhDs are there for the taking on the academic side, and generational wealth is there for the taking on the business side.

E. F. Codd invented the relational database and won the Turing Award. Larry Ellison founded Oracle to sell relational databases, and that worked out well for him, too.

There's plenty of motivation to go around.
CamperBob2 · over 1 year ago
I don't know about the specifics of mikewarot's point below, but I think he's close to verbalizing a fairly important truth: there is no reason whatsoever to think that Von Neumann machines are the best way to implement neural networks. There are lots of reasons to think they aren't, starting with the VRAM bottleneck. The impressive results that have been achieved so far have almost certainly come from using the wrong tools. That's cause for optimism, IMHO.

Digital computer architecture evolved the way it did because there was no other practical way to get the job done besides enforcing a strict separation of powers between the ALU, memory, mass storage, and I/O. We are no longer held to those constraints, technically, but they still constitute a big comfort zone. Maybe someone tinkering with a bunch of FPGAs duct-taped together in their basement will be the first to break out of it in a meaningful way.
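One rough way to quantify the "VRAM bottleneck" mentioned above: during single-stream decoding, every generated token has to pull essentially all of the weights across the memory bus, so tokens per second is bounded by bandwidth divided by model size. The model size and bandwidth figures below are assumptions for illustration, not measurements of any specific system.

```python
def decode_tokens_per_sec(n_params: float, bytes_per_param: float, bandwidth_gb_s: float) -> float:
    """Upper bound on single-stream decoding speed when weight traffic dominates:
    each generated token requires streaming all weights through memory once."""
    model_bytes = n_params * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# Illustrative assumptions: a 70B-parameter model stored in 16-bit weights,
# running on hardware with ~2 TB/s of memory bandwidth.
print(f"~{decode_tokens_per_sec(70e9, 2, 2000):.0f} tokens/s upper bound")
```

Arithmetic throughput barely enters that bound, which is the sense in which the memory system, not the ALU, is the limiting piece of the current architecture.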
aijoe5pack · over 1 year ago
It appears as if improving data corpus quality and size and improving processing capacity are still driving performance gains. I have no idea of the functional relationship, and it's likely not a Moore's law kind of thing, although that would be an underlying driver of available capacity up to saturation.
naet · over 1 year ago
I don't think it's a universal assumption. Some people do think it will hit a wall (and maybe do so soon), others think it can keep improving easily by scaling up the compute or the training data.

Good LLMs like ChatGPT are a relatively new technology so I think it's hard to say either way. There might be big unrealized gains by just adding more compute, or adding/improving training data. There might be other gains in implementation, like some kind of self-improvement training, a better training algorithm, a different kind of neural net, etc. I think it's not unreasonable to believe there are unrealized improvements given the newness of the technology.

On the other hand, there might be limitations to the approach. We might never be able to solve for frequent hallucinations, and we might not find much more good training data as things get polluted by LLM output. Data could even end up being further restricted by new laws, meaning this is about the best version we will have and future versions will have worse input data. LLMs might not have as many "emergent" behaviors as we thought and may be more reliant on past training data than previously understood, meaning they struggle to synthesize new ideas (but do well at existing problems they've trained on). I think it's also not unreasonable to believe LLMs can't just improve infinitely to AGI without more significant developments.

Speculation is always just speculation, not a guarantee. We can *sometimes* extrapolate from what we've seen, but sometimes we haven't seen enough to know the long term trend.
karaterobot · over 1 year ago
Not an expert, but have wondered the same thing. From what I've read, it comes down to optimism and extrapolation from current trends. Both of these have problems of course, but what else can you do? My working hypothesis is that we'll reach a practical limit on the quality of what we can get from the current class of models, and to extend beyond that would require a new approach, rather than just more data and more horsepower. The new breakthrough would have to be as significant as the last, but would be more likely to happen in a short time span because there is so much more activity in AI research now than even 5 years ago. Again, I'm a dummy about this stuff, not claiming more than that.
jrm4 · over 1 year ago
Your skepticism is, I think, very well founded -- especially with such unclear definitions of "improvement."

I think I have a corollary type of idea: why are LLMs not perhaps like "Linux," something that never really needs to be REWRITTEN from scratch, merely added to or improved on? In other words, isn't it fair to think that LoRAs are the *really important* thing to pay attention to?

(And perhaps, like Google Fuchsia or whatever, new LLMs might just be mostly a waste of time from an innovator's POV?)
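For context on the LoRA idea this comment leans on: the pretrained weight matrix stays frozen and only a low-rank update is trained, which is why LoRAs can be layered onto an existing model rather than rewriting it. A minimal numpy sketch, with arbitrarily chosen shapes and rank:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 1024, 1024, 8          # rank << d, so the adapter is tiny vs. W

W = rng.normal(size=(d_in, d_out))         # frozen pretrained weight
A = rng.normal(size=(d_in, rank)) * 0.01   # trainable low-rank factor
B = np.zeros((rank, d_out))                # starts at zero, so W + A @ B == W initially

def lora_forward(x: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """y = x @ (W + scale * A @ B); during fine-tuning only A and B are updated."""
    return x @ W + scale * ((x @ A) @ B)

x = rng.normal(size=(2, d_in))
print(lora_forward(x).shape)                           # (2, 1024)
print(A.size + B.size, "adapter params vs.", W.size)   # 16384 vs. 1048576
```

The adapter here is roughly 1.5% of the size of the frozen matrix, which is what makes "add to the existing model instead of retraining it" attractive.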
xnx · over 1 year ago
The recent history of bigger LLMs suddenly being capable of new things is kind of miraculous. This blog post is a decent overview: https://blog.research.google/2022/11/characterizing-emergent-phenomena-in.html -- "In many cases, the performance of a large language model can be predicted by extrapolating the performance trend of smaller models."
ActorNightly · over 1 year ago
I dunno if LLMs will get better, but ML in general is a task of compression, and there is definitely a whole bunch of human knowledge and history that neural nets can compress.

It's not unfeasible that in the future you'll have a box at home that you can ask a fairly complicated question, like "how do I build a flying car", and it will have the ability to:

- tell you step-by-step instructions for what you need to order
- write and run code to simulate certain things
- analyze your work from video streams and provide feedback
- possibly even have a robotic arm with attachments that can do some work.
russellbeattie · over 1 year ago
Like others in this thread have said, we're just starting to explore the technology. I view it as akin to the path from early CPUs like the 6502, which did only the absolute minimum, to today's monsters with large memory caches, predictive logic, dedicated circuits, thousands of binary calculation shortcuts, and more all built in. Each small improvement adds up.

From a software perspective, I've wondered for a while if, as LLM usage matures, there will be an effort to optimize hotspots like what happened with VMs, or auto-indexing like in relational DBs. I'm sure there are common data paths which get more usage, which could somehow be prioritized, either through pre-processing or dynamically, helping speed up inference.

Also, GPT-4 seems to include multiple LLMs working in concert. There's bound to be way more fruit to be picked along that route as well. In short, there are tons of areas where improvements large and small can be made.

As always in computer science, the maxim "Make it work, make it work well, then make it work fast" applies here as well. We're collectively still at step one.
jackschultz · over 1 year ago
What's not mentioned here is test-time compute. The idea being that, sure, you can spend a ton of compute power on pre-training and fine-tuning, but generation is difficult. So instead of spending all the time and power there, how about spending some time and power having the model generate a bunch of possibilities, and then spending the rest having a model verify what's been generated for correctness? That's the "Let's Verify Step by Step" idea.

Great video on this: https://www.youtube.com/watch?v=ARf0WyFau0A

In threads on LLMs, this point doesn't get brought up as much as I'd expect, so I'm curious if I'm missing talks on this or maybe it's wrong. But I see this as the way forward: models generating tons of answers, other models being able to pick out the correct ones, and the combinations being beyond human ability, after which humans can do their own verification.

Edit: Think of it this way. Trying to create something isn't easy. If I were to write a short story, it'd be very difficult, even if I spent years reading what others have written to learn their patterns. If I then tried to write and publish a single one myself, no chance it'd be any good.

But _judging_ short stories is much easier to do. So if I said screw it, I'll read a couple of stories to get the initial framework, then write 100 stories in the same amount of time I'd have spent reading and learning more about short stories, I can then go through the 100 and pick out the one I think is the best and publish that.

That's where I see LLMs going, and what the video and the papers mentioned in the video say.
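A minimal sketch of the generate-then-verify (best-of-N) pattern described above; `generate` and `verifier_score` are hypothetical stand-ins for a generator model and a verifier/reward model, not real APIs:

```python
import random

def generate(prompt: str, n_samples: int) -> list[str]:
    """Placeholder for sampling n candidate answers from a generator model."""
    return [f"candidate {i} for: {prompt}" for i in range(n_samples)]

def verifier_score(prompt: str, answer: str) -> float:
    """Placeholder for a verifier/reward model rating a candidate's correctness."""
    return random.random()

def best_of_n(prompt: str, n_samples: int = 100) -> str:
    """Spend extra compute at inference time: sample many answers and keep the
    one the verifier rates highest (the 'write 100 stories, publish the best' idea)."""
    candidates = generate(prompt, n_samples)
    return max(candidates, key=lambda ans: verifier_score(prompt, ans))

print(best_of_n("Show that the sum of two even numbers is even."))
```

The bet is exactly the one in the comment: judging candidates is cheaper and more reliable than producing a single perfect answer in one shot.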
agentultra · over 1 year ago
I'm curious whether limits of, say, thermodynamics won't play a part here. Or maybe also ecological limits: how long will we allow corporations to use essential, scarce resources to train models without paying their fair share? [0]

I'm not an expert here either, but I wonder if there will be the same "leap" we saw from ChatGPT 3 to 4, or if there's a diminishing curve to performance, i.e. adding another trillion parameters has less of a noticeable effect than the first few hundred billion.

[0] https://fortune.com/2023/09/09/ai-chatgpt-usage-fuels-spike-in-microsoft-water-consumption/ -- I am fairly certain they paid for that water, but it was not a commensurate price given the circumstances, and if they had had to ask a reasonable environmental stewardship organization first, the answer would have been *no*.
haltist · over 1 year ago
The main assumption of techno-optimism is that a large enough computer can do anything people can do, and do it better. The goal of techno-optimism is to create a mechanical god that will rule the planet, and scaling LLMs is a stepping stone to that goal.

I, of course, already know how to do all this for a mere $80B.
xboxnolifes · over 1 year ago
Happens every time: LLMs, crypto value, stocks, CPU performance, GPU performance, etc.

Anything that has seen continual growth will be assumed to have further continual growth at a similar rate.

Or, how I mentally model it even if it's a bit incorrect: people see sigmoidal growth as exponential.
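The "sigmoidal growth mistaken for exponential" point can be shown numerically: early on, a logistic curve tracks an exponential almost exactly, and the two only diverge near the inflection point. A small illustration with arbitrary parameters:

```python
import math

def exponential(t: float, rate: float = 0.5) -> float:
    return math.exp(rate * t)

def logistic(t: float, ceiling: float = 1000.0, rate: float = 0.5) -> float:
    """Sigmoidal growth toward a fixed ceiling, starting at 1 like the exponential."""
    return ceiling / (1 + (ceiling - 1) * math.exp(-rate * t))

# The two curves are nearly identical for small t and wildly different later.
for t in (2, 6, 10, 14, 18):
    print(f"t={t:>2}  exponential={exponential(t):8.1f}  logistic={logistic(t):8.1f}")
```

Which is why, from inside the early part of the curve, the two hypotheses are hard to tell apart.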
h2odragon · over 1 year ago
> it won’t just plateau in performance at some point?

I suspect that we've already seen the shape of the curve: a 1B-parameter model can index a book; a 4B model can converse, but a 14B model can be a little more eloquent. Beyond that, no real gains will be seen.

The "technology advancement" phase has mostly already happened, but the greater understanding of theory that would discourage foolish investments hasn't propagated yet. So there's probably at least another full year of hype cycle before the next buzzword is brought out to start hoovering up excess investment funds.
ortusdux · over 1 year ago
I enjoyed Tom Scott's YouTube video monologue about this. To summarize, he postulates that most major innovations follow a sigmoid growth curve, wherein they ramp up, explode, and then level off. The question then becomes: where are we on this curve? He concludes that we will probably only know in hindsight.

https://www.youtube.com/watch?v=jPhJbKBuNnA
afjeafaj848 · over 1 year ago
I think part of it is that some people say a person is 20 petaflops of compute.

So if we have that much compute power already, why can't we just configure it in the right way to match a human brain?

I'm not sure I totally buy that logic though, since I would think the architecture/efficiency of a brain is way different from a computer.
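Taking the "20 petaflops" figure at face value, the arithmetic this comment gestures at looks like the sketch below; both the brain estimate and the per-accelerator throughput are rough, commonly quoted numbers used purely for illustration.

```python
# Back-of-envelope only; both figures are rough estimates, not measurements.
brain_flops_estimate = 20e15   # "a person is 20 petaflops" -- the comment's figure
accelerator_flops = 1e15       # ~1 petaFLOP/s of dense 16-bit throughput, roughly a
                               # current datacenter GPU (assumption for illustration)

print(f"~{brain_flops_estimate / accelerator_flops:.0f} such accelerators "
      "match the raw-FLOPs estimate")
```

As the reply notes, matching raw FLOPs says nothing about matching architecture or efficiency, which is where the comparison breaks down.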
jrpt · over 1 year ago
They are going to add more abilities onto the system, for example toolformers or goal planning (like the recent Q* stuff at OpenAI people are talking about). This will make the overall product very powerful.

But even if you’re looking just at the LLM, it seems like there’s a lot of ways it can be improved still.
serf · over 1 year ago
Because no class of software was as good as it'll ever be upon launch -- in other words, there is a normal expectation of improvement after introduction in the software world.
Syonyk · over 1 year ago
> *How do we know it won’t just plateau in performance at some point?*

We don't.

But that's also the sort of thing you can't say when seeking huge amounts of funding for your LLM company.
aristofun · over 1 year ago
It will plateau at best. But the crowd is never smart, yet it can scream.