
Ask HN: How do new models differ from a small update of the same model?

2 points | by vednig | 2 months ago

1 comment

dontoni | 2 months ago
GPT-4 not only has orders of magnitude more parameters than GPT-3.5; it also has a different architecture, using a Mixture of Experts approach rather than a raw GPT.

What is interesting to me is that they haven't developed this idea further (or at least haven't publicly disclosed doing so). What if you had 37 "experts", but each one notably small? Is it a requirement that each expert be a fully functional LLM on its own? Couldn't they interconnect the way the brain does with its lobes?