Ask HN: Why aren't there open-source embedding models with context length > 512?

3 points by rawsh over 1 year ago

2 comments

james-revisoai over 1 year ago
There are some, as mentioned, but additionally, for many models you can split the content into several vectors (say, one per sentence or paragraph, depending on how the model was trained) and pool those vectors together to get a representation that covers the content as a whole.

Since models trained on single sentences (like Mini-V2, the SBERT default) work worse at length, pooling sentence-level representations is typically more useful.

For deliberately longer representations, generative-model embeddings or document embeddings are sometimes the right answer.
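A minimal sketch of the split-and-pool approach this comment describes, assuming the sentence-transformers library. The comment doesn't prescribe a pooling method; mean pooling, the naive period-based sentence splitter, and the all-MiniLM-L6-v2 model name (a guess at what "Mini-V2" refers to) are all illustrative choices here:

```python
# Sketch: embed a long document with a short-context model by
# encoding sentence-sized chunks and pooling the vectors.
import numpy as np
from sentence_transformers import SentenceTransformer

# Assumption: "Mini-V2, the SBERT default" refers to all-MiniLM-L6-v2.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def embed_long_text(text: str) -> np.ndarray:
    # Naive sentence split for illustration; a real pipeline would
    # use a proper segmenter (e.g. nltk or spaCy).
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    # Each sentence fits comfortably inside the model's 512-token window.
    vecs = model.encode(sentences, normalize_embeddings=True)
    # Mean-pool the per-sentence vectors into one document vector.
    pooled = vecs.mean(axis=0)
    # Re-normalize so cosine similarity behaves as expected downstream.
    return pooled / np.linalg.norm(pooled)
```

Mean pooling is just one common choice; max pooling or weighting sentences by length would fit the same scheme.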
caprock over 1 year ago
There are some: https://huggingface.co/spaces/mteb/leaderboard