There are, as mentioned, but additionally, for many models you can split the content into several vectors (say, one per sentence or paragraph, depending on how the model was trained) and pool those vectors into a single representation that covers the content as a whole.

Since models trained on single sentences (like Mini-V2, the SBERT default) degrade on longer inputs, pooling sentence-level representations is typically the more useful approach.

For deliberately long representations, generative-model embeddings or dedicated document embeddings are sometimes the better answer.
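A minimal sketch of the pooling idea, assuming the sentence-transformers library and the all-MiniLM-L6-v2 model (the SBERT default referred to above); the sentence splitting is stubbed out as a hand-written list:

    # Embed each sentence separately, then mean-pool into one
    # document-level vector.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # SBERT's default model

    sentences = [
        "First sentence of the document.",
        "Second sentence, making a different point.",
        "A closing sentence.",
    ]

    # One 384-dim vector per sentence for this model.
    vectors = model.encode(sentences, normalize_embeddings=True)

    # Mean-pool, then re-normalize so the result can be compared
    # with cosine similarity like any single-sentence embedding.
    doc_vector = vectors.mean(axis=0)
    doc_vector /= np.linalg.norm(doc_vector)

Mean pooling is the simplest choice; weighted pooling (e.g. by sentence length or TF-IDF weight) is a common variant when some sentences matter more than others.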
There are some:

https://huggingface.co/spaces/mteb/leaderboard