Alternatives to cosine similarity

61 points by tomhazledine 7 months ago

10 comments

seanhunter 7 months ago
It pains me that the author (and many others) refer to cosine similarity as a distance metric. The cosine of an angle is a quick measure of how close the directions of two vectors are, but it is not a distance metric (which would measure the distance between their endpoints).

Thought experiment: If I go outside and point at the moon, the cosine of the angle between the position vector of my finger and the position vector of the moon relative to me is 1.[1] However my finger is very much not on the moon. The distance between the two vectors is very large even though the angle between them is zero.

That's why it's cosine *similarity*, not cosine *distance*. If your embedding methodology is trained such that the angle between vectors is a good enough proxy for distance, then it will be.[2] But it's not a distance measure.

[1] Because the angle is 0 and cos 0 = 1.

[2] A self-fulfilling prophecy, but this actually is in the power of the people making the embedding to make true, presumably because the training will disperse the embeddings such that their magnitudes are roughly equal, so you'll have a kind of high-dimensional sphere of embeddings with most of the actual vectors ending on the outside of the sphere, not too many points far in the interior, and not too many points spiking way out the sides. It seems OpenAI also normalize all the vectors so they are all unit vectors, so the magnitude doesn't matter. But it's still not a distance measure.
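To put numbers on that thought experiment, here is a small standalone Python sketch (the vectors and distances are illustrative approximations, not anything from the article): two vectors pointing the same way have cosine similarity 1 even when their endpoints are enormously far apart.

    import math

    def cosine_similarity(a, b):
        # Cosine of the angle between two vectors.
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    def euclidean_distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # "My fingertip" and "the moon", both along the same direction from me.
    fingertip = [0.0, 0.0, 0.7]           # roughly 0.7 m away
    moon = [0.0, 0.0, 384_400_000.0]      # roughly 384,400 km away

    print(cosine_similarity(fingertip, moon))   # 1.0, identical direction
    print(euclidean_distance(fingertip, moon))  # about 3.8e8, nowhere near each other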
ntonozzi 7 months ago
One important factor this article neglects to mention is that modern text embedding models are trained to maximize the distance between dissimilar texts under a specific metric. This means that the embedding vector is not just latent weights plucked from the last layer of a model; it is specifically trained to be used with a particular distance function, which is cosine distance for all the models I'm familiar with.

You can learn more about how modern embedding models are trained from papers like Towards General Text Embeddings with Multi-stage Contrastive Learning (https://arxiv.org/abs/2308.03281).
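For a rough sense of what "trained to be used with a particular distance function" means, here is a toy InfoNCE-style contrastive loss built directly on cosine similarity. It is a sketch of the general idea only, not the loss from the linked paper; the vectors and the temperature value are made up.

    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    def contrastive_loss(anchor, positive, negatives, temperature=0.05):
        # InfoNCE-style objective: pull the anchor toward the positive and
        # push it away from the negatives, scoring pairs only by cosine similarity.
        pos = math.exp(cosine_similarity(anchor, positive) / temperature)
        neg = sum(math.exp(cosine_similarity(anchor, n) / temperature) for n in negatives)
        return -math.log(pos / (pos + neg))

    # Toy 3-d "embeddings", illustrative values only, not real model output.
    anchor    = [0.9, 0.1, 0.0]
    positive  = [0.8, 0.2, 0.1]
    negatives = [[0.0, 1.0, 0.0], [0.1, 0.0, 1.0]]

    print(contrastive_loss(anchor, positive, negatives))  # near 0: the positive is closest

A model trained to minimize a loss like this arranges its embeddings so that cosine similarity in particular reflects semantic similarity, which is why swapping in another metric at query time is not automatically safe.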
thesehands 7 months ago
This paper is always useful to remember when someone tells you to just use cosine similarity: https://arxiv.org/abs/2403.05440

Sure, it's useful, but make sure it's appropriate for your embeddings, and remember to run evals to check that things are 'similar' in the way that you want them to appear similar.

E.g. is red similar to green just because they are both colors?
hansvm 7 months ago
The biggest argument for using cosine similarity is that hardware, software, and research have co-evolved to make it fast, robust, and well-understood.

As one simple example of that, most modern compilers can recognize false data dependency sharing and add some extra accumulators to the generated assembly for anything that looks like an inner product. For even slightly more complicated patterns though, that optimization is unlikely to have been implemented at a compiler level, so you'll have to do it yourself.

The author benchmarked, among other things, Chebyshev distance. Here are two example (Zig) implementations, one with an extra accumulator to avoid false sharing, making it better than 3x faster on my machine.

    // 742ns per vec (1536-dim random uniform data)
    fn chebyshev_scalar_traditional_ignoreerrs(F: type, a: []const F, b: []const F) F {
        @setFloatMode(.optimized);
        var result: F = 0;
        for (a, b) |_a, _b| result = @max(result, @abs(_a - _b));
        return result;
    }

    // 226ns per vec (1536-dim random uniform data)
    fn chebyshev_scalar_sharing2_ignoreerrs(F: type, a: []const F, b: []const F) F {
        @setFloatMode(.optimized);
        var result0: F = 0;
        var result1: F = 0;
        var i: usize = 0;
        while (i + 1 < a.len) : (i += 2) {
            result0 = @max(result0, @abs(a[i] - b[i]));
            result1 = @max(result1, @abs(a[i + 1] - b[i + 1]));
        }
        if (a.len & 1 == 1) result0 = @max(result0, @abs(a[a.len - 1] - b[b.len - 1]));
        return @max(result0, result1);
    }

This is apples to oranges, but if their Chebyshev implementation were 3x faster after jitting it'd handily beat everything else.
scentoni 7 months ago
Cosine similarity of two normalized vectors is just the length of the projection of one vector on the other. That gives a very intuitive geometric meaning to cosine similarity. https://en.wikipedia.org/wiki/Vector_projection
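A quick way to see that equivalence is to compute both quantities. This minimal Python check (not from the comment or the Wikipedia page) compares cosine similarity with the scalar projection of one unit vector onto another; for normalized vectors the two numbers coincide.

    import math

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def norm(a):
        return math.sqrt(dot(a, a))

    def normalize(a):
        n = norm(a)
        return [x / n for x in a]

    def scalar_projection(a, b):
        # Length of the projection of a onto the direction of b.
        return dot(a, b) / norm(b)

    a = normalize([3.0, 1.0])
    b = normalize([1.0, 2.0])

    cos_sim = dot(a, b) / (norm(a) * norm(b))
    print(cos_sim, scalar_projection(a, b))  # identical for unit vectors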
kordlessagain 7 months ago
Good to see a mention of the Jaccard index for sets here. I did a lot of work on generating keyterm sets via knowledge transfer from the texts and then calculating the similarity of texts from both their Jaccard similarity (or Jaccard distance) and their cosine similarity. I used Ada and Instructor. In the tests I ran, the two measures frequently produced similar values and were useful for ranking; where one or the other gave similar values across the result set, they could be reweighted slightly if needed.

Code: https://github.com/FeatureBaseDB/DoctorGPT
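For readers unfamiliar with the Jaccard index, a minimal sketch follows; the keyterm sets are hypothetical and not taken from the linked repository.

    def jaccard_similarity(a: set, b: set) -> float:
        # Size of the intersection over size of the union; defined as 0 for two empty sets here.
        if not a and not b:
            return 0.0
        return len(a & b) / len(a | b)

    def jaccard_distance(a: set, b: set) -> float:
        return 1.0 - jaccard_similarity(a, b)

    # Hypothetical keyterm sets extracted from two documents.
    doc1 = {"embedding", "cosine", "similarity", "vector"}
    doc2 = {"embedding", "vector", "distance", "metric"}

    print(jaccard_similarity(doc1, doc2))  # 2 / 6 = 0.333...
    print(jaccard_distance(doc1, doc2))    # 0.666...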
kristjansson 7 months ago
> I was expecting a little variation in execution time for each comparison, but what I wasn't expecting was the bimodal nature of the results. For each function, there were two distinct groups of execution times. These peaks existed for all the function types at consistent relative spacings, which suggests that certain vector comparisons took longer than others no matter which distance function was being used

Surely pre- and post-JIT?
Lerc 7 months ago
Also recently:

*Surpassing Cosine Similarity for Multidimensional Comparisons: Dimension Insensitive Euclidean Metric (DIEM)* https://arxiv.org/abs/2407.08623v2
janalsncm 7 months ago
You technically could use other distance metrics, but embeddings are generated by models trained to maximize similarity under a specific metric. Usually that is cosine similarity.

A trivial example of how it matters is the vectors (0,1) and (0,2), which have cosine distance 0 but Euclidean distance 1.

Finally, it's notable that the author is testing via JavaScript. I am not sure you'll be able to take advantage of vectorized (SIMD/BLAS) optimizations there.
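A quick check of that trivial example (a standalone Python sketch, not the article's JavaScript):

    import math

    def cosine_distance(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return 1.0 - dot / (na * nb)

    def euclidean_distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    u, v = (0.0, 1.0), (0.0, 2.0)
    print(cosine_distance(u, v))     # 0.0: same direction
    print(euclidean_distance(u, v))  # 1.0: different magnitudes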
paulfharrison 7 months ago
The article mentions that cosine similarity and Euclidean distance give equivalent rankings. The derivation is very simple. For vectors A and B, the squared Euclidean distance is:

(A - B)·(A - B) = A·A - 2A·B + B·B

A and B only interact through a dot product, just like in cosine similarity. If A and B are normalized, A·A = B·B = 1, so the squared distance reduces to 2 - 2A·B, a decreasing function of the dot product, and the two measures rank pairs identically.

For Pearson correlation, we would just need to center and scale A and B as a pre-processing step.
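A numerical spot check of that identity (a standalone sketch, not the article's code): for unit vectors the squared Euclidean distance equals 2 - 2A·B.

    import math
    import random

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def normalize(a):
        n = math.sqrt(dot(a, a))
        return [x / n for x in a]

    def sq_euclidean(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    random.seed(0)
    a = normalize([random.gauss(0, 1) for _ in range(8)])
    b = normalize([random.gauss(0, 1) for _ in range(8)])

    cos_sim = dot(a, b)            # A·B equals cosine similarity for unit vectors
    print(sq_euclidean(a, b))      # A·A - 2A·B + B·B
    print(2 - 2 * cos_sim)         # identical up to floating-point rounding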