Vectors are over, hashes are the future of AI

66 points by jsilvers over 3 years ago

10 comments

sdenton4 over 3 years ago
hm. I'd like to believe, but the arguments here seem a bit obtuse.

No one measures vector distance using the Hamming distance on binary representations. That's silly. We use L1 or L2, usually, and the binary encoding of the numbers is irrelevant.

It sounds like the LSH is maaaaybe equivalent to vector quantization. In which case this would be a form of regularization, which sometimes works well, and sometimes meh.
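
A quick NumPy sketch of the point, with values invented purely for illustration: L2 on the floats behaves sensibly, while Hamming distance on the raw bit patterns of those same floats does not.

    import numpy as np

    a = np.array([0.999, -0.5], dtype=np.float32)
    b = np.array([1.0,   -0.5], dtype=np.float32)   # nearly identical to a

    # The distance people actually use on embeddings:
    l2 = np.linalg.norm(a - b)                       # tiny, as expected

    # Hamming distance on the raw IEEE-754 bit patterns of the same floats:
    bits_a = np.unpackbits(a.view(np.uint8))
    bits_b = np.unpackbits(b.view(np.uint8))
    hamming = int(np.sum(bits_a != bits_b))          # large: 0.999 and 1.0 share few mantissa bits

    print(l2, hamming)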

foxes over 3 years ago
I like to speculate for reasons this might or might not make sense at several levels, although mostly just conjecturing. The fact everything works is very interesting, but it seems so hard to come up with something concrete.

You have a map from some high dimensional vector space ~ k^N -> H, some space of hashes. H sort of looks one dimensional. I assume that actually the interesting geometry of your training data lies on a relatively low dimensional subvariety/subset in k^N, so maybe it's not actually that bad? It could be a really twisted and complicated curve.

However you still need to somehow preserve the relative structure, right? Things that are far apart in k^N need to be far apart in H. Seems like you want things to at least approximately be an isometry. Although there are things like space filling curves that might do this to some degree.

Also maybe even though H looks low dimensional, it can actually capture quite a bit (if your data is encoded as a coefficient of a power of 2, you could think of powers of 2 as some sort of basis, so maybe it is also pretty high dimensional).
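
One concrete construction with this "approximate isometry" flavour is random-hyperplane LSH (SimHash), where the expected Hamming distance between codes is proportional to the angle between the original vectors. A minimal sketch, with dimensions and data chosen arbitrarily for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    d, n_bits = 128, 64
    planes = rng.standard_normal((n_bits, d))        # one random hyperplane per output bit

    def simhash(v):
        # each bit records which side of a hyperplane v falls on
        return (planes @ v > 0).astype(np.uint8)

    def angle(u, v):
        return np.arccos(np.clip(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)), -1, 1))

    u = rng.standard_normal(d)
    v = u + 0.3 * rng.standard_normal(d)             # a nearby vector
    w = rng.standard_normal(d)                       # an unrelated vector

    # E[Hamming / n_bits] = angle / pi, so close vectors get close codes
    print(np.sum(simhash(u) != simhash(v)), angle(u, v) / np.pi * n_bits)
    print(np.sum(simhash(u) != simhash(w)), angle(u, w) / np.pi * n_bits)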

useful over 3 years ago
Contrastive and triplet loss are pretty cool for generating hashes. I'd imagine the trick they are alluding to is rewriting the loss function to be more aware of locality instead of trying to minimize/maximize distance.

Or they are just shingling different ML hash functions, which is kinda lazy.
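
For reference, a minimal PyTorch sketch of the triplet-loss idea applied to hashing; the layer sizes and random data are placeholders, not anything from the article:

    import torch
    import torch.nn as nn

    # Toy "hashing" head: maps 128-d embeddings to 64 values in (-1, 1),
    # thresholded at 0 to get binary codes at inference time.
    encoder = nn.Sequential(nn.Linear(128, 64), nn.Tanh())
    loss_fn = nn.TripletMarginLoss(margin=1.0)
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    anchor, positive, negative = (torch.randn(32, 128) for _ in range(3))
    loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
    loss.backward()   # pull anchor/positive codes together, push negatives away
    opt.step()

    hash_bits = (encoder(anchor) > 0).int()   # the learned binary hash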

LuisMondragon over 3 years ago
Hi, my interest got piqued. I'm developing a similarity feature where I compare embeddings of a sentence and its translation. I wanted to know if the hashing method would be faster than the PyTorch multiplication by which I get the sentence similarities. Going from strings to bytes, hashing and comparing is very fast. But if I get the embeddings, turn them into bytes, hash them and compare them, both methods take almost the same time.

I used this Python library: https://github.com/trendmicro/tlsh
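
For anyone curious what that comparison looks like in code, here is roughly the shape of it with a plain sign-bit hash standing in for TLSH (an illustrative sketch, not the commenter's actual setup):

    import numpy as np

    rng = np.random.default_rng(0)
    emb = rng.standard_normal((10_000, 768)).astype(np.float32)   # stand-in sentence embeddings
    query = rng.standard_normal(768).astype(np.float32)

    # Path 1: the usual dense route, cosine similarity via a matrix multiply
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    cos = normed @ (query / np.linalg.norm(query))

    # Path 2: binarize by sign, pack to bytes, compare with Hamming distance
    codes = np.packbits(emb > 0, axis=1)      # 768 bits -> 96 bytes per row
    qcode = np.packbits(query > 0)
    hamming = np.unpackbits(codes ^ qcode, axis=1).sum(axis=1)

    # The two rankings should roughly agree; sign bits are a lossy code
    print(cos.argmax(), hamming.argmin())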

andyxor over 3 years ago
This idea goes back to "sparse distributed memory", developed by NASA research in the 80s. It's a content-addressable memory where content hashes are encoded & decoded by a neural network, similar items sit in proximity to each other in the embedding space, and similarity is measured via Hamming distance. https://en.wikipedia.org/wiki/Sparse_distributed_memory
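
A tiny sketch of the read/write mechanics described on that page, with sizes picked arbitrarily for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    n_bits, n_locations, radius = 256, 1000, 112

    hard_addresses = rng.integers(0, 2, (n_locations, n_bits), dtype=np.uint8)
    counters = np.zeros((n_locations, n_bits), dtype=np.int32)

    def activated(addr):
        # all hard locations within the Hamming radius of the address
        return np.sum(hard_addresses != addr, axis=1) <= radius

    def write(addr, data):
        # add +1/-1 to the counters of every activated location
        counters[activated(addr)] += np.where(data == 1, 1, -1)

    def read(addr):
        # sum counters over activated locations and threshold at zero
        return (counters[activated(addr)].sum(axis=0) > 0).astype(np.uint8)

    pattern = rng.integers(0, 2, n_bits, dtype=np.uint8)
    write(pattern, pattern)                  # autoassociative store
    noisy = pattern.copy()
    noisy[:10] ^= 1                          # flip a few bits in the cue
    print(np.sum(read(noisy) != pattern))    # bits still wrong after recall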

a-dub over 3 years ago
using fancy neural nets for learning hash functions from data is indeed pretty cool, but hash functions fit to data isn't new. see "perfect hash functions."

lsh is most famously used for approximating jaccard distances, which even if you're not doing stuff like looking at lengths or distances in l1 or l2, is still a vector operation.

lsh is best described in jeff ullman's mining massive datasets textbook (available free online), which describes how it was used for webpage deduplication in the early days at google.
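
The MinHash flavour of LSH from that textbook fits in a few lines; a rough sketch (only valid within a single Python process, since built-in string hashing is salted per run):

    import numpy as np

    def minhash_signature(items, n_hashes=128, seed=0):
        # one min over a salted hash per "permutation" gives the MinHash signature
        rng = np.random.default_rng(seed)
        salts = rng.integers(0, 2**31, n_hashes)
        return np.array([min(hash((int(salt), x)) for x in items) for salt in salts])

    def estimated_jaccard(sig_a, sig_b):
        # fraction of agreeing signature slots estimates |A intersect B| / |A union B|
        return float(np.mean(sig_a == sig_b))

    a = set("the quick brown fox jumps over the lazy dog".split())
    b = set("the quick brown fox naps under the lazy dog".split())
    true_jaccard = len(a & b) / len(a | b)
    print(true_jaccard, estimated_jaccard(minhash_signature(a), minhash_signature(b)))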

jonbaer over 3 years ago
I feel like I have been talking about LSH for years

jgalt212 over 3 years ago
let's just take it to its logical extension and make every model just one big look up table (with hashes as keys). /s

sayonaraman over 3 years ago
Whoever wrote the article must have done a cursory search at best. I'm surprised they didn't mention semantic hashing by Salakhutdinov & Hinton (2007): https://www.cs.utoronto.ca/~rsalakhu/papers/semantic_final.pdf

Edit: also, talking about LSH, check out the FAISS library https://github.com/facebookresearch/faiss and the current SOTA at http://ann-benchmarks.com/
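
FAISS does ship an LSH index alongside its other ANN indexes; basic usage looks roughly like this (a sketch from memory, worth checking against the current FAISS docs):

    import numpy as np
    import faiss  # pip install faiss-cpu

    d = 64
    xb = np.random.rand(10_000, d).astype('float32')   # database vectors
    xq = np.random.rand(5, d).astype('float32')        # query vectors

    index = faiss.IndexLSH(d, 256)         # 256-bit binary code per vector
    index.add(xb)
    distances, ids = index.search(xq, 10)  # nearest neighbours in Hamming space
    print(ids[0])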

nanis over 3 years ago
> "_If this peaked your interest_"

It didn't. [1]

[1]: https://www.merriam-webster.com/words-at-play/pique-vs-peak-vs-peek