An Intuitive Explanation of Hashing

1 point by mieubrisse over 1 year ago

1 comment

ggm over 1 year ago

I don't think this is entirely it.

Another important quality of a hash function is that you can re-create it. It becomes testable that the input produces the hash. The hash can therefore stand both as proof that the input is "the same" when seen another time, and as the short identity which itself distributes in a "semi-random" manner. It's not fully random, because given the input text anyone can derive the same hash. You're equating random to the distribution in the number field, but truly random things can't be repeated.

You focused on the random quality, which goes to the distribution of the hash as a key and the collision side of things. But the other side, being able to test the hash, implies access to the source and the ability to run "the same" function to derive it.

Hash collisions are contextual. If the cost of constructing a collision is too low, the hash becomes weaker. But it may not matter. An example here is that Google Photos hashes appear to be weak, because a small (sub-fractional %) number of people report seeing other people's photos in their library. OK, that does matter; it's a breach of privacy. But at Google scale, it's noise (to them).

And most older hash-index models in C used to deal with hash collisions with a small serial walk to find the unique instance. The hash reduced lookup cost into a data structure but didn't actually guarantee uniqueness; it was contextual. Maybe more like sharding in modern terms?
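A minimal sketch of the two points above, not from the comment itself, just an illustration in C: the same input always hashes to the same value (the re-creatable/testable property), and an old-style fixed-size hash index resolves collisions with a short serial walk (linear probing). The table size, the djb2 hash, and the demo keys are arbitrary choices for the example.

#include <stdio.h>
#include <string.h>

#define TABLE_SIZE 8                      /* deliberately tiny so collisions occur */

static const char *table[TABLE_SIZE];     /* NULL means empty slot */

/* Deterministic string hash (djb2): the same input always yields the same
   value, which is the "re-creatable"/testable property described above. */
static unsigned long hash_str(const char *s) {
    unsigned long h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h;
}

/* Insert with a small serial walk (linear probing): start at the hashed slot
   and step forward until an empty slot or the same key is found. The hash
   narrows the search; it does not guarantee uniqueness by itself. */
static void put(const char *key) {
    unsigned long i = hash_str(key) % TABLE_SIZE;
    unsigned int n;
    for (n = 0; n < TABLE_SIZE; n++) {
        if (table[i] == NULL || strcmp(table[i], key) == 0) {
            table[i] = key;
            return;
        }
        i = (i + 1) % TABLE_SIZE;          /* the serial walk */
    }
    /* table full: a real implementation would grow or report an error */
}

/* Lookup mirrors the same walk. */
static int contains(const char *key) {
    unsigned long i = hash_str(key) % TABLE_SIZE;
    unsigned int n;
    for (n = 0; n < TABLE_SIZE && table[i] != NULL; n++) {
        if (strcmp(table[i], key) == 0)
            return 1;
        i = (i + 1) % TABLE_SIZE;
    }
    return 0;
}

int main(void) {
    put("alpha");
    put("bravo");
    put("charlie");
    printf("hash(\"alpha\") twice: %lu %lu\n", hash_str("alpha"), hash_str("alpha"));
    printf("contains(\"bravo\") = %d\n", contains("bravo"));   /* 1 */
    printf("contains(\"delta\") = %d\n", contains("delta"));   /* 0 */
    return 0;
}

Running it shows the same hash value printed twice for "alpha" and lookups succeeding only for inserted keys, which is the sense in which the hash is repeatable rather than random, and the collision handling is contextual rather than guaranteed.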
I don&#x27;t think this is entirely it.<p>Another important quality of a hash function is that you can re-create it. It becomes testable that the input makes the hash. Therefore the hash can stand both as a proof the input is &quot;the same&quot; seen another time, and be the short identity which itself distributes in a &quot;semi random&quot; manner. Its not fully random because given the input text anyone can derive the same hash. You&#x27;re equating random to the distribution in the number field, but truly random things can&#x27;t be repeated.<p>you focussed on the random quality, which goes to distribution of the hash as a key and the collision side of things, but the other side, being able to test the hash, implies access to the source, and an ability to run &quot;the same&quot; function to derive it.<p>hash collisions are contextual. If the ability to construct a collision is too low, then the hash becomes weaker. But it may not matter. An example here is that google photo hashes appear to be weak, because a small (sub fractional %) of people report seeing other people&#x27;s photos in their library. ok, that does matter, its a breach of privacy. But at google scale, its noise (to them)<p>and most older hash-index models in C used to deal with hash collisions with a small serial walk to find the unique instance. the hash reduced lookup cost into a data structure but didnt actually guarantee uniqueness, it was contextual. Maybe more like sharding in modern terms?