
Pastry: A distributed hash table in Go

97 points | by paddyforan | over 12 years ago

10 comments

jeremyjh | over 12 years ago

> We couldn't seem to find any way to subscribe to events and publish events without a single point of failure.

Really? You think this doesn't exist? In fact, this is nothing but a *deployment* concern for any mature message broker.

http://www.rabbitmq.com/ha.html

http://activemq.apache.org/masterslave.html

http://hornetq.sourceforge.net/docs/hornetq-2.0.0.GA/user-manual/en/html/ha.html

There are lots more options than these, and you can also use heartbeat/LVS to take something like Redis and make it HA.

I'm glad you had fun inventing your own distributed hash infrastructure, but please do not attempt to convince other people that there are no other options out there for reliable and highly available messaging.
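(For context on the first link: RabbitMQ enables queue mirroring through a cluster-wide policy rather than in application code. A sketch assuming the rabbitmqctl CLI; the policy name and queue pattern here are invented, and the exact syntax varies by server version:)

```shell
# Hypothetical policy: mirror every queue whose name starts with
# "ha." across all nodes in the cluster ("ha-mode": "all").
rabbitmqctl set_policy ha-all '^ha\.' '{"ha-mode":"all"}'
```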
Terretta | over 12 years ago

> We couldn't seem to find any way to subscribe to events and publish events without a single point of failure. We looked at everything we could find, and they all seemed to have this bottleneck.

"The Spread toolkit provides a high performance messaging service that is resilient to faults across local and wide area networks. Spread functions as a unified message bus for distributed applications, and provides highly tuned application-level multicast, group communication, and point to point support. Spread services range from reliable messaging to fully ordered messages with delivery guarantees."

http://www.spread.org/

We use this for LAN event communication among standalone servers acting together, and WAN event communication among POPs acting together, for billions of events per month. If any server in a group dies, the others elect a new master, so there is no SPoF.

What was it about your use case that made this feel like a single point of failure or bottleneck?
ukd1 | over 12 years ago

Cool idea, but I wonder why they didn't use something like RabbitMQ, which already exists and is proven.
Saavedro | over 12 years ago

Be aware of the possible confusion with the original Pastry: http://en.wikipedia.org/wiki/Pastry_(DHT)
matticakes | over 12 years ago

The focal point is discovery, not other queues (or other libraries that can build queues). This is an interesting way to approach it; thanks for open-sourcing.

We chose to solve the discovery problem a bit differently in NSQ (https://github.com/bitly/nsq), but I could certainly see some interesting opportunities to experiment with a distributed hash table approach as well.
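The discovery problem mentioned here can be illustrated with consistent hashing, a simpler cousin of a full DHT: every peer hashes the node names onto a ring and therefore agrees on key ownership without a central registry. A minimal Go sketch (not NSQ's or Pastry's actual code; all names are illustrative):

```go
package main

import (
	"crypto/sha1"
	"encoding/binary"
	"fmt"
	"sort"
)

// hashKey maps a string to a point on the ring.
func hashKey(s string) uint64 {
	h := sha1.Sum([]byte(s))
	return binary.BigEndian.Uint64(h[:8])
}

// Ring is a minimal consistent-hash ring: a key is owned by the
// first node at or after the key's position, wrapping around.
type Ring struct {
	points []uint64
	nodes  map[uint64]string
}

func NewRing(nodes ...string) *Ring {
	r := &Ring{nodes: map[uint64]string{}}
	for _, n := range nodes {
		p := hashKey(n)
		r.points = append(r.points, p)
		r.nodes[p] = n
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// Lookup returns the node responsible for key.
func (r *Ring) Lookup(key string) string {
	k := hashKey(key)
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= k })
	if i == len(r.points) {
		i = 0 // wrap around the ring
	}
	return r.nodes[r.points[i]]
}

func main() {
	ring := NewRing("node-a", "node-b", "node-c")
	// Every peer running the same function computes the same owner,
	// so discovery needs no coordinator.
	fmt.Println(ring.Lookup("events.user.signup"))
}
```

The trade-off versus a lookup service like NSQ's nsqlookupd is that hashing gives placement for free but makes membership changes (nodes joining or leaving) the hard part, which is exactly what a full DHT like Pastry adds routing and leaf sets for.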
realrocker | over 12 years ago

Oh, quiet, you naysayers. Just nod in appreciation of the hard work.
joelthelion | over 12 years ago

Hijacking the topic on P2P: is there a good library (any language will do) for P2P message passing (as opposed to storing information in a DHT)?

I'd like to experiment with a decentralized twitter/reddit-like system using P2P message flooding and machine learning to weed out spam.
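The flooding half of that experiment has a small core: forward each message to all neighbors and deduplicate by message ID so cycles in the topology terminate. A toy in-process Go sketch (not any particular library; all names are hypothetical):

```go
package main

import "fmt"

// Node floods messages to its neighbors, deduplicating by message
// ID so each message is processed at most once per node.
type Node struct {
	ID        string
	Neighbors []*Node
	Seen      map[string]bool
	Received  []string
}

func NewNode(id string) *Node {
	return &Node{ID: id, Seen: map[string]bool{}}
}

// Receive delivers (msgID, payload); unseen messages are stored
// and re-flooded to every neighbor.
func (n *Node) Receive(msgID, payload string) {
	if n.Seen[msgID] {
		return // already flooded this one; stop the cycle here
	}
	n.Seen[msgID] = true
	n.Received = append(n.Received, payload)
	for _, peer := range n.Neighbors {
		peer.Receive(msgID, payload)
	}
}

func main() {
	a, b, c := NewNode("a"), NewNode("b"), NewNode("c")
	// A cyclic topology: a <-> b <-> c <-> a.
	a.Neighbors = []*Node{b, c}
	b.Neighbors = []*Node{a, c}
	c.Neighbors = []*Node{a, b}

	a.Receive("msg-1", "hello p2p")
	fmt.Println(len(a.Received), len(b.Received), len(c.Received)) // → 1 1 1
}
```

A real network would replace the direct method calls with sockets and bound the Seen set (e.g. by timestamp), but the dedup-then-forward shape is the same.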
spullara | over 12 years ago

They were already using Redis and made this? They could have just partitioned and replicated it for scaling.
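Client-side partitioning of the kind suggested here only needs a hash function that every client shares; each key then deterministically maps to one Redis instance. A minimal Go sketch (the instance addresses are placeholders):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardFor picks which Redis instance owns a key by hashing it
// client-side; every client using the same function agrees on
// placement, so no coordinator is needed.
func shardFor(key string, shards []string) string {
	h := fnv.New32a()
	h.Write([]byte(key))
	return shards[h.Sum32()%uint32(len(shards))]
}

func main() {
	shards := []string{"redis-1:6379", "redis-2:6379", "redis-3:6379"}
	fmt.Println(shardFor("user:42:events", shards))
	fmt.Println(shardFor("user:42:events", shards) == shardFor("user:42:events", shards)) // prints true
}
```

The catch with plain modulo sharding is that changing the shard count remaps almost every key; consistent hashing is the usual fix when the instance set needs to grow.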
igrekel | over 12 years ago

Next step: a tuplespace in Go?
drivebyacct2 | over 12 years ago

Oh man, I was really, really just hoping for this. I've been avoiding putting one last piece into my server because I didn't want to use Redis. I'm going to play with this in a couple of hours.