
Redis Cluster Tutorial

126 points by anuragramdasan, over 11 years ago

6 comments

benarent, over 11 years ago
Great post. I uploaded Salvatore's keynote from Redis Conf 2013 today: http://blog.togo.io/redisconf/a-short-term-plan-for-redis-by-antirez/
ransom1538, over 11 years ago
I have a few serious concerns. Suppose you have nodes A, B, C, D running with slaves, and a user u1 starts retrieving and storing data. The odds are high that u1 stores keys on all of A, B, C, D, since every key is assigned to a numerically hashed slot.

1) User u1 ends up connecting to all the nodes. So as you scale by adding an "E", every node ends up carrying the same number of connections. For example, say A, B, C, D each have 5k connections open and you add "E": now "E" also has 5k connections. You are then bounded by requests per second, and no amount of hardware can save you.

2) Say B and B1 die. (It happens.) Now your system is completely out, and all 5k connections are blocked. It would be nice if A, C, D, E continued to run.

I usually get around things like this with an index server between user u1 and A, B, C, D. When u1 starts retrieving keys, the system is designed so that u1 only connects to a single node for its keys.

How do you get around 1 and 2? Do other people use index servers?
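The key spreading ransom1538 describes can be sketched directly: Redis Cluster maps each key to one of 16384 slots via CRC16 (XModem variant), and slot ranges are divided among nodes. A minimal Python model, where the four node names A-D and the even slot split are illustrative assumptions, not the actual cluster layout:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem), the checksum Redis Cluster uses for slot assignment."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of 16384 slots; only the {hash tag} is hashed if present."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # non-empty tag
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

def owning_node(key: str, nodes=("A", "B", "C", "D")) -> str:
    """Illustrative cluster: slot ranges split evenly across four nodes."""
    return nodes[key_slot(key) * len(nodes) // 16384]

for k in ["user:1", "user:2", "session:9", "cart:42"]:
    print(k, "-> slot", key_slot(k), "node", owning_node(k))
```

Because unrelated keys scatter across slots, a client touching many keys does indeed open connections to every node. Hash tags are the one escape hatch: keys sharing a tag, like `{user:1}.cart` and `{user:1}.profile`, hash to the same slot and therefore the same node.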
stiff, over 11 years ago
If Redis would just use names for databases instead of numbers, and get rid of the limit on the number of databases... Managing multiple application instances (production, test, and staging for multiple countries) on a single shared Redis instance, each requiring a separate database, is a huge pain.
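A common workaround for the numbered-database limit is key-prefix namespacing: each environment gets its own prefix on a single shared instance. A minimal sketch; the `PrefixedRedis` wrapper and the namespace strings are hypothetical, and any client exposing `get`/`set` will do:

```python
class PrefixedRedis:
    """Hypothetical wrapper that emulates named databases via key prefixes."""

    def __init__(self, client, namespace: str):
        self._client = client
        self._ns = namespace

    def _key(self, key: str) -> str:
        return f"{self._ns}:{key}"

    def set(self, key, value):
        return self._client.set(self._key(key), value)

    def get(self, key):
        return self._client.get(self._key(key))


# Usage: one shared Redis, logically separate "databases" per environment.
# prod = PrefixedRedis(redis.Redis(), "prod:us")
# staging = PrefixedRedis(redis.Redis(), "staging:de")
```

Unlike SELECT-able numbered databases, prefixes have no hard count limit and survive the move to Redis Cluster, which supports only database 0.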
heterogenic, over 11 years ago
One thing I'd love to see (perhaps in a future iteration?) is a way to have overlapping ranges for redundancy, instead of mirrors.

The big benefit is when balancing write-heavy workloads, where the slave would otherwise not get its share of the total load.

I believe this could be accomplished by running a second DB on each machine as a slave of the next machine in the cycle, and that's what I'll probably try when we scale up to needing a cluster. But having it be a supported (or somehow facilitated) scenario would make me feel much better.

(*edit*: Just a feature request of course; exciting to see this release! Thanks for all your hard work, antirez!)
taf2, over 11 years ago
I wish Redis had a synchronous write option guaranteeing that, by the time a write response is received, all nodes in the cluster have also received a copy, even if this were just an ack that the data is replicated into memory rather than to disk. Sure, there are cases where you'd rather operate asynchronously, but I would really prefer to put haproxy in front of 3-6 Redis nodes and let it load-balance both reads and writes, similar to a Galera cluster with keepalived and a VIP. Assuming synchronous writes, I would not need to be concerned with split brain...
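Redis did later add a best-effort version of exactly this: the WAIT command (available since Redis 3.0) blocks until a given number of replicas have acknowledged the connection's preceding writes into memory, not disk, matching the ack-to-memory semantics wished for above. A sketch assuming a client with an `execute_command` method; the `synchronous_set` helper is hypothetical:

```python
def synchronous_set(client, key, value, num_replicas=1, timeout_ms=100):
    """Write, then block until `num_replicas` replicas acknowledge the write.

    WAIT confirms replication into replica memory (not disk). It is
    best-effort: on timeout it returns however many replicas acked,
    so the caller must check the count itself.
    """
    client.set(key, value)
    acked = client.execute_command("WAIT", num_replicas, timeout_ms)
    if acked < num_replicas:
        raise RuntimeError(f"only {acked}/{num_replicas} replicas acknowledged")
    return acked
```

Note this still does not make Redis replication synchronous in the Galera sense: a failover can elect a replica that never acked, so WAIT narrows the data-loss window rather than closing it.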
dberg, over 11 years ago
Seems like Redis Cluster is about turning Redis into Riak.