
Redis at Disqus

153 points by tswicegood · about 14 years ago

11 comments

antirez · about 14 years ago
Thank you for this post. This is what we need as a community to improve: use cases, and useful criticisms when things don't work well, so that we can find new strategies.

It's cool to see that Redis works well for many things, but it will be even cooler if diskstore, or any other approach, can make Redis more accessible even when the performance gain of being in-RAM is not enough for some kinds of applications to justify the costs.

We are also working on cluster and faster .rdb persistence. So there are interesting things going on, and fortunately we will have something new and stable in a few hours, as 2.2.0 stable is going live very soon :)
bretthoerner · about 14 years ago
I'm thinking about doing a second post with some actual code (some parts may be specific to Python, Django, and Celery) if anyone is interested.
r00k · about 14 years ago
HN folks: would any of you be interested in a 'Getting Started With Redis' screencast?

Edit: if it were non-free :)
geoffc · about 14 years ago
The most interesting thing about Redis is that it removes the impedance mismatch between in-code data structures and the data store. It is doing for data stores what server-side JavaScript does for AJAX applications. OO persistence was the first step in this direction, but Redis nails the real-world use cases a lot better.
xal · about 14 years ago
We have been using it for sessions (amongst tons of other stuff) at Shopify for half a year and found that we didn't have problems with increasing memory after we started setting expiration bits on the session keys.
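The fix xal describes can be sketched in a few lines. This is an illustrative Python sketch, not Shopify's actual code; the key scheme, TTL value, and helper names are assumptions:

```python
# Assumed key scheme: each session lives under "session:<id>".
SESSION_TTL = 60 * 60 * 24 * 14  # two weeks; an assumed value

def session_key(session_id: str) -> str:
    return f"session:{session_id}"

def save_session(client, session_id: str, payload: str, ttl: int = SESSION_TTL) -> None:
    # SETEX stores the value and sets the expiration in one atomic command,
    # so no session key can linger in memory without a TTL.
    client.setex(session_key(session_id), ttl, payload)
```

With a TTL on every session key, Redis reclaims expired sessions on its own, which is why memory stops growing without bound.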
nikz · about 14 years ago
When aggregating stats in this manner (by day), how do people deal with time zones?

For instance, if I have one user in, say, NZST, their "Tuesday, 22 February" is still "Monday, 21 February" in PST - and the real issue is that the buckets are off. So you can't just store in UTC and then shift by the timezone offset, as then you are grabbing different "buckets".

I don't think that explanation is very clear (I had to draw a diagram to figure it out myself). Hopefully someone smarter than I am can figure it out anyway.

We've worked around it by just storing hourly aggregates, but I'm interested in case someone else has a smarter solution :)
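The hourly-aggregate workaround above can be sketched as follows. This is a hypothetical Python sketch (key format and function names are mine, not from the thread), using fixed UTC offsets rather than full tz-database zones for simplicity:

```python
from datetime import datetime, timedelta, timezone

def hour_bucket(ts: datetime) -> str:
    # All writes go to UTC hour buckets, e.g. "stats:2011:02:22:13",
    # regardless of the user's time zone.
    return ts.astimezone(timezone.utc).strftime("stats:%Y:%m:%d:%H")

def buckets_for_local_day(day: datetime, tz: timezone) -> list:
    # A user's local "day" is just 24 consecutive UTC hour buckets
    # starting at local midnight, so a read sums those 24 counters.
    start = day.replace(hour=0, minute=0, second=0, microsecond=0, tzinfo=tz)
    return [hour_bucket(start + timedelta(hours=h)) for h in range(24)]
```

For NZST (UTC+12), "Tuesday, 22 February" maps to the buckets from 12:00 UTC on the 21st through 11:00 UTC on the 22nd, so UTC storage and per-user day boundaries coexist without re-bucketing.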
sigil · about 14 years ago
Sharding: "We just take the modulo of the owning user's ID against the number of nodes we have to decide which node to read/write from/to."

What's your procedure for adding new nodes to increase capacity? Would you have to take your Redis cluster offline to redistribute data from all nodes over the new keyspace?

I like the simplicity of your approach, but wonder if consistent hashing might be a bigger win in the long run.
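The two schemes being contrasted can be sketched side by side. A hedged Python sketch; the node names and replica count are made up, and neither function is from the post:

```python
import hashlib
from bisect import bisect

def node_by_modulo(user_id: int, nodes: list) -> str:
    # The post's scheme: owning user's ID mod node count.
    # Simple, but adding a node remaps almost every key.
    return nodes[user_id % len(nodes)]

def build_ring(nodes: list, replicas: int = 64) -> list:
    # Consistent hashing: each node owns many points on a hash ring,
    # so adding a node remaps only roughly 1/N of the keys.
    return sorted(
        (int(hashlib.md5(f"{node}:{i}".encode()).hexdigest(), 16), node)
        for node in nodes
        for i in range(replicas)
    )

def node_by_ring(user_id: int, ring: list) -> str:
    h = int(hashlib.md5(str(user_id).encode()).hexdigest(), 16)
    # First ring point at or after the key's hash, wrapping around.
    return ring[bisect(ring, (h,)) % len(ring)][1]
```

The trade-off sigil raises: with modulo sharding, going from 3 to 4 nodes changes `user_id % n` for most IDs, forcing a near-total data shuffle; with the ring, only the keys between the new node's points and their predecessors move.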
kore · about 14 years ago
> While the VM backend helped, we found that it still wouldn't stay within the bounds we set, and would continually grow no matter what we set. We did report the issue but never came to a good solution in time. For example, we could give Redis an entire 12GB server and set the VM to 4GB, and given enough time (under high load, mind you) it would climb well above 12GB and start to swap, more or less killing our site.

We came across this same issue while implementing a Redis-based solution to improve the scalability of our own systems. Someone filed an issue reporting this: http://code.google.com/p/redis/issues/detail?id=248

Basically, antirez confirms that Redis does a poor job of estimating the amount of memory used, so you'll need to adjust your redis.conf VM settings to take this into account. For anybody relying on Redis's VM, I'd recommend writing a script to load your server with realistic data structures at the sizes you expect in production. You can then profile Redis's configured memory usage against the actual memory usage at which swapping starts occurring, and set your redis.conf according to the limitations of your box. For example, we run Redis 2.0.2, and using list structures with ~50 items of moderate size, we found that configuring Redis to use 400MB actually resulted in it using up to 1.4GB before swapping. We configure our settings to take this into account. Mind you, this may all change with diskstore and later versions of Redis, which are supposed to be more memory efficient.

For those curious, our Redis-based solution is helping us scale some write-heavy activities quite nicely, and has been running stably.
DEinspanjer · about 14 years ago
So frustrating to see all these people doing cool things with Redis while not having anyone free to do that stuff here. :) Any Redis hackers looking for a job, or even some contract time? The big advantage is that all the work will be open source, able to be shared and blogged about.
dougk7 · about 14 years ago
I started experimenting with Redis in the last couple of weeks and I'm really loving its power. I mostly rely on posts like these to find new ways to use it.

And your way of sharding definitely gave me more insight into distributing Redis across many nodes.