
Redis with an SSD swap – not what you want

87 points by janerik about 12 years ago

6 comments

antirez about 12 years ago
I saw a few comments in the post asking what the point of the blog post was, since I already expected very poor results and had already tested this less formally in the past.

The point is simply to show that SSDs can't currently be considered a slightly slower version of memory. Their performance characteristics are much closer to, simply, "faster disks".

Those new disks are fast enough that if you design a database specifically for SSDs, you can get interesting performance compared to old disks. However, the idea of using the disk as something you can allocate memory from will not work well, and complex data structures requiring many random-access writes will not work either.

Code can of course be optimized for SSD usage, but this imposes huge restrictions on what you can and can't do. This shows how the current Redis strategy of providing complex, fast operations using just memory makes sense. In the future, as SSDs converge more with memory, this may change.
sniglom about 12 years ago
Is this really something that needs testing?

Compare a good SSD, a Samsung 840, to a normal PC using dual-channel 1600MHz DDR3:

Maximum sequential read speed: 0.5GB/s vs 25GB/s

Random read speed: 0.01-0.1GB/s vs 3GB/s

Latency: 30,000-40,000ns vs 6-65ns

So we're dealing with (best case) a bandwidth difference of a factor of 30 and a latency difference of a factor of 500.

This isn't taking other things into consideration, such as SSD performance degradation and the need to run garbage collection or TRIM.
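A quick back-of-the-envelope check of those ratios, using only the figures quoted above ("best case" here meaning the comparison most favorable to the SSD):

```python
# Figures quoted above: Samsung 840 SSD vs. dual-channel DDR3-1600.
ssd_rand_read_gbps = 0.1   # SSD random read, upper end of 0.01-0.1 GB/s
ram_rand_read_gbps = 3.0   # RAM random read
ssd_latency_ns = 30_000    # SSD latency, lower end of 30,000-40,000 ns
ram_latency_ns = 65        # RAM latency, upper end of 6-65 ns

# Best case for the SSD: its fastest figures against RAM's slowest.
bandwidth_gap = ram_rand_read_gbps / ssd_rand_read_gbps
latency_gap = ssd_latency_ns / ram_latency_ns

print(round(bandwidth_gap))  # ~30x bandwidth gap
print(round(latency_gap))    # ~460x latency gap, i.e. roughly the "factor 500"
```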
gingerlime about 12 years ago
I'm risking showing a lack of understanding, but I think it would be really nice to have some kind of Redis API that allows archiving certain keys to disk. Perhaps the same way that keys can EXPIRE, they could get archived into secondary storage, and another API would allow retrieving keys from secondary storage.

Of course you can do this in your own code, but then you step outside Redis. I think it would be nice to bake this into Redis, knowing that once loaded back from secondary storage you get exactly the same object, avoiding the whole (de)serialization process. Of course you won't achieve the same performance, but at least the penalty is known.
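For what it's worth, something close to this can already be approximated on top of Redis's existing DUMP and RESTORE commands, which hand back and accept the key's internal serialization, so the restored value is exactly the same object. A minimal sketch, with helper names of my own invention; `r` is assumed to be a connected redis-py client:

```python
import os

def archive_key(r, key, directory):
    """Serialize `key` to a file on secondary storage, then delete it."""
    payload = r.dump(key)              # Redis-internal binary representation
    path = os.path.join(directory, key)
    with open(path, "wb") as f:
        f.write(payload)
    r.delete(key)
    return path

def restore_key(r, key, directory):
    """Read the archived bytes back and recreate the key."""
    path = os.path.join(directory, key)
    with open(path, "rb") as f:
        payload = f.read()
    r.restore(key, 0, payload)         # ttl=0 means no expiry
    os.remove(path)
```

This still pays disk latency on retrieval, of course, but as the comment says, it is at least a known penalty, and the (de)serialization is Redis's own.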
nkurz about 12 years ago
"As soon as it started to have a few GB swapped performances started to be simply too poor to be acceptable."

"Acceptable" is a fuzzy standard. Different applications have different needs, and not all applications require thousands of transactions per second. I'd presume there is an I/O rate below which performance remains stable. Do you know what this rate is, and how it compares to the transfer speed or latency of the SSD?
orijing about 12 years ago
This is interesting, and expected for evenly distributed request patterns. How about more typical request patterns that follow power-law distributions? I would guess they'd lead to far fewer page faults. I could write out some math, but does the benchmark tool let you choose a distribution of keys? That would help check this type of pattern.

Great analysis, BTW.
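As far as I know, redis-benchmark only draws keys uniformly at random (via its -r option), but a skewed key stream can be generated externally and replayed. A sketch of power-law (Zipf) key sampling in plain Python; the parameters are illustrative:

```python
import random

def zipf_key_stream(n_keys, alpha, n_samples, seed=42):
    """Sample key ranks with power-law popularity:
    rank k is chosen with probability proportional to 1 / k**alpha."""
    weights = [1.0 / k**alpha for k in range(1, n_keys + 1)]
    rng = random.Random(seed)
    return rng.choices(range(n_keys), weights=weights, k=n_samples)

samples = zipf_key_stream(n_keys=1_000_000, alpha=1.0, n_samples=10_000)

# With alpha = 1, roughly half of all accesses land on the top 0.1% of
# keys, so the hot set stays resident in RAM and page faults are far
# rarer than under a uniform key distribution.
hot_fraction = sum(s < 1_000 for s in samples) / len(samples)
```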
JulianMorrison about 12 years ago
Sensible places to use an SSD for Redis: RDB persistence, AOF persistence.

Not a sensible place to use an SSD: swapfiles.

Swapping to an SSD will also trash the SSD, because you are continually rewriting it.
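In redis.conf terms, that split might look like the following (the mount point is illustrative; the point is that snapshots and the append-only file are mostly sequential writes, which suit an SSD, while swap traffic is exactly the continual random rewriting that wears one out):

```
# redis.conf -- illustrative; /mnt/ssd is an assumed SSD mount point
dir /mnt/ssd/redis              # RDB snapshots and the AOF are written here
dbfilename dump.rdb             # point-in-time snapshot (RDB persistence)
appendonly yes                  # enable AOF persistence
appendfilename "appendonly.aof"
appendfsync everysec            # sequential appends, SSD-friendly
```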