According to the Linode FAQ [1], current Linode hosts have roughly 20GB of RAM shared out. Does anyone know the spec of the new servers, and what the max memory would be? The max for the CPU seems to be 750GB [2]. Presumably the servers will have far less than the theoretical max, but fingers crossed they'll be able to bump the memory at some point soon so that they're a bit more competitive with others.<p>I was really hoping that was going to be this announcement; however, the fact they've titled it 'The Hardware' hints that there might be nothing new on RAM in the next announcement, which would be a shame. Upgrades to CPU are nice, but it'd be nicer to see a RAM upgrade, as almost everyone is constrained on RAM rather than CPU or disk, particularly on newer hardware and with their new bandwidth limits.<p>I feel rather ungrateful now, having said all that. Thanks anyway, Linode!<p>[1] <a href="http://www.linode.com/faq.cfm#how-many-linodes-share-a-host" rel="nofollow">http://www.linode.com/faq.cfm#how-many-linodes-share-a-host</a>
[2] <a href="http://ark.intel.com/products/64595/Intel-Xeon-Processor-E5-2670-20M-Cache-2_60-GHz-8_00-GTs-Intel-QPI" rel="nofollow">http://ark.intel.com/products/64595/Intel-Xeon-Processor-E5-...</a>
> So what about SSDs? There’s no question SSDs are in Linode’s future, however enterprise-class SSDs (SLC or eMLC based) are prohibitively expensive. And although MLC-based drives are cheaper we just don’t feel right about using consumer grade laptop drives to power your Linodes. So we will wait until capacities for enterprise SSDs increase.<p>Well, that's disappointing, and arguably SSDs are more important than CPU for many people. Not to be a Debbie Downer or anything.
I'm not super up on OS-level virtualization providers, but I'm curious why exposing 8 cores is an advantage. Based on the RAM sizing, it seems like they expect to run 100+ instances per processor, so surely they're restricting CPU time per instance somehow. If you use a bunch of CPU time, won't you just get starved out when you hit an invisible CPU wall? If you're only getting 1/10th or less of a CPU core, wouldn't it make more sense to pin each instance to just a couple of cores, minimizing context switches?
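For what it's worth, Linode runs on Xen rather than OS-level containers, and Xen's credit scheduler can do both of the things raised here: cap a guest's CPU consumption and pin its vCPUs to specific physical cores. A rough host-side sketch (the domain name and all the numbers are made up for illustration, not Linode's actual settings):

```shell
# Sketch only -- "example-guest" and the numbers are hypothetical.
# Cap the guest at 50% of one physical core (cap is in percent):
xm sched-credit -d example-guest -c 50

# Or pin the guest's first two vCPUs to physical cores 2 and 3,
# which is the "pin to a couple of cores" idea from above:
xm vcpu-pin example-guest 0 2
xm vcpu-pin example-guest 1 3
```

With no cap set (cap=0), guests share idle cycles by weight, which is presumably why exposing 8 cores helps: a busy Linode can burst across all of them when its neighbors are idle.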
Just like a good showman would, Linode will be saving their RAM announcement for last. Network, CPU, disk again (maybe?), anything else (new services/options/load balancing/etc), and finally, memory.
I wonder if they considered allowing 4 cores, but with double the CPU allotment? 4x1000MHz instead of 8x500MHz, for example.<p>8 cores does give you the highest best-case performance, since you can use 100% of each core if the other Linodes on your host are not using them.<p>They show the 1024 Linode with 8 vCPUs as an example. But they also state that there are on average 20 1024 Linodes on each host machine, making 20x8 = 160 vCPUs in use.<p>Marketing-wise, more cores sounds impressive. But I wonder whether performance per physical host suffers from splitting it into so many vCPUs.
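To put a number on that oversubscription, here's the back-of-the-envelope math. The dual-socket assumption is mine -- Linode hasn't published the host configuration:

```python
# Back-of-the-envelope vCPU oversubscription. The dual-socket host is
# an assumption, not a published Linode spec.
cores_per_socket = 8     # E5-2670
sockets = 2              # assumed dual-socket
threads_per_core = 2     # Hyper-Threading
host_threads = cores_per_socket * sockets * threads_per_core   # 32

linodes_per_host = 20    # average Linode 1024s per host, per the FAQ
vcpus_per_linode = 8
total_vcpus = linodes_per_host * vcpus_per_linode              # 160

print(total_vcpus / host_threads)   # 5.0 vCPUs per hardware thread
```

So even on a generous reading, each vCPU maps to about a fifth of a hardware thread on average, which is the trade-off behind the "use 100% when neighbors are idle" upside.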
The E5 came out last March, is based on the previous-generation Sandy Bridge architecture, and is hardly new. It was also practically a year late, launching at the same time as the Ivy Bridge-based E3 v2s. The 55xx series launched in Q1 2009 and the 56xx series in Q1 2010, so their existing hardware is 4 years old at this point. Yes, this is a major improvement, but in reality they are simply catching up to last year's hardware, not exactly being innovative. The claim that their average hardware will be less than a year old when the upgrade is complete comes off as deceitful PR spin. The E5s are already a year old now, so they must be basing the age of the hardware on when it was purchased, not when it was released to market.<p>The Ivy Bridge-based E5 v2 is also coming out in Q3 this year, supposedly with 12-core models, so this seems like a somewhat poorly timed upgrade on Linode's part. They should have either upgraded to the E5 v1s much sooner or waited it out for the E5 v2s.
I must admit Linode I/O is pretty bad:<p>:~# dd if=/dev/zero of=test bs=64k count=3k oflag=dsync && rm test
3072+0 records in
3072+0 records out
201326592 bytes (201 MB) copied, 41.3478 s, 4.9 MB/s<p>This is on a 512MB Linode, but still...<p>I hope they fix this with the new hard drives.
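Worth noting that `oflag=dsync` forces a physical flush after every 64 KiB block, so this benchmark measures synchronous write latency far more than raw throughput; 4.9 MB/s is bad, but not as bad as it looks. A quick way to see the difference (using a smaller count than the original 3k just to keep it fast):

```shell
# Synchronous writes: a flush per 64 KiB block, dominated by device latency.
dd if=/dev/zero of=testfile bs=64k count=256 oflag=dsync

# The same write through the page cache reports a much higher (and much
# less meaningful) MB/s figure:
dd if=/dev/zero of=testfile bs=64k count=256
```

On shared spinning disks the dsync number mostly reflects how busy your neighbors are keeping the heads, which is exactly what SSDs would help with.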
"We’re investing millions to make your Linodes faster. Crazy faster."<p>Skeptical. I'm wondering if this is just advertising hyperbole, because Linode doesn't seem to be large enough (and there is no evidence of any "funding") to be able to "invest millions".<p>They operate out of a suite in an office park outside Atlantic City, NJ.<p>(I think it's a great company, by the way; I just don't think they are investing millions. It doesn't make any sense given what I know about them.)
Pfff. Try harder, Linode.<p><pre><code> # grep MHz /proc/cpuinfo
cpu MHz : 3922519.116
cpu MHz : 3922519.116
cpu MHz : 3922519.116
cpu MHz : 3922519.116
</code></pre>
That's on a Rackspace small instance. 4000 GHz, baby!
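Converting the pasted value makes the joke concrete: /proc/cpuinfo is reporting megahertz, so that figure is ~3.9 THz, a guest clock-reporting artifact rather than an actual clock speed.

```python
# Parse the pasted /proc/cpuinfo lines and convert MHz -> GHz.
cpuinfo = """cpu MHz : 3922519.116
cpu MHz : 3922519.116
cpu MHz : 3922519.116
cpu MHz : 3922519.116"""

ghz = [float(line.split(":")[1]) / 1000 for line in cpuinfo.splitlines()]
print(ghz[0])   # ~3922.5 "GHz" per core -- roughly the 4000 GHz above
```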
Does anyone know the clock speed of each virtual core? Isn't that relevant and important information?<p>For instance, if the previous virtual cores were all 1GHz and the new cores are all 500MHz, doubling the cores won't do much good; you can't judge the upgrade without knowing the clock speeds.<p>I was critical of the last Linode NextGen post discussion, because I didn't think the upgrade was impressive compared to others, but this one is a nice bump. Memory is probably what most people will care about more, though. So maybe the next refresh is the memory?
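The point in numbers, using the hypothetical clocks from the comment (not actual Linode specs):

```python
# Hypothetical clock speeds from the comment above -- illustrative only.
old_aggregate = 4 * 1000   # 4 vCPUs x 1000 MHz
new_aggregate = 8 * 500    # 8 vCPUs x  500 MHz

print(old_aggregate == new_aggregate)   # True: same total MHz budget...
single_thread_old, single_thread_new = 1000, 500
# ...but any single-threaded workload would run at half the speed.
```

So "twice the cores" only helps parallel workloads unless the per-core clock held steady, which is exactly why the spec matters.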