My main concern with the hype around memristors is how quickly the technology will scale. Yes, they have great promise. But if technical problems make them 4 times bigger and 8 times slower than existing technology, nobody will adopt them. And without volume adoption, they won't have enough investment dollars to be able to exceed Moore's Law. Which means that we won't care about them until well after Moore's Law runs out of steam.<p>This is not a theoretical failure mode. It has happened before in the computer industry. Multiple times.<p>For example, it is why Transmeta died. Their goal was to have a simple chip that was so fast that they could emulate the x86 faster than the x86 could run. They failed. However, one of the design goals was less heat (because heat was a major scaling barrier), which translated into an emulated x86 chip with much lower power consumption. Given that they had a simpler architecture and had already solved heat problems that were killing Intel and AMD, they hoped to iterate faster and eventually win. But the investment asymmetry was so huge that they couldn't execute on the plan. And Intel was able to reduce their power enough to undercut Transmeta on the niche they had found, and Transmeta couldn't survive. (Intel was aggressive because they understood the strategy and the potential. Transmeta was always going to be something that either wiped out the existing industry or died with a whimper, and Intel knew which outcome they'd prefer.)
This is an excellent article. I agree that memristors will change everything.<p>I'm a bit perplexed about the timeframe, though. I'd like to think 5 years, but my gut says it's more like 12-15 years before it's all different. And it will be <i>very</i> different.
There's a presentation by R. Stanley Williams linked to in the article, and it is well worth watching.<p><a href="http://www.youtube.com/watch?v=bKGhvKyjgLY#" rel="nofollow">http://www.youtube.com/watch?v=bKGhvKyjgLY#</a><p>It has some tidbits I didn't expect. Teaser: "one of the guys in my group has...built a compiler to compile C code using implication logic rather than NAND logic and, interestingly enough, when we play with that the compiled code we get is always more condensed...by about a factor of 3"
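Just to make the implication-logic bit concrete for myself (this is my own toy sketch, nothing to do with HP's actual compiler): material implication plus a constant FALSE is functionally complete, so all the usual gates fall out of it.

```python
# Toy illustration only: IMPLY plus a constant FALSE can express NOT, OR,
# NAND, AND -- the basis of the "implication logic" mentioned in the talk.

def imply(a: bool, b: bool) -> bool:
    """Material implication: a -> b."""
    return (not a) or b

FALSE = False

def not_(a):    return imply(a, FALSE)       # NOT a  ==  a -> 0
def or_(a, b):  return imply(not_(a), b)     # a OR b ==  (a -> 0) -> b
def nand(a, b): return imply(a, not_(b))     # NAND   ==  a -> (b -> 0)
def and_(a, b): return not_(nand(a, b))

# Sanity check against the usual truth tables.
for a in (False, True):
    for b in (False, True):
        assert nand(a, b) == (not (a and b))
        assert or_(a, b) == (a or b)
        assert and_(a, b) == (a and b)
```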
<i>You can't just drop a memristor chip or RAM module into an existing system and have it work. It will take a system redesign.</i><p>If this truly is a viable storage technology, building an interface that's reasonably easy to adapt to the current computing paradigm shouldn't be too difficult. Traditional HDDs and SSDs both play nice with SATA, for instance, despite using wildly disparate methods of storing bits.
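The kind of abstraction I have in mind is nothing more than a block-device interface that any backing technology can implement (a hypothetical sketch, not any real driver API):

```python
# Hypothetical block-device interface -- illustration only. The point is that
# the host side doesn't care what physics sits behind read/write.
from abc import ABC, abstractmethod

class BlockDevice(ABC):
    block_size = 4096

    @abstractmethod
    def read_block(self, lba: int) -> bytes: ...

    @abstractmethod
    def write_block(self, lba: int, data: bytes) -> None: ...

class SpinningDisk(BlockDevice):
    def __init__(self):
        self.platters = {}
    def read_block(self, lba):
        return self.platters.get(lba, bytes(self.block_size))
    def write_block(self, lba, data):
        self.platters[lba] = data

class MemristorStore(BlockDevice):
    def __init__(self):
        self.crossbar = {}              # stands in for a memristor crossbar
    def read_block(self, lba):
        return self.crossbar.get(lba, bytes(self.block_size))
    def write_block(self, lba, data):
        self.crossbar[lba] = data

# The "SATA" layer only ever talks to BlockDevice; swapping backends is invisible.
for dev in (SpinningDisk(), MemristorStore()):
    dev.write_block(0, b"x" * dev.block_size)
    assert dev.read_block(0)[:1] == b"x"
```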
<i>With memristors you can decide if you want some block to be memory, a switching network, or logic. Williams claims that dynamically changing memristors between memory and logic operations constitutes a new computing paradigm enabling calculations to be performed in the same chips where data is stored, rather than in a specialized central processing unit. Quite a different picture than the Tower of Babel memory hierarchy that exists today.</i><p>That part is mind-blowing.<p>And I'm wondering, if this all works out, will the whole multi-core thing and all the trouble that comes with it (from a software standpoint) be pushed back for another decade?<p>And what implications would that have for programming languages? Seems like it would mean JavaScript wouldn't be so much worse than Erlang after all (cf. <a href="http://news.ycombinator.com/item?id=1304599" rel="nofollow">http://news.ycombinator.com/item?id=1304599</a> ).<p>(Yeah, I know, it's a little bit of a stretch.)
Positronic? (The brains of Asimov's robots.)<p>CPU with data is a bit like objects - except asynchronous, with true message passing. Like Smalltalk or Erlang (or web services, for that matter).<p>Brain images, with parts lighting up depending on how active they are, suggest that much of our brains aren't being used most of the time, but come online as needed. It's as if one part calls another part, except the "call" doesn't block. I'm not being very coherent here.
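Maybe this sketch says it better than I did (my analogy only, nothing memristor-specific): each "part" sits idle until a message arrives, and the sender never waits.

```python
# Minimal non-blocking message passing between "parts": the sender fires a
# message and moves on; the receiver wakes up only when something arrives.
import asyncio

async def part(name, inbox, outbox=None):
    while True:
        msg = await inbox.get()          # dormant until a message lands
        print(f"{name} activated by: {msg}")
        if outbox is not None:
            outbox.put_nowait(f"{name} processed '{msg}'")  # fire and forget

async def main():
    visual, motor = asyncio.Queue(), asyncio.Queue()
    asyncio.create_task(part("visual area", visual, motor))
    asyncio.create_task(part("motor area", motor))
    visual.put_nowait("edge detected")   # the sender never blocks on a reply
    await asyncio.sleep(0.1)             # give the messages time to propagate

asyncio.run(main())
```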
Wow. When I asked this question, I wish I had been as thorough and critical in my thinking as this guy.<p>It's so hard to guess what this means, but I wonder if I should start writing a memristor VM just to see what could be done.<p>Even if they are here in 5 years, I bet it'll be longer than that before we really, truly know what to do with them.
The theoretical density they give neglects some overhead. If you divide the area of a chip by the size of a transistor, you should get 100 billion transistors on a chip, but in fact you can only get about 1 billion. The rest is overhead: wiring, power, isolation, etc. Probably a similar overhead will apply to memristors.<p>Also, in the 5+ years it takes to make them practical and reliable, transistors will make progress too. So it's unfair to compare the theoretical density of a new technology to the achieved density of an existing technology.
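Rough numbers, just to show where a 100-billion figure comes from (the die size and feature size below are my own assumptions, not from the article):

```python
# Back-of-envelope only: divide die area by the footprint of one device.
die_area_mm2 = 100                        # ~1 cm^2 die (assumed)
feature_nm = 32                           # assume one device is ~32nm x 32nm
device_area_mm2 = (feature_nm * 1e-6) ** 2
ideal_count = die_area_mm2 / device_area_mm2
print(f"ideal devices per die: {ideal_count:.2e}")   # ~1e11, i.e. ~100 billion
# Shipping chips have on the order of 1e9 transistors, so wiring, power, and
# isolation overhead eat roughly two orders of magnitude of that ideal density.
```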
For one thing, orthogonal persistence will become the norm. A computer that needs to boot up will become a quaint thing of the past. Capability systems might become widespread because of this, resulting in finer grained security throughout the entire computing world. Small, cheap, but voluminous and low power memory stores will allow for greatly increased capabilities for computerized tags of physical objects. Vernor Vinge's localizers or Bruce Schneier's Internet of Things could come about because of this technology.
I've lost the link, but someone here posted a PDF on propagation networks a while ago. I can see that being married to this technology somehow and becoming the ideal computing fabric.<p>I'll dig around a bit to see if I can find the link.
<i>This allows multiple petabits of memory (1 petabit = 178TB) to be addressed in one square centimeter of space.</i><p>Given the sugar-cube reference earlier, and that nothing is truly 2D (much less tons of stacked circuits), I assume they mean one <i>cubic</i> centimeter. Taking that assumption, I'll point out a problem with that storage claim:<p>Heat. Good luck dissipating that little cube. Heat is one of the biggest reasons we don't have <i>way</i> higher-powered machines now; you just can't keep stacking things together.<p>*goes back to reading* Very interesting article, though. I'll have to watch the video too. Homework first, though :|
Even if things don't pan out as Stanley Williams says, there will be enough innovation atop the technology to make many things a reality.<p>I have known about the memristor from the beginning--before all the "hype"--as I've interned at HP Labs, and I drool at the thought of their vision for intelligent systems with in-memory processing. I mentioned this in another post, but I predict memristors will make computer vision possible. This means autonomous vehicles, better airport security, etc. Similarly, anything involving sensors and machine learning will see unimaginable progress.<p>Nanotech is real, folks. The question isn't "if", it's "when". It will go through many iterations, but it's now real.
Could someone explain the author's conversion of petabits to terabytes?<p><i>(1 petabit = 178TB)</i><p><a href="http://www.wolframalpha.com/input/?i=petabits+to+terabytes" rel="nofollow">http://www.wolframalpha.com/input/?i=petabits+to+terabytes</a>
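For what it's worth, here's the arithmetic I tried; neither the decimal nor the binary definition gives 178:

```python
# 1 petabit to terabytes, both the decimal and the binary way.
petabit_decimal = 10**15                 # bits
petabit_binary  = 2**50                  # bits (pebibit)
print(petabit_decimal / 8 / 10**12)      # 125.0 -> 125 TB
print(petabit_binary  / 8 / 2**40)       # 128.0 -> 128 TiB
```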
Can anyone explain at a high level how these things work? Even the Wikipedia article loses me fairly quickly. From what I understand, they were predicted to exist based on a theory about the relationship between capacitors, inductors, and resistors. What variables would be of interest in this relationship?
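As far as I can tell, the missing relationship is between charge and flux, and in practice it means the device's resistance depends on how much charge has already flowed through it. Here's my toy reading of the linear ion drift model HP published (all parameter values below are made up for illustration):

```python
# Toy sketch of the HP linear ion drift model as I understand it (invented
# parameter values). Resistance depends on an internal state x in [0, 1],
# and x drifts in proportion to the current -- i.e. to the charge delivered.

R_on, R_off = 100.0, 16e3      # fully-doped / undoped resistance, ohms
D, mu_v = 10e-9, 1e-14         # film thickness (m), dopant mobility (m^2/(V*s))

def resistance(x):
    return R_on * x + R_off * (1 - x)

def drive(x, v, seconds, dt=1e-5):
    """Apply a constant voltage and integrate the state with Euler steps."""
    for _ in range(int(seconds / dt)):
        i = v / resistance(x)
        x += (mu_v * R_on / D**2) * i * dt    # dx/dt proportional to current
        x = min(max(x, 0.0), 1.0)             # keep the state physical
    return x

x = 0.1
x = drive(x, +1.0, 1.0)   # "write": positive charge pushes x toward 1
print("after +1V pulse:", round(resistance(x)), "ohms")   # low resistance
x = drive(x, -1.0, 1.0)   # "erase": reverse charge pushes x back toward 0
print("after -1V pulse:", round(resistance(x)), "ohms")   # high resistance
# Remove power in between and x stays put -- that's the memory part.
```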
Imagine the security rift this will create between old-style CPUs and new. Memristors would tear through current encryption like a knife through butter.