Regardless of what people think of Facebook and their business, this is a pretty big deal. As a very large tech company, they have the time and talent to develop their own switches. By releasing the reference implementations, unlike Google, they help anyone else trying to build the next big web company. The more information the merrier.
I work in a somewhat related team (Traffic/CDN) at Facebook, and I'm very excited about what this is going to allow us to do in the future.<p>Current switches just don't support the deployment, monitoring, and configuration power we have for servers. While we've done a lot (probably close to the most that can be done) to bring them somewhat close to par, Wedge should not only leapfrog to equality, but also use the same infrastructure - and gain whenever the server processes improve.<p>The opportunities are fairly large: quickly canarying new features (without doing a firmware upgrade before turning them on and again after turning them off), getting detailed logging and monitoring, reusing our existing tools for correlation and comparison, and doing things we're currently forced to do on separate machines.
I really wish one of the big hardware vendors would just start shipping validated, certified, and warrantied Open Compute Project hardware at the substantial savings that can be had from it. Or maybe I'm ignorant here and there are few savings to be had.<p>The rest of this post is just a rant from my perspective in the SMB space.<p>In my space, everything that's worth getting is too expensive, and everything else is crap. The switch and storage market is a racket, as near as I can tell, where every opportunity to get you to pay another 20 or 30 percent premium over what you had before is taken by selling you features you don't want or can't use. Software-defined storage is, ultimately, limited by your network. Software-defined networking is here (OpenFlow, network virtualization), but SDN is being used as a value-add to get customers to pay <i>even more</i>. The result is that software-defined storage is a crapshoot (only as high quality as the network) and whether or not you save money is debatable.<p>Shared storage is a tremendous racket because adding "SAS" to anything doubles or triples its price. Consumer SSDs are advancing the state of the art much faster than enterprise tech (which tends to accommodate slower purchasing cycles and longer service lifetimes), but getting an older, slower SSD for a shared SAS JBOD means paying five or six times as much per gigabyte.<p>I really want a virtual SAN that doesn't suck, and a network that doesn't cost $1,000 per port to connect a handful of servers. Alas, it doesn't look like anything like that is coming soon.
It's somewhat hilarious that providers like Cisco are working so hard on nonsense like the Internet of Things™ while ignoring the work that will actually define the future of networking (and should have been done a decade ago).
I am so excited about this. While I realize switches are perhaps one of the last bastions of overpriced software, I would <i>love</i> to have a switch that is just a freakin' switch - one that isn't trying to be all things to all people and doing it badly. I've got Blade, HP, Cisco, Supermicro, and Mellanox switches that have been in this role (Top of Rack), and so often they bite the big one when some random protocol goes nuts. Every single site outage in nearly 4 years since launch has been due to a switch bug.