A funny and somewhat off-topic story -- back in 2003, before the Google IPO, Google was doing a recruiting event at Berkeley. They brought a few of their folks with them: their founder Larry, one of their female engineers, Marissa, and some others. They did a little talk, and during the Q&A, Professor Brewer told Larry that there was an opening in the PhD program and he was welcome to it. Larry politely declined.

Afterwards I asked Larry, "So, do you think you'll ever finish your PhD, either here or at Stanford?" He said, "If this Google thing doesn't work out I might, but I have a feeling it will work out ok."

It amuses me that Professor Brewer is now working for Larry. :)
Great interview!

I worked as a contractor at Google in 2013 and loved their infrastructure. It was amazing to fire off a Borg job that used hundreds to thousands of servers, with web-based tools for tracking the job, fantastic logging to drill into problems, etc.

And Borg was two generations ago!

Even though I am very happy doing what I do now, sometimes I literally wake up in the morning thinking about Google's infrastructure. I now use lesser but public services like App Engine, Heroku, and nitrous.io (a bit like Google's web-based IDE, Cider), but it is not the same.

BTW, not to be negative, but while Google is a great home for someone like Eric Brewer, it is a shame that many hundreds of future students at UC Berkeley will not have him as a professor.
One thing that bothers me about the article is that it shows a recurring problem: IT not knowing what it knows. The NoSQL movement didn't notice that the NonStop architecture scaled linearly to thousands of cores with strong consistency, five nines of availability, and SQL support -- in the mid-80's. Instead of making a low-cost knockoff, as the cluster movement did for NUMA machines, they ditched consistency altogether and launched the NoSQL movement. Now I see the man who invented the CAP theorem discussing it while referencing all kinds of NoSQL options to show us the tradeoffs. Yet there are Google services in production, and tech such as FoundationDB, doing strong consistency with distribution, high throughput, and availability.

http://www.theregister.co.uk/2012/11/22/foundationdb_fear_of_cap_theorem/

So why aren't such techs mentioned in these discussions? I liked his explanation of the partitioning problem. Yet he and the NoSQL advocates seem unaware that numerous companies surmounted much of the problem with good design. We might turn the CAP theorem into barely an issue if we can get the industry to put as much innovation into non-traditional, strong-consistency architectures as it did into weak-consistency architectures. There is hope: Google went from famous NoSQL player to inventing an amazing strong-consistency RDBMS (F1). Let's hope more follow.

https://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/41344.pdf
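To make the strong-consistency point concrete, here is a rough sketch of what transactional access looks like with FoundationDB's Go binding (details such as the API version are taken from the open-source release, so treat it as illustrative rather than canonical): every read-modify-write runs inside a serializable transaction that the client library retries on conflict, even though the data is distributed across machines.

    package main

    import (
        "fmt"

        "github.com/apple/foundationdb/bindings/go/src/fdb"
    )

    func main() {
        fdb.MustAPIVersion(630)     // pin the client API version
        db := fdb.MustOpenDefault() // connect via the default cluster file

        // Transact retries the closure on conflict, so this read-modify-write
        // is serializable even with many concurrent clients on many machines.
        v, err := db.Transact(func(tr fdb.Transaction) (interface{}, error) {
            tr.Set(fdb.Key("greeting"), []byte("hello"))
            return tr.Get(fdb.Key("greeting")).MustGet(), nil
        })
        if err != nil {
            panic(err)
        }
        fmt.Printf("read back: %s\n", v)
    }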
I'd like a real explanation of why containers are better than unikernels. Yes, unikernels are still early, and containers are convenient because you have all of Linux there... but running several Linuxes on a Linux machine seems a bit much. One operating system, plus Xen, plus several applications in unikernels seems more efficient, and more exciting.

But it's the less common choice.

I am guessing convenience is more important than the better solution -- one that would ultimately be just as convenient and more efficient if it got enough eyeballs?
Containers, like virtual machines before them, aren't the future of computing. They're how we manage legacy apps.

The future of computing is not this horrible kludge.
I saw a Kubernetes talk at a local meetup.

Google has 40 programmers dedicated to that project. It's still very beta, btw, and all written in Go.

There's also Mesos, and I think you can use both in tandem since they're targeting different things.

Anyway, if anybody is doing or thinking about containers, check Kubernetes and Mesos out. Also, of course, Docker and Rocket. Kubernetes officially supports Docker and will be supporting Rocket.

There are also articles about how rump kernels are better than containers. Just FYI.
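For anyone wondering what Kubernetes (and Borg before it) actually does at its core: you declare a desired state, and a controller loop reconciles reality against it. Here's a stripped-down, hypothetical sketch in Go -- the types and names are made up for illustration, this is not Kubernetes' actual code:

    package main

    import (
        "fmt"
        "time"
    )

    // DesiredState is a stand-in for a replication controller spec:
    // "keep N copies of this image running."
    type DesiredState struct {
        Image    string
        Replicas int
    }

    // Cluster is a toy model of observed state.
    type Cluster struct {
        running int
    }

    func (c *Cluster) startReplica(image string) {
        c.running++
        fmt.Println("started a replica of", image)
    }

    func (c *Cluster) stopReplica() {
        c.running--
        fmt.Println("stopped a replica")
    }

    // reconcile nudges observed state toward desired state. Real
    // controllers do essentially this against an API server, forever.
    func reconcile(c *Cluster, d DesiredState) {
        for c.running < d.Replicas {
            c.startReplica(d.Image)
        }
        for c.running > d.Replicas {
            c.stopReplica()
        }
    }

    func main() {
        c := &Cluster{}
        d := DesiredState{Image: "nginx", Replicas: 3}
        for i := 0; i < 3; i++ { // in practice this loop never exits
            reconcile(c, d)
            time.Sleep(time.Second)
        }
    }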
I have been using Amazon Web Services and other cloud platforms for over a year now, and I never really felt that VMs were the bottleneck in any way. Can someone explain to me the advantage of containers here?

I know that containers are faster because they don't virtualize the hardware; however, that comes at the cost of security.
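One way to see both the speed and the security tradeoff at once: a container is essentially an ordinary Linux process launched into its own kernel namespaces -- no hypervisor, no second kernel, so it starts as fast as any process, but the host kernel is shared. A minimal sketch in Go (Linux-only, needs root):

    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        // Start a shell in fresh UTS, PID, and mount namespaces.
        // This is plain fork/exec with extra clone flags: no hardware
        // virtualization, which is why it's fast, and one shared
        // kernel, which is the security tradeoff versus a VM.
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS |
                syscall.CLONE_NEWPID |
                syscall.CLONE_NEWNS,
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }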
Can someone explain, or provide an educated guess about, what Google's strategy with Kubernetes is here? Sure, containers are hot now and it is nice to have a stake in the game, but Borg has been one of their key competitive advantages. What is the profit in making an open-source alternative?
I love the idea of using containers. Due to Linux's popularity and Google's backing, containers will be next.

But FreeBSD has had jails since back in the day. What's the benefit of containers over BSD jails?
IMO, if containers are the future (very plausible), then things like AWS Lambda are just as plausible, if only a bit further out.

I think this is the case due to the granularity of workloads and what appears to be a continuum in the workload container from metal > VMs > containers > lambdas (as first-class workloads).

fun stuff
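In the Lambda model the deployable unit shrinks to a single handler function, with the platform owning everything underneath. A minimal sketch using AWS's aws-lambda-go runtime library (the event shape here is made up for illustration; real functions define whatever JSON their trigger sends):

    package main

    import (
        "context"
        "fmt"

        "github.com/aws/aws-lambda-go/lambda"
    )

    // Event is a hypothetical payload shape for illustration.
    type Event struct {
        Name string `json:"name"`
    }

    // The handler is the entire deployable unit: no machine, VM, or
    // container image for the author to manage.
    func handler(ctx context.Context, e Event) (string, error) {
        return fmt.Sprintf("hello, %s", e.Name), nil
    }

    func main() {
        lambda.Start(handler) // hand control to the Lambda runtime
    }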
With all this container talk, people forget Solaris Zones, which were a pretty advanced sort of container on Solaris. However, Sun was honest enough not to pitch Zones as the solution for everything. The most important problem with containerization is that, because multiple containers depend on the same kernel of the host (or VM) OS, any kind of upgrade, especially security patching, is virtually impossible to do without taking full downtime.