
LXC, Docker, and the future of software delivery (LinuxCon)

13 points by julien421 over 11 years ago

2 comments

peterwwillis over 11 years ago
I still don't buy the idea of the Linux container as a universal way to do anything. It depends on your kernel [opposite of what the slides claim], it depends on your apps, it depends on your dependencies, it depends on your architecture, etc.

The one phrase that's correct is *"it's chroot on steroids"*. That is in fact exactly what it is. The exception is, it's even *less* portable than just a chroot environment. Docker adds extra features on top of the chroot, but that's basically its core functionality.

So the first thing you have to ask yourself is: does my software need to be run in a chroot environment or a VM isolated from all other applications? If not, you very well may not need this *at all* for your software deployment. If anything, Docker images create a bigger burden on your deployment, as you have these large images to distribute, modify, and manage. Of course they built in some fancy network transmission magic to make it copy only the changed parts of an image, but this is still wildly less efficient than traditional means, and you still have to fuck around with the image to make it incorporate your changes before you push it.

If the big selling point is "commoditization", keep in mind that basically everyone rolls their own environment and customizes their deployment. It's the natural order of having your own architecture that fits your application. The one thing that's never going to happen is you taking Docker images from the internet and never modifying them. This universal container system goes out the window the first minute you have to start modifying everything to fit edge cases, which is always going to happen.
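(Editor's illustration of the "chroot on steroids" point: the core of a container runtime is roughly a chroot plus a few Linux namespaces. The Go sketch below is illustrative only; it assumes a Linux host, root privileges, and a root filesystem already unpacked at the placeholder path /tmp/rootfs containing /bin/sh and an empty /proc directory.)

    // container_sketch.go: chroot plus UTS, PID and mount namespaces.
    // A minimal sketch, not a real runtime: no image layers, cgroups,
    // networking, or user namespaces.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "child" {
            // Running inside the new namespaces: confine the process to the
            // rootfs, mount /proc so tools like ps work, then run a shell.
            must(syscall.Sethostname([]byte("sketch")))
            must(syscall.Chroot("/tmp/rootfs")) // placeholder rootfs path
            must(os.Chdir("/"))
            must(syscall.Mount("proc", "/proc", "proc", 0, ""))
            sh := exec.Command("/bin/sh")
            sh.Stdin, sh.Stdout, sh.Stderr = os.Stdin, os.Stdout, os.Stderr
            must(sh.Run())
            must(syscall.Unmount("/proc", 0))
            return
        }

        // Parent: re-exec ourselves in fresh UTS, PID and mount namespaces.
        cmd := exec.Command("/proc/self/exe", "child")
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        must(cmd.Run())
    }

    func must(err error) {
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }

(Everything Docker adds beyond this core, such as layered images, an image distribution protocol, and storage backends, is the "extra features on top of the chroot" the comment refers to.)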
contingencies over 11 years ago
Slide #40: *Typical Workflow* is, very much like some other aspects of docker (use of specific filesystems, use of entire filesystems within containers, etc.), a false general case that is in fact unsuited to many people's requirements.

Slide #44: *Docker roadmap* towards *1.0* seems to dodge the question of significant differences in function with regard to the apparent plan to adopt a variety of storage backends with different capabilities, use of different virtualization environments as targets, etc.

I support docker as a project, but I still really think you guys need to stop and ponder your architecture and goals before charging along too far. For projects to survive long term and be useful, sometimes separating concerns is necessary, and I would suggest that's perhaps not being done well at present, with some one-size-fits-all assumptions that are pretty anti unix philosophy (*do one thing and do it well*). What is the one thing? Is that really a general need? In all cases? What does a user lose with this abstraction? Rather than increasing scope, what would happen if you tried lopping those bits off entirely?