Packages are great because they simplify automation. Once you've built a package and uploaded it to a repository, you can install it across a large fleet of machines with one line of Chef or Puppet code.

There are some pitfalls, however:

* Dealing with the arcane details of Debian package metadata or RPM spec files can be time-consuming. If you're deploying your own application code, you're likely better off using fpm to generate a package from a directory tree (rough example after this list): https://github.com/jordansissel/fpm

* If you have a complex stack, e.g. a specific version of Ruby and a large number of gem dependencies, avoid trying to separate things into individual packages. Just create a single Omnibus-style package that installs everything into a directory under /opt: https://github.com/opscode/omnibus-ruby

* Maintaining build machines and repository servers takes ongoing effort. Shameless plug: this is why I created Package Lab, a hosted service for building packages and managing repositories. It's currently in private beta and I'd love feedback: https://packagelab.com/
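For the fpm route, a minimal sketch (the package name, version, and paths are made up for illustration): it wraps a pre-built directory tree into a .deb that installs under /opt, which you can then push to a repository and roll out with your config management tool.

    # package the contents of ./build so they install under /opt/myapp
    fpm -s dir -t deb -n myapp -v 1.0.0 --prefix /opt/myapp -C ./build .
    # output is typically named something like myapp_1.0.0_amd64.deb;
    # sanity-check it locally before uploading to a repo
    sudo dpkg -i myapp_1.0.0_amd64.deb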
Yeah, sure. Because building packages is so easy, as the brevity of the article and the number of commands involved prove, it is surely a fast and good way to produce packages for deploying your code.

For reference, it is not. Packages solve a different problem, and he even writes it: well-made packages with dependencies enable everyone to use the software, regardless of the system involved, within some constraints. They don't need to be fast to produce and they don't need to be easy (as much as I would like them to be), because they are built by specialists in a lengthy process.

But if we deploy code onto a system, we know a bit more about that system than "it is a computer". Maybe it is a standardized production instance, maybe it is a VM - in any case, we have direct access. So it is possible to use easier and faster methods to deploy code directly, without having to resort to arcane voodoo.

If you really want to use debs for deployment, at least use checkinstall and handle the dependencies manually. Then you need at most three commands (./configure, make, checkinstall).
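Roughly, that flow for an autotools-based project on a Debian-family box with checkinstall available (package name and version are just placeholders):

    ./configure --prefix=/usr
    make
    # checkinstall wraps "make install", records the installed files,
    # and builds (and installs) a .deb from them
    sudo checkinstall --pkgname=myapp --pkgversion=1.0.0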
I've done this for a few years, and once it's all set up the integration with the underlying system is absolutely wonderful. In particular, your app is "just" another package - there's no magical special-casing you ever need to think about.

You can also make your app quite modular - you can build multiple binary packages from one source, which is perfect for different server roles that share a lot of code or configuration (see the sketch below).

The only drawbacks are the fair amount of knowledge you need to share within your team, as well as quite a bit of machinery needed to get everything up and running once you move beyond a single "dpkg -i"-able .deb (some sort of APT repo, signing keys, blah blah).
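On the multiple-binary-packages point: a single source package's debian/control can declare several Package stanzas, one per server role. A rough sketch with made-up package names (myapp-web, myapp-worker, myapp-common):

    Source: myapp
    Section: misc
    Priority: optional
    Maintainer: Example Maintainer <ops@example.com>
    Build-Depends: debhelper (>= 9)
    Standards-Version: 3.9.5

    Package: myapp-common
    Architecture: all
    Description: shared code and configuration for myapp

    Package: myapp-web
    Architecture: all
    Depends: myapp-common (= ${source:Version})
    Description: web frontend role for myapp

    Package: myapp-worker
    Architecture: all
    Depends: myapp-common (= ${source:Version})
    Description: background worker role for myapp

dpkg-buildpackage then produces one .deb per stanza, so a web host installs myapp-web, a worker host installs myapp-worker, and both pull in myapp-common.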
How does this work with:

1) Clusters of application servers, where I only want operations on shared resources (e.g. database updates, shared file changes) to fire from one of the servers?

2) Deploying the code to a different location on the server so that I can have multiple versions of the application available? Do I have to spin up new servers for each version?

3) Rollbacks? You mention rolling back by just specifying an earlier package, but I don't see how that would work with things like database changes either.
An ex-colleague of mine blogged about the valid reasons behind this some time ago: http://www.thoughtworks.com/insights/blog/deploy-package-not-just-tag-branch-or-binary
"Just avoid Debian, and everything else related to .deb packages" seems a fitting solution to me.<p>(even more so after rewriting Erlang packages to get something that a) works and b) is not stale)