How We Deploy Python Code

250 points by spang almost 10 years ago

29 comments

svieira almost 10 years ago
Back when I was doing Python deployments (~2009-2013) I was:

* Downloading any new dependencies to a cached folder on the server (this was before wheels had really taken off)
* Running pip install -r requirements.txt from that cached folder into a new virtual environment for that deployment (`/opt/company/app-name/YYYY-MM-DD-HH-MM-SS`)
* Switching a symlink (`/some/path/app-name`) to point at the latest virtual env.
* Running a graceful restart of Apache.

Fast, zero-downtime deployments, multiple times a day, and if anything failed, the build simply didn't go out and I'd try again after fixing the issue. Rollbacks were also very easy (just switch the symlink back and restart Apache again).

These days the things I'd definitely change would be:

* Use a local PyPI rather than a per-server cache
* Use wheels wherever possible to avoid re-compilation on the servers.

Things I would consider:

* Packaging (deb / fat-package / docker) to avoid having any extra work done per machine + easy promotions from one environment to the next.
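A minimal sketch of that symlink-switch flow; the paths mirror the comment's examples, and the dependency-cache directory is an assumption, not taken from the comment:

    # build a fresh, timestamped virtualenv for this release (paths are illustrative)
    RELEASE=/opt/company/app-name/$(date +%Y-%m-%d-%H-%M-%S)
    virtualenv "$RELEASE"
    "$RELEASE/bin/pip" install --no-index --find-links=/var/cache/app-deps -r requirements.txt

    # repoint the "current" symlink at the new env, then reload Apache gracefully
    ln -sfn "$RELEASE" /some/path/app-name
    apachectl graceful

    # rollback: point the symlink back at the previous release and reload again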
morgante almost 10 years ago
Their reasons for dismissing Docker are rather shallow, considering that it's pretty much the perfect solution to this problem.

Their first reason (not wanting to upgrade a kernel) is terrible considering that they'll eventually be upgrading it anyway.

Their second is slightly better, but it's really not that hard. There are plenty of hosted services for storing Docker images, not to mention that "there's a Dockerfile for that."

Their final reason (not wanting to learn and convert to a new infrastructure paradigm) is the most legitimate, but ultimately misguided. Moving to Docker doesn't have to be an all-or-nothing affair. You don't have to do random shuffling of containers and automated shipping of new images—there are certainly benefits of going wholesale Docker, but it's by no means required. At the simplest level, you can just treat the Docker container as an app and run it as you normally would, with all your normal systems. (i.e. replace "python example.py" with "docker run example")
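A sketch of that simplest level, using the hypothetical image name from the comment's example:

    # build the image once, then treat the container like the app itself
    docker build -t example .
    # instead of: python example.py
    docker run example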
Cieplak almost 10 years ago
Highly recommend FPM for creating packages (deb, rpm, osx .pkg, tar) from gems, python modules, and pears.

https://github.com/jordansissel/fpm
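For a rough feel of fpm's interface (the module name, package name, version, and path below are illustrative, not from the comment):

    # package a Python module straight from PyPI as a .deb
    fpm -s python -t deb requests

    # or package an already-built directory (e.g. a virtualenv) as a .deb
    fpm -s dir -t deb -n myapp -v 1.0.0 /opt/myapp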
doki_pen almost 10 years ago
We do something similar at Embedly, except instead of dh-virtualenv we have our own homegrown solution. I wish I knew about dh-virtualenv before we created it.

Basically, what it comes down to is a build script that builds a deb with the virtualenv of your project, versioned properly (build number, git tag), along with any other files that need to be installed (think init scripts and some about file describing the build). It also should do things like create users for daemons. We also use it to enforce consistent package structure.

We use devpi to host our Python libraries (as opposed to applications), reprepro to host our deb packages, standard Python tools to build the virtualenv and fpm to package it all up into a deb.

All in all, the bash build script is 177 LoC and is driven by a standard build script we include in every application's repository, defining variables and optionally overriding build steps (if you've used portage...).

The most important thing is that you have a standard way to create Python libraries and applications to reduce friction on starting new projects and getting them into production quickly.
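A heavily condensed sketch of what a build script along those lines might do; the devpi index URL, distribution codename, and paths are assumptions, and a real script would also fix up the virtualenv's paths for its final install location:

    # build the app's virtualenv, wrap it in a versioned .deb, publish it
    APP=myapp
    VERSION="1.0.${BUILD_NUMBER:-0}"
    virtualenv "/tmp/build/$APP"
    "/tmp/build/$APP/bin/pip" install -i https://devpi.internal/root/prod/+simple/ -r requirements.txt

    # fpm turns the built virtualenv into a .deb that installs under /opt
    fpm -s dir -t deb -n "$APP" -v "$VERSION" "/tmp/build/$APP=/opt/$APP"

    # publish to the internal apt repository managed with reprepro
    reprepro -b /srv/apt includedeb trusty "${APP}_${VERSION}_amd64.deb"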
remh almost 10 years ago
We fixed that issue at Datadog by using Chef Omnibus:

https://www.datadoghq.com/blog/new-datadog-agent-omnibus-ticket-dependency-hell/

It's more complicated than the solution proposed by Nylas, but ultimately it gives you full control of the whole environment and ensures that you won't hit ANY dependency issue when shipping your code to weird systems.
kbar13 almost 10 years ago
http://pythonwheels.com/ solves the problem of building C extensions on installation.
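The usual workflow is roughly: build wheels once on a build box, then install them everywhere without compiling (the directory name is illustrative):

    # compile any C extensions once, caching the results as wheels
    pip wheel -r requirements.txt -w wheelhouse/

    # install on servers straight from the wheelhouse, no compilers needed
    pip install --no-index --find-links=wheelhouse/ -r requirements.txt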
tschellenbach almost 10 years ago
Yes, someone should build the one way to ship your app. No reason for everybody to be inventing this stuff over and over again.

Deploys are harder if you have a large codebase to ship. rsync works really well in those cases. It requires a bit of extra infrastructure, but is super fast.
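An rsync-based push might look roughly like this; the host, user, paths, and service name are illustrative:

    # transfer only the files that changed, then reload the app
    rsync -az --delete --exclude='.git' ./ deploy@app-server:/srv/myapp/current/
    ssh deploy@app-server 'sudo systemctl reload myapp'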
sandGorgon almost 10 years ago
The fact that we had a weird combination of Python and libraries took us towards Docker. And we have never looked back.

For anyone trying out building Python deployment packages using deb, rpm, etc., I really recommend Docker.
sophacles almost 10 years ago
We use a devpi server, and just push the new package version, including wheels built for our server environment, for distribution.

On the app end we just build a new virtualenv, and launch. If something fails, we switch back to the old virtualenv. This is managed by a simple fabric script.
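The publishing half of that might look something like this with the devpi client; the index URL and credentials are assumptions:

    # push the new release, including wheels built for the server platform, to devpi
    devpi use https://devpi.internal/root/prod
    devpi login builder --password "$DEVPI_PASS"
    python setup.py sdist bdist_wheel
    devpi upload --formats sdist,bdist_wheel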
nZac almost 10 years ago
We just commit our dependencies into our project repository in wheel format and install into a virtualenv on prod from that directory, eliminating PyPI. Though I don't know many others that do this. Do you?

Bitbucket and GitHub are reliable enough for how often we deploy that we aren't all that worried about downtime from those services. We could also pull from a dev's machine should the situation be that dire.

We have looked into Docker but that tool has a lot more growing to do before "I" would feel comfortable putting it into production. I would rather ship a packaged VM than Docker at this point; there are too many gotchas that we don't have time to figure out.
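A sketch of that install step, assuming the wheels live in a ./wheels directory in the repo (the directory and venv paths are assumptions):

    # install dependencies only from wheels committed to the repo; never touch PyPI
    virtualenv /srv/myapp/venv
    /srv/myapp/venv/bin/pip install --no-index --find-links=./wheels -r requirements.txt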
viraptor almost 10 years ago
> curl "https://artifacts.nylas.net/sync-engine-3k48dls.deb" -o $temp ; dpkg -i $temp

It's really not hard to deploy a package repository. Either a "proper" one with a tool like `reprepro`, or a stripped one which is basically just .deb files in one directory. There's really no need for curl+dpkg. And a proper repository gives you dependency handling for free.
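The stripped-down variant can be as small as this; the repo host, path, and package name are illustrative:

    # on the host serving the .deb files: generate a package index for a flat repo
    cd /srv/apt && dpkg-scanpackages . /dev/null | gzip -9 > Packages.gz

    # on each target machine: register the repo once, then install/upgrade through apt
    echo 'deb [trusted=yes] http://apt.internal/ ./' > /etc/apt/sources.list.d/internal.list
    apt-get update && apt-get install sync-engine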
perlgeek almost 10 years ago
Note that the base path /usr/share/python (that dh-virtualenv ships with) is a bad choice; see https://github.com/spotify/dh-virtualenv/issues/82 for a discussion.

You can set a different base path in debian/rules with export DH_VIRTUALENV_INSTALL_ROOT=/your/path/here
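In context, that export sits in a debian/rules like the following; the /opt/venvs root is just an example path:

    # write a minimal debian/rules that builds with dh-virtualenv under a custom root
    cat > debian/rules <<'EOF'
    #!/usr/bin/make -f
    export DH_VIRTUALENV_INSTALL_ROOT=/opt/venvs
    %:
    	dh $@ --with python-virtualenv
    EOF
    chmod +x debian/rules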
serkanh almost 10 years ago
"Distributing Docker images within a private network also requires a separate service which we would need to configure, test, and maintain." What does this mean? Setting up a private Docker registry is trivial, and having it deploy on remote servers via Chef, Puppet, or hell, even Fabric should do the job.
erikb almost 10 years ago
No No No No! Or maybe?

Do people really do that? Git pull their own projects onto the production servers? I spent a lot of time to put all my code in versioned wheels when I deploy, even if I'm the only coder and the only user. Application and development are and should be two different worlds.
objectified almost 10 years ago
I recently created vdist (https://vdist.readthedocs.org/en/latest/ - https://github.com/objectified/vdist) for doing similar things - the exception being that it uses Docker to actually build the OS package on. vdist uses FPM under the hood, and (currently) lets you build both deb and rpm packages. It also packs up a complete virtualenv, and installs the build-time OS dependencies on the Docker machine it builds on, when needed. The runtime dependencies are made into dependencies of the resulting package.
rfeather almost 10 years ago
I've had decent results using a combination of Bamboo, Maven, conda, and pip. Granted, most of our ecosystem is Java. Tagging a Python package along as a Maven artifact probably isn't the most natural thing to do otherwise.
StavrosK almost 10 years ago
Unfortunately, this method seems like it would only work for libraries, or things that can easily be packaged as libraries. It wouldn't work that well for a web application, for example, especially since the typical Django application usually involves multiple services, different settings per machine, etc.
avilay almost 10 years ago
Here is the process I use for smallish services:

1. Create a Python package using setup.py
2. Upload the resulting .tar.gz file to a central location
3. Download to prod nodes and run pip3 install <packagename>.tar.gz

Rolling back is pretty simple - pip3 uninstall the current version and re-install the old version.

Any gotchas with this process?
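Spelled out as commands, with a hypothetical package name and artifacts host:

    # 1. build a source distribution (produces dist/mypkg-1.2.3.tar.gz)
    python setup.py sdist

    # 2. copy it to a central location
    scp dist/mypkg-1.2.3.tar.gz artifacts:/srv/artifacts/

    # 3. on each prod node: fetch and install; rollback = uninstall, then install the old tarball
    curl -O https://artifacts.internal/mypkg-1.2.3.tar.gz
    pip3 install mypkg-1.2.3.tar.gz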
velocitypsycho almost 10 years ago
For installing using .deb files, how are DB migrations handled? Our deployment system handles running Django migrations by deploying to a new folder/virtualenv, running the migrations, then switching over symlinks.

I vaguely remember .deb files having install scripts; is that what one would use?
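Yes, .deb packages have maintainer scripts (preinst, postinst, etc.); a rough postinst sketch for the migration step, with hypothetical paths and service name:

    #!/bin/sh
    # debian/postinst: runs after the package's files are unpacked
    set -e
    if [ "$1" = "configure" ]; then
        # run Django migrations with the packaged virtualenv's interpreter
        /opt/myapp/bin/python /opt/myapp/app/manage.py migrate --noinput
        # restart the service so it serves the new code
        systemctl restart myapp || true
    fi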
lifeisstillgood almost 10 years ago
Weirdly, I am re-starting an old project doing this venv / dpkg approach (http://pyholodeck.mikadosoftware.com). The fact that it's still a painful problem means I am not wasting my time :-)
webo almost 10 years ago
> Building with dh-virtualenv simply creates a debian package that includes a virtualenv, along with any dependencies listed in the requirements.txt file.

So how is this solving the first issue? If PyPI or the Git server is down, this is exactly like the git & pip option.
compostor42 almost 10 years ago
Great article. I had never heard of dh-virtualenv but will be looking into it.

How has your experience with Ansible been so far? I have dabbled with it but haven't taken the plunge yet. Curious how it has been working out for you all.
BuckRogers almost 10 years ago
Seems this method wouldn't work as well if you have external clients you deploy for. I'd use Docker instead of doing this, just to be in a better position for an internal or external client deployment.
ah- almost 10 years ago
conda works pretty well.
jacques_chester almost 10 years ago
Here's how I deploy Python code:

    cf push some-python-app

So far it's worked pretty well.

Works for Ruby, Java, Node, PHP and Go as well.
daryltucker almost 10 years ago
I see your issue of complexity. Glad I haven't ever reached the point where some good git hooks no longer work.
theseatoms almost 10 years ago
Does anyone have experience with PEX?
stefantalpalaru almost 10 years ago
> The state of the art seems to be "run git pull and pray"

No, the state of the art where I'm handling deployment is "run 'git push' to a test repo where a post-update hook runs a series of tests and if those tests pass it pushes to the production repo where a similar hook does any required additional operation".
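A bare-bones sketch of the hook on the test repo; the remote name, branch, and test command are assumptions:

    #!/bin/sh
    # hooks/post-update in the bare test repo: run the tests, push to prod only on success
    set -e
    WORKTREE=$(mktemp -d)
    git --work-tree="$WORKTREE" checkout -f master
    (cd "$WORKTREE" && ./run_tests.sh)
    git push production master   # the production repo's own hook performs the deploy steps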
hobarrera almost 10 years ago
> The state of the art seems to be "run git pull and pray"

Looks like these guys never heard of things like CI.