That's an odd question, really. It all depends on the context and the type of changes, obviously.<p>If it's a critical fix, then it goes out asap. If it's a minor fix, but relating to a new feature roll-out from 2 hours ago, it too goes out as soon as it's ready. Otherwise, changes get deployed once the feature is complete.
With my employer we deploy several times a day. We have 2 SVN repos: one dev, one live. To make a change live you commit it to the live repo and it is auto-synced. I am not sure how we came to have this. When I first joined there was a code reviewer who manually pushed changes live. He quit and the server admin made it automatic.<p>I wouldn't recommend this setup. I am incredibly competent. I make maybe 1 minor mistake a month - if that - so with me this setup is workable. The other developers I work with... not so much. The main issue with this setup is newbie developers deploying what they think is a good fix which breaks in lots of edge cases. Bad patches like that can take weeks to become apparent, then longer to chase out of the system, as there is no way to simply roll back.<p>For my side project I deploy every Wednesday. Every now and again there is a bug fix which needs to be slipped in on an odd day. I currently deploy manually because I haven't set anything else up.<p>Throw up maintenance page -> update database -> upload new files -> quick test -> remove maintenance page.<p>It is a bit time intensive but I haven't had any problems with it yet. I like to group changes so I can announce them on the same day.
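Those steps can be sketched as a short script. Everything here is a stand-in (throwaway local paths instead of a web root, the database step commented out) so the sketch runs anywhere; a real deploy would use the site's actual paths, migration command, and smoke test:

```shell
# Sketch of the manual deploy steps above, using made-up throwaway paths.
set -e                                         # stop at the first failed step
site=$(mktemp -d)                              # stand-in for the web root
mkdir -p "$site/app" "$site/build"
echo 'new code' > "$site/build/index.php"      # pretend this is the new release

touch "$site/maintenance.flag"                 # throw up maintenance page
# mysql myapp < migrations/next.sql            # update database (real step)
cp -R "$site/build/." "$site/app/"             # upload new files
grep -q 'new code' "$site/app/index.php"       # quick test
rm "$site/maintenance.flag"                    # remove maintenance page
```

With `set -e`, a failed quick test leaves the maintenance flag in place instead of exposing a broken site, which is most of the value of doing the steps in that order.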
Sometimes I like to edit things directly on the live server; then again, I'm not running a public-facing website, so I'm allowed :)<p>I'd wager many people here missed "the good old days" when it wasn't uncommon to debug a live website over FTP.
This is an interesting enough question if you're just curious, but I wouldn't base any decisions on it. It's a good example of asking the wrong question. Different products require different approaches to QA and deployment.<p>For example, our software is sold to enterprise customers. These customers use the software to run real-time reverse auction procurements where the total value of the purchase can be anywhere from $250,000 to $20,000,000.<p>The results of the reverse auction events are awarded in the form of a contract between buyer and seller. If a software bug causes an incorrect calculation, it's a big problem (a 0.1% error on a $10M purchase is a $10,000 error). And by big problem, I mean that weeks (maybe months) worth of effort from our team, the customer's team, and several vendors go down the drain.<p>Put simply, a bug in our bid core could cost us $40,000-$50,000, assuming the lost business assessment is limited to the single failed procurement event. Looking at the total value of a lost customer, you're easily talking $250k.<p>Because of this, we have a very long QA cycle. Outside of automated tests, our software is touched by humans (a lot) before it goes to production, and production releases are less frequent (once every couple of months).<p>The frequency with which you deploy, and the amount of effort you put into QA, is determined by the cost of failure. We're in the rather unenviable position of being a small fish in the enterprise market, so our cost of failure is huge (we have a small number of huge clients). Thus, we invest significant effort in QA and slow-roll to production. Your situation may be entirely different.
My website runs on OpenShift. I deploy to production as soon as a new feature or hotfix/bugfix is ready. To be convinced that these changes work well and don't break any other working behavior, I first push the new version to another server (a "staging server"), which is basically a clone of the production server.<p>On the staging server, I observe whether everything works by going through some use case scenarios <i>manually</i>. Once I'm convinced that everything works well, I deploy the new version to the production server for my beta testers to try out and give feedback.<p>So cycles of "new versions on production" can currently take anything between a few minutes (really quick bugfix, "OMG the website is not working at all!") and a few days (new feature).<p>One could complain that there should be a test suite running unit tests and integration tests, taking care of making sure everything works. But I have two observations that make it impossible for me to rely on a big automatic test suite:<p>* As my website is still technically in an explorative mode, constructing test cases is wasteful: my website's architecture changes quickly to adapt to new insights. I'd rather spend my resources on developing features than on a test suite that doesn't work with the new architecture.
* Subtle differences on my production server (limited server resources, different HTTP request behavior) make it impossible to rely on a simulated environment for my integration tests. Therefore, I have a staging server with the exact same configuration as my production server. The only differences are the IP address and domain name. These differences shouldn't break anything in my website, but you never know until it breaks the first time :)
Really depends on how much time I've got for my personal projects, but here's my current breakdown:<p>Work - continuously, can be within minutes:<p>* Team commits to SVN trunk, tests run + packages built. Manual tests run (can take some time and more fixing). Merge to production branch, tests run + packages built, then pushed out to servers.<p>Personal - depends on time, but a few times a month on average:<p>* I work in Python and do my own DevOps stuff. (Not a Heroku/Appfog/etc fan..) I use SaltStack (saltstack.org) to keep a 'template' that I can apply to a blank Linux server (AWS|Digital Ocean|local VirtualBox for dev etc) and it will set it up for me, same layout, every time. Kind of like Puppet/Chef, but in Python. (It's awesome - definitely check it out)<p>* Commit to BitBucket (for private repos) -> Jenkins pulls code down -> runs tests, displays coverage etc -> builds RPMs & Debs -> Salt pushes out to any connected minion (yup - it also has arbitrary command execution).<p>I'm going to write a blog post on the entire personal setup if anyone is interested.<p>Also checking out Linux containers (specifically docker.io) to see if I can speed this up.
Nowadays it is every couple of hours.<p>But I used to work for departments of the UK and US governments, and it wasn't unusual to have such locked-down and managed environments that it would take 6 months to release some code. It was also expensive (the sponsor department would have to pay for validation and testing by a third party).<p>The consequence of this is that we really only deployed every 1-2 years.
Well ... a few times a day. Very big and complex web portal here.<p>Live editing and debugging on the servers is strongly discouraged. But I still have to do echo/print_r from time to time.<p>Lesson learned through the years of practice - there will always be a big enough lump of fecal matter that when it hits the fan it will make you abandon all of the best practices for a while.
Support Lead (and the "Application Expert") for an Oil-Major internal web app here:<p>"support fixes" - ASA(F)P, after 2 rounds of testing and several sign-offs + Management approvals.<p>"Change Requests" - Deployed after an impact+risk-assessed notice period (between 1 - 14 days), and after the above testing/sign-offs/approvals<p>"New Release" - Should be Quarterly, after several rounds of testing, sign-offs, agreements from all affected business units... but there's been a big push on the app recently, so there's been 3 releases this year so far.<p>And then we have to go through all the testing/sign-off/deployment stages again with any Joint-Venture companies that have their own installations due to local data laws.
Depends on whether or not I'm actively working on a new part of the site. It varies from 10+ times an hour to 1-2 times an hour.<p>However, with all these people talking about deployment, I just wanna hijack (sorry!) and ask if anybody can help with my current deployment setup:<p>- Branch into a new feature (refactor-javascript for example)
- Commit constantly
- Rebase with master then merge into master
- Push to remote repo
- SSH to server, and then pull from the same repo
- (.git isn't exposed via HTTP)<p>What's the better way of doing this? If you could help that'd be great :-) Drop me an email at andy@fine.io
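That flow can be run end-to-end as a sketch in a throwaway repo; the push and server-pull steps are left as comments, since the remote name and server path below are made up:

```shell
# Sketch of the branch -> rebase -> merge flow above, in a throwaway repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master .
git config user.email dev@example.com
git config user.name dev
echo 'v1' > app.js
git add app.js && git commit -qm 'initial'

git checkout -qb refactor-javascript        # branch into a new feature
echo 'v2' > app.js
git commit -qam 'refactor'                  # commit constantly

git rebase -q master                        # rebase with master...
git checkout -q master
git merge -q --ff-only refactor-javascript  # ...then merge into master

# On a real setup the last two manual steps would be (names assumed):
#   git push origin master
#   ssh deploy@example.com 'cd /srv/app && git pull --ff-only'
```

A common refinement of the SSH-and-pull step is a bare repo on the server with a post-receive hook that checks the pushed code out into the web root, so deploying becomes a single `git push`.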
I hand it over to the lead dev, who is the only one of the contractors with sufficient security clearance to access the production systems. His process is less automated than I'd like, and patching production bugs can be a pain.<p>I only hand over work to him on the proviso that my work is ready "pending end user acceptance testing". Of course there's still the traditional 2nd-of-January deploy-untested-code maneuver they seem to love. Last time they pulled that it cost them at least $40k, but they were warned and we weren't fired.
We've got a CI pipeline that watches the git repo, runs the tests, does an automatic deploy to a staging server, runs smoke tests against that, and merges that code into the "prod" branch. Then there's a simple web interface with a big button that checks out prod and deploys it to production. So it's <i>almost</i> continuous deployment; just one point of manual review involved between the 'git push' and code running on production.
Code freeze on Fridays, deployment regularly on Tuesday nights.<p>If you're not using Jenkins you don't know what you're missing, but I can tell you it's warm and sunny :)
That is a tricky question.<p>Here, we deploy a major update to the system every 6 weeks.
All database changes go into the major update.<p>We release patches to production frequently.
In the first week after a major update we might do 5-6 patches per day.<p>This is not easy for us, as we do not have everything on the web.<p>We have a Java application that is patched and downloaded by our users, so we want to avoid forcing them to download new stuff too often.<p>If it were a web application, the impact of a patch would be minimal.
We have a productized SaaS, where the product UIs are sometimes updated several times a day each (we have a couple dozen products now and growing). The backend driving the products is updated usually on a weekend after a 2-week sprint. It works out pretty well for us, and we rarely feel rushed to get something out. There is the odd hotfix for a critical bug - that happens maybe once every two months or so.
Investment banking web software; at the moment, every two weeks - code freeze + packaging is done on Tuesdays, and the packaged release goes out about a week later (after being deployed to acceptance and getting stamps of approval from management, who click through the app once or twice). I think we could do much faster though - weekly at least, and continuous if we improve our processes and discipline.
At <a href="http://multiplx.com" rel="nofollow">http://multiplx.com</a> (an RSS reader) we usually do a production release once every day or two. This helps us keep bugs minimal and users happy.<p>It consumes a lot of bandwidth, but automation is the key here. That cost can be minimized if you have a CI tool that can build the product and deploy it on the machine itself.
The majority of my SVN/deploy magic happens with hooks.
Pre-commit running PHP Lint - Keeps out the stupid errors that we all make.
Post-commit svnlook (diff) emailed to the full dev group - dead simple code review
Post-commit svn up of dev site
Post-commit varnish clear of dev site
Post-commit commit message in hipchat
Deploy message in hipchat
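The pre-commit lint hook above could start from a sketch like this. The file-filtering part runs as-is; the `svnlook`/`php -l` calls are left as comments (and the sample transaction output is made up) so it works without SVN or PHP installed:

```shell
# Sketch of the pre-commit lint idea: filter the added/updated .php files
# out of `svnlook changed` output, then lint each one.
changed_php_files() {
    awk '/^[AU]/ {print $2}' | grep '\.php$'
}

# Made-up example of `svnlook changed -t "$TXN" "$REPOS"` output:
sample='U   app/index.php
A   docs/readme.md
U   lib/db.php'

printf '%s\n' "$sample" | changed_php_files
# In the real hook, each surviving path would be linted straight from the
# transaction, rejecting the commit on the first failure:
#   svnlook cat -t "$TXN" "$REPOS" "$path" | php -l || exit 1
```

Linting via `svnlook cat` is what keeps this usable as a pre-commit hook: the files are checked inside the uncommitted transaction, so nothing broken ever lands in the repo.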
Architect for a major US bank reporting in here:<p>Major changes: every 5 weeks (build for 4 weeks; test for 1)
Minor changes: every week (urgent and emergency changes)
Bug fixes: as needed. Usually 0x/week, but can be multiple times a day if necessary.<p>Mobile app releases: rarely. The belief is that frequently updated apps are a bad thing.
Enterprise stuff in the insurance domain:<p>A new version goes to production around once a month, with datafixes whenever they are urgent. The BI/DW & reporting side has a quite a bit faster cycle, around once a week, as the requirements of the extremely complex metrics and dimensional data hierarchies are still evolving.
We are deploying once or twice per month and we are using the "git flow" branching model. More info here: <a href="http://nvie.com/posts/a-successful-git-branching-model/" rel="nofollow">http://nvie.com/posts/a-successful-git-branching-model/</a>
At work... lots of red tape. Lucky to see one deployment a month, but UAT, integration, and dev are continuously deployed once builds and tests pass.<p>Personally, anywhere from a few times an hour to once a day or so. I try to keep all development tasks small enough to deploy on the hour if possible.
We're in e-learning and we normally deploy to production once a month, usually with one or two following bug fix deployments. Most of the process is automated with our CI, but we still have some manual action steps required.
One week minimum to get through change control and schedule a release, plus a week of development and testing means only 2 non-emergency releases per month. But given the effort involved, usually once every 4 to 6 weeks.
While under development, a few times a week (every time a new feature is ready to be rolled out), and I do it manually (publish in Visual Studio -> Remote Desktop to the server -> replace files).
Unfortunately not often enough. We're stuck with a human QA cycle and some legacy processes enforced by a PHB from hell which mean it can take literally months, even from our CI server.