>If you love automation why is your development environment manual?<p>Because I usually do it once in 5 years :-)<p>My setup: Linux, Kubuntu LTS. The entire system is backed up with rsync, so I have daily snapshots for the last 6 months. There are also weekly backups to the cloud and to an external HDD.<p>I have a primary 250GB SSD which usually sits in my workstation. When I travel I physically move the SSD to my laptop (takes 10 minutes).<p>If something goes wrong and my primary SSD dies: I just boot from a live CD, copy 200GB, set up the bootloader and reboot. No need to 'reinstall' the system from scratch.<p>Every six months I take one afternoon and update all my tools (IDE, OS, apps...). I don't use automated updates except for critical stuff such as the browser.
I think the general idea of running a VM with all your dev setup in it is great, but I've often found the small niggles (such as the speed of switching quickly between browser and VM terminal, the fact that you need a fairly good computer to run even quite a modest server with a GUI, and that if you don't use a GUI you're limited to 256 colours) all a bit more trouble than the benefits.<p>My solution is to set up my perfect dev environment in Linux (I use Fedora) on a partitioned USB pen drive. Then I either boot from it (if I'm on a crappy computer) or boot a VM directly from the USB if the computer is more competent. It's also quite nice if there's an emergency and you're away from your computer -- you can just rush into an internet cafe, plead/bribe them to let you boot from your USB, then have everything, SSH keys included, ready for urgent repairs!
I don't want dev to be exactly like stage and prod.<p>Yep, you read that right. I deploy to Debian servers but develop on Linux Mint or OS X. The key is that the environments are very <i>similar</i> but not <i>identical</i>. The payoff comes when you break yourself of the habit of relying on accidents of deployment instead of building a general-case solution that works in multiple environments.<p>You scan /proc to look at running processes? That's great, until you're on a machine without /proc. Better to spend an hour learning how your dev platform (Python, Java, whatever) abstracts that away for you. Trying to send an email by shelling out to /usr/bin/sendmail? Oops! That's broken in lots of places; better learn how your dev platform handles it!<p>The big win comes when you upgrade your stage and prod environments to a newer distro - or a different one altogether - and your stuff keeps working because you've relentlessly whittled away all the OS-specific dependencies.
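A tiny illustration of the portability point (my example, not the parent's): instead of reading /proc directly, which only exists on Linux, the POSIX options to ps give you the same information on Linux, the BSDs, OS X, Solaris, and so on.

```shell
# Non-portable: assumes a Linux-style /proc filesystem.
#   ls /proc | grep '^[0-9]'
# Portable: POSIX-specified ps options work across Unix-like systems.
ps -eo pid,comm
```

The same logic applies one level up: your language runtime usually wraps this kind of thing for you, so you never hard-code the accident of where one OS happens to keep its process table.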
You don't need to give up your GUI editor to do this. Vagrant automatically sets up a shared folder in the virtual environment under /vagrant that maps to the folder where your Vagrantfile is located on the host machine.<p>I do all my development now using Vagrant running Ubuntu VMs, but I still do all my editing with the same Windows editor I've used for years, running on the host machine.
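For reference, a minimal Vagrantfile sketch of that setup (the box name is an example; the /vagrant share happens by default, the explicit line just shows the mapping):

```ruby
# Minimal Vagrantfile sketch. The directory containing this file is
# shared into the guest as /vagrant automatically; the synced_folder
# line below makes the default mapping explicit.
Vagrant.configure("2") do |config|
  config.vm.box = "precise64"          # example Ubuntu box name
  config.vm.synced_folder ".", "/vagrant"
end
```

So you edit in your host-side editor and the guest sees every save immediately under /vagrant.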
The author missed a golden opportunity to discuss battery life. Your laptop will run longer if you SSH/VNC/whatever into the virtualized development machine instead of hosting the virtualized image on the laptop itself.<p>Also, I can SSH/VNC/Xwindow/whatever into a server machine with performance stats far beyond any currently imaginable laptop. It's like owning a laptop from 2023 today in 2013.
I fully agree with this. 'Before', the dev team used a mix of, well, I'd rather not say. It was just not very pretty, though eventually everyone got their work done. A while back I built a vagrant+puppet configuration for everyone, and the team has since transitioned to using that.<p>It's nice to have at least semi-parity with the production environment. This is possible because we can reuse most (not all, since we have no central Puppet server) of the same Puppet modules we use in production, in the VM.<p>Essentially what you gain is not worrying about whether a developer will break their machine/VM and go 'Whoops, can you fixor it?'. Additionally, you no longer have to worry about things like php/node/ruby version mismatches between dev and production. We've gone from occasional issues where code didn't run the same as it did in development, to just throwing it up on the staging environment through deployment tools, and it just runs!
We do this using Vagrant and Puppet and actually just blogged about it yesterday: <a href="http://blog.serverdensity.com/many-projects-with-vagrant-and-puppet/" rel="nofollow">http://blog.serverdensity.com/many-projects-with-vagrant-and...</a>
I've heard of Salt Stack[1], though I've never used it. From what I can tell, it can be used to do the same thing.<p>[1] <a href="http://docs.saltstack.com/" rel="nofollow">http://docs.saltstack.com/</a>
There is something to be said for running your software on multiple environments. There is also something to be said for stubbing out pieces of your stack for testing and development purposes. There is often a lot to be learned. That said, I've used the VM approach on projects with high turnover. It is usually not the same as production deployment, so it does take effort, but depending on the team it might be worth it.
I tried this on a couple of recent projects and it failed miserably on both of them, for completely different reasons. On one, the architecture of the project dictated really, really fast network connections, and the overhead from the VM choked the project. That's probably an unfair example, because this was actually a toy project that was never meant to run in a serious environment (hence the requirement of a huge-bandwidth, virtually zero-latency network connection). The other failure, however, is more serious. Our company (and at least one project) dictates a Windows-based development environment, and the abysmal performance of NTFS combined with a massive project code base meant it was taking 2+ minutes just to run a status check on the version-controlled code.<p>I think this might work better given a Linux host environment, or a smaller code base, but at least in these two situations it was a failure.
I prefer having a VM built with Vagrant (and in our case puppet) which runs the same configs as our servers while having my development tools (editors etc) installed on the local machine. That way all my apps run at native speed, and the server replicates (almost exactly) the production environment.
Issue: What about VPN? If you're connecting to your corporate network with OpenVPN, and you are restricted to one connection (a common-sense security policy), then you need to get your infrastructure team to issue a separate key for each VM you're running that needs to tunnel onto the corporate network.<p>I realize that this isn't an issue for 99.9% of startups, but in the non-startup corporate world, where there are security implications to keeping everything in a publicly accessible code repo (healthcare, government, education in some states), you've got another complication.<p>The only workaround I've come up with so far is sharing a folder from my desktop through to the VM and running commits from my desktop.
Vagrant + Chef is great!<p>Vagrant + Chef + Berkshelf is better!<p>You can define the cookbook dependencies in the Berksfile and version it along with the Vagrantfile. Berkshelf is like bundler for your cookbooks: it assembles cookbooks from a variety of sources and loads them into your Vagrant-managed VM. Check it out at <a href="http://berkshelf.com" rel="nofollow">http://berkshelf.com</a><p>To help get things going on that front, I've been working on Kamino (<a href="https://github.com/hosh/kamino" rel="nofollow">https://github.com/hosh/kamino</a>). It's intended to generate a Vagrantfile+Berksfile pair for use in your project. Right now, it only supports a basic Rails stack.<p>Pull requests welcomed :-)
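To make the "bundler for cookbooks" comparison concrete, a Berksfile sketch (the cookbook names, versions, and git URL are illustrative examples, not from any particular project):

```ruby
# Berksfile: declares cookbook dependencies, checked into version
# control next to the Vagrantfile -- analogous to a Gemfile.
site :opscode   # pull unpinned cookbooks from the community site

cookbook "apache2", "~> 1.6.0"                # pinned, like a gem version
cookbook "mysql",
  git: "https://github.com/opscode-cookbooks/mysql"  # example git source
```

Berkshelf resolves these, vendors them, and the Vagrant plugin mounts them into the VM for chef-solo to run.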
The first reason is that people who build tools mostly don't think the tools will, or should, be used as components of other tools for more 'automation'. The result is that it's tough to build on such tools.<p>The second, and more specific, reason is the source of one of my most intense love/hate dilemmas: Xerox PARC, which pushed the first 'graphical user interface' (GUI) instead of what came before, usually typing text into a command line. Command lines are mostly easy to automate. GUIs are mostly a total PAIN to automate.<p>I'm wanting to automate, willing to automate, waiting to automate.
Cloud development like www.nitrous.io (previously action.io) is the future. I've played around with it and am quite impressed. Basically no setup for standard platforms like Rails and Node - choose your dev environment, install your dependencies if they're not already there, and get coding. It's a Linux environment, so your dev environment is the same as, or close to, production. And the IDE is quite nice. Caveat: yes, you have to have a connection. But for those few precious unconnected moments, I don't want to be coding anyway.
Any comments on what people use to manage the chef cookbooks (and why)?<p>Librarian or Berkshelf seem to be the two main contenders to make updating cookbooks similar to updating gems with bundler.
I've often thought that it would be nice to have a script that rebuilt the software installation on a developer's workstation from scratch, completely automatically, using only configuration files and scripts checked into version control somewhere. Such a process could run every night while the developer sleeps. This would ensure that it's dead easy to bring a new developer fully up to speed on a team.
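A sketch of what that nightly script might look like (everything here is hypothetical: the repo URL, the target directory, and the convention that the config repo ships a bootstrap.sh are all assumptions):

```shell
# rebuild_env REPO TARGET: wipe TARGET, re-clone the version-controlled
# config repo, and run its bootstrap script -- suitable for a nightly
# cron job so a fresh developer setup is always one run away.
rebuild_env() {
  repo="$1"; target="$2"
  rm -rf "$target"
  git clone -q --depth 1 "$repo" "$target" || return 1
  if [ -x "$target/bootstrap.sh" ]; then
    (cd "$target" && ./bootstrap.sh)   # repo-provided setup steps
  fi
}
```

The nice side effect: the script is a living, executable document of the dev setup, so onboarding is "run this once" rather than a wiki page that drifts out of date.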
if you use an ide you can do something similar, but use the vm client directly. i do this all the time, and virtualbox's seamless mode means that you can mix windows from client and host on the same screen (except ubuntu unity?).<p>it's particularly good when different clients have different OSs. and you can even do hardware development - i have tested usb drivers in a vm client that talk to hardware connected to the host.<p>the main drawbacks are initial startup time (particularly pulling latest updates after install) and archiving the vms (they're large, so fill up a laptop ssd). i export to ovas on my backup system and then wipe the vm. another worry is that virtualbox has been flakey recently (<a href="http://www.acooke.org/cute/VirtualBox1.html" rel="nofollow">http://www.acooke.org/cute/VirtualBox1.html</a> <a href="http://www.acooke.org/cute/UbuntuonVi0.html" rel="nofollow">http://www.acooke.org/cute/UbuntuonVi0.html</a>) - but ovas can be imported into other hosts...
How about the best of both worlds? A common VM image with CLI tools and dev server inside the guest OS, with VirtualBox folder sharing to permit a GUI IDE and the git repo to live on the host OS?
How is this different from Boxen (<a href="http://boxen.github.com/" rel="nofollow">http://boxen.github.com/</a>)? You have exactly copied the central idea around it and made a blog post.
I have used Vagrant in the past, and I had to stop using it. Shared folders were just too slow. 2 seconds to refresh a page was too much when you're trying to be productive.
has anybody tried Vagabond (<a href="https://github.com/chrisroberts/vagabond" rel="nofollow">https://github.com/chrisroberts/vagabond</a>) to do the same thing without a VM ?