SSH host keys are problematic on cloud servers, not just because of this problem, but also because if the cloud provider does the right thing and generates the SSH host key on the first boot, the key is generated when the system has very little entropy available. The primary sources of entropy on Linux are keyboard/mouse input, disk latency, and network interrupts. There's obviously no keyboard or mouse on a server, and in an SSD environment like DigitalOcean, disk latency is quite uniform and thus useless as a source of entropy.<p>Linux distros mitigate the cold boot entropy problem by saving some state from the RNG on shutdown (on Debian, it's saved in /var/lib/urandom/random-seed) and using it to seed the RNG on the next boot. On physical servers this obviously isn't available on the first boot, and on cloud servers, the provider often bakes the same random-seed file into all their images, so everyone gets the same seed on first boot (fortunately this doesn't harm security any more than having no random-seed file at all, but it doesn't help either). What cloud providers should really do is generate (from a good source of randomness) a distinct random-seed file for every server that's created, but I haven't seen any provider do this.
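What that could look like, as a hedged sketch (the 512-byte size and the Debian seed path are my assumptions; how the file gets injected into the image depends on the provider's tooling):

```shell
# Hypothetical provider-side step: give every new server its own seed,
# drawn from the host's entropy pool before the guest's first boot.
dd if=/dev/urandom of=./random-seed bs=512 count=1 2>/dev/null
# The provider would then place this file inside the guest image at
# /var/lib/urandom/random-seed (the Debian location) before first boot.
wc -c < ./random-seed
# -> 512
```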
This is not the last of the problems we'll have with "the cloud", but I guess it's part of what makes it so exciting. :-)<p>Many people, especially beginners, make the mistake of leaving the same SSH keys in a template, or in a snapshot of a virtual machine that they later use as a template.<p>There are a few files that you really, really need to wipe from an image before it becomes a template:<p>- /etc/ssh/*key* (for the reasons explained in the parent article)<p>- /var/lib/random-seed (the seed used to initialise the random number generator; this is its location on CentOS)<p>- /etc/udev/rules.d/70-persistent-net.rules (so that the VM's new NIC - with a new MAC - can use the same "eth0" name)<p>People who want to do this more exhaustively can have a look at libguestfs and its program virt-sysprep, which does all of the above and more!<p><a href="http://libguestfs.org/virt-sysprep.1.html" rel="nofollow">http://libguestfs.org/virt-sysprep.1.html</a>
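For the hand-rolled route, here's a minimal sketch of that cleanup, parameterised on a root directory so it can be rehearsed harmlessly first (set R=/ inside the VM right before snapshotting; the rehearsal scaffolding is mine, and the seed path is the CentOS one - Debian uses /var/lib/urandom/random-seed):

```shell
R=${R:-./fakeroot}   # rehearsal root; use R=/ for the real cleanup
# Scaffolding so the sketch runs anywhere: fake the files in question.
mkdir -p "$R/etc/ssh" "$R/var/lib" "$R/etc/udev/rules.d"
touch "$R/etc/ssh/ssh_host_rsa_key" "$R/etc/ssh/ssh_host_rsa_key.pub" \
      "$R/var/lib/random-seed" "$R/etc/udev/rules.d/70-persistent-net.rules"

rm -f "$R"/etc/ssh/*key*                             # host keys (see parent article)
rm -f "$R/var/lib/random-seed"                       # RNG seed (CentOS path)
rm -f "$R/etc/udev/rules.d/70-persistent-net.rules"  # let the new NIC claim eth0
```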
They should be using cloud-init or virt-sysprep[1] on new instances. In particular, it is <i>vital</i> that you give your new instances a unique random seed (which virt-sysprep can do). Also, provide a virtio-rng device to guests that support it.<p>[1] <a href="http://libguestfs.org/virt-sysprep.1.html" rel="nofollow">http://libguestfs.org/virt-sysprep.1.html</a>
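On the virtio-rng point, a libvirt guest gets one with a small snippet in its domain XML, something like the following (element names per libvirt's domain format; whether /dev/random is the right backend for your host is a judgment call):

```xml
<!-- virtio RNG device: feeds host randomness to the guest,
     where it typically shows up as /dev/hwrng -->
<rng model='virtio'>
  <backend model='random'>/dev/random</backend>
</rng>
```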
To avoid this kind of security problem, only use providers that run official Ubuntu Cloud images. If Canonical haven't certified the Ubuntu images you're using, then your provider could have done anything to them, and you'll need some other way to determine the provider's competence.<p>Cowboy images like this are exactly the reason trademarks exist. Commercial providers who don't get certification are in fact violating Ubuntu's trademark by telling you that you are getting Ubuntu, when in fact you are getting a modified image which is possibly compromised (as in this case).
Generating fresh keys aside, one thing I do with our AWS setup is whitelist the IPs that can connect to our SSH bastion host. This completely eliminates scripted port scans of the SSH server and makes the auth logs much more manageable.<p>If our IP address changes (e.g. the ISP assigns a new one for the cable modem), then we just update the whitelist (and remove the old address). It's <i>very</i> infrequent. I could probably count the number of times I've done it on one hand.<p>It might not be the most scalable setup, but at our small size, with everybody working from home, it works great.<p>The only slight hitch is updating it when traveling, but even that isn't much of a problem. It takes a minute or two from the AWS console and it's good to go.<p>I recently took a look at DigitalOcean ($5 servers give me ideas...) but didn't see a firewall option similar to the security group setup in AWS. If it does exist, then I highly recommend using it.
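For the curious, the security-group half of this is one rule per address; a hedged sketch with the aws CLI (the group name and the example IPs are placeholders, not real values, so this isn't runnable as-is):

```shell
# Allow SSH to the bastion only from the current home address...
aws ec2 authorize-security-group-ingress \
  --group-name bastion-ssh --protocol tcp --port 22 --cidr 203.0.113.7/32
# ...and when the ISP reassigns the address, revoke the stale rule:
aws ec2 revoke-security-group-ingress \
  --group-name bastion-ssh --protocol tcp --port 22 --cidr 198.51.100.42/32
```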
One good thing to note is that any VM image using cloud-init (a package for Debian/RHEL systems) will automagically generate a new set of host keys on each instance's first boot. Basically, if you build a system image for EC2 or any system that uses the EC2 data format for host instantiation (like OpenStack), then you should install cloud-init. It would prevent something like this.
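From memory, the relevant cloud-init settings look roughly like this in the user-data / cloud.cfg (treat the exact key names as an assumption and verify against the cloud-init docs):

```yaml
#cloud-config
# Remove any host keys baked into the image and generate fresh ones
# on the instance's first boot.
ssh_deletekeys: true
ssh_genkeytypes: [rsa, ecdsa]
```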
Now that it's said, I did notice something strange once.<p>I had loaded up an Ubuntu Desktop droplet with the purpose of checking something out through the browser on the node.<p>The startup page was <a href="https://www.americanexpress.com/" rel="nofollow">https://www.americanexpress.com/</a><p>Since when is that default?<p>Didn't think much of it at the time, but now... whoa.
I suspect this kind of thing happens with other companies, but can only speculate.<p>Somewhat related: chicagovps gave me a 'fresh' gentoo vps, and the default provided root password was identical to the original one from several months ago. I assume it is one gentoo image with the same password (for all customers)?
Just verified this is also the case with at least some AWS-hosted servers. Coupled with the fact that many people simply ignore the MITM warning that SSH throws, this is scary stuff.
Great find. I came from a heavy security background and moved to SV, where it seems like security is an afterthought. I spent many long days and nights STIGing RHEL boxes, so I can appreciate this find. Also, thanks for letting me know about DigitalOcean; their VPS looks promising and I think I might start using it.
> <i>After you have run those commands, simply restart the SSH daemon so it starts up with the new keys in place</i><p>I believe if your version of OpenSSH is up to date, sshd will read the host key each time a session is opened and does not need to be restarted.
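For anyone following along, the regeneration itself is plain ssh-keygen; here's a sketch writing into a scratch directory so it's safe to try anywhere (the article's commands target the real files in /etc/ssh/):

```shell
# Generate a fresh RSA host key pair with an empty passphrase (-N '').
mkdir -p demo-keys && rm -f demo-keys/ssh_host_rsa_key*
ssh-keygen -q -t rsa -f demo-keys/ssh_host_rsa_key -N ''
ls demo-keys   # ssh_host_rsa_key and ssh_host_rsa_key.pub
```

New connections will then present the new key; clients that cached the old one will see ssh's host-key-changed warning, which in this case is the point.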
So you are the reason I started getting these error messages, I noticed the change on June 2, great work.<p>If you are still reviewing salt, I just wrote a post about salt-cloud and DigitalOcean that you should check out -<p>Create your own fleet of servers with Digital Ocean and salt-cloud:<p><a href="http://russell.ballestrini.net/create-your-own-fleet-of-servers-with-digital-ocean-and-salt-cloud/" rel="nofollow">http://russell.ballestrini.net/create-your-own-fleet-of-serv...</a>