Perhaps a different use case - but I prefer to use a VPC with internal addressing and DNS, particularly if you're using more than just a few instances.<p>Then have a bastion host in a DMZ that forwards to the actual instances (I prefer the 172.16.0.0/12 range as it tends to avoid clashing with wifi networks). This does cost you an m1.small Amazon instance, but if you reserve it the cost is negligible.<p>Even better, you can do this automagically with ssh by putting a suitable `ProxyCommand ssh <bastion> "nc %h %p"` in your ssh config. So you just `ssh 172.16.0.10` or `ssh my-internal-name.blah` and it tunnels straight in for you.<p>You can pair this with internal DNS if you want to get really fancy - although it's a bit fiddly. From what I read, internal DNS is pretty high up on the Route 53 feature request list.
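For reference, a minimal ~/.ssh/config sketch of that setup (the bastion hostname, user, key, and 172.16.0.0/16 range below are just placeholders):

    Host bastion
        HostName bastion.example.com
        User ec2-user
        IdentityFile ~/.ssh/aws-key.pem

    Host 172.16.* *.internal.example
        User ec2-user
        IdentityFile ~/.ssh/aws-key.pem
        ProxyCommand ssh bastion "nc %h %p"

With that in place, `ssh 172.16.0.10` from your laptop transparently hops through the bastion via netcat.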
Hi! I'm the author of the blog post that you linked in your article. Glad you found it useful; it still surprises me, almost two years later, just how many pageviews it continues to generate.<p>However, I would like to point out that the correct solution to this problem is DNS, as others here have indicated. Couple Route53 with something like Zonify (<a href="http://nerds.airbnb.com/easy-aws-inventorying-with-dns/" rel="nofollow">http://nerds.airbnb.com/easy-aws-inventorying-with-dns/</a>) from the fine folks at AirBnB, and you've got something quite powerful that is diff'able with your normal tools and can be easily versioned for sanity and safety.<p>Don't let my comments (or the comments of others here) detract from the pretty clever approach you took. I think it's the fate of every ops/devops engineer to, at some point in their career, build a host address storage/querying system that contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of DNS without realizing it the first time around.
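To make the suggestion concrete: the end result is just a Route53 zone full of records derived from your instance tags. A hand-rolled equivalent of a single Zonify-style record via the AWS CLI would look roughly like this (the zone ID, domain, and IP are made up, and this is a sketch rather than what Zonify itself runs):

    aws route53 change-resource-record-sets \
      --hosted-zone-id ZEXAMPLE123 \
      --change-batch '{"Changes": [{"Action": "UPSERT",
        "ResourceRecordSet": {"Name": "web1.internal.example.com.",
          "Type": "A", "TTL": 60,
          "ResourceRecords": [{"Value": "172.16.0.10"}]}}]}'

Because it's all plain DNS records, you can dump the zone, diff it, and keep it in version control like any other config.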
This is a pretty complicated solution. There are a ton of easier ones, but probably the easiest is to just use ec2-ssh. It lets you apply tags to your EC2 instances and ssh to them by simple names.<p><a href="https://pypi.python.org/pypi/ec2-ssh" rel="nofollow">https://pypi.python.org/pypi/ec2-ssh</a>
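Usage is about as simple as it gets; assuming an instance tagged Name=web1 (a hypothetical tag value), it's something along the lines of:

    ec2-ssh web1

and it resolves the tag to the instance's address and drops you into a shell.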
If you're looking for something a little more packaged and not averse to installing a Ruby gem, this will manage multiple AWS accounts and allow you to ssh/scp using AWS instance IDs as the target:
<a href="https://github.com/mheffner/awsam" rel="nofollow">https://github.com/mheffner/awsam</a>
DNS is a good solution here, but it also seems worth highlighting that you shouldn't need to connect to these instances directly at all. As others have said, some kind of configuration management tool should be in place, logs should be centralized, storage should be centralized, and queues should live elsewhere. Painful ssh config is a symptom of a different issue.