I've got a few tidbits on the operations side of hosting that may be of interest. Since you've been around in the hosting biz a while, some or all of these things may be old hat to you. My experience is largely at mid- to large-size dedicated server hosts (1k-35k hosts), so this may not apply to your particular model, but hopefully it's useful in some way.<p>- Make sure your policies/procedures are clearly written and do not have any gaps or gray areas. Keep in mind that you will probably have to train a new hire from the ground up at some point, and the less hand-holding needed, the better. This goes for everything from operations to sales to billing.<p>- Automate EVERYTHING. Linode is a great example of how to do this correctly (although automating VPSes is a touch easier than bare-metal servers). SoftLayer's web panel is pretty good, as well. The more your clients can do without opening a support ticket, the better.<p>- Monitoring is important. You should be notified of problems <i>instantly</i> so that they can be fixed very quickly, ideally before any clients notice a problem.<p>- Proprietary software/hardware for core offerings is generally a bad idea, unless you're hosting MS Exchange (and OpenChange should eliminate that issue eventually). Keep in mind that you may have to migrate every bit of data someday in the future, and implement your stuff accordingly. This also ties into automation: proprietary stuff tends to be harder to write code for, harder to troubleshoot, and more expensive to maintain in the long run.<p>- Do not skimp on facilities, hardware, or network architecture. Always have hot spares to replace your live gear in case something gets fried (switches/routers, power supplies, hard drives, RAM, server chassis).
This requires some investment, but telling clients "we're waiting for a new power supply shipment from Dell, you're down for X hours" will make them spend X hours researching their next hosting company.<p>- If your organization is responsible for deploying hardware in datacenters, be absolutely sure that you are not overloading your power drops. If you can, get intelligent power strips that allow you to monitor load on each circuit. Know the maximum load for your hardware, in case everyone on a circuit gets slashdotted or similar.<p>- Do not roll out new services/datacenters/hardware without stress testing them first. Launching new stuff that doesn't quite work 100% (or that "will work with minor adjustments") will cause headaches for staff and clients alike.<p>- DO NOT LIE TO ANYONE, ABOUT ANYTHING, EVER. Transparency may be your policy, but integrity is pretty high on everyone's list, too. Admit mistakes, especially the embarrassing ones. Don't make promises you can't keep without breaking a sweat.<p>- When mistakes <i>are</i> made, take systematic steps to eliminate their causes, permanently. Examine procedural failure before human failure; the former generally leads to the latter.<p>That's just a few things I've gleaned from the last 6 years of fixing broken servers... I may have left a few things out, but that should be a good start. Feel free to drop me a line sometime (email is in my profile) if you want to talk more about this kind of thing :)
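<p>As a footnote to the power-drop point: the budgeting math is simple enough to sketch. This is a rough illustration with hypothetical numbers (the voltage, breaker rating, and per-server wattage below are made up, not from any particular vendor); the 80% figure reflects the common electrical-code convention of limiting continuous load to 80% of a breaker's rating, but check your own code and hardware specs rather than trusting my constants.

```python
# Rough power-budget sketch for a single circuit (illustrative numbers only).
# Common electrical-code practice: plan for no more than ~80% of the breaker
# rating as continuous load, so that's the default derating factor here.

def circuit_budget_watts(volts, breaker_amps, derate=0.80):
    """Usable continuous capacity of one power drop, in watts."""
    return volts * breaker_amps * derate

def max_servers_per_circuit(volts, breaker_amps, server_peak_watts, derate=0.80):
    """How many servers can share the drop if every box hits peak draw at once
    (e.g. the whole circuit gets slashdotted)."""
    return int(circuit_budget_watts(volts, breaker_amps, derate) // server_peak_watts)

if __name__ == "__main__":
    # Hypothetical: a 208V/30A drop, servers measured at 350W peak under load.
    print(circuit_budget_watts(208, 30))          # 4992.0 W usable, not 6240
    print(max_servers_per_circuit(208, 30, 350))  # 14 servers fit safely
```

The point of the worst-case divisor is the same one made above: size the circuit for everyone peaking simultaneously, not for idle draw, or a traffic spike becomes a tripped breaker.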