From the Grasshopper blog:<p><i>Unfortunately, our efforts are being slowed by a few complicating factors:</i><p><i>1. We are running in our disaster recovery site and this is not a normal practice for us – this is leading to some unforeseen issues.</i><p><i>2. We are experiencing internal network issues at our disaster recovery site.</i><p>This really has the smell of a disaster recovery plan that was never adequately tested.
Moments like these define companies. How they communicate with clients and handle the recovery (do we offer a mea culpa package? et cetera) will be the company's turning point.
According to <a href="https://support.grasshopper.com/index.php?_m=news&_a=viewnews&newsid=73" rel="nofollow">https://support.grasshopper.com/index.php?_m=news&_a=vie...</a>, they say: "We are still waiting on our primary storage array."<p>This company's infrastructure is such that the failure of a single storage array shuts them down? Tragic.
Not the first time people were p'oed: <a href="http://www.bnet.com/blog/smb/how-i-infuriated-customers-by-asking-them-to-pay/4586" rel="nofollow">http://www.bnet.com/blog/smb/how-i-infuriated-customers-by-a...</a>
The Grasshopper CEO had to leave MicroConf due to an emergency before speaking earlier this week. I'm curious whether this is related.<p>Hope they figure it out. We use them and have been down all day.
We use Grasshopper but have scaled to the point where this (and many other things about Grasshopper) isn't acceptable. What are some other good options? Thoughts on RingCentral?
Lesson for businesses: for any service that's "mission critical" (communications would be, for most, I'd think), either have a <i>tested</i> backup plan of your own or be VERY comfortable that your vendor does.
The Grasshopper guys have given me great service over the years. It's unfortunate that this has happened, but I'm sure they'll make things right when they have time to breathe.