Company A exclusively uses a small set of domain-specific SaaS products to store literally *everything*. There is nothing of importance on local machines; they are essentially thin clients for a handful of cloud services.

This strikes me as precarious: everything breaks the moment something happens to the office internet connection or to a service provider's connection (the latter, but not the former, being highly unlikely). It's turtles all the way down, and if a single turtle slips and falls, so does the business at the top. I'm the type to use plain-text formats stored on a local server, use the WAN as little as possible, and keep a gun next to the printer in case it makes any weird noises[0], so I'm (opinionatedly) suspicious of 'too much' cloud.

The thing is, Company A's setup *works*. Functionally, the amount of abstraction they're playing with is quite powerful, and it makes everything pretty easy for them.

The following questions are asked from a technical perspective, *not* an inter-personal/workplace perspective:

- Should I work to make this company's data a bit more independent? If everything is working just fine *right* now, is it really best to start messing with things?

- If so, how do I do this incrementally?

0: https://news.ycombinator.com/item?id=20981980
Make sure the data is exported and backed up, but “the Internet at the office is down and so is the cell network but magically we still need everyone to work despite massive localized disruption” seems a bit out there for most risk profiles.
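To make "exported and backed up" concrete, here is a minimal sketch of the kind of incremental first step I'd take: a cron-able script that pulls each provider's export endpoint and keeps dated local copies. The endpoints, tokens, and export formats below are hypothetical placeholders; every real SaaS has its own export/takeout mechanism, so substitute those.

    #!/usr/bin/env python3
    """Pull periodic exports from cloud services down to local disk.

    All endpoints and tokens below are hypothetical placeholders;
    replace them with each vendor's real export/takeout API.
    """
    import datetime
    import pathlib
    import urllib.request

    BACKUP_ROOT = pathlib.Path("/srv/backups")

    # Hypothetical export URLs -- one entry per SaaS product.
    EXPORTS = {
        "crm": "https://example-crm.invalid/api/export?token=REDACTED",
        "docs": "https://example-docs.invalid/api/export?token=REDACTED",
    }

    def main() -> None:
        stamp = datetime.date.today().isoformat()
        for name, url in EXPORTS.items():
            dest_dir = BACKUP_ROOT / name
            dest_dir.mkdir(parents=True, exist_ok=True)
            dest = dest_dir / f"{name}-{stamp}.dump"
            # Stream the export to disk so large dumps don't sit in memory.
            with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
                while chunk := resp.read(1 << 20):
                    out.write(chunk)
            print(f"saved {dest}")

    if __name__ == "__main__":
        main()

Run it nightly from cron (or a scheduled task on the local server), and you get offline copies of the business data without touching how anyone actually works day to day. Restore drills and retention policy are the next increments after that.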