> While data in the cloud is heavily secured, data on the way to the cloud is not.<p>This article is nonsense and completely pretends that TLS does not exist.
The use of "flood" reminded me of a few years ago, when I was researching the post-war information technology era. What struck me was how many papers used phrases like a "flood" or "deluge" of data.<p>Some examples:<p>> The problems of adequate storage, preservation and service for the increasing flood of periodical literature coming into their collections are of special urgency for librarians. Many studies have been made, all of which view with deep concern the rapidly increasing rate of growth of American libraries. Such growth, if continued even at the present rate, will in a short time result in collections of almost unmanageable proportions, both as to physical size and servicing.<p>("The Use of High Reduction Microfilm in Libraries", J. Am. Doc. Summer 1950 - <a href="https://search.proquest.com/openview/14f723869613e43376c4a7646f583183/1?pq-origsite=gscholar&cbl=41135" rel="nofollow">https://search.proquest.com/openview/14f723869613e43376c4a76...</a> ).<p>> With the advent of the IBM card programmed calculators, actual calculating time on the data was materially diminished, leaving the problem of reading and processing the data standing as a very real bottleneck. It therefore became evident to responsible personnel concerned that a system would have to be devised that would allow either automatic or semi-automatic processing of much of the data incurred at the Air Force Flight Test Center if the Center was to survive this deluge of data.<p>("A centralized data processing system", 1954, <a href="https://dl.acm.org/citation.cfm?id=1455227" rel="nofollow">https://dl.acm.org/citation.cfm?id=1455227</a> )<p>Even "deluge of data" is still common, says <a href="https://duckduckgo.com/?q=%22deluge+of+data%22&t=ffsb&ia=web" rel="nofollow">https://duckduckgo.com/?q=%22deluge+of+data%22&t=ffsb&ia=web</a> .<p>65 years later and the data waters keep rising.
I don't see any need for action here. If Amazon runs out of storage space or bandwidth for keeping track of everyone's botnet kitchen appliance statuses, I don't think the connected world will grind to a halt. We will simply begin dropping said fridge temperature status updates on the floor in favor of more important things like bank transactions. 99% of data today is garbage from the microsecond it was created, and virtually all of it is garbage after a few months elapse.<p>That said, I am not against new iterations on the idea of the internet and how we move data from point A to point B. Enabling a vast 'ocean' of a trillion+ devices to send data easily to/from any node is a compelling problem (which we've arguably already solved). Trying to agonize over whether the data is important or not in some overarching manner is not a compelling problem. Data is very ad-hoc in nature. I don't see any other way about this if you want to maintain its utility to everyone. You don't apply QoS rules to payment processing networks in the same way you do to the Xbox Live network. Both parties would argue their data is very important, but neither party cares at all about the other party's data requirements.
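The "drop the fridge updates first" idea above can be sketched as a bounded buffer that evicts the lowest-priority message when full. This is a minimal illustration, not anything from the article; the class name, priority values, and messages are all invented for the example:

```python
import heapq

class PriorityBuffer:
    """Bounded buffer that sheds the least important message under pressure."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []   # (priority, seq, message); lowest priority sits on top
        self._seq = 0     # tie-breaker so messages never compare to each other

    def offer(self, priority, message):
        """Add a message; return whatever got dropped, or None."""
        self._seq += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, (priority, self._seq, message))
            return None
        if self._heap[0][0] >= priority:
            # Incoming message is no more important than anything buffered: drop it.
            return message
        # Otherwise evict the current least-important message to make room.
        dropped = heapq.heapreplace(self._heap, (priority, self._seq, message))
        return dropped[2]

buf = PriorityBuffer(capacity=2)
buf.offer(1, "fridge temp 4.1C")        # kept (buffer not full)
buf.offer(9, "bank transaction #1")     # kept (buffer not full)
buf.offer(1, "fridge temp 4.2C")        # dropped on arrival: low priority, buffer full
buf.offer(9, "bank transaction #2")     # kept; evicts the old fridge reading
```

The point the comment makes still stands, though: both the bank and the fridge vendor would set their own priority to 9.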
If the cost of storing data rises faster than the rate at which data is generated, people will simply become more selective about what they retain permanently.<p>Most data is junk anyway, and is only useful in the immediate or short term. It's also somewhat pointless to debate this without talking about a specific type of data. Some of it is highly compressible and is a non-issue.
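The "highly compressible" point is easy to check with the standard library. The telemetry format below is a made-up example (a day of once-per-second fridge readings), but it shows why repetitive machine-generated data barely counts against storage:

```python
import zlib

# Hypothetical telemetry: one CSV line per second for a day, with the
# temperature cycling through a handful of values.
readings = "".join(
    f"fridge-7,temp,{4.0 + (i % 5) * 0.1:.1f}\n" for i in range(86400)
)
raw = readings.encode()
compressed = zlib.compress(raw, level=9)

print(f"raw: {len(raw)} bytes, compressed: {len(compressed)} bytes, "
      f"ratio: {len(raw) / len(compressed):.0f}x")
```

Highly repetitive streams like this routinely compress by two orders of magnitude; genuinely novel data (video, encrypted payloads) does not, which is why arguing about "data" in the abstract goes nowhere.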
My summary of the article: most data is, or will soon be, trash. Efforts should be made to reach that designation as quickly as possible after the moment of data generation.