
The Great Data Flood Ahead

22 points by SemiTom over 5 years ago

8 comments

devicetray0 over 5 years ago

> While data in the cloud is heavily secured, data on the way to the cloud is not.

This article is nonsense and completely pretends that TLS does not exist.
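As a rough illustration of the commenter's point: sending data "on the way to the cloud" over plain HTTPS already encrypts it in transit and verifies the server's certificate. A minimal Python sketch, where the ingest URL and payload are hypothetical:

    import requests

    # TLS is negotiated before any payload bytes are sent; the default
    # verify=True rejects invalid or self-signed server certificates.
    resp = requests.post(
        "https://ingest.example-cloud.com/v1/telemetry",  # hypothetical endpoint
        json={"device_id": "fridge-42", "temp_c": 4.1},
        timeout=10,
    )
    resp.raise_for_status()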
eesmith over 5 years ago

The use of "flood" reminded me of a few years ago when I was researching the post-war information technology era. What struck me was how many papers used phrases like "flood" or "deluge" of data.

For example:

> The problems of adequate storage, preservation and service for the increasing flood of periodical literature coming into their collections are of special urgency for librarians. Many studies have been made, all of which view with deep concern the rapidly increasing rate of growth of American libraries. Such growth, if continued even at the present rate, will in a short time result in collections of almost unmanageable proportions, both as to physical size and servicing.

("The Use of High Reduction Microfilm in Libraries", J. Am. Doc., Summer 1950 - https://search.proquest.com/openview/14f723869613e43376c4a7646f583183/1?pq-origsite=gscholar&cbl=41135 )

> With the advent of the IBM card programmed calculators, actual calculating time on the data was materially diminished, leaving the problem of reading and processing the data standing as a very real bottleneck. It therefore became evident to responsible personnel concerned that a system would have to be devised that would allow either automatic or semi-automatic processing of much of the data incurred at the Air Force Flight Test Center if the Center was to survive this deluge of data.

("A centralized data processing system", 1954, https://dl.acm.org/citation.cfm?id=1455227 )

Even "deluge of data" is still common, says https://duckduckgo.com/?q=%22deluge+of+data%22&t=ffsb&ia=web .

65 years later and the data waters keep rising.
bob1029 over 5 years ago

I don't see any need for action here. If Amazon runs out of storage space or bandwidth for keeping track of everyone's botnet kitchen appliance statuses, I don't think the connected world will grind to a halt. We will simply begin dropping said fridge temperature status updates on the floor in favor of more important things like bank transactions. 99% of data today is garbage from the microsecond it was created, and virtually all of it is garbage after a few months elapse.

That said, I am not against new iterations on the idea of the internet and how we move data from point A to point B. Enabling a vast 'ocean' of a trillion+ devices to send data easily to/from any node is a compelling problem (which we've arguably already solved). Trying to agonize over whether the data is important or not in some overarching manner is not a compelling problem. Data is very ad-hoc in nature. I don't see any other way about this if you want to maintain its utility to everyone. You don't apply QoS rules to payment processing networks in the same way you do the Xbox Live network. Both parties would argue their data is very important, but neither party cares at all about the other party's data requirements.
rdlecler1 over 5 years ago
The problem is that data is going to be walled off. There will be a lot but it will be siloed and inaccessible.
caust1c over 5 years ago
This is a pretty vapid article. Surprised to see it on the front page.
antisthenes over 5 years ago

If the costs of storing data rise faster than data is generated, people will simply be more selective with what data is retained permanently.

Most data is junk anyway, and is only useful in the immediate or short term. It's also somewhat pointless to debate without talking about a specific type of data. Some of it is highly compressible and is a non-issue.
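On the compressibility point, highly repetitive telemetry shrinks dramatically under even generic compression. A small sketch using Python's standard library, with a made-up record shape:

    import gzip, json

    # 10,000 near-identical readings: the kind of short-lived,
    # redundant data the comment describes.
    readings = [{"sensor": "fridge-42", "temp_c": 4.1}] * 10_000
    raw = json.dumps(readings).encode()
    packed = gzip.compress(raw)
    print(len(raw), len(packed))  # packed is a tiny fraction of raw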
noobiemcfoob over 5 years ago

My summary of the article: most data is, or will soon be, trash. Efforts should be made to reach that designation as quickly as possible after the moment of data generation.
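One way to act on "most data is trash soon" is to attach a retention window at write time and drop anything that has aged out during processing. A minimal sketch, with the record shape and the 7-day window as illustrative assumptions:

    from datetime import datetime, timedelta, timezone

    RETENTION = timedelta(days=7)

    def still_useful(record, now=None):
        # Keep a record only while it is inside its retention window.
        now = now or datetime.now(timezone.utc)
        return now - record["created_at"] <= RETENTION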
fabiofzeroover 5 years ago
This is an information-free article. There&#x27;s been a flood of those popping up these days.