Let's remind everyone:

1. IPFS attaches ALL of your network interfaces (internal and external) to your identity.

2. Tor support is still "experimental" and maintained by third parties: https://flyingzumwalt.gitbooks.io/decentralized-web-primer/content/avenues-for-access/lessons/tor-transport.html

3. Because of 1 and 2, any hosted content is EASILY traceable to a user's computer, even behind NAT. The machine's crypto key also helps cement that (though it can be changed). This makes it easy to DDoS any and all endpoints hosting content you don't like.

4. It is trivial to ask the DHT *who* has a certain content key and get back all (or the top ~50?) computers hosting that content. (This matters for "sensitive" content.)

5. Running a node still means high CPU, RAM, and network chattiness, so even running it on a VPS to keep IPFS off your local network is tenuous.
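To make points 1 and 4 concrete, here is a minimal sketch assuming a local Kubo (go-ipfs) daemon on the default RPC port 5001. The CID is a hypothetical placeholder, and the exact response schema of the findprovs endpoint can differ across Kubo versions:

```python
# Sketch: ask a local Kubo daemon who is providing a given CID.
# Assumes the RPC API at 127.0.0.1:5001; schema details may vary by version.
import json
import requests

API = "http://127.0.0.1:5001/api/v0"
CID = "QmSomeContentIdentifier"  # hypothetical CID of the "sensitive" content

# Point 1: your own node advertises its addresses (all interfaces) under its peer ID.
me = requests.post(f"{API}/id").json()
print("my peer ID:", me["ID"])
print("addresses I advertise:", me.get("Addresses"))

# Point 4: anyone can walk the DHT for provider records of a CID.
# The endpoint streams newline-delimited JSON events.
resp = requests.post(f"{API}/dht/findprovs", params={"arg": CID}, stream=True)
for line in resp.iter_lines():
    if not line:
        continue
    event = json.loads(line)
    for peer in event.get("Responses") or []:
        # Each provider record ties the content back to a peer ID and its multiaddrs.
        print("provider:", peer.get("ID"), peer.get("Addrs"))
```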
Three of the authors work for Protocol Labs (the company that develops IPFS), which is likely why the paper is able to analyze data from the ipfs.io gateways.
> The content retrieval process across all regions takes 2.90 s, 4.34 s, and 4.74 s in the 50th, 90th, and 95th percentiles<p>Good improvement over the years but still a long way to go feel it even soft real time. Not sure these servers are using on the fly gzip compression before sending over network but they should consider adding compression feature at file or block level natively in "ipfs add" command.<p>There was an interesting paper "Hadoop on IPFS" (around year 2016-17). I hope these continuous improvement will play good role in making big data and analytics decentralised before it hits v1.0<p><a href="https://s3-ap-southeast-2.amazonaws.com/scott-brisbane-thesis/decentralising-big-data-processing.pdf" rel="nofollow">https://s3-ap-southeast-2.amazonaws.com/scott-brisbane-thesi...</a>
IPFS is just too slow to be usable at mass scale. It's a neat idea, but unfortunately p2p file storage is tough; you absolutely need a central model to scale up. Offering coins that you can only cash out at casinos, where the other side of the order book is people with an unlimited supply of them, doesn't work.
I'd like to imagine a world of globally addressable devices, maybe through NAT hole punching or just IPv6, where we can share everything. But the way things are now, I really struggle to determine the legitimate use case for this technology if we ignore interest in decentralization for its own sake and just consider engineering tradeoffs like cost versus performance and end-user experience/ergonomics.

As a storage layer, IPFS has faced major challenges to adoption that have persisted almost a decade into the project. This level of partition tolerance comes at an incredible cost to availability, and from everything I read, the best practice for hosting user-generated content still involves paying a service to "pin" your content to ensure it doesn't get dropped, so you still pay someone to host your data!

So what I'd like to know is: why would I want to use IPFS to host anything when better, more performant, and more cost-effective alternatives exist, and IPFS doesn't guarantee a file is actually hosted? Are there words you can say to your boss to argue for IPFS as a rational choice in systems architecture? What is the use case here?
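For anyone unfamiliar with what "pinning" means in practice, here is a minimal sketch. It assumes a local Kubo daemon on 127.0.0.1:5001; the CID, remote endpoint, and token are placeholders, and the remote request shape follows the IPFS Pinning Service API spec, so check your provider's docs:

```python
# Sketch: a local pin (lives only as long as your node does) plus a paid remote pin.
import requests

CID = "QmSomeContentIdentifier"   # hypothetical CID you want kept available

# Local pin: only protects the data while *your* node stays online and reachable.
local = requests.post("http://127.0.0.1:5001/api/v0/pin/add", params={"arg": CID})
print(local.json())

# Remote pin: you pay a provider to keep a copy, i.e. hosting by another name.
PIN_SERVICE = "https://pinning.example.com"   # placeholder provider endpoint
TOKEN = "YOUR_API_TOKEN"                       # placeholder credential
remote = requests.post(
    f"{PIN_SERVICE}/pins",
    json={"cid": CID, "name": "my-content"},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
print(remote.status_code, remote.json())
```

Which is the parent's point: the second call is functionally paying for hosting, just addressed by CID instead of URL.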