I can't say I disagree with any of this (except maybe the cryptocurrency bit); this was my experience as well. I even made a pinning service (https://eternum.io) years ago, but I shut it down after having to deal with the umpteenth frustration of the IPFS server being slow, not discovering data, not pinning, taking up all the resources, taking up all the space, not pinning, failing to find other nodes, and not pinning.

I think IPFS is a great idea, but I don't think IPFS is a good IPFS. Give me something that is a content-addressable network of people who want to archive/store data sets; that sounds like a much better thing.

Imagine: You have 200 GB of free disk space and you want to donate it to the Internet Archive. You connect to its tracker and say "give me 200 GB of your rarest content". The tracker obliges, and soon you have 200 GB of blocks. Or you can ask people to help you keep your site online, so they pin up to X GB of it. Or you dedicate Y GB to your OS's packages, and you can fetch them from and send them to other people without ever contacting the package servers.

This sounds very much like "BitTorrent but with extensible data sets". Maybe I'll see whether it's close enough that I can build it without too much hassle, hmm...
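To make the tracker idea concrete, here's a rough sketch of what the client side of such a "rarest-first donation" protocol could look like. Everything in it is made up: the tracker URL, the /rarest and /block endpoints, and the manifest format are all hypothetical, just the BitTorrent-tracker model applied to archival data sets, not any existing API:

    # Hypothetical client for a "donate disk space to an archive" tracker.
    # The tracker endpoints (/rarest, /block) don't exist anywhere; this is
    # the BitTorrent-tracker model applied to archival data sets.
    import hashlib
    import json
    import pathlib
    import urllib.request

    TRACKER = "https://tracker.example.org"  # hypothetical
    STORE = pathlib.Path("donated-blocks")

    def donate(budget_bytes: int) -> None:
        STORE.mkdir(exist_ok=True)
        # Ask the tracker for its rarest content, up to our space budget.
        with urllib.request.urlopen(f"{TRACKER}/rarest?budget={budget_bytes}") as r:
            manifest = json.load(r)  # e.g. [{"sha256": "...", "size": 1048576}, ...]
        for block in manifest:
            with urllib.request.urlopen(f"{TRACKER}/block/{block['sha256']}") as r:
                data = r.read()
            # Content addressing: verify the block before storing or re-serving it.
            if hashlib.sha256(data).hexdigest() != block["sha256"]:
                continue
            (STORE / block["sha256"]).write_bytes(data)
            # A real client would now announce itself as a seeder of this block.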
This was written a while ago (2019-2020), and I would say most of the complaints are, at the very least, inaccurate today.

While some of the criticism was valid (and many things were fixed), I find the tone and the personal attacks in the linked pages a bit over the top (not sure what the point is?). I can only smile at some fanciful misconceptions and completely wrong predictions.

Some things are still missing: NAT hole punching (without central servers!) and indexing supernodes are literally landing these days and will further improve content routing and providing in the network.

IPFS has many moving pieces, so it can be disorienting and frustrating, but it's also really cool to understand it and see it work.

Disclaimer: I work on some IPFS things.
The hash will change depending on how the file is broken down into chunks, so the same file can end up with several different hashes. I expected to be able to compute a hash locally, check whether the file is already in the IPFS network, and upload it only if it isn't there. The way it actually works just feels wrong to me.
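In code, the workflow I expected would be something like the sketch below. The `network` object is hypothetical; the point is that IPFS can't offer this API as-is, because a CID depends on chunking and encoding parameters, not just the file's bytes:

    # The expected workflow, with a hypothetical network client.
    # IPFS can't work this way as-is, because CIDs depend on chunking.
    import hashlib

    def upload_if_missing(path: str, network) -> str:
        data = open(path, "rb").read()
        digest = hashlib.sha256(data).hexdigest()
        if not network.has(digest):  # hypothetical lookup by plain file hash
            network.put(digest, data)
        return digest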
The main problem with IPFS is chunking: if a file is uploaded to IPFS twice, it can result in two different hashes.

It would be really great if we could just access ipfs://[sha256_of_file].
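A toy illustration of why the hash isn't stable: hash the file as a tree of chunks (a stand-in for IPFS's Merkle DAG; the real encoding differs) and the root changes with the chunk size, even though the bytes are identical, while a flat SHA-256 stays the same:

    import hashlib

    def chunked_root(data: bytes, chunk_size: int) -> str:
        # Hash each chunk, then hash the concatenation of the chunk hashes.
        # (A stand-in for a Merkle DAG root; not IPFS's actual encoding.)
        chunk_hashes = [
            hashlib.sha256(data[i:i + chunk_size]).digest()
            for i in range(0, len(data), chunk_size)
        ]
        return hashlib.sha256(b"".join(chunk_hashes)).hexdigest()

    data = b"x" * 1_000_000
    print(hashlib.sha256(data).hexdigest())  # flat hash: chunking-independent
    print(chunked_root(data, 256 * 1024))    # one root...
    print(chunked_root(data, 512 * 1024))    # ...a different root, same bytes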
The premise seems to be that IPFS takes too many resources because of the DHT, and the DHT takes too many resources because it opens too many connections.

Possibly a single setting needs to be tuned based on discovered connection throughput? If the entire and sole argument is that it opens too many connections, well, you know, *just open fewer connections*. This isn't a valid complaint about BitTorrent (where every client allows max-connection tuning, and good clients tune connection-making behavior dynamically), and it's not a valid complaint against IPFS in general, just against the current state of the code.
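For what it's worth, go-ipfs already exposes this kind of knob through the connection manager section of its config file; something like the following (illustrative values, field names as documented for go-ipfs) caps how many peer connections the daemon keeps open:

    "Swarm": {
      "ConnMgr": {
        "Type": "basic",
        "LowWater": 100,
        "HighWater": 200,
        "GracePeriod": "30s"
      }
    }

The daemon trims connections back toward LowWater once it exceeds HighWater, so whether the defaults and trimming heuristics are any good is a fair criticism, but the limit itself is tunable.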