"2019 Goal: The most used code and binary Package Managers are powered by IPFS."<p>That's kind of stupid-ambitious for 2019 when another 2019 goal is "a production-ready implementation" and IPFS has been around for 3 years already.<p>This isn't a roadmap, it's a wishlist. And I'm someone who wants to see IPFS succeed.
This is not a roadmap, but rather a wishlist. There is a fundamental problem that IPFS needs to solve first: an efficient WebRTC-based DHT. In order to change the web, IPFS needs to become usable in browsers. Since the backbone of IPFS is the DHT, there needs to be an efficient UDP-based solution for "DHT in the web". Right now this isn't possible, and the reasons are not just technical but political: the IPFS team would need to convince all the major players that enabling this DHT scenario is a good idea.
If you want to do package managers, your #1 priority should be Nix. Don't do something more popular where you help less; go with the thing where you can really provide the killer missing feature.<p>Nix + IPFS has been tried before, but what was missing is the Nix-side hierarchical content addressing of large data (not just build plans). With the "intensional store" proposal, this should finally happen. Please push it along.<p>For maximum interop, data shouldn't be hashed like Nix's NARs or IPFS's UnixFS. Instead, please go with git's model, for all its problems.<p>Thanks, hope someone can take me up on this because I'm up to my neck with other open source stuff already.
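To make "git's model" concrete: git hashes each file as a blob (a small header plus the raw bytes) and directories as trees of those hashes, rather than hashing one big archive like a NAR. A minimal illustration of the blob part, in Python:

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    """Hash file content the way git does: a 'blob <size>\\0' header
    prepended to the raw bytes, then SHA-1 over the whole thing."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# The same bytes always yield the same object id, independent of filename
# or archive layout -- which is what makes cross-tool interop plausible.
print(git_blob_hash(b"hello world\n"))  # 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
```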
The vision of an IPFS-powered web is beautiful.<p>However, I would love to see a reference implementation that works at a minimum and doesn't just drain your computer of every last resource it has. If this is how close we are to "production-ready" reference implementations, then I think that goal will never be achieved.
Would love to see the Arch/Alpine Linux repos move to IPFS by default. Would also like to see better integration with Git, and an SCM platform comparable to GitHub (or GitLab). That could really get the developer community heavily involved in the project if it were sponsored by Protocol Labs.
In addition to apt and npm, I would like to see Docker image distribution powered by IPFS. It really feels stupid to pull images from a central registry sitting on the other side of the globe when the image is already present on a neighboring node in your Kubernetes cluster.
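As a rough sketch of the by-hand version today (assuming the stock docker and ipfs CLIs, with a hypothetical image name; a real registry integration would need much more than this), an image can be shipped as a saved tarball addressed by its CID:

```python
import subprocess, tempfile

IMAGE = "alpine:3.18"  # hypothetical image name

# On the node that already has the image: export it and add the tarball to IPFS.
with tempfile.NamedTemporaryFile(suffix=".tar") as tar:
    subprocess.run(["docker", "save", "-o", tar.name, IMAGE], check=True)
    cid = subprocess.run(["ipfs", "add", "-Q", tar.name],
                         check=True, capture_output=True).stdout.decode().strip()
print("image CID:", cid)

# On any other node: fetch the blocks (from whichever peer has them, ideally
# the one in the next rack) and load the image back into Docker.
image_tar = subprocess.run(["ipfs", "cat", cid], check=True, capture_output=True).stdout
subprocess.run(["docker", "load"], input=image_tar, check=True)
```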
Discovery performance is the biggest issue I see. If I deliberately add the same file on a couple of peers, it can take hours (or forever) to find a peer that has the file so I can pin it. Explicitly connecting to peers is clumsy and difficult (you can't just try to discover peers at an address; you need to include the node ID as well), and even if you manage to enter the right information, you won't necessarily succeed at connecting to the peer on the first try.
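For reference, the manual workaround I mean looks roughly like this with the ipfs CLI (the multiaddr and CID below are placeholders):

```python
import subprocess

# Placeholder values: the peer's multiaddr must include its node ID,
# and the CID is the file you want to pin locally.
PEER_ADDR = "/ip4/203.0.113.7/tcp/4001/p2p/QmPeerIdGoesHere"
CID = "QmSomeContentHashGoesHere"

# Dial the peer directly instead of waiting for DHT discovery...
subprocess.run(["ipfs", "swarm", "connect", PEER_ADDR], check=True)
# ...then pin, which fetches the blocks from whichever connected peer has them.
subprocess.run(["ipfs", "pin", "add", CID], check=True)
```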
It's nice to see #2 for package managers, something I've been thinking about recently. I haven't looked much into this yet, but I wonder if IPNS could provide a step forward in supply chain protection, since package signing either isn't available in certain managers/repos or isn't commonly used.
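A rough sketch of the idea (all names hypothetical, and assuming the client and the publisher use the same ipfs add settings, since the CID depends on chunking and CID version): if an index published under an IPNS name lists a package's CID, a client can check an out-of-band download against that CID without a separate signature scheme:

```python
import subprocess

EXPECTED_CID = "QmExpectedPackageCid"       # hypothetical, taken from an IPNS-published index
PACKAGE_PATH = "some-package-1.0.0.tar.gz"  # hypothetical local download

# Compute the CID locally without adding anything to the repo.
local_cid = subprocess.run(
    ["ipfs", "add", "--only-hash", "-Q", PACKAGE_PATH],
    check=True, capture_output=True,
).stdout.decode().strip()

if local_cid != EXPECTED_CID:
    raise SystemExit("package content does not match the published CID")
print("package matches the published CID")
```

(Content fetched directly by CID is already hash-verified by IPFS during transfer; the check above only matters for packages obtained some other way.)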
I love the idea of IPFS, but I can't think of a use case not covered by torrents.<p>Would someone mind enlightening me regarding what sets IPFS apart from torrents?
One of the biggest challenges with IPFS, in my mind, is the lack of a story around how to delete content.<p>There may be a variety of reasons to delete things:<p>- Old packages that you simply don't want to version (think npm or pip)<p>- Content that is pirated, proprietary, or offensive and needs to be removed from the system<p>But in its current form, there isn't an easy way for you to delete data from other people's IPFS hosts if they choose to host your data. You can delete it from your own (sketch below). There are solutions proposed with IPNS, pinning, etc., but they didn't really seem feasible to me last I looked.<p>As @fwip said, this list is great as a wishlist - but I would love to see this roadmap also address some of the things needed to make this a much more usable system.
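To be clear, deleting from your own node is the easy part (assuming the ipfs CLI; the CID below is a placeholder); the hard part is everyone else's copies:

```python
import subprocess

CID = "QmContentYouNoLongerWantGoesHere"  # placeholder

# Unpin the content so the garbage collector is allowed to drop it...
subprocess.run(["ipfs", "pin", "rm", CID], check=True)
# ...then run GC to actually reclaim the blocks from the local repo.
subprocess.run(["ipfs", "repo", "gc"], check=True)
# Note: this only affects this node. Any other peer that pinned the same
# CID keeps serving it, which is exactly the limitation described above.
```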
I have a question for the IPFS people. I am a non-techie who really likes the IPFS idea and wants to see it succeed.<p>However, whenever this topic comes up here at HN, we get a bunch of people who say they tried to use it but found it basically unworkable - too much RAM usage and various sorts of failures. And rarely does anyone respond by saying that it is working just fine for them.<p>So my question to the IPFS people is: when is it going to get really usable? I am asking for something reasonably specific, like 2 or 3 years, or what? And I am supposing that would mean a different promise/prediction for each main use case. So how about some answers, not just "We are aware of those problems and are working on them."
What's the difference between DAT and IPFS? I'm trying to understand all these new technologies with grand aspirations to replace the current infrastructure.<p><a href="https://ipfs.io/" rel="nofollow">https://ipfs.io/</a>
<a href="https://datproject.org/" rel="nofollow">https://datproject.org/</a>
<a href="https://beakerbrowser.com/" rel="nofollow">https://beakerbrowser.com/</a>
So I can store a file in IPFS by its hash, but there's no way to link to the next version of the file. I can only link to older versions?<p>I'm a giant advocate for decentralized architectures, but so far I've never found a use for them that doesn't rely on a centralized way to find out about new data.
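The closest thing I'm aware of is IPNS: a stable, publisher-controlled name that gets republished to point at each new CID, which is essentially the "known pointer to new data" pattern. A rough sketch with the ipfs CLI (the file path is hypothetical):

```python
import subprocess

NEW_VERSION_PATH = "report-v2.pdf"  # hypothetical new version of the file

# Add the new version; this yields a fresh, immutable CID.
cid = subprocess.run(["ipfs", "add", "-Q", NEW_VERSION_PATH],
                     check=True, capture_output=True).stdout.decode().strip()

# Publish it under this node's IPNS name: a stable pointer that readers
# resolve to find whatever the latest version is.
subprocess.run(["ipfs", "name", "publish", f"/ipfs/{cid}"], check=True)

# Consumers resolve the unchanging IPNS name instead of a raw CID:
#   ipfs name resolve /ipns/<this-node's-peer-id>
```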
IPFS is a joke. They have a name lookup feature, but it relies on traditional DNS! What are they thinking?<p>Also, if IPFS's idea of working as a local server were sound, BitTorrent DNA (a browser plugin for streaming video over BitTorrent) should have worked.<p>It seems to me they suffered from NIH syndrome and tried to reinvent the wheel. P2P file transfer over IP is already covered by BitTorrent. What we need is a nice front end that uses the BitTorrent protocol as the back end and offers the illusion of a website.
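For anyone wondering what that DNS reliance looks like: DNSLink maps a human-readable name to an IPFS path via an ordinary DNS TXT record. A tiny lookup sketch, assuming the third-party dnspython package:

```python
import dns.resolver  # pip install dnspython (>= 2.0 for resolve())

# DNSLink: /ipns/<domain> is resolved by reading a plain DNS TXT record of the
# form "dnslink=/ipfs/<cid>" on _dnslink.<domain> (or the domain itself).
for record in dns.resolver.resolve("_dnslink.ipfs.io", "TXT"):
    print(record.to_text())
```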
I've been considering the Swarm distributed file system because of its closeness to Ethereum development.<p>It seems to do the same thing and works already, but it hardly gets any press. IPFS and Protocol Labs' Filecoin sale seemed to generate a lot of marketing, despite it becoming clearer later that Filecoin is for an unrelated incentivized network.<p>It is hard to understand the pros and cons of choosing IPFS over Swarm, or where each is in its development cycle.<p>I know of many decentralized applications that opt for IPFS for their storage component, and I know of the libraries that help different software stacks with that. But I can't tell if it is right for me versus the current state of Swarm.