I have a truly naive question about the distributed web: what makes its supporters think it will be any different from the original web? Isn't it likely that at some point there will be a need for a centralized search engine for it? Isn't it unavoidable that big companies like Facebook will run their own non-distributed subnetworks so they can deliver standard functionality to all their users? The original web IS already distributed, isn't it? It's just that, organically, the way people use it has become a lot more centralized, no? Or am I missing the main argument for a distributed architecture?
I worry that this is another example of throwing technology at a social and political problem.<p>That the current web is centralized has little to do with its technical design, and everything to do with economic and structural incentives that have made it that way.<p>It's tempting to say "start afresh", but we'll just be trading our current problems for a new set of problems IPFS introduces. It's a law of nature that problems are always conserved.<p>I would rather we do the hard work of fixing the web we've got, in particular the hard issue of how to re-decentralize it.
So I've been thinking about creating a basic site running on IPFS, and here's my dilemma. The hash of each page is a SHA-256 of its contents, right? So let's say you have three pages A, B, and C: A links to B, B links to C, and C links to A. How do you create all three pages with correct links to each other?<p>When you create page A you have to have the SHA of page B, but to create page B you have to have the SHA of page C, and finally to create C you need the SHA of page A. You get into a cyclical loop where you can't generate any page that links to the others. What is the solution to this problem?
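The cycle can be shown with a toy model (all names and the name-table helper here are invented for illustration, not real IPFS APIs): a page's hash depends on the links it embeds, so a direct-hash cycle never converges, but a mutable name layer in the spirit of IPNS breaks it.

```python
import hashlib

def content_hash(text: str) -> str:
    """Address a page by the SHA-256 of its bytes, as IPFS does conceptually."""
    return hashlib.sha256(text.encode()).hexdigest()

# Direct hash links: A must embed hash(B), which must embed hash(C), which
# must embed hash(A) -- but hash(A) isn't known until A is finished, and
# changing any page changes its hash, so the cycle never converges.
assert content_hash("C links to A at <x>") != content_hash("C links to A at <y>")

# Workaround: link through a stable, mutable name instead of a raw hash.
# Pages embed names; a separate table maps each name to its current hash.
pages = {
    "A": "A links to /name/B",
    "B": "B links to /name/C",
    "C": "C links to /name/A",
}
name_table = {name: content_hash(body) for name, body in pages.items()}

def resolve(name: str) -> str:
    """Follow the mutable pointer to the current immutable content hash."""
    return name_table[name]

assert resolve("A") == content_hash(pages["A"])
```

Updating a page then only means updating its entry in the name table; every page that links to it stays untouched.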
<p><pre><code> > Each network node stores only content it is interested in [...]
</code></pre>
Isn't that the issue here? Storing data that will maybe be there later isn't really storing data. People want to publish something that must always be available, so why inject data into the IPFS network and hope it will be there in a year, rather than set up a $10/yr VPS?<p><pre><code> > With video delivery, a P2P approach could save 60% in bandwidth costs.
</code></pre>
In my opinion, this may be true, but total costs will be greater. P2P solutions are awesome because they are resilient, not because they are cheaper. Distributing pirated movies by dumping them on public FTP servers is much cheaper than BitTorrent. BitTorrent appeared because the centralized method was not resilient enough against adversaries, not because it was cheaper (quite the contrary).
If I try to host a JavaScript application that uses LocalStorage for saving data, it would be visible to any other IPFS JavaScript application, because they all exist under the same domain, right? Have you thought about having the URLs be something like ipfs://&lt;hash&gt;/index.html instead of http://localhost/&lt;hash&gt;/index.html, so browsers keep the LocalStorage for each IPFS hash separated?
If you want to tamper with content on the web, the idea that content is fingerprinted in IPFS is a huge deal.<p>IPNS (the name service) then becomes the vulnerability, but that is also distributed.
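The fingerprinting point can be made concrete with a short sketch, assuming SHA-256 content addressing (the function name and sample bytes are made up): content fetched by hash can always be re-verified, so tampering anywhere along the path is detectable.

```python
import hashlib

def fetch_and_verify(expected_hash: str, data: bytes) -> bytes:
    """Reject content whose SHA-256 doesn't match the address it was requested by."""
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_hash:
        raise ValueError("content tampered with or corrupted")
    return data

original = b"hello, distributed web"
addr = hashlib.sha256(original).hexdigest()

# Untouched content passes verification...
assert fetch_and_verify(addr, original) == original

# ...while any modification, by any intermediary, is caught.
try:
    fetch_and_verify(addr, b"hello, tampered web")
    raise AssertionError("tampered content should have been rejected")
except ValueError:
    pass
```

This is why the mutable name layer (IPNS) becomes the interesting attack surface: the immutable hashes verify themselves.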
How do hosting providers fit in here, if at all? E.g., if I want to host a website on IPFS, do I publish it from my own machine and then wait a healthy amount of time for the content to be absorbed by the ether, or is there some way I can encourage other nodes to pick it up without requiring end-users to actively seek out my fresh material?
Suppose I'm poking around IPFS and unintentionally download some unauthorized copyrighted content. Is my computer going to automatically start sharing this content, exposing me and my ISP to legal action?<p>Or if there is a way to prevent sharing particular content that I've accessed, what's to stop me from leeching everything and never sharing anything?<p>(Edit: Ah, now I see "BitSwap" as possibly addressing my second question, but I'm still concerned about the first.)
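On the BitSwap point: the idea described in the whitepaper is per-peer accounting of bytes exchanged, so chronic leechers get deprioritized. A toy version of that bookkeeping (the ratio limit and free-credit threshold here are invented, not the real protocol constants):

```python
from collections import defaultdict

# Per-peer ledger in the spirit of BitSwap: track bytes sent vs. received,
# and stop serving peers whose debt ratio gets too lopsided.
ledger = defaultdict(lambda: {"sent": 0, "received": 0})

DEBT_RATIO_LIMIT = 2.0  # hypothetical cutoff for illustration

def record(peer: str, sent: int = 0, received: int = 0) -> None:
    """Update the ledger after exchanging blocks with a peer."""
    ledger[peer]["sent"] += sent
    ledger[peer]["received"] += received

def should_serve(peer: str) -> bool:
    """Refuse peers that only take and never give back."""
    entry = ledger[peer]
    if entry["received"] == 0:
        return entry["sent"] < 1_000_000  # give newcomers some free credit
    return entry["sent"] / entry["received"] < DEBT_RATIO_LIMIT

record("leecher", sent=5_000_000, received=0)
record("good_peer", sent=1_000, received=2_000)
assert not should_serve("leecher")
assert should_serve("good_peer")
```

That addresses the pure-leeching question; the first question, about automatically re-sharing content you downloaded, is a policy choice the accounting layer doesn't answer.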
As a devops guy, I sort of think ipfs seems more useful as a private, backend sort of solution where you trust all the nodes. I'm sort of vaguely imagining it running as a shared file system in AWS, running on docker containers.
I've been using IPFS to port and make serverless webapps.<p><a href="http://ipfs.io/ipfs/QmbLPfyehFnViKZpU237P6a6DpjCfWFSoDBMQFGUAgYW2t/" rel="nofollow">http://ipfs.io/ipfs/QmbLPfyehFnViKZpU237P6a6DpjCfWFSoDBMQFGU...</a>
Does IPFS come with some kind of content filter or firewall to protect its users?<p>When child porn inevitably shows up, how do you protect yourself from accidentally downloading <i>and</i> then seeding it?
Interesting, I have two questions:<p>Can you create your own private IPFS network (readable by anyone, but with uploads only from me)?<p>If you upload sensitive material to the global IPFS network, what do you think will happen?
There's an interesting emphasis on developing nations not engaging with the Internet, but I think that might be partially cultural too. What tools have we given the developing world to really engage with the internet? The easy-to-use publishing platforms often require an email address and usually a real name. Both of these may be non-starters in countries where being connected to thoughts posted online could be dangerous.<p>Most content is not written in simple English, and there's just not much incentive for somebody who may not have been taught to think critically/complexly (due to lack of Western-style education) to engage with the internet.<p>I think the distributed web is an interesting idea, and IPFS really does list out some issues with the internet that we'd all win by solving, but some of these, like developing-nation web access, may be solvable with current tech and more culturally grounded solutions.
So I've got a (let's say) WordPress blog. Where's the "here's how to get your existing content on IPFS in less than an hour" guide?
Some content here to play with: [0]<p>Interestingly, some links to copyrighted material end in "Unavailable for Legal Reasons"; however, if you run the daemon and issue an "ipfs get &lt;hash&gt;", the download does start.<p>[0] <a href="https://ipfs.io/ipfs/QmU5XsVwvJfTcCwqkK1SmTqDmXWSQWaTa7ZcVLY2PDxNxG/ipfs_links.html" rel="nofollow">https://ipfs.io/ipfs/QmU5XsVwvJfTcCwqkK1SmTqDmXWSQWaTa7ZcVLY...</a>
Hey I have a related question: so with IPFS we all host bits of the internet, and with IPv6 our machines are all directly world-accessible, right? So how do we prevent this from turning into a huge pwn-fest? If routers aren't doing NAT and a bit of firewalling along with that, would each machine be completely responsible for its own security?
This sounds like distributed Geocities, where you can have any content you like so long as it's static, or at least changes in iterations of static files, like a static-HTML-generator blog.<p>If you do anything that needs a central server, suddenly its advantages vanish. I could imagine Wikipedia using this; I couldn't imagine Gmail doing so.
What I'd personally like to see is built-in monetisation, so that hosting and serving other people's pages becomes a socialised cost and benefit. Though one would guess that such a feature would have to be designed deeply into the system itself and couldn't be added as an afterthought?
I really hope something like this takes off.<p>Connecting and indexing documents has been the challenge of a few internet generations. Addressing a document at its point of filing is a subtle but potentially large shift.<p>Hopefully this lands on Homebrew soon to aid its growth.
I heard about IPFS at the Decentralized Web Conference in SF last spring. It sounded promising, long term. Anyone here using it right now? What are the costs for running it on a VPS, for example, bandwidth, storage, and CPU load?
ipfs is fantastic, but it is half the solution. We also need a distributed p2p application framework with which nodes can securely communicate, allowing us to build distributed apps like search.<p>We can think differently with ipfs. The traditional web allows everyone to publish content <i>somewhere</i>, hoping that search engines will index it.<p>With ipfs, the same file (with the same content) is only indexed/stored once, and then you reference the hash to get to the content.<p>This changes the problem of search.<p>Take all the world's movies. With ipfs + a p2p network, you only need <i>one</i> back end in the form of a distributed search index, which can index all the movies in the world.<p>Same with the world's music. You only need <i>one</i> back end which can index all the music.<p>The index can be as simple as {"movie title": [sha256]}, where the array contains the hashes of different 'encodings' of the same content (e.g. 'DVD rip', 'Blu-ray' or 'mp3').<p>Content can be indexed by all kinds of properties, of course, and the index can grow organically over time to include more and more details.<p>With ipfs plus the p2p network we'll build 'apps', not 'pages'. People can have a list of 'apps' running on their machines - node instances in various distributed applications, sharing the same p2p network and using ipfs as storage.<p>Apps can have 'back end' and 'front end' parts - the back end participates in the p2p network, while the front end provides a human interface to the back end, where users can search/browse/view the content.<p>Apps are distributed as git repositories stored in ipfs, while the 'core' running on the user's machine compiles the sources (inside a build VM) and loads the resulting binaries into containers running in virtual machines.<p>This would make it easy for devs to write and publish new distributed apps, making the network totally decentralised and virtually unstoppable.<p>PS: If you feel this insanity could work, I'd love to discuss it in more depth - delegate78@gmx.com
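A toy version of the {"movie title": [sha256]} index described above (titles and hashes are invented for illustration):

```python
# One global mapping from a title to the content hashes of its known
# encodings -- the "one back end" idea in miniature.
movie_index = {
    "Some Film (1999)": [
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # e.g. a DVD rip
        "60303ae22b998861bce3b28f33eec1be758a213c86c93c076dbe9f558c11c752",  # e.g. a Blu-ray rip
    ],
}

def lookup(title: str) -> list[str]:
    """Return every known encoding of a title, addressed by content hash."""
    return movie_index.get(title, [])

def add_encoding(title: str, content_hash: str) -> None:
    """Grow the index organically as new encodings appear."""
    movie_index.setdefault(title, []).append(content_hash)

add_encoding("Some Film (1999)",
             "fd61a03af4f77d870fc21e05e7e80678095c92d808cfb3b5c279ee04c74aca13")
assert len(lookup("Some Film (1999)")) == 3
```

In the proposal above this table would itself live in the distributed network rather than in one process; this sketch only shows the data shape.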
IPFS appears here every 6 months; every 6 months the same questions get asked, the same problems get raised, the same collective sigh of bewilderment/disappointment emanates from the comments, and it goes away again for another 6 months. Everybody wants something this clever and community-spirited to work, but the basic problem is that I don't want my data to be vulnerable to slow, unreliable endpoints or to people switching off their IPFS servers. I can't really trust an unremunerated volunteer system with my data, and I don't believe that my keeping your data is remuneration enough for you to keep mine forever.<p>Peer-to-peer is excellent for ephemeral streaming stuff like chat, file transfer, even gaming. But it is not good for permanence unless some monetary remuneration gets involved, either via a centralizing entity asking for payments (Dropbox et al.) or a distributed monetization system like Bitcoin. Somewhere, somehow, someone needs to get paid to keep the system running.