Some initial questions about this plan:

1. How does this system deal with the "data withholding" problem? In other words, when people provide "storage power," their data will be repeatedly sampled to make sure it is available... but when an entity claims that samples aren't being provided as required by the protocol, how does the system determine that the claimant isn't lying, if the sampled data is still provided correctly in a follow-up request? If the answer is "through arbitration," what prevents the arbitration system from being DDOSed?

2. The "verified clients" are certified by "a decentralized network of verifiers." How does this system prevent a Sybil attack, i.e., how does it prevent verifiers from repeatedly verifying themselves using multiple accounts?

3. I notice this system doesn't mention the use of erasure coding, which is a common feature of similar schemes by other projects. Why is erasure coding not necessary in this system? In other words, if data is randomly sampled, how does a client make sure 0.001% of the data isn't missing when only 99.999% or less of the data has been sampled so far? (A rough calculation of what that requires is sketched at the end of this comment.)

4. The Filecoin organization has a ton of funds due to their successful ICO. This makes it hard for users of the Filecoin network to know whether it is truly scalable, since the Filecoin org could just run a bunch of anonymous server farms with their funds that provide free storage to paper over flaws in the cryptoeconomic incentives. How can a user of Filecoin get some assurance that the files they are storing aren't just sitting on a server run by the Filecoin organization, and are actually being stored by a decentralized system functioning through the specified cryptoeconomic mechanism?
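To make the sampling concern in question 3 concrete, here is a back-of-the-envelope sketch of the math (my own illustration with made-up function and parameter names, not a description of Filecoin's actual proof mechanism):

    import math

    def samples_needed(missing_fraction: float, confidence: float) -> int:
        """Uniform random chunk samples needed to detect, with the given
        confidence, that at least `missing_fraction` of the chunks are gone.
        We need P(every sample hits a present chunk) = (1 - f)^k <= 1 - confidence.
        """
        return math.ceil(math.log(1 - confidence) / math.log(1 - missing_fraction))

    # Without erasure coding, the client cares about *any* loss, e.g. 0.001%:
    print(samples_needed(1e-5, 0.99))   # ~460,515 samples per check
    # With, say, a rate-1/2 MDS erasure code, data survives unless more than
    # half the chunks are lost, so the detection target is much coarser:
    print(samples_needed(0.5, 0.99))    # 7 samples per check

The point being: random sampling alone only rules out large missing fractions cheaply, and erasure coding is what turns "any tiny loss is fatal" into "only a large loss is fatal," which is why its absence here seems worth explaining.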