IMO, the meat of this paper is in sections 4.3 and 4.4.

And I can't say for sure, but the formal proof in 4.4 basically formalizes the same points raised in 4.3.

Most of these are not inherently mathematical problems but social ones.

> Verifying sentience is a fuzzy concept. While they can be bound together momentarily as we see in [66], the binding is very easily decoupled. The verified user might choose to sell off their uniqueness identifier at time period 𝑡 + 1 if the verification which binds sentience with uniqueness ends at 𝑡.

Basically, people can sell identities.

----

What really concerns me, though, is how much and how often this paper discusses DRM, or in their own words, a "trust anchor":

> With the assumed threat model in our case, the lack of inherent trust in the user only compounds the unreliability of the model without any trust anchor.

> Assuming a proof of location is for a mobile device, rather than a particular human being, then associating the proof of uniqueness obtained under such a condition, i.e., without the involvement of a trust anchor, is unreliable.

I know the authors aren't directly calling for more centralized trust. But given recent developments at Google, we all know how readers will take it.