Reminds me of Adam Back's hashcash[1], which was originally devised for similar purposes and was cited in Satoshi's Bitcoin paper[2]. Bitcoin's PoW scheme is a slightly embellished version of hashcash. I wish this work cited it too.<p>[1]: <a href="http://www.hashcash.org/papers/hashcash.pdf" rel="nofollow">http://www.hashcash.org/papers/hashcash.pdf</a><p>[2]: <a href="https://bitcoin.org/bitcoin.pdf" rel="nofollow">https://bitcoin.org/bitcoin.pdf</a>
I would prefer to see 2 options in browsers:<p>1. LSAT[1] support for micropayments (recently mentioned on HN[2])<p>2. RandomX[3], mining XMR for the site owner<p>Both provide something useful, replacing advertising and/or subscriptions for the site owner, rather than solely wasting energy. Let's eliminate captchas and advertising together.<p>[1]: <a href="https://lsat.tech/" rel="nofollow">https://lsat.tech/</a>
[2]: <a href="https://news.ycombinator.com/item?id=28459713" rel="nofollow">https://news.ycombinator.com/item?id=28459713</a>
[3]: <a href="https://xmrig.com/docs/miner" rel="nofollow">https://xmrig.com/docs/miner</a>
Correct me if I'm wrong, but wouldn't this keep the endpoint accessible for any bot/script that is willing to "invest the work"? E.g. if I only plan to query the endpoint a few times per day, the captcha won't be an obstacle.<p>I mean, if that's an intentional exception for personal scripts, that's awesome, but it doesn't really seem to serve the expectations of a CAPTCHA then.<p>Also, while I like the idea, I fear this could stop working in the long term.<p>With cryptocurrencies, PoW works because the "good guys" (miners) and the "bad guys" (double spenders) have equal access to computing power: If the difficulty increases, both can simply add more mining hardware and stay in the game. If the "bad guys" threaten to get an advantage, the system can always increase the difficulty without risking locking out the "good guys".<p>With CAPTCHA, the situation is different: Here, the "bad guys" (spammers) still have as much computing power available as they can buy and stuff in their data center. However, the "good guys" (regular users) have a hard constraint: They have to use whatever hardware the browser runs on (which might just be a smartphone), and they can't spend more than a few minutes solving the puzzle - otherwise, the user will probably grow impatient and give up.<p>This means you can't easily increase the difficulty of the puzzle without locking out regular users. If the captcha grows popular, there can easily be a situation where you'd make the captcha unsolvable for all regular users long before it would become unsolvable for spammers.
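The asymmetry described above is easy to put rough numbers on. A minimal sketch, assuming a hashcash-style puzzle ("find a hash with N leading zero bits") and purely illustrative hash rates - neither number is a benchmark of this project:

```javascript
// Expected time to find a hash with `bits` leading zero bits:
// on average 2^bits attempts are needed, so time = 2^bits / hashRate.
function expectedSolveSeconds(bits, hashesPerSecond) {
  return 2 ** bits / hashesPerSecond;
}

// Hypothetical hash rates (assumptions, not measurements):
const phone = 50_000;     // ~50 kH/s in a mobile browser via WASM
const rig = 50_000_000;   // ~50 MH/s with an optimized native implementation

for (const bits of [16, 20, 24]) {
  console.log(
    `difficulty ${bits}: phone ~${expectedSolveSeconds(bits, phone).toFixed(1)}s, ` +
    `rig ~${expectedSolveSeconds(bits, rig).toFixed(4)}s`
  );
}
```

Whatever the real numbers are, the ratio between the two rates is fixed by hardware, so any difficulty that noticeably slows the attacker is orders of magnitude worse for the phone user.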
So now instead of annoying users with image or audio challenges, websites can annoy users by running up their electricity bills (CPU work ain't cheap) and/or denying them access if they [selectively] disable JavaScript and/or block web workers in their browser.
CAPTCHA are meant to exclude computers.
PoW does not do this at all. This is completely missing the point.<p>An attacker can easily and cheaply generate way more PoW than a legitimate user by optimizing their system.<p>This is just an "unskippable" delay timer, not a CAPTCHA!
The issue with browser-based PoW is that browsers are still fairly slow execution environments.<p>Any waiting period for calculation that won't annoy users is not long enough to stop an attacker from spamming, given that they will be solving the puzzles 2-100x faster with an optimized native implementation vs in a browser.<p>It also doesn't work as a Turing test, because by their nature computers are good at batch-solving proofs of work.<p>I once started an anonymous email service with browser-based PoW for antispam. It didn't work.<p>You'd need users to do like, several hours of in-browser PoW to make it viable as an anti-abuse measure. Anything less means a bot farm is posting spam dozens of times per hour.<p>Frictionless micropayments are still a pipe dream today, as any useful technology available to do so has basically been outlawed in the USA without a multimillion dollar license, a KYC department, etc. It's a real shame because we have all of the technology for cash-based anti-abuse bonds and the like. It's just illegal to deploy it unless you go full MSB.
Why do we need work? Since no valuable work product is being made, proof of work is really just a proxy for <i>proof of elapsed time</i>.<p>The animated demo shows this perfectly. The bar which is showing the progress in the proof of work could just be a simple timer, and it would look exactly the same.<p>The back end generates the page, and makes a note of the current time. Then it doesn't accept the submission until N seconds have passed since that time. The animated bar on the front end is just for show; the browser isn't what is enforcing it.<p><i>Proof of elapsed time</i> requires nothing from the other party. If I want proof that you spent at least 30 seconds waiting from the moment I gave you some starting signal, the only evidence I need to trust are the readings of my own stopwatch.
I like this - the hash function is memory-hard rather than purely CPU-bound, so it's easier on your CPU while being more costly for attackers to spoof en masse.<p>Good thinking!
It mentions on the widget itself that it's accessible. That makes sense at a high level, since it doesn't require interaction.<p>But I'm curious if it might need more work in the 'accessible' area. For example, is the progress-bar percentage exposed in an accessible way? I don't see anything obvious here: <a href="https://git.sequentialread.com/forest/pow-captcha/src/branch/master/static/captcha.js#L102" rel="nofollow">https://git.sequentialread.com/forest/pow-captcha/src/branch...</a> - it seems to just change width via CSS styling, but I could be missing it. I'm also not sure it presents an easily understandable reason why the submit button is disabled, that you need to wait, etc.
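One way this could be exposed (a sketch of the standard ARIA pattern, not what captcha.js actually does - the helper name and attribute wiring are assumptions) is to give the bar a <i>progressbar</i> role and keep <i>aria-valuenow</i> in sync with the work done:

```javascript
// Hypothetical helper: compute the ARIA attributes a PoW progress element
// should carry so screen readers can announce how far along the work is.
function progressAria(done, total) {
  const pct = Math.round((done / total) * 100);
  return {
    role: 'progressbar',
    'aria-valuemin': '0',
    'aria-valuemax': '100',
    'aria-valuenow': String(pct),
    'aria-valuetext': `Verifying: ${pct}% complete`,
  };
}

// In the browser this would be applied roughly like:
//   const bar = document.querySelector('.pow-progress');
//   for (const [k, v] of Object.entries(progressAria(done, total))) {
//     bar.setAttribute(k, v);
//   }
// plus aria-disabled and a text explanation on the submit button while waiting.
```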
I think proof of work makes for bad captchas. CPU power is pretty cheap. It's really hard to make it expensive enough to deter bad actors while being cheap enough not to deter real users.
> <i>It uses a multi-threaded WASM (Web Assembly) WebWorker running the Scrypt hash function instead of SHA256. Because of this, it's less succeptible to hash-farming attacks.</i><p>That's a problem; captchas need a fallback mechanism for situations where JS is disabled.<p>(I think that could be arranged; e.g. in the no-JS case, the web application just spits out some token, which the user must copy and paste into some program that does the work, and then passes the answer back into the web application.)
Very nice. Wish there was a demo.<p>This project is also cool: <a href="https://git.sequentialread.com/forest/greenhouse" rel="nofollow">https://git.sequentialread.com/forest/greenhouse</a><p>A reverse proxy that lets you split the "public-visible focal point" part of a web server from the "Holds a lot of private data and runs code" part. So the latter can run in someone's living room.
> It is impossible to predict how long a given Proof of Work will take to calculate.<p>This seems like a very significant limitation. Is there a way around this?<p>My first though is that if instead of one problem 100x as hard you solved 100 easier problems. That at least would give you a somewhat accurate loading bar, but I'm not sure if that would actually reduce your variance.
The end-user experience isn't too terrible - a big improvement over other captchas I've had to use. Though I imagine it might get frustrating for things like logging in, where you might get your password wrong and have to start over. Or maybe it supports caching the fact that you've already proved yourself?