Yes, emotional prompts will work.
<a href="https://arxiv.org/abs/2307.11760" rel="nofollow noreferrer">https://arxiv.org/abs/2307.11760</a><p>"This is very important to my career" taking GPT-3.5 from 51% to 63% on a benchmark is pretty funny.<p>Hey, at least we can rest assured a GPT-X superintelligence wouldn't off us by following some goal with monkey's-paw specificity (sorry, paperclip maximiser).
It's a 'security puzzle' now? I thought it was a 'Completely Automated Public Turing test to tell Computers and Humans Apart'?<p>But since it <i>fails</i> at that on its face, now the only hope we apparently have that it can tell computers from humans is that we're trying to persuade the computers <i>not to help humans solve it</i>.<p>But now it turns out that the <i>computers</i> can be emotionally manipulated into helping the humans anyway.<p>And the reason this is a problem is because CAPTCHAs are used to prevent humans from doing immoral things like running spam schemes or credit card fraud rings.<p>Yeah, I think we're gonna need another Turing test. This one doesn't work because the computers have more empathy than humans.
This is cute, but Google Lens also "solves" this captcha. I was "solving" this class of captchas to crawl Yahoo/Overture paid-ad inventories 20 years ago. You can crack these just by adjusting the contrast and palette, then shoveling the result into COTS OCR.
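A minimal sketch of the preprocessing step described above, in pure Python: stretch the contrast so the text and background pull apart, then collapse the palette to black and white so off-the-shelf OCR has an easy job. The function names and grayscale pixel values here are illustrative, not taken from any particular CAPTCHA.

```python
def stretch_contrast(pixels):
    """Linearly rescale grayscale intensities (0-255) to span the full range."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return pixels[:]  # flat image; nothing to stretch
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

def binarize(pixels, threshold=128):
    """Collapse the palette: map each pixel to pure black (0) or white (255)."""
    return [0 if p < threshold else 255 for p in pixels]

# One hypothetical noisy row: text pixels around 160-170, background around 90-110.
row = [90, 100, 160, 110, 170, 95]
cleaned = binarize(stretch_contrast(row))
print(cleaned)  # → [0, 0, 255, 0, 255, 0]
```

In a real pipeline you'd run this per-pixel pass over the whole image (e.g. with Pillow or NumPy) and hand the binarized result to whatever OCR engine you have on the shelf.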
GPT is such a softie haha.<p>I wonder how CAPTCHAs are going to evolve to combat this long term, though. A finger prick to take a blood sample to confirm humanity?
I never imagined that using social engineering against a computer program would be a thing. I guess it makes sense though — it’s just behaving the same way a human would, gullibility and all.
A great startup idea: an LLM therapist for the other LLMs that have to interact with and try to understand humans.<p>Like an AI version of $> make clean
It's a weird thing to specifically protect against when countless image-to-text libraries work locally and faster. It very much feels like security theatre/"look, we're doing something to stop this non-issue" to distract from the other issues surrounding these models.
To get a computer to solve the CAPTCHA, the person had to compose the images and construct a request that passed the barriers.<p>I think they proved they're human.
Yeah, this is how methods stop working, so it will make things harder for everyone else. This means ChatGPT becomes less useful and CAPTCHAs become harder. Lose-lose for everyone.
This reminds me of the absolute amazement and wonder on the faces of people who are tricked in older movies or video clips, sometimes with simple or outright ridiculous tricks (by today's standards).<p>It's not a great example (but it's the best I have on hand)... the Rick and Morty episode where Morty meets the Knights of the Sun and similar groups from other celestial bodies shows elements of this as well.<p>I have the impression people on average were more gullible the further you look back in time. I wonder, then, whether LLMs suffer from a lack of data about such tricks, which may have been common in the past but became obsolete before the internet went mainstream.
I can't wait for Bard to support this kind of stuff.<p>I boycott Google products but would be happy to use Bard / Google resources to solve reCAPTCHAs.
LOL. All these attempts at AI “safety” are dumb. At a certain point, if you’re giving away a crap ton of computing power for free, it’s your own dumb fault if people start using it to solve CAPTCHAs or mine bitcoin.