Way back, Microsoft added a feature to Microsoft Office tool suite to interrupt your work with a "helpful" paperclip.<p>Unrequested AI features remind me of that paperclip.
What's funny is Prompt injection warning at the bottom.<p>"Many of LLM applications are susceptible to a form of abuse known as prompt injection. This feature is no different. It is possible to trick the LLM into accepting instructions that are not intended by the developers."
Edge did it first:
<a href="https://learn.microsoft.com/en-us/microsoft-edge/devtools-guide-chromium/console/copilot-explain-console" rel="nofollow">https://learn.microsoft.com/en-us/microsoft-edge/devtools-gu...</a>
> To use this feature, make sure that you:<p>> Are located in one of the supported regions and are at least 18 years old.<p>I wonder how much of the 18+ part is due to their models producing unsafe outputs (which seems unlikely considering how often they refuse to be useful), or if they're actually using it as a subtle form of anti-bot protection (I had one of my accounts suddenly marked as "potentially under 18" and I have to submit my passport to verify)
Not a great start in there example gives a suggestion that would almost definitely lead to more problems. Setting no-cors prevents access to response, which while useful in certain circumstances (when you just want a 200), most of the time this is going to cause more issues for a Dev who doesn't know that. Again great potential but a lack of true understanding of context holds these services back still.
“Anti-features nobody asked for for $100 Alex.”<p>“This Google product added an AI feature in 2024.”<p>“What is Chrome?”<p>“Correct! We would have also accepted ‘anything people actually valued, cared about, or used’.”<p>“Ways to get promo while ruining user trust for $500…”
Maybe it's just me but if I felt like my application's error messages weren't easy enough to understand I'd try to improve the messages instead of throwing all the context at an AI and hoping for the best.
> To use this feature, make sure that you:<p>> Are located in one of the supported regions and are at least 18 years old.<p>Seriously. For to get an explanation of a freaking JS error message.<p>Now for a debug session you need a Google account, to agree with a legal notice, a privacy notice and be at least 18 years old and boil I don't know how many liters of water for generating a text that could be static in some documentation center / KB.<p>I love some self deprecation humor Google, too bad it is a little late for April Fools.
Why does this exist?<p>The example AI-generated explanation shown doesn't seem any more helpful than the original error. It just states the same information in a long-winded manner.<p>Errors and warnings are already deterministic and unambiguous. Why introduce the opportunity to confuse people or just be plain wrong?
I recently found the cursor in dark mode to be impossible to see, the autocompletion to be maddening, and the constant change of tab key behavior all to be so frustrating that I ended up instrumenting my own overlay debugging system into a recent single page app using xterm.js.<p>I'm just really tired of all these hyper opinionated bad corporate tools.<p>So now after three separate click through agreements you can have Gemini tell you what any google search of the error message itself could have. Notably, because Gemini knows nothing about your server, it can't tell you how to _actually_ fix the problem, just describe it in _slightly_ more detail.<p>Perhaps they chose the worst possible example, but to jump through all those hoops to end at that very underwhelming response which fails to truly explain the consequences of no-cors does have me giggling.
So does this also consider the JavaScript the browser loaded or is this just a dumb LLM "explain this error message: " prompt? If the latter... Who needs this?