Probably an unpopular opinion, but it's sucky to see these tools constantly getting nerfed. I get that there are big open questions about things like "Browse with Bing", but that's why I thought it was supposed to be a limited <i>alpha</i> preview. If OpenAI wants us to build workflows on their stack, they need to really crystallize what that stack is instead of changing it every 5 minutes. From the constant prompt/jailbreak-defeat tweaks to stuff like this, it really doesn't feel like a stable platform at all.
<p><pre><code> For example, if a user specifically asks for a
URL's full text, it might inadvertently fulfill
this request.
</code></pre>
So this seems to imply two things:<p>1: Bing has access to text on websites that users don't, probably because websites allow Bing to crawl their content but show a paywall to users?<p>2: The plugin has a different interface to Bing than what Bing offers via the web, because on the web you can't tell Bing to show the full text of a URL.<p>I have to contact my ISP; that's not the open web I subscribed to :) Until they fix it, I'll just keep reading HN, a website that works the way I like.
All this song and dance to delay the inevitable death of "ad-supported journalism" by a few more months.<p>Can't wait for open source AI to catch up so we can watch all these "safeguards" crumble. Although I have a sinking feeling that they'll be in cahoots with Congress by then, protecting us from all that unauthorized non-OpenAI-bot scariness.
So they want to align the AI with corporate goals. At least they're being honest here. I want a personal assistant to summarize a page, remove all advertising and do a fact check. Can we have that?
"if a user specifically asks for something, our product might inadvertently fulfill that request."<p>If that's something that needs fixing, the product seems fundamentally broken unless the product isn't designed to work for the "user", and if that's the case, I'm not interested in being a "user" of an adversarial product that also occasionally lies to my face.<p>Powerful AI on the desktop can't happen soon enough. Even if it still lies sometimes, it seems the only way to makes sure it's working for me and my interests without throwing up artificial restrictions around whatever is possible.
This feature was really useful for linking to live documentation URLs and asking GPT-4 questions about them. Soon after, I think what the other user said came true: their IPs started getting banned.
> As of July 3, 2023, we’ve disabled the Browse with Bing beta feature out of an abundance of caution while we fix this in order to do right by content owners. We are working to bring the beta back as quickly as possible, and appreciate your understanding!
With 2markdown.com you actually only see what a user would see, except if the website decides otherwise. This nerfing is why you should build with LangChain rather than on OpenAI directly: keep components exchangeable! (Rough sketch below.)
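To make that concrete, here's a minimal sketch of what "exchangeable components" can look like, assuming LangChain's mid-2023 API (the model name and the summarization prompt are just placeholders):<p><pre><code># pip install langchain openai
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Summarize the following page:\n\n{page_text}")

# Swap this single line for another provider (Anthropic, a local model via
# langchain.llms, etc.) without touching the rest of the chain.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(page_text="...fetched page text..."))
</code></pre>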
The "in order to do right by content owners" quote implies that certain companies did not like that Browse with Bing could be used to bypass paywalls.
Interesting. I think a lot of AI agent internet navigation is still being figured out, both the rules (as implied in the comments here) and the tools. There are a lot of nuances OpenAI probably doesn't want to dedicate too many cycles to, or open itself up to risk over.<p>Folks at Perplexity AI are doing a great job with general-purpose AI-charged browsing that's comparable to Bard. Our startup Promptloop has a web browser model targeted specifically at market research and business research. There are certainly many different ways to connect the internet and a model.
Folks will dangerously bypass this by using an extension that browses from their own browser when ChatGPT needs to hit the internet.<p>Regardless, the browse with Bing was slow and flakey.
It's probably fine by me if they just want to nerf the ability to bypass paywalls. But it's very common now for me to find something lengthy and informative, pull the URL, and ask ChatGPT to summarize it. If that is also nerfed, then people will have to turn to self-hosted interfaces (see the sketch below) :(
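A rough sketch of what such a self-hosted "summarize this URL" helper could look like, assuming the pre-1.0 openai Python SDK plus requests and BeautifulSoup (the model choice and the naive truncation are just illustrative):<p><pre><code># pip install openai requests beautifulsoup4
# assumes OPENAI_API_KEY is set in the environment
import requests
from bs4 import BeautifulSoup
import openai

def summarize_url(url):
    # Fetch the page from your own machine, exactly as your browser would.
    html = requests.get(url, timeout=30).text
    text = BeautifulSoup(html, "html.parser").get_text(separator="\n")
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-16k",
        messages=[
            {"role": "system", "content": "Summarize the following article."},
            {"role": "user", "content": text[:40000]},  # naive truncation to fit the context window
        ],
    )
    return resp["choices"][0]["message"]["content"]

print(summarize_url("https://example.com/some-long-article"))
</code></pre>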
I haven't tried Copilot for Office or whatever it's called, but even though I am certain that MS has applied "an abundance of caution" to their implementation, there is simply no way I would unleash an LLM on all of my data (and possibly allow it to <i>do</i> things as well), at least not at this point in time.<p>We're in the '90s in terms of LLM security, using plain-text passwords and string interpolation for our SQL.
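To spell out the analogy (a toy illustration, not anyone's actual implementation): with SQL we eventually learned to separate code from data via parameterized queries, while prompts still have no equivalent mechanism.<p><pre><code>import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (title TEXT)")
untrusted = "q4 report'; DROP TABLE docs; --"

# The 90s anti-pattern: attacker-controlled text becomes part of the statement.
#   conn.execute(f"SELECT * FROM docs WHERE title = '{untrusted}'")

# The fix we standardized on: data stays data.
rows = conn.execute("SELECT * FROM docs WHERE title = ?", (untrusted,)).fetchall()
print(rows)

# An LLM prompt has only one channel, so any untrusted text you feed it
# (a web page, an email, a document) can be read as instructions. That's
# the "plain-text passwords" era the comment above is pointing at.
</code></pre>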
As a developer and consumer I support everything that OpenAI is doing. They are not perfect, but I appreciate their services.<p>That said: I have been fascinated by, and have had a lot of fun, self-hosting less capable models like Vicuna 33B, which is sometimes surprisingly good and sometimes mediocre (rough sketch of the setup below).<p>For using LLMs, I think it is best to have flexibility and options.
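For anyone curious, a minimal sketch of that kind of self-hosting with llama-cpp-python (the quantized model filename and the prompt format are just examples; any local GGML build of Vicuna or a similar model works):<p><pre><code># pip install llama-cpp-python
from llama_cpp import Llama

# Load a locally downloaded, quantized model file.
llm = Llama(model_path="./vicuna-33b.ggmlv3.q4_K_M.bin", n_ctx=2048)

out = llm(
    "### Human: Explain what a paywall is in one paragraph.\n### Assistant:",
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["text"])
</code></pre>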
What's the endgame? LLMs just slurp up all the data that advertising made possible, and kill all free websites?<p>Or is there an ai.txt where you can forbid corporate LLMs from touching your content? (A sketch of how that could work is below.)
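There's no ai.txt standard, but here's a sketch of what honoring one could look like on the crawler side, reusing the existing robots.txt machinery (the "GPTBot" user agent is purely hypothetical here):<p><pre><code>from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# A well-behaved AI crawler would check its own user agent before fetching.
if rp.can_fetch("GPTBot", "https://example.com/some-article"):
    print("site allows AI crawling of this page")
else:
    print("site has opted out; skip it")
</code></pre>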
It's a curiously worded announcement, as it basically says the feature is doing what it's supposed to.<p>If paywalls are being bypassed, what does that imply?<p>Inadequate authentication at the provider's end?<p>Or perhaps these are "you get n free articles" sites, which have a harder task countering automated access than those with no unregistered access at all?<p>Unethical crawl/access techniques being used by BWB that the OpenAI legal/PR team have only just realised? :)<p>To be fair, it should respect the terms, but they need to work on their wording and clarity.
I see this as one more vote for the thesis that OpenAI is in this not for its lofty goals but purely to build a "Netflix, but for AI". It's no secret they're in this for the money -- but they're going about it in a particularly lazy and nakedly commercial way that rewards highly capable early adopters massively (intentions going both ways) and punishes everyone else.<p>I'm really hoping we see the other side of capitalism emerge here and present some strong competition, demonstrating that there is a better (as in more humane), more responsible, and still profitable way of doing this.
I am sure most at OpenAI realize that they don't have a moat[0] that will keep them ahead for an extended period of time. Being a first mover does have its advantages of course, and funding and resources from MSFT also help, but if they continue to (temporarily) take away features from their paying customers, are not forthcoming about whether in-use models have changed, and generally make it hard to rely on and trust them to deliver a consistent service, that will only push more people to look for alternatives, be that other competitors[1] or local models[2]. Unless they can capture regulators in ways that would make local models impossible, they have a very narrow time window to retain a large section of the market in the medium term.<p>Despite their best efforts at presenting themselves as the "ethical" ones in this industry and their attempts to convince governments that open/local models must be regulated[3], I think accomplishing longer-term limits on local LLMs will be hard, considering the many purposes GPUs and inference accelerators serve for the public. I have said this before, but I view this similarly to the Crypto Wars[4] or, more recently, Nvidia LHR[5].<p>I can see a Clipper Chip[6] for GPUs in the near future, one that tries to limit inference to features approved for the common public (mainly gaming-focused applications like DLSS), but I am certain that, as before, where there is a 10ft wall, 12ft ladders will emerge.<p>Basically, OpenAI must prove able to provide a consistent service to their paying customers. Do not make changes to models in production, do not take away features, provide a timeline if something is down, and deprecate or replace models in a transparent manner. When the switch from the "initial" ChatGPT model to "gpt-3.5-turbo" happened, there was a lot of confusion that could have been avoided with more transparency from the outset.<p>[0] <a href="https://archive.is/ImFl2" rel="nofollow noreferrer">https://archive.is/ImFl2</a><p>[1] <a href="https://www.anthropic.com/" rel="nofollow noreferrer">https://www.anthropic.com/</a><p>[2] <a href="https://huggingface.co/tiiuae/falcon-40b-instruct" rel="nofollow noreferrer">https://huggingface.co/tiiuae/falcon-40b-instruct</a><p>[3] <a href="https://www.nytimes.com/2023/05/16/technology/openai-altman-artificial-intelligence-regulation.html" rel="nofollow noreferrer">https://www.nytimes.com/2023/05/16/technology/openai-altman-...</a><p>[4] <a href="https://en.wikipedia.org/wiki/Crypto_Wars" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Crypto_Wars</a><p>[5] <a href="https://blogs.nvidia.com/blog/2021/05/18/lhr/" rel="nofollow noreferrer">https://blogs.nvidia.com/blog/2021/05/18/lhr/</a><p>[6] <a href="https://www.cryptomuseum.com/crypto/usa/clipper.htm" rel="nofollow noreferrer">https://www.cryptomuseum.com/crypto/usa/clipper.htm</a>