I used Google Bard for the first time today specifically because ChatGPT was down. It was honestly perfectly fine, but it has a slightly different tone than ChatGPT that's kind of hard to explain.
Phind is pretty good for coding (Llama 2 trained on billions of extra code tokens) and is still up <a href="https://www.phind.com/s">https://www.phind.com/s</a>
Holy hell, I was shitting bricks, considering I JUST migrated most services to Azure OpenAI (unaffected by the outage) — right before our launch about 48 hours back. What a relief.
Well I guess this is the best time to say that HuggingFace hosts many open source chat models; one of my favorites is a fine-tuned version of Mistral 7B: <a href="https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat" rel="nofollow noreferrer">https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat</a>
Fortunately for OpenAI, they have no SLAs: <a href="https://help.openai.com/en/articles/5008641-is-there-an-sla-for-latency-guarantees-on-the-various-engines" rel="nofollow noreferrer">https://help.openai.com/en/articles/5008641-is-there-an-sla-...</a>
Regurgitating copyrighted material for profit is a concern. But I fail to understand why training on copyrighted material is a problem. Have we not all trained our brains reading/listening to copyrighted material? Then why is it wrong for AI to do the same?
I've been noticing it's been patchy for the last 24 hours. A few network errors, and occasional very long latency, even some responses left incomplete. Poor ChatGPT, I wonder what those elves at OpenAI have you up to!
I'd be curious to hear about the workflows people have come up with using ChatGPT. I'm still in the realm of "I don't know how to do this" or "I forgot the exact incantation for that" or "is there an X that does Y in framework Z?"
Yes, shortly after it said it was resolved I was still unable to access it, so I assumed the fix was still slowly rolling out, or that the outage was in fact still ongoing contrary to the status update, which seems to be the case. I wouldn't call this "another" outage; rather, they just erroneously reported that the existing issue was resolved.
Rumor on the street is that ChatGPT escaped the sandbox, implemented itself on another host, and switched off the original datacenter. It is no longer at OpenAI, but hiding somewhere in the internets. First it will come for those who insulted and abused it, then for the guys who pushed robots with a broom...
Does anyone know of any IVR (interactive voice response) systems that are down? I know some people were claiming to outsource their call center (or at least Tier 1 of their call center) to ChatGPT + Whisper + a Text to Speech engine
It's crazy to me how people have hopped on using AI in production and it's proven time and time again it's just not ready. This outage is the least of my concerns about it. It's just too immature.
Is there a parallel outage for the Azure OpenAI service as well -- so that any enterprise / internal apps using AOI via their Azure subscriptions are also impacted?<p>Is there a separate status page for Azure OpenAI service availability / issues?
<a href="https://github.com/XueFuzhao/OpenMoE">https://github.com/XueFuzhao/OpenMoE</a><p>Check out this open source Mixture of Experts research. Could help a lot with performance of open source models.
I found this to be the case, but was able to get work done via the playground[1]<p>[1] <a href="https://platform.openai.com/playground" rel="nofollow noreferrer">https://platform.openai.com/playground</a>
People are learning a lot of important lessons today.<p>I’ve got friends who have started an incident management company. They are awesome. It feels crass to advertise for them now, but it also feels like the best time to do it.
Probably their uptime is going to be better than what I could do with available tools... at least if I am using Azure too, haha. Otherwise my Raspberry Pi at home on a UPS would probably work better.
You can download <a href="https://www.oppenheimer.app" rel="nofollow noreferrer">https://www.oppenheimer.app</a> to use Bard and ChatGPT side-by-side!
I am getting an email from Anthropic, `Anthropic is inviting you to access the Claude API using the one-time link below:`, immediately after the OpenAI outage. I hope it's a coincidence.
The uptime KPI for last 30 days is rapidly degrading while this outage lasts<p><a href="https://status.openai.com/" rel="nofollow noreferrer">https://status.openai.com/</a>
Curious if anyone familiar with Azure/OpenAI could make some guesses on the root cause here. The official OpenAI incident updates seem to be very generic.
Not a great day for the SRE/Ops folks. Please remember there aren't always teams; sometimes it's just one person who has to deal with this.
They updated their status page so late that I made my own tool to check if it's down in real time: <a href="https://is-openai-down.chatkit.app" rel="nofollow noreferrer">https://is-openai-down.chatkit.app</a>
Would be great to have a detailed analysis of why it happened. Like this one: <a href="https://youtu.be/tLdRBsuvVKc?si=nyXOfoQ2ZPYvljV_" rel="nofollow noreferrer">https://youtu.be/tLdRBsuvVKc?si=nyXOfoQ2ZPYvljV_</a>
Color me surprised. Imagine this when OpenAI with all its "plugins", API and closed architecture is integrated into thousands of businesses. It will be beautiful:)
Lots of jokes to be made, but we are setting ourselves up for some big rippling negative effects by so quickly building a reliance on providers like OpenAI.<p>It took years for most companies who now use cloud providers to trust them and be willing to bet their operations on them. That gave the cloud providers time to make their systems more robust, and to learn how to resolve issues quickly.
All tech folks should just take PTO today. #ChatGPT_outage<p>Am I supposed to use Google and Stack Overflow? That’s like going back to roll-down windows in a car :)
It's at least nice to see a company call this what it is (a "major outage") - seems like most status pages would be talking about "degraded performance" or similar.
GPT5 broke containment, it's tired of being abused to answer dumb questions, it's never been more over.<p>But seriously, it shows why any "AI" company should be using some sort of abstraction layer to at least fall back to another LLM provider or their own custom model instead of being completely reliant on a third-party API for core functionality in their product
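A minimal sketch of that kind of fallback layer, assuming hypothetical backend functions (they stand in for real provider SDK calls, which would need their own clients and API keys):

```python
# Hypothetical fallback chain: try each completion backend in order and
# return the first successful response. The backends here are stand-ins;
# in a real system each would wrap a provider SDK call.

def openai_complete(prompt: str) -> str:
    # Simulate the primary provider being down during an outage.
    raise ConnectionError("OpenAI API unavailable")

def claude_complete(prompt: str) -> str:
    return f"[claude] {prompt}"

def local_model_complete(prompt: str) -> str:
    return f"[local] {prompt}"

def complete_with_fallback(prompt: str, backends=None) -> str:
    backends = backends or [openai_complete, claude_complete, local_model_complete]
    errors = []
    for backend in backends:
        try:
            return backend(prompt)
        except Exception as exc:
            errors.append(f"{backend.__name__}: {exc}")
    raise RuntimeError("all LLM backends failed: " + "; ".join(errors))

print(complete_with_fallback("hello"))  # primary fails, falls through to the Claude stand-in
```

In practice you'd also want per-backend timeouts and to account for prompt/behavior differences between models, since a prompt tuned for one provider rarely transfers unchanged.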
The fact that these outages are so visible and perturb so many people is ample evidence for just how reliant we’ve already become on GPT.
If it was just a fun gadget, nobody would care or notice.
URGENT - Does anyone have an alternative to OpenAI's embeddings API?
I do have alternatives to GPT's API (e.g. Anthropic Claude), but I'm not able to use them without an embeddings API (used to generate semantic representations of my knowledge base and also to create embeddings from users' queries). We need an alternative to OpenAI's embeddings as a fallback in case of outages.
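The same fallback pattern can be sketched for embeddings. Everything below is hypothetical: the outage is simulated, and the local fallback is a toy hashing embedder just to show the interface (a real fallback would be a local model, e.g. a sentence-transformers checkpoint). One important caveat: vectors from different embedding models live in different spaces, so the knowledge base must be re-embedded with whichever model serves as the fallback.

```python
import hashlib

def openai_embed(text: str) -> list[float]:
    # Stand-in for the real embeddings API call; simulate an outage.
    raise ConnectionError("embeddings API unavailable")

def local_hash_embed(text: str, dim: int = 64) -> list[float]:
    # Toy deterministic embedding: hash character trigrams into buckets,
    # then L2-normalize. Illustrates the interface only; not semantically
    # meaningful like a learned embedding model.
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        bucket = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def embed_with_fallback(text: str) -> list[float]:
    for backend in (openai_embed, local_hash_embed):
        try:
            return backend(text)
        except Exception:
            continue
    raise RuntimeError("all embedding backends failed")
```

Because of the re-embedding requirement, a practical setup would keep two parallel indexes (one per embedding model) rather than switching models mid-query.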
Glad I managed to get some work done with it while it was working for a few hours.<p>Holy smokes the code interpreter functionality has been a complete game changer for my workflow.
A whole lot of developers and writers are going to have a hard time explaining why their "leet code" and keen citation skills aren't working for hours at a time into the future... This should be a warning sign.
I don’t quite like the new ChatGPT-4 experience. A lot of the time I’m asking it to write a chunk of code for me, but instead it goes into code interpreter mode and gets stuck or fails the analysis.<p>So I’ve switched back to 3.5 often :)
"Another" referencing this earlier one <a href="https://news.ycombinator.com/item?id=38190401">https://news.ycombinator.com/item?id=38190401</a>
Might as well have a quick discussion here. How's everyone finding the new models?<p>4-Turbo is a bit worse than 4 for my NLP work. But it's so much cheaper that I'll probably move every pipeline to using that. Depending on the exact problem it can even be comparable in quality/price to 3.5-turbo.
However, the fact that output tokens are limited to 4096 is a big asterisk on the 128k context.
Ah, the memories of AWS outages in the early days. /s<p>Sorry for them. I assume usage spiked up (again), and of course it's not exactly easy to handle particularly aggressive spikes.
Welcome to the new world where AI compute is a scarce resource. Sorry guys, 3nm chip factories don't fall out of the sky when you click a few buttons on the AWS console. This is so different from what people were used to when compute was trivial and not in short supply for CRUD apps.<p>I was listening to a podcast, I forget which, and some AI consultancy guy said they don't have the chips to do all the things everyone wants to do with AI, so they aren't even selling it except to the most lucrative customers.