I want ChatGPT for Family.<p>The free version gets a lot of use around here but the most powerful feature is the ability to search the web, which is only available to paid users. I pay $20/month for myself and I’d happily pay a bit more for the whole family, but not $20/month per person - it adds up. Family members end up asking to borrow my phone a lot to use it.<p>Give me a 3-4 person plan that costs $30-$40/month. You’re leaving money on the table!
A notable feature here is "no training on your business data or conversations" which really shouldn't have to be a feature. (requests using the ChatGPT API already aren't trained upon)
Note that "no training on your data" is only for Team and Enterprise: <a href="https://openai.com/chatgpt/pricing" rel="nofollow">https://openai.com/chatgpt/pricing</a>
A major change is that you cannot opt out of having your conversations used for training unless you are using a Team account, which is pretty costly for a single person.
Adjacent question, leaving aside value proposition. Do companies pay for 1000 seats like this? I didn't realize Slack is $5 a user a month. Do they discount this for bulk, or are companies paying $5k/month ($60k/year)? These subscriptions must really add up.<p>On All In, they discussed the leverage from AI tools and they probably also meant open source, but one of the companies just rolled their own instance of a big monthly SaaS product because it was such a big expense for the startup.
Ok, so there are now 2 tiers where they don't use our data to train the model?<p>The higher message limit is clearly there to entice new customers, but the question remains, what happens to the old ChatGPT Plus users? Do their quotas get eaten up by these new teams?
I can see some good use cases
- A custom gpt just trained on your code base can help you write test cases in your desired syntax.
- A custom gpt trained on internal PRDs can help brainstorm better on the next set of features.<p>Hoping to see something good come out of this
I'm not too surprised by the move, it's a classic segmentation strategy, but I was surprised how poorly the example screenshots they gave reflect on the product.<p>You have one non-actionable marketing answer, a growth graph created without axes (what are people going to do with that?) and a Python file which would be easier to just run to get the error.<p>That kind of reinforces my belief that those AI tools aren't without their learning curves, despite being in plain English.
Here’s an idea - ChatGPT app for Apple CarPlay. Right now while driving I often do “hey Siri” - but instead of carrying on a conversation where I can ask clarifying questions, I am mostly greeted with “I cannot show you this information while driving”, because rather than summarizing the answer, Siri tries to show me some website link.
100 messages / 3 hrs, with a 32k context window.
That's really cost effective and efficient for my use case!<p>Does anyone know if this applies to voice conversations?
This is me while I'm driving: upload big PDF -> talk to GPT: "Ok, read to me the study/book/article word for word."<p>Good job OpenAI.
The way they purposefully made the Enterprise plan so much better than the Team plan is genius; the pressure on enterprises to "just do the right thing" is pretty heavy here. I'd bet this will make them more than a billion before the year is over.
Did anyone evaluate this compared to using API access through an external GUI (e.g. continue.dev)? For software dev, did the cost end up higher? I am thinking this can be more convenient (and I suppose engineers can more easily use it outside work as a perk). Given that practical use across a team will vary, you'd likely get a lower price using the API, plus additional opportunity for scripted use.
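As a rough back-of-the-envelope comparison: whether the API beats a flat seat depends almost entirely on per-user volume. All the rates and usage figures below are illustrative assumptions (the seat price, per-token rates, and request counts are made up for the sketch), not actual OpenAI pricing — check the current pricing page before relying on any of them.

```python
# Back-of-the-envelope: flat seat price vs. pay-per-token API access.
# Every number here is an illustrative assumption, not real pricing.

SEAT_PRICE = 25.00  # assumed $/user/month for a Team-style flat seat

# Hypothetical API rates, $ per 1K tokens
PROMPT_RATE = 0.01
COMPLETION_RATE = 0.03

def api_cost_per_month(requests_per_day, prompt_tokens, completion_tokens,
                       workdays=22):
    """Estimated monthly API spend for one engineer."""
    per_request = (prompt_tokens / 1000) * PROMPT_RATE \
                + (completion_tokens / 1000) * COMPLETION_RATE
    return requests_per_day * per_request * workdays

# Light user: 20 requests/workday, ~2K prompt + ~500 completion tokens each
light = api_cost_per_month(20, 2000, 500)
# Heavy user: 150 requests/workday, same request shape
heavy = api_cost_per_month(150, 2000, 500)

print(f"light user via API: ${light:.2f}/mo")
print(f"heavy user via API: ${heavy:.2f}/mo")
print(f"flat seat:          ${SEAT_PRICE:.2f}/mo")
```

Under these made-up numbers the light user is cheaper on the API and the heavy user is cheaper on the flat seat, which matches the intuition that mixed-usage teams overpay for per-seat pricing.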
Our team has an Enterprise account, but individuals cannot access GPT-4 through the chat.openai.com interface. With teams, do individuals get access to GPT-4 through that interface? Is our account just broken somehow?<p>It seems odd we have enterprise but cannot access GPT-4 through the main ChatGPT interface.
I'd love it if I could use both my users at the same time to ask 2x questions.<p>My wife uses ChatGPT only a few times a day.<p>I guess I need to 2x my browsers. I don't think this would work on the phone, because I believe I need my browser open for ChatGPT to continue its computations.
Also part of the announcement:<p>The GPT store<p><a href="https://news.ycombinator.com/item?id=38941158">https://news.ycombinator.com/item?id=38941158</a><p><a href="https://chat.openai.com/gpts" rel="nofollow">https://chat.openai.com/gpts</a>
I think assistants / agents are going to be the big thing this year.<p>I was working on something at the end November that was proposing competent PRs based upon request for work in a GH issue. I was about halfway through the first iteration of a prompt role that can review, approve and merge these PRs. End goal being a fully autonomous software factory wherein the humans simply communicate via GH issue threads. Will probably be back on this project by mid February or so. Really looking forward to it.<p>Bigger, more useful context is all I think I really want at this point. The other primitives can be built pretty quickly on top of next token prediction once you know the recipes.
I’ve got my stuff rigged to hit mixtral-8x7, and dolphin locally, and 3.5-turbo, and the 4-series preview all with easy comparison in emacs and stuff, and in fairness the 4.5-preview is starting to show some edge on 8x7 that had been a toss-up even two weeks ago. I’m still on the mistral-medium waiting list.<p>Until I realized Perplexity will give you a decent amount of Mistral Medium for free through their partnership.<p>Who is sama kidding they’re still leading here? Mistral Medium <i>destroys</i> the 4.5 preview. And Perplexity wouldn’t be giving it away in any quantity if it had a cost structure like 4.5, Mistral hasn’t raised enough.<p>Speculation is risky but fuck it: Mistral is the new “RenTech of AI”, DPO and ALiBi and sliding window and modern mixtures are well-understood, so the money is in the lag between some new edge and TheBloke having it quantized for a Mac Mini or 4070 Super, and the enterprise didn’t love the weird structure, remembers how much fun it was to be over a barrel to MSFT, and can afford to dabble until it’s affordable and operable on-premise.<p>“Hate to see you go, love to watch you leave”.
Could a moderator change the "Teams" in the title to lowercase (as it is in the article)? Capitalizing Teams misleadingly implies it's regarding Microsoft's chat platform.
At the end of the day I wonder what openai's endgame is here. They're starting to expand their business in a way that geometrically grows the size of the team, overlapping products that microsoft is offering, making the whole non-profit/capped-profit thing a head scratcher.<p>I guess you can argue this is just a marginal add-on to their existing ChatGPT product but I can imagine seeing them go full Salesforce/Oracle/enterprise behemoth here.<p>I would say I'm very pro AI development and pro Sam reinstating but I've been starting to shake my head a bit. Their mission and their ambition are <i>wildly different</i>.
The Engineering example is absolutely hilarious. Sure, I’m going to copy paste my code into an AI assistant to ask it about a bug that a linter would spot in realtime as I wrote the code.