Turned out to be four months, not one<p><pre><code> Leadership at this crop of tech companies is more like followership. Whether it's 'no politics', or sudden layoffs, or 'founder mode', or 'work from home'... one CEO has an idea and three dozen other CEOs unthinkingly adopt it.
Several comments in this thread have used Anthropic's lower pricing as a criticism, but it's probably moot: a month from now Anthropic will release its own $200 model.
</code></pre>
<a href="https://news.ycombinator.com/item?id=42333969">https://news.ycombinator.com/item?id=42333969</a>
I want different capabilities at $200.<p>I paid for that to get access to Deep Research from OpenAI and I feel I got more than $200 of value back out.<p>These companies have a hard time communicating value. Capabilities make that easier for me to understand. Rate-limiting and outages don't.
I have been a heavy user of Claude but cancelled my Pro subscription yesterday. The usage limits have been quietly tightened up like crazy recently and 3.7 is certainly feeling dumber lately.<p>But the main reason I quit is the constant downtime. Their status page[0] is like a Christmas tree but even that only tells half the story - the number of times I have input a query only to have Claude sit, think for a while then stop and return nothing as if I had never submitted at all is getting ridiculous. I refuse to pay for this kind of reliability.<p>[0] <a href="https://status.anthropic.com/" rel="nofollow">https://status.anthropic.com/</a>
I cancelled my OpenAI plan. Gemini 2.5 Pro is extremely good compared to OpenAI and Anthropic models. Unless things change, I don't see why I should keep paying those subscription fees.
I really like Claude and have the money to spend on something like this if I wanted to, but it's not compelling at all.<p>No new models, no new capabilities, just higher limits. Which I know some people are asking/begging for, but that's not been an issue for me. If I needed more I'd probably use the API.<p>I only continue to pay because some of the features in Claude Web are better than what I've seen elsewhere, but their latest web redesign is making me seriously reconsider that stance. It's /bad/: it breaks scrolling while it's generating a response, breaks copy/paste/selection, etc. It's incredibly hostile and I have to assume anyone seriously using Claude internally is using a different client.
I use the free model and API, when I look at the pricing page I see (for the extant $20 Pro plan)<p><pre><code> - More usage
- Access to Projects to organize chats and documents
- Ability to use more Claude models
- Extended thinking for complex work
</code></pre>
What does "more usage" mean? It doesn't say anywhere what the free tier usage limits are. Which "more" models? It also doesn't make clear what models are available with each tier (except for "extended thinking", which is a separate bullet point).<p>The only explanation I can come up with is that they want to keep this vague so that they don't have to update their marketing copy each time they update their offerings, but that's absurd.
F<p>I can't believe we're moving to high-priced subscriptions. This makes me think we need open-source models like Qwen and DeepSeek more than ever.
A bit off-topic, but I wonder when Cursor is going to see a massive price increase. I've tried Aider (granted, this was when GPT4 prices were still much higher) and spent $10 in one hour easily. Now I use cursor a LOT (100h focussed programming time per month) and mostly stay within my $20 monthly fee. I think they must be losing money on customers like me.
My experience with Claude 3.7 with thinking has been incredible for coding tasks. I did not find the same level of success with Gemini, even though the context window is nice.<p>Before they rolled this out, I rarely hit usage limits. Now it seems the usage limits have been lowered for Pro to add more value to Max. That is a less than ideal experience for users.<p>I agree with what most comments here are saying, that there should be more than just usage limits, and I hope this changes (as it likely will, because competition is still fierce).
I currently pay for Claude and I still find it best for coding, but it's definitely a bit unreliable at times: I find it rate-limited or response-limited, and sometimes unresponsive altogether. I wouldn't upgrade to the $100/$200 packages unless reliability improved on the $20 offering first.
I would rather pay for o1-pro and also have access to the fantastic 4o image generation. o1-pro feels, to me, far ahead for complex coding tasks.<p>I do really prefer Claude’s unified model interface. Hopefully OpenAI improves that soon. Their product UX is a mess.
Aside from the more ambiguous access to newer features first, is there any benefit to this over just leveraging the API?<p>Going via the API with your own interface seems like objectively the better way to handle this kind of high usage, since you'll always have priority, won't have usage limits, and you'll pay as you go.
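The pay-as-you-go trade-off above is easy to put numbers on. A minimal back-of-envelope sketch, assuming Claude 3.7 Sonnet's published per-token list prices at the time of writing (roughly $3 per million input tokens and $15 per million output tokens; treat both figures as assumptions):

```python
# Back-of-envelope: monthly pay-as-you-go API spend vs. a flat subscription.
INPUT_PRICE_PER_M = 3.00    # USD per 1M input tokens (assumed list price)
OUTPUT_PRICE_PER_M = 15.00  # USD per 1M output tokens (assumed list price)

def monthly_api_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate one month of API spend for the given token volume."""
    return (input_tokens / 1_000_000 * INPUT_PRICE_PER_M
            + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M)

# A heavy month: 20M tokens in, 5M tokens out.
print(f"${monthly_api_cost(20_000_000, 5_000_000):.2f}")  # $135.00
```

Even at that fairly heavy volume the API comes in under a $200 flat plan, which is why several commenters see the subscription mainly as a convenience premium rather than a cost saving.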
That explains why ~24h ago I reached my first rate limit on Claude (3.7 seems to misunderstand stuff a lot and you have to retry constantly which makes you hit those limits faster). I loved 3.5, possibly my favourite model. Might need to shop around soon for something better.
I’m surprised they didn’t just price it as $200/month but include high daily use of Claude Code. That tech is very cool and some sort of upper limit / bulk discount on costs would be quite compelling.
Bold to launch this before they roll out their own reasoning models / deep research. Seems like table stakes if you want to capture power users, but maybe that's just my workflow.
You can just go on AWS Bedrock and use Claude 3.7 with thinking. You can tune the output size. I doubt you would spend 200 bucks a month on Bedrock even with daily use of Claude.
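The "tune the output size" point above maps to the `max_tokens` and thinking-budget fields in the Anthropic-on-Bedrock messages format. A minimal sketch of building such a request body; the exact Bedrock model ID string is an assumption, and the `boto3` call is shown but not executed:

```python
import json

def build_request(prompt: str, thinking_budget: int = 4096,
                  max_tokens: int = 8192) -> str:
    """Build a Bedrock request body for Claude 3.7 with extended thinking.

    max_tokens caps the total output (and therefore output cost);
    budget_tokens caps how much of it the model may spend on thinking.
    """
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "thinking": {"type": "enabled", "budget_tokens": thinking_budget},
        "messages": [{"role": "user", "content": prompt}],
    })

# With boto3 (not run here; model ID is an assumption):
# client = boto3.client("bedrock-runtime")
# resp = client.invoke_model(
#     modelId="anthropic.claude-3-7-sonnet-20250219-v1:0",
#     body=build_request("Refactor this function..."))
```

Dialing `max_tokens` and `budget_tokens` down for routine queries is the knob the commenter is referring to for keeping per-call spend low.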
I already had to jump through so many hoops to sign up for a Teams account with 5 separate seats for myself; I wonder if this will be any better. At least if one account starts getting dumb or somehow runs out of context, I can just switch, but if something goes wrong and your $200 account rapidly loses context, then you are stuck waiting. Anyone have any idea of the actual differences?
Every day it is looking better and better to self-host. I really don't feel like ChatGPT at $200/mo is 10x better than at $20/mo, and after trying QwQ 32B myself I just can't justify paying these AI subscriptions. Gemini Flash is also dirt cheap as an API, and Groq is dirt cheap and fast for QwQ and other OSS models.
How does this compare to o1 pro for coding? I can't figure out what exactly the Max plan offers besides higher usage limits. Claude also has some interesting novel features, but I can't try them out as a European. That's one thing OpenAI does better.
If they had something better than OpenAI's Advanced Voice Mode and it came with unlimited access I'd consider it. If Advanced Voice Mode was just 15% better than it is now, I'd find $200 a month to give to OpenAI for unlimited access.
A little off-topic, but how do o1 and o1-pro compare to Claude 3.7/3.5 for coding and system design/architecture? I currently have the $20 ChatGPT Plus and Claude Pro plans but want to upgrade for a month to do some heavy coding. Thanks!
Seems inferior to OpenAI's offering given the same price and capabilities; OpenAI always gets better models first, except for code. But code is only really useful through the Anthropic API.
Having used Claude daily for the last year or so, I don't feel great about being pushed into quite expensive tiers at 5-20x the cost. I already feel like I'm hitting the Pro limit twice a day more often than before, although I have no data to support this - but the feeling is there.<p>Mostly I'm disappointed that the higher tiers are just usage tiers rather than features.
Their comparison is not clear at all what you’re getting out of a rather large subscription price, IMO.<p>> Substantially more usage to work with Claude<p>Compared to pro or free what does this mean? More requests / time? More tokens / time? Something else?<p>> Scale usage based on specific needs<p>Are there limits with pro that this is removing or increasing? What kinds of specific needs?<p>> Higher output limits for better and richer responses and Artifacts<p>So Claude will do more thinking before responding compared to pro? Is that due to a new variant of 3.7 model?<p>> Be among the first to try the most advanced Claude capabilities<p>Okay so you’ll get access before people who pay less but not before well known tech people who get early access for testing, I’m assuming?<p>> Priority access during high traffic periods<p>If you’re paying for pro and they throttle you wtf are you paying them for? This seems like an admission they suck at capacity planning based on their current and predicted user base?
At last. I honestly think the product engineering org at Anthropic has serious issues. They are slow to ship, and quality of the webapp has been decidedly mixed in the past 6 months. Some of the UX decisions they've made are bone-headed.<p>(OpenAI launched their pro plan 4 months ago!)
They are a bit of a ripoff, especially the CLI tool. It will hallucinate tons of things, add things that break your code, and then you pay again (quite a lot) to fix it.
I have a bridge to sell to all those who are paying these companies $100 / $200 a month. Obviously they aren't hackers, I get that, but shouldn't even normies have a better sixth sense?
lol no. Claude Pro gave me like, 20-40 questions every 12 hours before it would throttle, and half of those were mistakes we already corrected that it was trying to re-integrate. Plus Claude is always trying to censor itself. I got suckered in by the "claude is better at code" shit, but wow was that BS.<p>Compared to ChatGPT, where 20 bucks gets you hundreds of prompts, and if you ask it hard questions it actually answers instead of lecturing about morals, and why you shouldn't be asking these questions in the first place.<p>Other than using Free Claude to generate resumes to apply to job postings at anthrowhatever, it is by far and away the most useless of the LLMs.