So does anyone know how this works out with Google's investment a few months back?<p>Announcement "Anthropic Partners with Google Cloud" Feb 3, 2023 - <a href="https://www.anthropic.com/index/anthropic-partners-with-google-cloud" rel="nofollow noreferrer">https://www.anthropic.com/index/anthropic-partners-with-goog...</a><p>"...Anthropic, an AI safety and research company, has selected Google Cloud as its cloud provider. The partnership is designed so that the companies can co-develop AI computing systems; Anthropic will leverage Google Cloud's cutting-edge GPU and TPU clusters to train, scale, and deploy its AI systems."<p>Announcement "Expanding access to safer AI with Amazon" Sep 25, 2023 - <a href="https://www.anthropic.com/index/anthropic-amazon" rel="nofollow noreferrer">https://www.anthropic.com/index/anthropic-amazon</a><p>"AWS will become Anthropic’s primary cloud provider for mission critical workloads, providing our team with access to leading compute infrastructure in the form of AWS Trainium and Inferentia chips, which will be used in addition to existing solutions for model training and deployment. Together, we’ll combine our respective expertise to collaborate on the development of future Trainium and Inferentia technology."
Several things to note here:<p>Amazon as a corporate investor - Of course, a lot of this is a futures contract on cloud compute. This indicates how much the leadership here thinks compute will be a problem. Money comes much cheaper outside the big cloud providers (who also know the importance of compute and pull their leverage). This is not a sure bet. While true AGI is probably sitting behind a huge amount of compute, the “products” that are catching on right now are very much on the lower end of the spectrum for required model performance. Small models are cheaper and can run on commodity compute. It’s not entirely clear to me that this is a financially sound bet.<p>Timing - This is an interesting time to do so. It indicates that the company feels it has shown some of its best work and that now is the time to bank on that (so as to make the leap to the next big breakthrough). OpenAI did so on the heels of ChatGPT. This is somewhat discouraging, because outside of the context-length hackery, Anthropic doesn’t have much to show as a differentiator. At best they’re a me-too startup set on the path to be acqui-hired by Amazon when the VC money subsidizing the compute dries up.<p>Structure/Size - There was a lot of information about the structure of the OpenAI deal. We’re not so clear on what’s happening here. One of the big questions is valuation. Making a promise similar to OpenAI’s (i.e. 50% of profit until $100B) would put the valuation of the company into the tens of billions. Note that this is a very different proposition than a year ago. In navigating the “product maze” we’ve realized that there aren’t that many killer products. Most enterprises are throwing spend in this direction because the board requires an “AI strategy”. At best, we’re talking about capturing all the VC money that’s going into companies with a new angle on knowledge management/search.
As I mentioned above, that’s something that’s getting severely commoditized at the bottom of the market. The prospects here are pretty grim.
I've been using the Claude model over the past months and I have to say I usually prefer it over ChatGPT.<p>Some things that impressed me in particular are:<p>* It can often give you working URLs of images related to your query (e.g. give me some images to use with this blog paragraph).<p>* It can list relevant publications corresponding to paragraphs; ChatGPT often hallucinates new papers, but Claude consistently gives highly cited and relevant suggestions.<p>* It can work with PDF inputs.
FWIW I can't build OpenAI into our product without <i>huge</i> headaches around privacy policies and getting additional customer consent. I <i>can</i> build Amazon services into our products because Amazon is a trusted vendor and we use an AWS stack already.<p>Because Anthropic / Claude is available via Amazon Bedrock, they have a significant advantage in any company that's already using AWS. If you're on Azure, OpenAI has that advantage.
For those who don’t know, Anthropic has an interesting list of investors:<p>- Eric Schmidt (former Google CEO/Chairman), Series A<p>- Sam Bankman-Fried (FTX), lead investor in Series B<p>- Caroline Ellison (FTX), Series B<p>- Google, Series C<p><a href="https://www.crunchbase.com/organization/anthropic/company_financials" rel="nofollow noreferrer">https://www.crunchbase.com/organization/anthropic/company_fi...</a><p>———<p>Question: why isn’t Anthropic using Google Cloud, given their past investors?
> ” AWS will become Anthropic’s primary cloud provider”<p>How much of this becomes creative accounting?<p>AWS gets to reduce its profits by making this investment, which means they pay less in taxes.<p>Then, with AWS’ own money, they get to recognize this as new AWS cloud revenue, continuing their sales growth.<p>And all the while, they also get an equity stake.<p>This seems like a creative way for AWS (and Microsoft with OpenAI) to artificially boost cloud revenues.
Anthropic’s main mission is ethical. Profit is just fuel for the ethical mission (and they would rather have hockey-stick AI safety than hockey-stick growth, given the choice).<p>Amazon, well, you know them :-)<p>Flogging their model on Bedrock etc. must be part of the plan for Anthropic, but the AWS investment must surely create tension.
It's interesting to see Anthropic switching cloud providers from Google to Amazon. I think it might be because they want to diversify their infrastructure and reduce the risk of relying solely on one provider. Additionally, AWS Trainium and Inferentia chips could offer specialized hardware support that aligns better with their AI workloads. This move doesn't necessarily reflect poorly on Google Cloud's future, but it does indicate that Anthropic wants to explore multiple cloud options for their AI projects.
A bunch of folks in the thread seem confused as to why they would do this after their Google investments. It's important to note that Amazon has been working closely with Anthropic for a long time, and already vends their models in AWS:<p><a href="https://aws.amazon.com/bedrock/" rel="nofollow noreferrer">https://aws.amazon.com/bedrock/</a><p>So if anything this is just a more public way of acknowledging the existing relationship.
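To make the existing relationship concrete: Claude can already be called through Bedrock with plain boto3. A minimal sketch below — the model ID and the prompt-style request body are my assumptions based on Bedrock's Anthropic integration as of this announcement, and the live call is left commented out since it needs AWS credentials and Bedrock access.

```python
import json


def build_claude_body(user_prompt: str, max_tokens: int = 300) -> str:
    """Build the JSON request body Bedrock's Anthropic models expect
    (prompt-completion style, with the Human/Assistant turn markers)."""
    return json.dumps({
        "prompt": f"\n\nHuman: {user_prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    })


# Live invocation (requires AWS credentials with Bedrock access):
#
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(
#       modelId="anthropic.claude-v2",   # assumed model ID
#       body=build_claude_body("Summarize the announcement above."),
#   )
#   completion = json.loads(resp["body"].read())["completion"]
```

The point being: for a team already on AWS, this is an IAM policy and a few lines of code, not a new vendor relationship — which is exactly the advantage others in the thread describe.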
So what will this mean in relation to their recent Alexa announcements? Are they using Anthropic in this announcement? Is it so difficult for a big company to build its own?
At this point Apple is the only BigTech without any major investment in LLMs. Which isn't too surprising; they have historically never jumped on the shiny new thing until the dust settles.
If Amazon has $4B to invest into what may end up being the Theranos of AI research, why don't they have $4B to invest into their workforce and, you know, actually pay people instead of all the extra-legal shenanigans they're pulling to avoid cutting paychecks?<p>Edit: For those who don't understand what I'm referencing, Amazon is currently trying to get out of owning their Seattle offices, and is using "get back to the office" tactics to harass workers into leaving so they can dump the real estate at a loss: <a href="https://www.seattletimes.com/opinion/unpacking-amazons-stealthy-mass-layoff-strategy-in-seattle/" rel="nofollow noreferrer">https://www.seattletimes.com/opinion/unpacking-amazons-steal...</a><p>It isn't about their global payroll spend dwarfing the $4B. It's about having a sense of ethics and a grasp of math.