"Dario Amodei, the former VP of research at OpenAI, launched Anthropic in 2021 as a public benefit corporation, taking with him a number of OpenAI employees, including OpenAI’s former policy lead Jack Clark. Amodei split from OpenAI after a disagreement over the company’s direction, namely the startup’s increasingly commercial focus."
So Anthropic is the Google-supported equivalent of OpenAI? Isn't the founder going to run into the same issues as before (commercialization at OpenAI)? How does Google not use Anthropic as either something commercial or nice marketing material for its AI offerings?
There may have been a disagreement, but now they're focused on profit, like everybody else. From the same article:
“Anthropic has been heavily focused on research for the first year and a half of its existence, but we have been convinced of the necessity of commercialization, which we fully committed to in September [2022],” the pitch deck reads. “We’ve developed a strategy for go-to-market and initial product specialization that fits with our core expertise, brand and where we see adoption occurring over the next 12 months.”
This is hilarious to me, in that the disgruntled departure just did a 180… how long until the next disgruntled spin-out for higher reasons chases the dollar too…
> This is hilarious to me, in that the disgruntled departure just did a 180… how long until the next disgruntled spin-out for higher reasons chases the dollar too…
The cynic in me wants to ask "What makes you think his departure was because of an anti-commercialisation position?"
My take (probably just as wrong as everybody else's take) is that he saw the huge commercialisation potential and realised that he could make even more money by having a larger stake, which he got when he started his own venture.
It’s pretty clear: the words say he was anti-commercialization, and then the company he helped create apparently has marketing material that's all about commercialization. Unless he leaves tomorrow for the same reasons, it is quite hard to disbelieve that “cash rules everything around me”.
> If you read the parent comment in this thread you'd get an answer...
I looked and I didn't get an answer. Hence my comment.
To clarify, we know what he said his reason was, we don't know if that really was his reason.
When people leave they very rarely voice the actual reason for leaving; the reason they give is designed to make them look as good as possible for any future employer or venture.
To be fair I think he had the same realization that they had at OpenAI. Sam Altman has gone on the record saying it's basically impossible to raise significant amounts of money as a pure nonprofit and you aren't going to train cutting edge foundation models without a lot of cash. Anthropic is saying they literally need to spend $1B over 18 months to train their next Claude version.
The same thing happened back in the processor arms race days and before that in the IC days. Ex-Fairchild engineers created a lot of the most durable IC and chip companies out there. Intel's founders were ex-Fairchild.
>So Anthropic is the Google-supported equivalent of OpenAI? Isn't the founder going to run into the same issues as before (commercialization at OpenAI)? How does Google not use Anthropic as either something commercial or nice marketing material for its AI offerings?
I think the unstated shift that has happened in the past few years is that we've gone from researchers thinking about Fourier transforms to efficiently encode positional data into vectors, to researchers thinking about how to train a model with a 100k+ token batch size on a supercomputer-like cluster of GPUs.
I can totally see why people believed the math could be done in a non-profit way; I do not see how the systems engineering could be.
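(For anyone unfamiliar with the "Fourier transforms for positional data" bit: this refers to the sinusoidal position encodings from the original Transformer paper. A minimal illustrative sketch, not any particular lab's implementation — function name and dimensions here are made up for the example:)

```python
import math

def sinusoidal_positional_encoding(seq_len, d_model):
    """Build a seq_len x d_model table of sinusoidal position encodings.

    Each position gets a vector of sines and cosines at geometrically
    spaced frequencies, so relative offsets are easy for attention to pick up.
    """
    table = []
    for pos in range(seq_len):
        row = []
        for i in range(d_model):
            # Pairs of dimensions (2k, 2k+1) share a frequency.
            angle = pos / (10000 ** ((i // 2 * 2) / d_model))
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        table.append(row)
    return table

# Example: 4 positions, 8-dimensional encoding.
pe = sinusoidal_positional_encoding(4, 8)
```

The math fits in twenty lines; the hard (and expensive) part the comment is pointing at is everything around it — sharding, interconnects, and GPU time.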
What does a policy lead do and how are they relevant to an early stage startup? I would be more interested in seeing which researchers and engineers join.
> As the Product Policy Lead, you will set the foundation for Anthropic’s approach to safe deployments. You will develop the policies that govern the use of our systems, oversee the technical approaches to identifying current and future risks, and build the organizational capacity to mitigate product safety risks at-scale. You will work collaboratively with our Product, Societal Impacts, Policy, Legal, and leadership teams to develop policies and processes that protect Anthropic and our partners.
> You’re a great fit for the role if you’ve served in leadership positions in the fields of Trust & Safety, product policy, or risk management at fast-growing technology companies, and you recognize that emerging technology such as generative AI systems will require creative approaches to mitigating complex threats.
> Please note that in this role you may encounter sensitive material and subject matter, including policy issues that may be offensive or upsetting.
Jack is pretty well known in the community since he runs not only the Import AI newsletter, but also has been a partner in the AI Index report. He also has a media background so is generally well connected even beyond his influential reach. Also, though not relevant to your question, he's a really nice guy :)
Curiously, Anthropic.com was launched in 2021, but a small custom software shop in Arizona around since the mid-late 90s had registered and been using Anthropic.ai in 2020 for a couple projects.
“Hi we’re here to save humanity, and we’re stealing your name! We have a ton of lawyers, buckets of cash from Google to hire more lawyers, and if you don’t like it, you’re fucked. Now please enjoy being saved by us.”
I thought (possibly naively) it's the trademark that matters. Since anthropic.ai was registered in 2020 for a product built in 2019, it seems, and the Anthropic spin-off from OpenAI was formed in 2021, the latter appears to have purchased a squatted anthropic.com domain at that point.
Well, you can also buy a trademark, right? Though I think different companies are allowed the same trademark if the things being trademarked aren't confusable.
yeah, LOL on that. Their idea of "public benefit" is of course that they benefit publicly, though for marketing purposes "public benefit" sounds nicer (just like the "Open" in "ClosedAI") because people would tend to emotionally associate something nicer with it.
Reminds me of the (possibly LLM-generated) marketing tirade of a voice faking text-to-speech service recently here on hn, which ended with: "We are thrilled to be sharing our new model, and look forward to feedback!":
… "share" yeah right… like: where can I download the model then? Of course they didn't mean to actually share their model but only to rent out remote access to it, but that doesn't sound as nice as "share".