From a non-US perspective this must be disquieting to read: Not so much that Anthropic considers only US companies as partners. But what does Anthropic do to prevent malicious use of its software by its own government?
> Anthropic has also been in ongoing discussions with US government officials about Claude Mythos Preview and its offensive and defensive cyber capabilities. As we noted above, securing critical infrastructure is a top national security priority for democratic countries—the emergence of these cyber capabilities is another reason why the US and its allies must maintain a decisive lead in AI technology.
Not a single word of caution regarding possible abuse. Instead apparent support for its "offensive" capabilities.
There is very little Anthropic can do - that job is up to US citizens creating and enforcing checks and balances. You can't ask a company legally bound by your country's laws (made by your own representatives) to protect you or anyone else from those laws. That is your job.
And it is other countries' job to protect themselves from other countries' weapons. As an EU citizen, I'd much rather the EU had a frontier model on par, but here we are.
According to Dario Amodei, companies bear a lot of responsibility and must act on it. Just read https://www.darioamodei.com/essay/the-adolescence-of-technol... . But it seems he has given up on this, even with a president who demands "complete and total control of Greenland" etc. What "allies" is this Anthropic statement referring to anyway?
In my view it would be extremely strange if it were any other way. Anthropic is a US-based company. There are no "citizens of the world" at that scale, or at almost any other scale for that matter.
Anthropic stood up to the Pentagon because they were worried about potential abuse of their model. Never before had a US company been labeled a supply-chain risk by the US government. That's a lot of business. Actions speak louder than words.
As for what your country can do, it's up to you to decide, isn't it? Instead of complaining about the US, think about the alternatives. Do you trust China to be your partner? Suppose you are being objective and say no - then what does your country need to do?
You have to decide whether AI capability is so critical that your country must own it. What factors prevent that from happening in the first place, what needs to change, and whether you accept the changes that may come as a result.
On the other hand, if you say that AI is just a bubble, and that the huge investment pouring into it is just greed and fraud, then I suppose you are OK with the status quo.
When I was reading https://ai-2027.com, which is quite a scary read, I couldn't help but think that the US president mentioned in the story acts too rationally compared to the real world. It can get a lot crazier than this fictional piece.
And every time it works, they still don't acknowledge it. Would he have blown up bridges and power plants? Quite possibly. Would he have dropped a nuke? Obviously not.