Questioning Power
as we approach the singularity
For some time I have believed that we are about to reach a technological singularity. Note that I don't view the singularity as a kind of Christian rapture that settles all questions at once - just a point past which we won't be able to make predictions. If anything, I expect there to be more questions as things change faster and faster.
In fact, if there’s one prediction I’m confident in, it’s that things will change faster and faster. Every year will be weirder than the previous one from now on, by a lot.
Democracies as they exist today will not do well as the pace of change accelerates. Though they have the most legitimacy, they are sloooooow. It takes 4 years for the population to respond to a change, and on top of that you might get a deadlocked government for several of those years. That will not work in a world where groundbreaking changes happen every year. For a while democracies will coast by outsourcing decisions to their executive branches, but that will erode their legitimacy until it no longer holds.
This is not to say that totalitarian states will do better. They might appear ahead for a while because they don't need to ask permission to react to changes, but the cost of a mistake will also get higher and higher. One wrong technological choice and you're irrelevant. Consider, for example, how Russia failed to secure compute before invading Ukraine, which effectively knocked it out of the AI race. China is doing well for now, but it's hard for me to believe that a 70-plus-year-old dictator will make every right technological choice in an ever-accelerating environment.
Power will inevitably shift to corporations in the run-up to the singularity. Corporations respond to tight feedback loops, hire for talent, and are mostly run by significantly younger and more tech-capable actors - at least the relevant ones are; those that don't fit this description will fade away together with governments.
Because of this, for about a year, I have been looking for early signs of that power shift. I've been trying to find examples of corporations strong-arming governments. I'd argue that we already live in a world where e.g. Google is more powerful than Trinidad and Tobago. The scenario I had expected was a country passing some populist anti-tech law and Google responding by saying "OK, we can't reasonably comply, we're exiting; starting tomorrow Google, Gmail, YouTube, Chrome, Android, etc. will not work in your country" and the country backing out. I've been wondering if the threat is big enough - and it only gets harder for the country as its economy comes to run on self-driving cars, humanoid robots, delivery drones, etc.
What I did not expect was for the first big challenge to be thrown by Anthropic to the US Government. These seem ridiculously mismatched at this point - the US Government can stop using Claude and be completely fine, and it can also send men with guns to San Francisco to essentially destroy Anthropic. Sure, such an action might eventually be ruled illegal in court, but if the company is paralyzed for 3 years, winning in court won't matter - it will have dropped out of the AI race and died.
There are two interesting pieces of writing on the saga - by Ben Thompson and Dean Ball. They are extremely different and are both worth reading.
Thompson’s argument is fundamentally about power and who has it. He takes Dario Amodei’s own analogy - that selling advanced AI chips to China is “a bit like selling nuclear weapons to North Korea” - and follows it to its logical conclusion. If AI really is that powerful, Thompson argues, then the US government simply cannot tolerate a private company retaining veto power over how that capability is deployed. Just as no government would allow a private firm to dictate the terms of nuclear weapons use, the US won’t accept it for AI either. Thompson sees Anthropic as having made a basic strategic error: asserting independence from the entity that still holds all the actual power. His verdict is blunt. Anthropic either accepts a subservient position relative to the US government, or the US government destroys it or removes its leadership. Those are the only two options.
Ball looks at the same collision and sees a different wreck. He doesn’t dispute that the contract was arguably an imperfect vehicle for policy, he even concedes that the administration has a point in principle. But the heart of his piece is a detailed accounting of the less extreme options the government could have used: canceling the contract, issuing narrower procurement guidance, adding specific acquisition clauses. The government had proportionate tools available and chose the most destructive one instead, threatening to designate an American company as a supply-chain risk - a label previously reserved for entities like Huawei - and to bar all military contractors from any commercial relationship with Anthropic. Ball sees this not as a rational exercise of power but as the lashing out of a dying institution. He frames the entire episode as a “death rattle” of the American republic, one more data point in a decades-long decline of governance into something arbitrary, thuggish, and capricious.
Thompson's piece is stronger at describing the situation today - but static description becomes less and less useful as the pace of change picks up. Ball is more future-focused, but he only talks about the decline of the US Government, ignoring the bigger trend of democracies not dealing well with an increased pace of change.
I’m watching this to calibrate where we are in the power transfer from states to corporations. This is the first real conflict between an AI company and a government over the terms of power, and it will not be the last. The only prediction I have is that AI companies will never be less powerful relative to governments than they are today.


What are the relevant sources of power here? Capital and talent, ultimately. The capital required for frontier models is small compared to defense budgets, so this is really about talent - how, one way or another, to get it working for national interests, including military purposes. This is reminiscent of the Manhattan Project, where talented people each had to decide what they would and would not do, and for whom.