How Anthropic Can Win in Consumer
Claw(d)ing your way to first place
At the time of writing, ChatGPT is the top free app in the App Store, Grok is #5, Gemini is #6 and Claude is down at #85. Many people in tech say that they prefer Claude, but for normal people there are a lot of disadvantages that make Claude unviable:
It doesn’t have the top brand (ChatGPT)
It doesn’t have image/video generation
It doesn’t have a massive channel pushing it to consumers (Gemini, Grok)
It doesn’t have unique access to consumers’ data (Gemini)
Anthropic’s response has been to focus on B2B where it’s doing well with developers using Claude Code and businesses basing their AI services on Anthropic’s API. That’s not unreasonable, but unfortunate given that Claude has the best personality and would legitimately make people’s lives better. Luckily, there is a way for Anthropic to win the consumer market by doing something no other AI company would do.
Recently, Openclaw (formerly Moltbot, formerly Clawdbot) broke through in a way few AI projects have. If you haven’t seen it - it’s basically an AI chatbot that lives on its own computer: it can access a browser, have its own email, use messaging apps, and so on. There is absolutely nothing surprising here for people who have used Claude Code, but living in SF you tend to forget how few people have.
The part of Openclaw relevant to this post is the login mechanism. You can either use an API key for any model provider, which is something normal consumers would never do, or you can “sign in” with your ChatGPT or Claude subscription and have the bot use that to power itself. The reason “sign in” is in quotes is that Anthropic, at least, views this behavior as a hack that breaks its ToS and can close your account over it. I haven’t heard of OpenAI doing the same, but it’s definitely not an intended use of an AI subscription.
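To make the contrast concrete, here is what a first-class flow could look like from a developer’s side - a minimal sketch assuming a standard OAuth-style authorization-code grant. The endpoint, scope, and parameter names are all invented for illustration; no such consumer flow exists today.

```python
from urllib.parse import urlencode

# Hypothetical "sign in with Claude" authorization request, modeled on the
# OAuth 2.0 authorization-code flow. The endpoint and scope are made up -
# Anthropic does not currently offer anything like this.
AUTH_ENDPOINT = "https://claude.ai/oauth/authorize"  # hypothetical

def build_sign_in_url(client_id: str, redirect_uri: str) -> str:
    """Return the URL a third-party app would send the user to for consent."""
    params = {
        "response_type": "code",            # authorization-code grant
        "client_id": client_id,             # the third-party app's ID
        "redirect_uri": redirect_uri,       # where the user lands after consent
        "scope": "subscription.inference",  # hypothetical: spend tokens from the user's plan
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

url = build_sign_in_url("openclaw-app", "https://example.com/callback")
```

The point of the sketch is that the user never touches an API key: they click a button, approve the app on a Claude-branded consent screen, and the app gets a token scoped to their existing subscription - the same pattern “Sign in with Google” made familiar.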
Anthropic’s reasoning is first-order sound - a subscription is meant to power only Anthropic’s products, and because of that it is much cheaper than the API. If you want to use your own tools powered by Claude, you’re welcome to do so - but pay the appropriate amount. Strategically, though, this is a huge mistake. Not only should Anthropic allow this behavior, it should encourage it and build a user-friendly “sign in with Claude”[1] flow. Here’s why:
Anthropic's competitors all have their own ecosystems where they want to keep consumers locked in. Google has been doing this for decades, Grok is heavily tied to X, and OpenAI is trying to make its app powered by ads and shopping, which would require similar ecosystem lock-in. Anthropic doesn’t have this constraint.
If Anthropic allows its users to use the tokens that come with their Claude subscription for other apps, two things happen:
Claude becomes the default AI subscription for people who like trying AI tools.
Developers get a user boost if their product is powered by Claude, so more developers choose Claude to power their apps.
This is a classic two-sided network effect similar to, for example, Uber. Doing this would start the flywheel of becoming more attractive for both developers and users at the same time.
True, this tactic would significantly lower the margins on Claude subscriptions. But, in return, many more people would switch to the higher tiers - even Openclaw requires the $100-a-month tier to do anything well, and it is just the first of many, many, MANY consumer AI applications to come.
To put it bluntly: if Anthropic wants to make a little more money today, they should keep banning unauthorized use of Claude subscriptions in other tools. If they want to win the consumer space in the long run, they need to make it easy - and shout from the rooftops about how Claude lets you use all of the tools out there.
[1] Or “sign your Claude in” if they’re feeling cheeky.


Anthropic's B2B game is extremely on point. As an anecdotal example, I recently found myself needing professional services for some of our larger enterprise use-cases; we use OpenAI models via Azure Foundry and Claude models via AWS Bedrock. OpenAI refused to engage unless we used retail OpenAI, whereas Anthropic's stance was: yes, we'd love to help, we don't care how/where/when you're using Claude, let's get into solving problems. We are currently migrating several deployments from Foundry to Bedrock.
They won't win me!
Claude, from Anthropic, is one of the worst! Pirated books are their training materials. Not an opinion, a finding in a court of law.
"Educational/textbook and university press/academic authors represent almost half of the close to 500,000 works infringed in Bartz v. Anthropic." Yeah, my books were taken with no consultation, no chance to opt out and no compensation. If you use Claude or any degenerative AI, you're building on work stolen from people like me.
Use your own intelligence!
Learn about the court case here:
https://www.taaonline.net/anthropic-settlement