Everyone and their dog seems to be using large language models (LLMs) like ChatGPT, Claude, Gemini, or Copilot these days: for drafting emails, writing code, generating strategies, or just as a rubber duck. But in some industries and professions, using a competitor's LLM can get you into serious trouble. Let me explain why.
This LinkedIn post raised a few questions in the back of my mind: LinkedIn Post
I work at Google Cloud. I use Claude Code for most of my development work.
Why? As an individual user, Claude Code Max currently gives me the best value for the kind of tokens I need - access to Opus for complex planning and Sonnet 4.5 for execution at a price point that works for running multi-agent workflows daily.
But here's what matters more: I'm building agent orchestration patterns that work across models. Every workflow I build in Claude Code, I validate in Gemini CLI. The patterns transfer.
When Gemini 3.0 launches or new features ship, I'll re-evaluate. That's how you build real systems - use the best tool for your constraints today, design for portability tomorrow.
This engineer works at Google Cloud and is building agent orchestration patterns that work across models. Google Gemini and Anthropic Claude have products in the same space, and he is openly stating that he uses a competitor's models to help develop tools that compete directly with them. Legally he is operating in a gray area, and things quickly become very murky for professionals working in sensitive industries.
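For context, here is what "patterns that work across models" typically looks like in code: the workflow depends only on a small, provider-agnostic interface, and concrete backends plug in behind it. This is a minimal sketch of the general idea, not this engineer's actual setup; the `ModelBackend` interface, the `EchoBackend` stand-in, and the `plan_and_execute` workflow are hypothetical names, and real adapters would wrap the Anthropic and Gemini SDKs behind the same method.

```python
from typing import Protocol


class ModelBackend(Protocol):
    """Anything that can turn a prompt into a completion."""

    def complete(self, prompt: str) -> str: ...


class EchoBackend:
    """Stand-in backend for local testing. A real adapter would wrap the
    Anthropic or Gemini SDK behind the same complete() method."""

    def complete(self, prompt: str) -> str:
        return f"[echo] {prompt}"


def plan_and_execute(task: str, planner: ModelBackend, executor: ModelBackend) -> str:
    """A tiny two-step workflow: one model plans, another executes.
    The workflow only sees the ModelBackend interface, so it stays
    portable across providers."""
    plan = planner.complete(f"Break this task into numbered steps: {task}")
    return executor.complete(f"Carry out this plan:\n{plan}")


if __name__ == "__main__":
    backend = EchoBackend()
    print(plan_and_execute("Write release notes for v1.2", planner=backend, executor=backend))
```

The portability comes from the fact that nothing in the workflow mentions a provider; swapping Claude for Gemini only means swapping the backend object.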
The Compliance Challenge
All the major LLM providers have terms of service that place limits and liabilities on the use of their services. Most of them explicitly prohibit competitors from using their services. Why? There could be multiple reasons:
- Competitive advantage: Companies invest heavily in proprietary data and models. Allowing competitors to use their LLMs could erode this advantage.
- Data security: Competitors might inadvertently or deliberately feed sensitive information into the LLM, poison it, or try to extract sensitive data from it.
- Legal liability: A competitor could use the LLM to generate content that infringes on intellectual property or violates regulations. We have yet to see the first lawsuits in this area, but they are likely coming. Who owns the code or content an LLM generates? The user? The provider? The legal landscape is still evolving.
- Brand protection: Companies want to ensure that their LLMs are not used in ways that could harm their reputation or brand image.
- Many more: There are likely other reasons specific to each company or industry.
What do licenses say?
Most LLM providers have terms of service that restrict usage by competitors. In this section I will go over the terms of the major providers and highlight the key points.
Anthropic’s terms of service state that you may not use their services: “To develop any products or services that compete with our Services, including to develop or train any artificial intelligence or machine learning algorithms or models or resell the Services.”
OpenAI’s terms of service state: “What you cannot do. You may not use our Services for any illegal, harmful, or abusive activity. For example, you may not:
- Use Output to develop models that compete with OpenAI.”
Google’s terms of service state: “You must not abuse, harm, interfere with, or disrupt our services or systems — for example, by:
- using AI-generated content from our services to develop machine learning models or related AI technology”
And finally, Microsoft’s Code of Conduct… this one was a web of restrictions. TL;DR:
Generating competing models or services seems to be allowed, abuse such as scraping their services is not, and yet their tools can still suggest autocompletions based on public code that is not licensed for that use. The real problem with Microsoft's terms of service is that they are all over the place: cross-references everywhere, without ever specifying exactly which terms are being referenced.
I spent some time reading through all of them: the various service terms, the AI Services Code of Conduct, the GitHub Terms of Service, and more. It is a maze. The feeling I got? You had better have a legal team on speed dial, ready to explain what exactly it is you are doing.
Please be aware that these terms of service were current at the time of writing (October 2025); they may change in the future.
And while these terms of service focus on training models and developing competing products, they also have implications for the use of LLMs in general. These AI companies regularly release products in entirely new markets where they previously had no presence. Every existing player in that market suddenly becomes a competitor: yesterday they were not competing with an AI provider, today they are.
A direct example is Anthropic's release of an AI Legal Assistant: Anthropic has shifted from model supplier to the application layer and workflow ownership. The result is that any new AI startup should be careful; at any moment a model provider can decide to enter its market and cut off its access to the LLMs, which could be fatal for the company.
Another thing I have seen is companies using public LLMs to fine-tune their own models; a sketch of what that can look like follows below. This is a gray area, but if the LLM provider finds out, it will likely take action against the company using its services.
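To make that gray area concrete, here is a minimal sketch of the pattern the output-use clauses above are aimed at: prompting a hosted model and collecting its answers as a fine-tuning dataset. `query_hosted_model` is a hypothetical stand-in for a provider SDK call; this is an illustration of the practice, not a recommendation.

```python
import json


def query_hosted_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a hosted LLM API. In practice this
    would be a provider SDK call, and most providers' terms restrict using
    the output for exactly this purpose."""
    return f"Model answer to: {prompt}"


def build_distillation_set(prompts: list[str], path: str) -> None:
    """Collect prompt/response pairs into a JSONL file, a common input
    format for fine-tuning pipelines."""
    with open(path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            record = {"prompt": prompt, "completion": query_hosted_model(prompt)}
            f.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    build_distillation_set(
        ["Summarize our refund policy.", "Draft a support reply for a late delivery."],
        "distillation_data.jsonl",
    )
```

Nothing in the code itself looks unusual; the exposure comes entirely from what the resulting dataset is used for.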
Implications for Professionals
For professionals working in sensitive industries, using a competitor's LLM can damage their organization and lead to serious consequences:
- Breach of contract: a ban in the best-case scenario, lawsuits in the worst. Even when you are only the middleman and competitors use your services, your account can still be banned: account deactivated
- Public relations issues: If it becomes known that a professional used a competitor's LLM, that competitor will likely use it against them in marketing or legal action, as we saw when Anthropic cut off OpenAI's developer access: cut off
- Loss of trust: The industry is currently hyper-competitive, and any sign of weakness results in a loss of trust from clients or shareholders. We saw massive drops in share prices when DeepSeek claimed, at the start of 2025, that it needed far less GPU power to train its models. Imagine what happens when someone from Google or Microsoft implies their own products are inferior to a competitor's.
- Job loss: It all depends on the contract you signed, but in some contracts reputational damage can be grounds for termination. Most organizations these days have restrictions, even on what you do outside working hours.

And these are just the short-term consequences. In the long term, we really need to discuss the moat these companies are building around their LLMs.
Who is the GOAT in a world full of MOATS?
This is a philosophical, long-term question that goes beyond the scope of terms of service and legal implications:
Recent studies provide evidence that the use of LLMs can reduce critical thinking and problem-solving skills. We have an entirely new generation of professionals who have never worked a day in their lives without LLMs. Even in the short span of four years since LLMs became publicly available, various articles and studies have reported a decline in skills. A few examples:
- Younger participants exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants. - Gerlich, M. (2025)
- Although AI can promote critical thinking, it can also undermine autonomous cognitive development if students bypass deeper engagement, highlighting the importance of fostering critical engagement strategies in AI-driven education. - Gonsalves, C. (2024)
Consider where we’ll be in ten years. Many of today’s computer science students have never written production code without LLM assistance. Studies already suggest younger developers show higher AI dependency and lower autonomous problem-solving scores compared to their seniors (Gerlich, 2025). This isn’t a critique of these developers; they’re rationally using the best tools available. But it creates a structural problem. Now combine this with the ToS restrictions outlined above. Every major LLM provider reserves the right to cut off competitors. If you want to build the next foundation model company in ten years’ time, you face an impossible equation: your workforce depends on tools they’re contractually forbidden from using, and they may lack the skills to work without them.
Previous tech MOATS were about data, network effects, or switching costs. These were already almost impossible to cross. This is the next level. This is a moat built on human capability dependency. Google can’t easily stop you from hiring engineers who previously used Gmail. But Anthropic, OpenAI, and Google can collectively ensure that anyone building a competing AI product either violates ToS or operates with a handicapped workforce. How does a new entrant compete? Train developers from scratch without LLM assistance? That’s a decade-long investment. Use open-source models? They’re already behind, and we rarely get to see the data they have been trained on. The barrier to entry isn’t capital or data anymore: it’s that the incumbents control the tools your people need to think.
You’re good till you’re not
The problem with cases like these is that most of them never become public. In general they are handled internally and settled without any attention. And that is exactly the risk we run when we use these services. These companies tolerate violations until they don’t. A Google engineer publicly admits to using Claude for competitive work? Tolerated today. But at any moment these organizations can decide to enforce their ToS without public notice, without warning, and without recourse. You cannot look at what others get away with and assume you’re safe. There is no precedent to rely on, no transparency in enforcement, and no appeals process you can point to. We can operate freely right now… but that tolerance is not a right, it’s a courtesy that can be revoked the moment you become inconvenient. And when that happens, you won’t see it coming.
Governance of AI use
This is a complex topic, and you should not decide to stop using LLMs based on risk alone. The risks should, however, be part of your AI Governance strategy. I have recently been giving workshops that help organizations discover the risks and opportunities of using LLMs.
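As a starting point for that governance strategy, here is a minimal sketch of what a codified policy could look like: an allowlist of approved providers per use case that internal tooling can check before a request goes out. The provider names, use-case categories, and the `is_use_approved` helper are all hypothetical placeholders.

```python
# A minimal sketch of a governance allowlist: which LLM providers are
# approved for which internal use cases. Names here are hypothetical.
APPROVED_PROVIDERS: dict[str, set[str]] = {
    "internal_tooling": {"gemini", "vertex_ai"},
    "customer_facing": {"vertex_ai"},
}


def is_use_approved(provider: str, use_case: str) -> bool:
    """Return True only if the provider is on the allowlist for this use case."""
    return provider in APPROVED_PROVIDERS.get(use_case, set())


if __name__ == "__main__":
    print(is_use_approved("claude", "internal_tooling"))   # False: not on the allowlist
    print(is_use_approved("gemini", "internal_tooling"))   # True
```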