https://www.linkedin.com/posts/jamieeduncan_i-work-at-google-cloud-i-use-claude-code-activity-7388260629687738368-n305?utm_source=social_share_send&utm_medium=member_desktop_web&rcm=ACoAAAWP0BcBpqYfxTwAOkQIphWaHcrYkA2lzMc


tags:

  • llm
  • ethics
  • policy
  • legal
  • compliance

Title: No LLMs for you
Subtitle: For some professionals, LLM use is off-limits
date: 2025-10-15T00:15:00+00:00

Everyone and their dog seems to use large language models (LLMs) like ChatGPT, Claude, Bard or Copilot these days: drafting emails, writing code, generating strategies, or just serving as a rubber duck. But in some industries and professions, using a competitor’s LLM is a death trap. Let me explain why.

The Compliance Challenge

All the major LLM providers have terms of service stating that use of their services comes with limitations and liabilities. Most of them explicitly prohibit competitors from using their services. Why? There could be multiple reasons:

  • Competitive advantage: Companies invest heavily in proprietary data and models. Allowing competitors to use their LLMs could erode this advantage.
  • Data security: Competitors might inadvertently or deliberately feed sensitive information into the LLM, risk poisoning it, or try to extract sensitive data from it.
  • Legal liability: If a competitor uses the LLM to generate content that infringes intellectual property or violates regulations, who is liable? We have yet to see the first lawsuits in this area, but they are likely coming. And who owns the code or content generated by an LLM: the user or the provider? The legal landscape is still evolving.
  • Brand protection: Companies want to ensure that their LLMs are not used in ways that could harm their reputation or brand image.
  • Many more: There are likely other reasons specific to each company or industry.

What do licenses say?

Most LLM providers have terms of service that restrict usage by competitors. In this section I will go over the major providers’ terms and highlight the key points.

Anthropic’s terms of service prohibit using their services: “To develop any products or services that compete with our Services, including to develop or train any artificial intelligence or machine learning algorithms or models or resell the Services.”

OpenAI’s terms of service state: “What you cannot do. You may not use our Services for any illegal, harmful, or abusive activity. For example, you may not:

  • Use Output to develop models that compete with OpenAI.”

Google’s terms of service state: “You must not abuse, harm, interfere with, or disrupt our services or systems — for example, by:

  • using AI-generated content from our services to develop machine learning models or related AI technology”

And finally, Microsoft’s Code of Conduct. This one was a web of restrictions; the TL;DR:

No developing competing models or services, no abuse such as scraping their services. Meanwhile, their own tools can suggest autocompletions with public code that is not licensed for such use.

Spend some time reading all of their terms: the terms of service, the AI Services Code of Conduct, the GitHub Terms of Service, and more. The feeling I got? It is a maze. You had better have a legal team on speed dial, ready to explain exactly what it is you are doing.

Please be aware that these terms of service were active at the time of writing this article; they may change in the future. (Last checked: 27 October 2025)

Implications for Professionals

For professionals working in sensitive industries, using a competitor’s LLM can damage their organization and lead to serious consequences:

  • Breach of contract: A ban in the best-case scenario, lawsuits in the worst. Even when you are only the middleman and competitors use your services, your account can still be banned (see https://community.openai.com/t/account-deactivated-due-to-allowing-competitor-llm-to-same-platform/1009631).
  • Public relations issues: If it becomes known that a professional used a competitor’s LLM, that competitor will likely use it against them in marketing or legal actions, as we saw when Anthropic cut off OpenAI’s access to its Claude models (https://techcrunch.com/2025/08/02/anthropic-cuts-off-openais-access-to-its-claude-models/).
  • Loss of trust: The industry is currently hyper-competitive, and any sign of weakness results in a loss of trust from clients or shareholders. We saw a massive drop in many share prices when DeepSeek claimed it required less GPU power to train models at the start of 2025. Imagine what happens when someone from Google or Microsoft admits their products are inferior to a competitor’s.
  • Job loss: It all depends on the contracts you signed, but in some contracts reputational damage can be grounds for termination, even for things done in your own time. Does anyone remember the Coldplay incident?

And those are just the short-term consequences. In the long term, we really need to discuss the moat these companies are building around their LLMs.

Who is the GOAT in a world full of MOATS?

This is a philosophical question:

Recent studies provide evidence that LLM use can reduce critical thinking and problem-solving skills. We now have an entire generation of professionals who have never worked a day in their lives without LLMs. Even in the short span of four years since LLMs became available to the public, various articles and studies have reported a decline in skills. A few examples:

A new generation of developers that never coded without LLMs is at the start of their career. My main question to you is: W