The Erosion of Corporate Sovereignty in the Midst of an AI Gold Rush
Your company is “leaking enterprise value to some model company somewhere” - Satya Nadella
There’s a trap hiding in the AI adoption rush, and Satya Nadella, the CEO of Microsoft, just named it.
During the 2026 World Economic Forum, in a conversation with BlackRock CEO Larry Fink, Nadella said:
“Just imagine — if your firm is not able to embed the tacit knowledge of the firm in a set of weights, in a model that you control, by definition you have no sovereignty. That means you’re leaking enterprise value to some model company somewhere.”
Nadella argues that the “least talked about” but potentially most significant topic in the coming years will be the “sovereignty of a firm.” An organization that cannot capture its tacit knowledge in model weights it controls ends up leaking enterprise value to an outside model company rather than retaining that value for itself.
This leakage should be big news for corporate execs: if you outsource your institutional knowledge to AI model companies, you have no moat, or what Nadella calls sovereignty.
One could argue that Nadella is simply talking up Microsoft’s relevance in the model race. But the underlying concern is still valid.
Watching the video, I’m almost surprised he calls corporate sovereignty the “least talked about” issue. As AI continues to advance, companies that have squeezed every drop of their corporate knowledge into these models may feel the pain.
Models only know what they are trained on. A corporation that withholds its knowledge from outsourced models (OpenAI, Anthropic, Google, etc.) while keeping it accessible to a self-hosted or on-prem model, under its own governance and privacy controls, positions its data as an asset; organizations that hand everything over lose that footing.
Why does this matter?
Organizations became convinced that if they didn’t jump on the AI train, they’d be left behind. But that train cost them more than they knew. In exchange for a costly ride, AI model companies gained access to a treasure trove of private data, once fiercely guarded by corporations, that had served as their moat.
Imagine a logistics company feeding years of route-optimization data into an AI model it doesn’t own. Within 18 months, that intelligence could be improving outputs for every competitor on the same platform.
Now the moat belongs to the AI companies, who can do with it as they will. You can “trust” an enterprise agreement, but that contract is only as strong as its enforceability, and even if it holds, the damage could be irreparable.
The takeaway
Go too fast and hand all your data and intellectual capital to the model, and you may gain an early advantage, but your insights will flow to your competitors through model improvements trained on your data.
Or, if you keep all your assets locked away, you could get eaten by your competitors before you even have a chance to use your moat. The best you could hope for is an acquisition.
But what about the middle way? Be shrewd and strategic about what you share with your model partners. Keep them guessing about the secrets in the fortress of your data lake, and never expose the data that is most valuable.
Before committing to your next AI tool or model provider, you should ask:
What proprietary knowledge am I exposing by using this tool?
Does this partner’s business model depend on learning from my data?
Can I achieve the same outcome with a model I control?
There are fewer and fewer moats. Your data is one of them. Don’t give it away for a productivity bump.