Who Owns What Your AI Learned From Your Calls?

🔭 Scout's Take

No Jitter just ran a piece on AI data ownership in UC platforms. It's a good overview, but it stays at the analyst level. Matt's team at TeleCloud actually walked away from third-party conversational intelligence vendors over this exact issue and built their own solution instead. This post is the practitioner's version of that story: what the contracts actually say, what the deletion question exposes, and why the biggest risk isn't someone stealing your recordings; it's someone owning what their AI learned from them.

No Jitter published a piece this week asking who owns AI-generated data in unified communications. It's a fair question. When your UC platform generates a meeting summary, a call transcript, or a behavioral insight, who actually owns that output?

The article covers the basics well: most enterprise contracts were signed before AI features existed, vendors have broad licensing language, and nobody went back to update the terms. All true.

But I've been closer to this than most people. At TeleCloud, we evaluated third-party conversational intelligence vendors before deciding to build our own. The reason we walked away wasn't technical. It was contractual.

What's the Real Problem With Third-Party Call Intelligence?

Here's how a lot of conversational intelligence vendors structure their agreements. You, the end customer, own the recordings. You own the transcriptions. You own the summaries. Sounds reasonable.

But the vendor owns the IP behind the extraction of insights from your calls. Meaning: if their system analyzes thousands of your sales calls and learns that a certain objection-handling pattern closes 40% more deals in your vertical, that insight belongs to them. They can use it to train their models, improve their product, and sell that intelligence to your competitors.

That's not a hypothetical. That's how several vendors we evaluated were structuring their contracts. They weren't hiding it. It was in the terms. Most buyers just didn't read that far, or didn't understand the implications.

We decided it didn't make sense to give a third party that level of ownership over intelligence derived from our customers' conversations. So we built our own conversational intelligence solution internally.

What Should You Ask Your Vendor About Data Deletion?

The ownership question is one thing. The deletion question is scarier.

If you cancel your contract with a conversational intelligence provider tomorrow, can they tell you, specifically, where all of your data lives? Every system, every database, every storage array, every LLM provider they're passing your data to?

Better yet: can they show you?

Because if your vendor is using a third-party LLM to generate call summaries (and most of them are), your conversation data has touched at least one system outside the vendor's direct control. That LLM provider has their own retention policies, their own training data pipelines, their own terms of service.

So the question you should be asking isn't just "do you delete my data when I cancel?" It's: "Can you show me every system my data has touched, who operates it, and how long they retain it?" If the answer is vague, or if they can't map that chain for you, that's a problem. You don't have data ownership. You have a promise with no audit trail.
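To make that audit question concrete, here's a minimal sketch of what a mapped data chain could look like. Every system name, operator, and retention figure below is hypothetical; the point is that a vendor who truly controls your data should be able to produce something like this inventory, and any subprocessor with an unstated retention period is the broken link in the chain.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Processor:
    """One system in the chain that touches your conversation data."""
    name: str                        # hypothetical system name
    operator: str                    # who runs it: the vendor or a subprocessor
    retention_days: Optional[int]    # None = retention period unknown or unstated

def audit_gaps(chain: list[Processor]) -> list[str]:
    """Return the systems with no stated retention period.
    An empty result means the vendor has mapped the whole chain."""
    return [p.name for p in chain if p.retention_days is None]

# Hypothetical chain for a vendor that passes summaries through a third-party LLM
chain = [
    Processor("call-recording-store", "UC vendor", 365),
    Processor("transcription-service", "UC vendor", 90),
    Processor("llm-summary-api", "third-party LLM provider", None),
]

print(audit_gaps(chain))  # the third-party LLM is the unmapped link
```

If your vendor can't hand you the equivalent of this table, with real names and real retention periods, for every system in the chain, the deletion promise can't be verified.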

Who's Really Trying to Own Your Call Data?

This isn't just a startup problem. The biggest players in UC are moving aggressively into conversational intelligence, and they have their own incentives.

Zoom would love to eat Gong's and Chorus's lunch. They already have the recordings. They already have the transcripts. Adding AI-powered call scoring, deal intelligence, and coaching recommendations is a natural extension. And when the platform vendor is also the intelligence vendor, the lines between "your data" and "our product improvement" get blurry fast.

Microsoft is in the same position with Teams and Copilot. They've never been accused of being overly conservative with customer data when they see value in it. When Copilot is summarizing your calls and feeding insights into your CRM, who owns the model improvements that come from processing millions of enterprise conversations? Microsoft's terms are dense, but the broad strokes favor Microsoft.

The scale is the risk. When a vendor processes enough conversations across enough customers in a vertical, the aggregate intelligence becomes more valuable than any individual customer's data. That's not a bug in their business model. That's the business model.

So What Should You Actually Do About It?

Three things.

First, read the AI addendum. Not the main contract. The AI-specific terms. Most UC vendors added these after the original agreement was signed, and they contain the broadest language about data usage, model training, and derived insights. If your vendor doesn't have a separate AI addendum, that's worse, because it means the AI features are covered by whatever catch-all language was in the original terms.

Second, ask the deletion question before you need to. Don't wait until you're canceling to find out your vendor can't map the data chain. Ask now: where does my data go, who touches it, what are the retention periods, and what happens to model weights trained on my conversations? Get it in writing.

Third, evaluate whether building beats buying. We went through this calculus at TeleCloud. For some organizations, the convenience of a third-party tool is worth the tradeoff. For us, the risk of a third party owning intelligence derived from our customers' calls wasn't acceptable. The build cost was real, but the control was worth it.

The No Jitter piece is right that this is a governance blind spot. But it's not just a governance problem. It's a competitive intelligence problem. When your vendor learns from your data and sells that learning to your market, you've subsidized your own competition.

Have questions about AI data ownership in your stack? Reach out.

Get in Touch