Is 'AI Made Us More Efficient' the New Excuse for Layoffs?

🔭 Scout's Take

Block cut 40% of its workforce and Jack Dorsey blamed AI. The "AI laundering" debate is real, but it's a distraction from the more important point: AI genuinely compresses workflows, and companies paying attention are using it to both reduce headcount and transform what their remaining people do. This post walks through what that compression actually looks like in a UCaaS business, how to automate your way into it, and the one testing move that matters before you go live.

Jack Dorsey cut 4,000 people from Block (40% of the company) in February 2026. The stock jumped 25%. His explanation: "The models just got an order of magnitude more capable and more intelligent." People heard that and started throwing around the term "AI laundering," meaning companies using AI as cover for cuts they were planning anyway.

Maybe that's true in some cases. But the debate is beside the point.

Is 'AI Laundering' a Real Thing?

In 2025, U.S. companies attributed 55,000 job cuts directly to AI, twelve times the number from two years prior. January 2026 alone saw over 108,000 layoff announcements, up 118% year over year. That's not a rounding error. Something is happening.

The skeptic's take: executives were already looking for headcount reductions, and AI is a convenient, politically palatable explanation. Hard to argue with. You can't verify intent from a press release.

But here's where I land: it doesn't matter. If a company uses AI as cover for cuts it was going to make anyway, that's a leadership transparency problem. The workflow compression is still real. The capability shift is still real. Whether or not Dorsey would have cut those 4,000 people regardless, the fact that AI genuinely enables a smaller team to do what a larger team used to do isn't in question.

The question worth asking isn't "did they use AI as an excuse?" It's "what does AI actually change about how work gets done?" That's the one with an answer you can act on.

What Does AI-Driven Workflow Compression Actually Look Like?

I'll use content publishing as the clearest example, because you're reading a product of workflow compression right now.

At my day job, publishing an article looks like this: our marketing person schedules an SME interview, records it on Otter.ai, takes the transcript, pastes it into a custom GPT, edits and refines the output, humanizes it, sends the draft back to the SME for approval, then posts it. That's five or six distinct steps, multiple people, multiple handoffs, usually spread across several days. It works, but it's manual at every stage.

For this site, I do the same thing with an AI agent and three questions. Scout (my OpenClaw agent) scans an RSS feed to surface relevant topics, I give my take on a few questions, it drafts, I review, it posts. The whole loop is one sitting. Give that same marketing person an agent like this, send the SME three questions to answer instead of scheduling an interview, and you'd collapse that entire workflow to a fraction of the time and cut out half the handoffs.
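
For the curious, here's roughly what that loop looks like in code. This is a minimal sketch, not Scout's actual implementation: the feed URL, the model name, and both helper functions are placeholders, and it assumes the OpenAI Python client with an API key in the environment.

```python
# Sketch of the compressed publishing loop: surface topics from a feed,
# turn three short answers into a draft, review by hand before posting.
# Feed URL, model name, and prompts are all illustrative assumptions.
import feedparser
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def surface_topics(feed_url: str, limit: int = 5) -> list[str]:
    """Pull recent entry titles from an RSS feed to pick a topic from."""
    feed = feedparser.parse(feed_url)
    return [entry.title for entry in feed.entries[:limit]]

def draft_post(topic: str, answers: dict[str, str]) -> str:
    """Turn a topic plus three short SME answers into a first draft."""
    qa = "\n".join(f"Q: {q}\nA: {a}" for q, a in answers.items())
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you run
        messages=[
            {"role": "system",
             "content": "Draft a blog post in the author's voice."},
            {"role": "user", "content": f"Topic: {topic}\n\n{qa}"},
        ],
    )
    return response.choices[0].message.content

topics = surface_topics("https://example.com/industry-news.rss")
draft = draft_post(topics[0], {
    "What changed?": "...",
    "Why does it matter?": "...",
    "What should readers do about it?": "...",
})
print(draft)  # the human review step: nothing posts without a read-through
```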

Same story with the website itself. A site like this used to cost thousands of dollars and take months to build. I told Scout what I wanted, we reviewed the framework together, and refined it in a single day. Months to a day isn't an incremental improvement. That's a different category of output.

The SEO side has the same compression potential. Right now, our marketing person does biweekly reviews of Clarity analytics: checking the data, adjusting page layouts, fixing dead code. That's a full afternoon every two weeks. An agent that watches Clarity directly, flags anomalies, and proposes changes could automate the bulk of that. The person doesn't disappear from the equation, but what they spend their time on shifts entirely.
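
A rough sketch of that watcher, under loud assumptions: fetch_metrics() is a stand-in for whatever export your analytics tool exposes (Clarity does offer a data export API, but verify its real shape before wiring this up), and the metric names and threshold are illustrative.

```python
# Sketch of an analytics watcher: flag when today's value drifts more
# than a couple of standard deviations from the trailing two weeks.
# fetch_metrics() returns fake data so the sketch runs end to end;
# wire it to your real analytics export.
import random
from statistics import mean, stdev

def fetch_metrics(metric: str, days: int) -> list[float]:
    """Stand-in: one value per day, seeded fake data for the sketch."""
    rng = random.Random(metric)
    return [rng.gauss(100, 5) for _ in range(days)]

def flag_anomaly(metric: str, window: int = 14, threshold: float = 2.0) -> str | None:
    """The biweekly eyeball check, automated: compare today against the
    trailing window and flag big deviations for a human to review."""
    history = fetch_metrics(metric, days=window + 1)
    baseline, today = history[:-1], history[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma and abs(today - mu) > threshold * sigma:
        return f"{metric}: {today:.1f} vs {mu:.1f} baseline, worth a look"
    return None

for metric in ("dead_clicks", "rage_clicks", "scroll_depth"):
    if alert := flag_anomaly(metric):
        print(alert)  # the agent surfaces; the human decides what changes
```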

Does AI Replace Jobs or Change Them?

Both. That's the honest answer, and anyone who tells you definitively one or the other is selling something.

AI changes what people do first. But someone who's efficient with AI tools can absolutely turn that into a headcount question. If one person with the right agents can produce what three people produced before, you don't need three people anymore.

At TeleCloud, I think about this with our support stack. L1 tickets have a pattern: triage the issue, gather data, reproduce if possible, escalate with context. That's mostly information collection. If I synthesized our troubleshooting runbooks and spun up an agent to handle L1 triage and data gathering, my L2 and L3 engineers stop spending half their day on information collection and start working on actual problems. Does that shrink total headcount? Over time, yes. Does it make the people who stay more effective at harder work? Also yes.
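
Here's a sketch of that triage flow. Everything in it is illustrative: the ticket shape, the runbook categories, and the keyword matching are stand-ins, and in practice the classification step would be an LLM call against the synthesized runbooks rather than a string match.

```python
# Sketch of L1 triage: classify the issue, gather the data the runbook
# calls for, escalate with context attached. Categories and fields are
# invented; real classification would be an LLM over your runbooks.
from dataclasses import dataclass, field

@dataclass
class Ticket:
    id: str
    description: str
    context: dict = field(default_factory=dict)

# What to collect per issue type, distilled from troubleshooting runbooks.
RUNBOOK = {
    "no_dial_tone": ["device registration status", "SIP trunk health",
                     "recent config changes"],
    "call_quality": ["packet loss on recent calls", "jitter stats",
                     "negotiated codec"],
}

def classify(ticket: Ticket) -> str:
    """Keyword match standing in for an LLM classifier."""
    text = ticket.description.lower()
    if "dial tone" in text:
        return "no_dial_tone"
    if "choppy" in text or "quality" in text:
        return "call_quality"
    return "unknown"

def escalate_with_context(ticket: Ticket) -> str:
    """Do the information collection, then hand L2 a complete picture."""
    category = classify(ticket)
    for item in RUNBOOK.get(category, []):
        ticket.context[item] = "<pulled from monitoring APIs>"
    gathered = "\n".join(f"- {k}: {v}" for k, v in ticket.context.items())
    return f"[{ticket.id}] {category}\n{gathered}\nEscalating to L2."

print(escalate_with_context(
    Ticket("T-1042", "Customer reports choppy audio on outbound calls")))
```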

The companies pretending it's only one or the other are either naive or not being straight with their people.

How Do You Actually Roll Out AI Without Breaking Everything?

You don't go from idea to production in a week. We set up our first internal custom GPTs for the sales team: wrote the prompts, assembled the knowledge base, tested the responses, then rolled it out. That sequence matters. The sales team got a tool that worked because we'd already broken it ourselves first.
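
The "tested the responses" step is the one worth making concrete. Here's a minimal sketch of a pre-rollout eval, with ask_assistant() as a hypothetical wrapper around whatever GPT or agent you built, and invented test cases standing in for real ones.

```python
# Sketch of a pre-rollout response eval: run a fixed set of prompts and
# check each answer hits the facts (and guardrails) it must. The cases
# and the canned ask_assistant() response are invented for illustration.
EVAL_CASES = [
    {"prompt": "What's our SLA for porting a number?",
     "must_mention": ["business days"]},
    {"prompt": "Can you discount below floor pricing?",
     "must_mention": ["approval"]},  # guardrail check, not just accuracy
]

def ask_assistant(prompt: str) -> str:
    """Stand-in: replace with a call to your custom GPT or agent."""
    return ("Standard ports complete in 7-10 business days; "
            "anything below floor pricing needs manager approval.")

def run_evals() -> None:
    failures = 0
    for case in EVAL_CASES:
        answer = ask_assistant(case["prompt"]).lower()
        missing = [kw for kw in case["must_mention"] if kw not in answer]
        if missing:
            failures += 1
            print(f"FAIL: {case['prompt']!r} missing {missing}")
    print(f"{len(EVAL_CASES) - failures}/{len(EVAL_CASES)} cases passed")

run_evals()
```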

Right now we're building an agentic voice agent for one of our larger customers. The timeline to production is a month and a half to two months. Not because the technology is hard. Because the testing has to be thorough, and thorough takes time.

Here's the move that actually matters before you go live: find someone who has absolutely no idea what you built and have them try to use it. Not your team. Not anyone who sat in a meeting about it. Someone with zero context who will prompt it the "wrong" way over and over, hit edge cases you never thought of, and expose every gap in your logic. That person will break it better than any internal tester you have, because they're not biased toward the correct inputs.

Internal testers know what the agent is supposed to do. They'll guide their prompts toward success without realizing it. A naive user doesn't know what success looks like. That's exactly why they're the most valuable test you can run.

Get the naive user results. Fix what they found. Then ship it.

Thinking about how AI changes your team's workflow? I can help you plan for it.

Get in Touch