I can now produce in a week what used to take months. Research, analysis, slide decks, written briefs: AI compresses all of it. But I keep running into the same friction. The output piles up. The reviewing, the deciding, the acting on it: that part has not sped up. I think the bottleneck moved rather than disappeared, and I am trying to work out what that means.

The production side is done

I can produce a full strategic brief in an afternoon. A research summary that used to take a week takes a couple of hours. Slide decks, written analysis, data pulls: the tools handle it. That part works.

I see the same pattern across the industry. Consulting firms are reporting 25–40% productivity gains on the production side. Internal AI tools scan tens of thousands of documents in seconds. The constraint that used to sit at production (writing the deck, doing the research, building the analysis) is gone.

But what happens after the output lands?

This is the part I keep getting stuck on. I generate the analysis. Then I need to review it. Pressure-test the reasoning. Decide which parts matter. Figure out what to do next. Coordinate with people. Follow through.

None of that got faster. The human side of the workflow runs at the same speed it always did. And the more the production side accelerates, the more the gap between output and action grows.

I read that 80% of workers now report information overload, up from 60% in 2020. I also came across a study linking long-term AI use with mental exhaustion and lower confidence in decision-making. That tracks with what I am seeing. More coming in, harder to process.

Open Question

Is the execution bottleneck temporary, something humans will adapt to with better filters? Or is decision-making throughput fixed, a ceiling no tool can raise?

A number that stuck with me

I saw a study where experienced developers using AI coding tools took 19% longer on real-world maintenance tasks than when they worked without AI. Nearly half said debugging AI-generated code takes more time than writing it themselves.

This is not about the tools being bad. It is about generation and absorption being two different operations running at two different speeds. The code got written instantly. The queue behind it (review, testing, integration) did not move any faster.

I keep wondering whether this applies more broadly. If the speed of generation does not match the speed of adoption, the gap becomes a queue. And at some point the queue becomes the problem.
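The arithmetic behind that worry is simple: if generation runs faster than review, the backlog grows by the rate gap every period, without limit. A minimal sketch (the weekly rates are hypothetical, chosen only to make the dynamic visible):

```python
# Illustrative queue model: backlog growth when generation outpaces review.
# All rates are made-up numbers, not measurements from any study cited here.

def backlog_after(weeks, generated_per_week, reviewed_per_week, start=0):
    """Track the review backlog week by week.

    Each week the queue grows by what is generated and shrinks by what
    the humans manage to review (never dropping below zero).
    """
    backlog = start
    history = []
    for _ in range(weeks):
        backlog = max(0, backlog + generated_per_week - reviewed_per_week)
        history.append(backlog)
    return history

# Before AI: production and review roughly match; the queue stays flat.
print(backlog_after(8, generated_per_week=10, reviewed_per_week=10))
# prints [0, 0, 0, 0, 0, 0, 0, 0]

# After AI: production triples, review capacity is unchanged; the backlog
# grows by the 20-item rate gap every single week.
print(backlog_after(8, generated_per_week=30, reviewed_per_week=10))
# prints [20, 40, 60, 80, 100, 120, 140, 160]
```

The model is deliberately crude, but it makes the structural point: speeding up generation further only steepens the second line. Only raising review throughput, or filtering before the queue, flattens it.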

What are consulting firms selling now?

McKinsey reduced its workforce by around 5,000 between 2023 and 2024, while simultaneously disclosing it was running tens of thousands of AI agents across its operations. BCG made around $2.7 billion from AI-related advisory in 2024, roughly 20% of its total revenue. These firms are restructuring around something, but I am not entirely sure what.

If a consulting firm can produce 10x the analysis in the same time, and the client can still only absorb the same volume of recommendations, what is the product now? Is it the analysis itself, or the judgment about which parts of it matter? I do not have a clear answer.

Open Question

If production is commoditised but clients absorb the same volume of recommendations, what exactly is the consulting product now?

Is a new role forming?

I notice that AI output triage and data curation are becoming real job titles. Teams with dedicated curators reportedly produce measurably more usable output than those without. The function sits between AI production and human decision-making: filtering, validating, deciding what actually reaches the people who need to act.

This reminds me of how project management became a formal discipline in the 1990s when workflows got too complex for anyone to track informally. Maybe the same pattern is repeating. Or maybe it is not, and organisations just learn to ignore the excess.

Friction Point: AI Operations Execution Gap
Open Questions
  • Is this a temporary mismatch or a structural one?
  • At what point does more AI output start hurting rather than helping?
  • Will AI output triage become a formal role?
  • If consulting firms can produce 10x the analysis but clients absorb the same volume, what is the product now?
Sources

25–40% productivity gains from AI on knowledge tasks

Dell'Acqua et al., "Navigating the Jagged Technological Frontier," Harvard Business School / BCG, 2023 — hbs.edu; McKinsey Global Institute, "The Economic Potential of Generative AI," June 2023 — mckinsey.com

80% of workers report information overload, up from 60% in 2020

OpenText / 3Gem survey, 27,000 workers across 12 countries, March 2022 — opentext.com

Long-term AI use linked to mental exhaustion and lower confidence in decision-making

Shalu et al., "The Cognitive Cost of AI," SAGE / PMC, 2026 — pmc.ncbi.nlm.nih.gov

Experienced developers using AI coding tools took 19% longer on real-world maintenance tasks

METR, "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity," July 2025 — metr.org; arXiv:2507.09089

Nearly half of developers say debugging AI-generated code takes more time than writing it themselves

Stack Overflow Developer Survey 2025 — survey.stackoverflow.co

McKinsey workforce reduction of around 5,000 and deployment of AI agents at scale

The Register, December 2025 — theregister.com

BCG AI-related advisory revenue: approximately 20% of total 2024 revenue

BCG press release, March 2025 — bcg.com; Analytics India Magazine — analyticsindiamag.com