The AI Adoption Value Ladder

There’s a pattern that repeats in every technological wave.
When a new technology emerges, adoption is usually the problem, not the technology itself. With some perspective, look at what happened in each wave:
- DevOps: everyone started using CI/CD tools, but very few companies changed their processes.
- Cloud: everyone migrated to the cloud, without rethinking architecture.
- Containers: companies installed Kubernetes, but few trained their infrastructure teams.
- Microservices: monoliths were broken apart without understanding when microservices actually made sense.
The technology works. The issue has always been how we adopt and use it. The same is happening with AI.
The paradox OpenAI, Google, and Anthropic don’t want explained

The 2025 Faros AI report, based on data from more than 10,000 developers and 1,255 teams, shows seemingly contradictory results.
At the individual level:
- +21% more tasks completed
- Up to +55% faster on specific tasks
- +98% more pull requests merged
Everything looks great!
At the organizational level:
- +91% increase in code review time
- +154% increase in PR size
- +9% more bugs
- No correlation with business results
How is that possible?
This is what’s being called the AI Productivity Paradox.
The person validating the code now has double the work. The quality process gets saturated. Downstream systems can’t absorb the output.
For every hour gained producing code, minutes are lost elsewhere in the chain. If the organization isn’t prepared to absorb the increase, individual gains turn into net organizational losses. Bottlenecks appear and frustration grows.

BCG summarized it in Where’s the Value in AI? (2024):
- 10% of the value is in the algorithm
- 20% in the technology
- 70% in people and processes
Integration matters far more than the tool itself. It’s not ChatGPT vs Claude vs Gemini. It’s how AI integrates into your workflows, how you adapt processes, teams, and people to absorb the change.
Buying the tool isn’t enough.
Not all AI usage is the same
Even though many of us live in a tech bubble, adoption happens at different levels. Each level requires different preparation, both technical and organizational.

| Level | What it is | Typical productivity gain | What the organization needs |
| --- | --- | --- | --- |
| 1 | Browser | Baseline | Nothing (everyone starts here) |
| 2 | AI-enabled editor (assistant) | +26% to +55% | Licenses |
| 3 | Context-aware agent | ×3–10 in specific tasks | Processes, conventions, real training |
| 4 | Autonomous ecosystem | ×10+ (potential) | Architecture, governance, culture |
Let’s go level by level.
Level 1: Browser
This is where a lot of people were in 2025, and even more still are in 2026. ChatGPT in one tab, Claude in another, your IDE in the middle. Copy and paste.
Of course there is a performance improvement, but the manual effort of constantly transferring context eats up much of the benefit, and it creates frustration when a tangible productivity boost is expected.
Level 2: Editor with integrated AI
This is the level where you start seeing quite a few individual developers in 2026, though far fewer companies or organizations. Copilot or Cursor, among others, used as autocomplete assistants, not in agent mode. The AI has access to the code you are writing, which improves context and makes suggestions more relevant.
Here you already see a serious return: between +26% and +55% productivity improvement according to studies.
The important thing is not the specific editor. The key is that the AI has more context: it knows which file you are editing, which dependencies you use, which patterns you follow.
Important to note: this is the dividing line. At Level 2 the AI suggests; at Level 3 it acts. That’s the difference between an assistant and an agent.
Level 3: Agent with full context
Here the real concept of AI Agent appears: the AI plans, executes, and reviews autonomously, but you supervise and correct.
You are there while it works (what the industry calls human-in-the-loop (HITL)). Until recently, this required a CLI (Claude Code, OpenCode); today there are desktop interfaces (Claude Cowork, Lovable, Antigravity).
The tools (like Claude Code) have access to your entire project. They can read files, run commands, understand the full architecture, analyze the whole codebase.
It’s best illustrated with a concrete example: analyzing five production log files to find an error pattern.
At Level 1 (browser), this can take about 15–20 minutes:
- Access via SSH to the instance
- See where the files are
- Download them to your local machine
- Upload them to the LLM chat
- Explain that you have a failure on X server, that you downloaded the logs…
- Wait for a response
- It will probably ask for more information or to run some diagnostic commands
- Manually synthesize the responses
Loss of context is a real problem.
At Level 3 (agent with context), it takes 2–4 minutes:
"Analyze the five log files in /logs-demo/ and identify the common error pattern"
The AI Agent reads everything, correlates it, and gives you the diagnosis.
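To make the mechanics less magical, here is a minimal sketch of the same idea scripted by hand with the Anthropic Python SDK: gather the logs, send them in one request, print the diagnosis. The directory, model name, and prompt are illustrative assumptions; a real agent such as Claude Code handles file discovery, tool calls, and iteration on its own.

```python
# Minimal sketch: read a handful of log files and ask one model call for the
# common error pattern. Directory, model name, and prompt are illustrative.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set.
from pathlib import Path

import anthropic

LOG_DIR = Path("/logs-demo")  # illustrative path from the example above

# Collect every log file with a header so the model can tell them apart.
logs = []
for log_file in sorted(LOG_DIR.glob("*.log")):
    logs.append(f"=== {log_file.name} ===\n{log_file.read_text(errors='replace')}")

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model name
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": "Analyze these log files and identify the common error pattern. "
                   "Explain the likely root cause.\n\n" + "\n\n".join(logs),
    }],
)
print(response.content[0].text)
```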
Extra: if you give it access to the repository with the application code, it will find the relevant parts where those errors could have been generated. In a few additional minutes, it can present:
- Technical error report
- Post-mortem analysis
- Resolution proposal: affected code, involved components
All in ~10–15 minutes. The same task in the browser would easily take 45–60 minutes.
That’s the difference between ×1 productivity and ×3–10 productivity in specific tasks.
Level 4: Supervised autonomous ecosystem
The difference with Level 3 is that you are no longer there while the agent works. The agent executes and you review afterwards. Supervision is asynchronous.
Almost no one is here in a mature way, and the few organizations that are keep experimenting constantly.
These are not just “agents that perform tasks.” I’m talking about something more complex:
- Organizational pipelines with agents integrated into CI/CD
- Shared guidelines: organizational context, not just individual
- Automated corporate quality metrics, before human review
- Multiple specialized models: some for code, others for documentation, others for analysis, others for QA
- Security guardrails: clear limits on what AI can and cannot do
Basically, setting all this up is like running a full AI project in its own right, integrated with the entire development and operations ecosystem and with organizational workflows; the sketch after this paragraph shows what one small guardrail might look like. That’s why almost no one is here: the real work is redesigning how the organization operates.
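As an illustration of what one such guardrail could look like in practice, here is a minimal sketch of a CI gate that rejects oversized pull requests before they reach human review, a direct response to the +154% PR-size problem. The threshold, target branch, and use of git diff are illustrative assumptions; a mature Level 4 setup would combine many checks like this, some of them agent-driven.

```python
# Minimal sketch of a CI quality gate: fail the pipeline when a pull request's diff
# is too large for a human reviewer to absorb. Threshold and target branch are
# illustrative; most CI systems expose the target branch as an environment variable.
import subprocess
import sys

MAX_CHANGED_LINES = 600        # illustrative threshold; tune to your review capacity
TARGET_BRANCH = "origin/main"  # illustrative


def changed_lines(target_branch: str) -> int:
    """Count added plus removed lines between the target branch and HEAD."""
    diff = subprocess.run(
        ["git", "diff", "--numstat", f"{target_branch}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in diff.splitlines():
        added, removed, _path = line.split("\t", 2)
        if added != "-" and removed != "-":  # "-" marks binary files
            total += int(added) + int(removed)
    return total


if __name__ == "__main__":
    lines = changed_lines(TARGET_BRANCH)
    if lines > MAX_CHANGED_LINES:
        print(f"PR changes {lines} lines (limit {MAX_CHANGED_LINES}). Split it before review.")
        sys.exit(1)
    print(f"PR size OK: {lines} changed lines.")
```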
Individually, reaching N4 is relatively easy: you leave an agent running overnight and review in the morning. But the real impact is at organizational N4. Getting the whole company to this point is a problem beyond your control.
Potential return is ×10 or more. But it requires serious investment in architecture, governance, and culture. The companies experimenting with this are mostly large tech firms or AI startups with dedicated resources. And I say “experimenting,” because there really isn’t a clear recipe for success yet.
Note for early adopters: organizational N4 is not something you can reach on your own; it requires architecture decisions, budget, and infrastructure. But you can prepare: learn about MCPs, experiment with agents in personal projects, understand the patterns. When your company is ready (or when you move to one that is), you’ll be the person who knows how to apply all these patterns.
What to do starting tomorrow: start adopting these technologies and paradigms
If you are at Level 1 (browser)
- Create a project in Claude or ChatGPT. Upload 5–10 context files from your main project.
- Compare results between asking without those files and asking with the context loaded.
- Document the difference. Time it takes to do something before vs. after.
If you are at Level 2 (editor with AI)
- Try Claude Code, Codex, OpenCode, or Aider on a personal project. They are free up to certain limits. It’s the first step towards N3.
- Create an AGENTS.md file at the root of your project with the conventions and context the AI should know (a minimal example follows this list).
- Measure a specific task you usually do. How long with your current tool vs. with full context.
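To make the AGENTS.md suggestion concrete, here is a minimal, illustrative example. The project details, commands, and rules are assumptions about a hypothetical codebase; the point is to write down the conventions and boundaries the agent would otherwise have to guess.

```markdown
# AGENTS.md (illustrative example)

## Project overview
Internal payments API: Python 3.12, FastAPI, PostgreSQL, deployed via GitHub Actions.

## Conventions
- Format with black, lint with ruff; follow the existing module layout.
- Every new endpoint needs a pytest test under tests/.
- Configuration comes from environment variables; never hardcode secrets.

## Useful commands
- Run the test suite: make test
- Start a local server: make dev

## Boundaries
- Do not touch files under migrations/ without asking first.
- Ask before adding new dependencies.
```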
If you are at Level 3 (agent with context)
- Identify a repetitive process you could automate with agents.
- Experiment with n8n or Playwright + AI to automate the most tedious step (a minimal sketch follows this list).
- Document the results with numbers. That’s what you need to propose scaling.
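As a sketch of the Playwright + AI combination, here is a minimal example that pulls the text of a status dashboard and asks a model to flag anything unusual. The URL, model name, and prompt are illustrative assumptions; the same flow can be wired together without code in n8n.

```python
# Minimal sketch: automate one tedious step (checking a dashboard) with Playwright
# plus an LLM. URL, selector, and model name are illustrative. Requires
# `playwright install chromium` and ANTHROPIC_API_KEY in the environment.
import anthropic
from playwright.sync_api import sync_playwright

DASHBOARD_URL = "https://status.example.internal"  # illustrative URL

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto(DASHBOARD_URL)
    dashboard_text = page.inner_text("body")  # a real script would target specific widgets
    browser.close()

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model name
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": "This is the text of our status dashboard. List anything that looks "
                   "degraded, failing, or unusual, or answer 'all clear':\n\n" + dashboard_text,
    }],
)
print(response.content[0].text)
```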
For all levels: generate evidence
If you solved something in 10 minutes that previously took 2 hours, write it down.
Become an internal reference. When your company’s official initiative arrives, you’ll be the one who has already done it and knows how.
And choose your battles: if your company doesn’t move, maybe you can.
Conclusions
88% of companies use AI regularly. Only 6% have adopted it usefully.
AI introduces its own risks: cognitive dependence, inherited biases, the temptation to trust without verifying. But the biggest risk is not adapting.
For a long time, the same pattern has repeated. Technology changes; organizational dynamics do not.
What level are you at? And your company?
*This article comes from a talk for which I analyzed over 80 academic papers, consulting reports, and studies on AI adoption.
I run the risk of this becoming obsolete within three months at the pace we’re going. However, the ideas about managing technological change cut across specific tools and waves. That is the real challenge both professionals and organizations face.
Full references for this article (13 sources, studies, and tools), a glossary of terms, and resources for deeper learning are available at this link.
Want to know a little more about who wrote this article? 👇
He has over 25 years of experience leading technological transformations. As a Platform & Solutions Architect in the emergency services aerospace sector, he applies and advocates for responsible AI adoption with a focus on real ROI.