Much has been said about the capabilities of AI, and one question comes up in every conversation: how far can it go?
Yes, its capabilities are enormous. Every day brings new, surprising announcements: AI can create, code, analyse, decide… And all this noise can easily make us lose focus, leaving aside far more relevant questions. Who governs its use? How do we measure its impact? What responsibility do we assume for its results?
Once Generative AI is part of our processes, it ceases to be a promise and becomes either a risk or a competitive advantage. And it all depends on how we govern it.
The false autonomy of AI
But then, aren’t we close to full AI autonomy? Announcements such as Anthropic’s early-2026 release of Claude Co-work¹, reportedly developed “entirely” by AI, might lead us to believe that we are on the verge of total automation for certain tasks. However, if we look beyond the headline, the reality is far less disruptive than it seems.
Governance was human: design, planning, and reviews were done by people. Yes, AI is an excellent executor, but it needs human direction if we don’t want it to become a source of uncontrolled risks and costs.
Recent studies confirm this. Among them, a study on collaborative coding² shows that, when faced with complex logical problems, even the most advanced AI models fail to achieve good results working alone (below 1%). When they operate in collaboration with people, however, effectiveness multiplies significantly, even surpassing the performance of experts working in isolation.
Productivity is not just speed
This is a mistake we see in many organizations: confusing speed with productivity. When adopting any new technology or tool, we cannot know whether it really improves productivity unless we can answer certain questions:
- What does AI produce, and what did we produce before introducing it? Not in terms of lines of code or documents generated, but in terms of useful, comparable, and traceable products delivered.
- How much does it cost? Including licences, supervision time, rework, and opportunity cost.
- What risks does it bring? In quality, security, regulatory compliance, and technological dependency.
Productivity cannot be based on perception alone. It must be objectively measurable, regardless of whether the work is being performed by a person, a tool, or a generative model.
If we incorporate AI into our processes without a clear purpose, without metrics and without limits, we will not be able to answer these questions objectively, and it will be difficult for us to justify its use. All it will generate is uncertainty.
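As a minimal sketch of what answering these questions objectively might look like, the comparison below uses entirely hypothetical figures, and a generic "units delivered" measure standing in for whatever sizing standard an organization adopts (function points, for instance):

```python
from dataclasses import dataclass


@dataclass
class DeliveryPeriod:
    """One measurement period, before or after adopting AI.
    'units_delivered' is whatever objective size standard the
    organization uses; all figures in this sketch are hypothetical."""
    units_delivered: float  # useful, comparable, traceable output
    total_cost: float       # licences + supervision + rework + opportunity cost
    defects: int            # simple proxy for quality risk


def productivity(p: DeliveryPeriod) -> float:
    """Units delivered per unit of total cost."""
    return p.units_delivered / p.total_cost


def compare(before: DeliveryPeriod, after: DeliveryPeriod) -> dict:
    """Relative change after adoption: a positive productivity change
    with a non-increasing defect rate suggests AI is adding value."""
    return {
        "productivity_change": productivity(after) / productivity(before) - 1,
        "defect_rate_change": (after.defects / after.units_delivered)
                              - (before.defects / before.units_delivered),
    }


# Hypothetical scenario: 10% more output, but 20% higher cost and more defects.
before = DeliveryPeriod(units_delivered=100, total_cost=50_000, defects=5)
after = DeliveryPeriod(units_delivered=110, total_cost=60_000, defects=8)
result = compare(before, after)
```

In this hypothetical case the organization ships more output after adopting AI, yet productivity per unit of cost falls and the defect rate rises: precisely the situation where perception (more speed) and objective measurement diverge.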
Govern, measure and decide
Experience and data prove it. The best results do not come from “letting AI do its thing,” but from integrating it into governed processes. To achieve this, we must rely on three fundamental pillars:
- Estimation governance, even when part of the work is automated, to protect budgets and justify costs.
- Objective measurement of the software produced, using standards that allow us to compare the real impact of AI before and after adoption.
- Specialized human oversight, not to slow down technology, but to steer it towards business value.
This is how we can know when AI adds value, and also when it does not and is better left unused. Decisions can then be made with data in hand.
Intelligent AI control
The near future of AI in organizations is not to eliminate human intervention, but to elevate it: moving from executing tasks to designing systems, from producing to governing.
At LedaMC we work precisely in that direction, helping organizations integrate AI into their IT development and management processes while keeping people at the centre of decision-making. People define context, apply critical thinking, and assume responsibility for outcomes. All this without losing control over costs, quality, and results, relying on objective measurement of productivity and quality, benchmarking, robust estimation models, and tools such as Quanter, which enables the governed application of generative AI to optimize software development projects.
Those who know how to govern AI best, with data, processes, and well-prepared teams, will gain a significant competitive advantage in the market.
Sources: