Are you ready for a world where AI agents work alongside humans, not just as tools but as decision-makers? Imagine walking into your favourite coffee shop and overhearing a conversation: “I’ve built an agent.”
This is what Rahul Bhattacharya, AI leader, GDS Consulting, EY, spoke about at MLDS 2025, discussing the role of the self-evolving agentic workforce of the future, and why assessing the risks is just as important as measuring the benefits.
Bhattacharya explained that for a system to be considered an agent, it must have certain abilities. It should be able to interact with its environment by observing what’s happening around it and taking actions. It must also understand changes in the world, recognising what happens after it makes a move.
“A key ability is making decisions, where the agent chooses the best action based on set rules, goals, or rewards. Over time, it should learn from past experiences and feedback to improve its performance,” Bhattacharya said.
Additionally, an agent must balance new ideas with proven methods, exploring different approaches while still using what works best.
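To make these abilities concrete, here is a minimal, illustrative sketch (not from the talk) of an agent loop: it observes the result of an action, updates what it has learned from that feedback, and balances exploration with exploitation using a simple epsilon-greedy rule. The environment and reward values are invented for the example.

```python
# Illustrative sketch: observe -> decide -> act -> learn, with epsilon-greedy
# exploration. All rewards and action names are made up for demonstration.
import random

class MinimalAgent:
    def __init__(self, actions, epsilon=0.1, learning_rate=0.5):
        self.actions = actions
        self.epsilon = epsilon                    # how often to explore new ideas
        self.learning_rate = learning_rate
        self.value = {a: 0.0 for a in actions}    # learned estimate of each action's reward

    def decide(self):
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value[a])

    def learn(self, action, reward):
        # Update the estimate from feedback (simple moving average toward the reward).
        self.value[action] += self.learning_rate * (reward - self.value[action])

def environment(action):
    # Stand-in environment: one action pays off more than the others, with noise.
    base = {"wait": 0.1, "act_a": 0.5, "act_b": 0.8}[action]
    return base + random.uniform(-0.1, 0.1)

agent = MinimalAgent(["wait", "act_a", "act_b"])
for _ in range(200):
    action = agent.decide()
    reward = environment(action)      # the agent sees what happens after its move
    agent.learn(action, reward)

print(agent.value)                    # "act_b" should end up with the highest estimate
```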
Giving the example of a self-driving car, which senses its surroundings, follows traffic rules, makes decisions, and “learns from real-time data,” Bhattacharya pointed out that an agent also needs agency: the ability to make choices rather than just follow a fixed path. That agency is also the risk factor, since it makes agents unpredictable.
He mentioned that one major difference between current AI agents and the LLM systems discussed a year back is “Tools vs. Actions.”
A tool has a fixed, predictable output, like a calculator, while an action is more flexible and can lead to different results, such as an AI assistant making a complex decision.
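A rough way to picture the distinction (an illustrative sketch, not code from the talk) is a deterministic tool function next to an agent-chosen action whose outcome depends on context and may vary between runs. The function names and the decision rule are hypothetical.

```python
# Illustrative sketch of "tools vs. actions". calculator_tool and choose_action
# are hypothetical names; the randomness stands in for non-deterministic reasoning.
import random

def calculator_tool(a: float, b: float) -> float:
    """A tool: the same inputs always give the same output."""
    return a + b

def choose_action(context: dict) -> str:
    """An action: the agent weighs the context and may reach different outcomes."""
    if context.get("order_value", 0) > 500:
        return "escalate_to_human"
    return random.choice(["refund_customer", "request_more_info"])

print(calculator_tool(2, 3))                    # always 5
print(choose_action({"order_value": 120}))      # may differ from run to run
```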
Another key aspect is planning and memory: AI agents can break tasks into smaller steps (sub-goal decomposition) and use memory, both short-term (within a task) and long-term (learning over time).
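Below is a small, illustrative sketch of those two ideas: a hard-coded decomposition table stands in for a real planner, a per-task scratchpad plays the role of short-term memory, and a dictionary that survives across tasks plays the role of long-term memory. All names and the plan contents are invented.

```python
# Illustrative sketch of sub-goal decomposition plus short- and long-term memory.
class PlanningAgent:
    def __init__(self):
        self.long_term_memory = {}        # persists across tasks (learning over time)

    def decompose(self, task: str) -> list:
        # Break a task into smaller steps; here the plan is hard-coded,
        # in a real agent it would come from a planner or an LLM.
        known_plans = {
            "publish_report": ["gather_data", "draft_report", "review", "publish"],
        }
        return known_plans.get(task, [task])

    def run(self, task: str) -> None:
        scratchpad = []                   # short-term memory, lives only within this task
        for step in self.decompose(task):
            result = f"done:{step}"       # placeholder for actually executing the step
            scratchpad.append(result)
        self.long_term_memory[task] = scratchpad   # keep what was learned for later

agent = PlanningAgent()
agent.run("publish_report")
print(agent.long_term_memory)
```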
The “Risk” of Agentic Workforce
Bhattacharya observed that the workforce of the future will not just be made up of humans but will also include “teams of AI agents working alongside people.” Instead of hiring only humans, companies will begin to deploy AI agents for tasks.
Some of these tasks will go to deterministic tools that follow fixed processes, while others will be handled by AI agents that can make flexible decisions. Just like humans, these agents will need knowledge—both general skills and company-specific information about internal processes.
This shift is also creating new job roles. “Knowledge Harvesters” will be responsible for collecting and documenting human knowledge so AI agents can use it, while “Flow Engineers” will decide which tasks should be assigned to AI agents, which should remain as tools, and how everything should work together.
AGI Coming Soon?
This brought Bhattacharya to the topic of AGI. He said that instead of a single, super-intelligent AI, there could be a “network of self-evolving AI agents” that can “self-spawn” (create new agents) and “self-train” (learn new skills).
He described a future where an AI system starts with no agents, but as tasks arise, it creates a new agent to handle them, leading to continuous growth and learning—possibly even true AGI.
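As an illustrative sketch of that structure only (the classes and training placeholder are invented), a network could start with no agents and spawn one the first time an unfamiliar task type arrives:

```python
# Illustrative sketch: a network that starts empty and "self-spawns" a new agent
# per task type. Training is faked with a string; the structure is the point.
class SpawnedAgent:
    def __init__(self, task_type: str):
        self.task_type = task_type
        self.skills = [f"trained_for:{task_type}"]   # "self-train" placeholder

    def handle(self, task: str) -> str:
        return f"{self.task_type}-agent handled '{task}'"

class AgentNetwork:
    def __init__(self):
        self.agents = {}                  # starts with no agents at all

    def submit(self, task_type: str, task: str) -> str:
        if task_type not in self.agents:
            # No existing agent covers this task type: spawn a new one.
            self.agents[task_type] = SpawnedAgent(task_type)
        return self.agents[task_type].handle(task)

network = AgentNetwork()
print(network.submit("invoicing", "reconcile March invoices"))
print(network.submit("support", "answer ticket #42"))
print(len(network.agents))                # the network has grown to 2 agents
```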
However, this progress also comes with risks. AI must have “agency”, meaning the ability to make decisions, but “agency creates risk because it is not deterministic… It might take actions that do not align with our morals, ethics, or company policies.”

To keep AI under control, observability is crucial. Just as airplanes rely on autopilot but still require human pilots for safety, AI systems need oversight to ensure they make the right choices within safe boundaries.
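One way such oversight can look in practice, as a minimal sketch rather than any specific product, is a guardrail layer that logs every proposed action and blocks anything outside an allowed-action policy, so a human reviewer can see exactly what the agent tried to do. The policy contents and function names here are invented.

```python
# Illustrative sketch: observability plus a policy boundary around agent actions.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.oversight")

ALLOWED_ACTIONS = {"send_report", "schedule_meeting", "draft_email"}

def execute_with_oversight(action: str, payload: dict) -> bool:
    """Log the proposed action, check it against policy, then act or refuse."""
    log.info("agent proposed action=%s payload=%s", action, payload)
    if action not in ALLOWED_ACTIONS:
        log.warning("blocked action=%s: outside policy boundary", action)
        return False
    # ... perform the approved action here ...
    log.info("executed action=%s", action)
    return True

execute_with_oversight("draft_email", {"to": "team"})        # allowed and executed
execute_with_oversight("wire_transfer", {"amount": 10_000})  # blocked and logged
```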