THE AI INTERVIEW
systems that earn trust. That is a competitive advantage, not just a regulatory obligation.”
The accountability question
When autonomous agents cause harm, determining who is responsible is one of the most pressing unsolved questions in AI governance.
Ivana is direct about the limits of current legal frameworks. Her position is that accountability must attach to humans and organisations, not to the systems themselves.
The challenge lies in tracing the causal chain from a harm or mistake back to the decision that produced it, a chain that agentic systems can make complex or opaque.
“What is needed is a principle of non-delegable responsibility,” Ivana asserts. “When an organisation deploys an autonomous agent, it cannot transfer accountability for that agent’s actions to the agent itself, to another organisation in the supply chain or to the user who triggered the activity.
“The organisation that put the system into the world retains responsibility for its foreseeable consequences.”
“The organisations that treat governance as a design principle will build systems that earn trust”
Ivana Bartoletti, Vice President and Global Chief Privacy & AI Governance Officer, Wipro
Are we making AI too human?
One of the most overlooked risks in AI, Ivana argues, is anthropomorphism – the tendency to design AI systems with human names, voices and emotional cues in ways that lead users to attribute human qualities to them.
The practice creates what she describes as a “misalignment” between a user’s mental model of a system and
26 May 2026