AI ETHICS BY NUMBERS
Customers:
• 49% trust AI interactions (up from 30% in 2018)
• 66% expect AI models to be bias-free
• 67% expect organisations to be accountable for AI algorithms
Organisations:
• 53% have a leader who is responsible for AI ethics
• 50% have a confidential hotline to enable whistleblowing
• 60% allow customers to access and modify their information (down from 70% in 2019)
Executives:
• 78% are aware of explainability in AI systems (versus 32% in 2019)
• 65% are aware of AI discriminatory bias issues
• 52% have experienced legal scrutiny of their AI systems
• 22% have faced customer backlash stemming from AI systems
From AI and the Ethical Conundrum: How organisations can build ethically robust AI systems and gain trust (2020). Publisher: Capgemini Research Institute
Yardley agrees that AI should only be held accountable in a situation where “there is negligence involved”. He says, “The customer must ultimately take a view on whether the advantages outweigh the risks and needs to do this in the light of the overall performance of the aid with respect to human options. So if, for instance, the number of accidents caused by driverless cars is a lower proportion than those caused by human drivers, it is not reasonable to seek compensation for a contingency that could not be designed for.”
At the root of the AI ethics debate is the question of scale. As Peter van der Putten, assistant professor at Leiden University and global director for