TECHNOLOGY
Artificial intelligence (AI) does not have opinions. There are no baked-in beliefs it would like to champion, no self-evident propositions it's itching to share with the world. Instead, AI relies on and learns from huge datasets generated by humans.
And while nobody consciously builds bias into a dataset, over time it creeps in. Without human intervention, AI could end up reinforcing damaging societal biases, limiting its impact as the next great technical innovation.
AI needs data that is collected and labelled by data scientists, and algorithms that are defined, trained and tested by AI developers, according to Francesca Rossi, IBM Fellow and AI Ethics Global Leader. And that's where the problems often start.
“People are biased – mostly in an unconscious way – so it is possible that, without a careful methodology, such biases are embedded in both the data and in any other developer decisions in building an AI system,” says Rossi. “We devote significant effort to tools, methods, education and impact assessment processes to ensure that AI bias is detected and mitigated by our developers, consultants, sellers and all other IBMers in their respective roles.”
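To make the idea of detecting bias concrete, here is a minimal, purely illustrative Python sketch – not IBM's tooling or methodology – that computes a demographic-parity gap: the difference in positive-outcome rates between groups in a labelled dataset, the kind of disparity a bias-detection check would flag. The column names ("group", "outcome") and the 0.1 tolerance are assumptions made for the example.

```python
# Illustrative only: a minimal demographic-parity check on a labelled dataset.
# The keys "group"/"outcome" and the 0.1 tolerance are assumptions for this
# example, not part of any specific toolkit.
from collections import defaultdict

def demographic_parity_gap(records, group_key="group", outcome_key="outcome"):
    """Return the largest gap in positive-outcome rates between groups, plus the rates."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in records:
        totals[row[group_key]] += 1
        positives[row[group_key]] += 1 if row[outcome_key] == 1 else 0
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Hypothetical labelled data: group membership and a binary outcome.
    data = [
        {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
        {"group": "A", "outcome": 0}, {"group": "B", "outcome": 1},
        {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
    ]
    gap, rates = demographic_parity_gap(data)
    print(f"positive-outcome rates by group: {rates}")
    if gap > 0.1:  # assumed tolerance for the example
        print(f"warning: demographic parity gap of {gap:.2f} exceeds tolerance")
```

A real assessment process would go far beyond a single metric, but even a check this simple shows how a disparity hidden in the data can be surfaced before a model is deployed.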