
TECHNOLOGY

AI ARTIST PAINTS A POOR PICTURE

We asked the AI-powered art creator Craiyon to draw a picture of "a CEO being served a drink by a flight attendant". Craiyon (formerly DALL·E mini, an AI model that creates new images from a text prompt) complied and you can see the results here.
Sadly, this AI artist seems to have an outdated concept of gender roles in the air and on the ground, with women in uniforms serving men in suits.
OpenAI, the AI research and development company behind DALL·E, is onto this and taking it seriously. Earlier this year, developers introduced a new technique to ensure generated images more accurately reflect the diversity of the world's population.
The company reported a 1,200% increase in the number of results judged to be a more acceptable representation of a diverse world.
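OpenAI has not published the full details of the technique, but one commonly discussed way to nudge image generators toward more diverse outputs is to rebalance the text prompt itself before generation. The sketch below is purely illustrative: the keyword lists and the rebalance_prompt function are hypothetical, and this is not OpenAI's actual method.

```python
import random

# Hypothetical example only: roles and descriptors invented for illustration.
PERSON_TERMS = ("ceo", "doctor", "nurse", "flight attendant", "engineer")
DESCRIPTORS = ["female ", "male ", "non-binary ", "Black ", "East Asian ",
               "South Asian ", "Hispanic ", ""]  # "" leaves the prompt unchanged

def rebalance_prompt(prompt: str) -> str:
    """Prepend a randomly chosen demographic descriptor to the first
    person-denoting role found in the prompt, so that a batch of
    generations covers a wider range of appearances."""
    lowered = prompt.lower()
    for term in PERSON_TERMS:
        if term in lowered:
            return lowered.replace(term, random.choice(DESCRIPTORS) + term, 1)
    return prompt

print(rebalance_prompt("a CEO being served a drink by a flight attendant"))
# e.g. "a female ceo being served a drink by a flight attendant"
```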
But even the experts working around the clock on these problems say we shouldn't expect bias to be entirely removed from artificial intelligence any time soon.
"AI bias can be detected and mitigated, but often it cannot be completely eliminated, because of intersectionality issues," says IBM's Rossi. "While decreasing bias over some protected variable, one may increase bias over another protected variable. This is why inclusiveness, transparency and explainability are so fundamental in AI models.
"These properties allow AI users to know what kind and how much bias is still present in the AI system, and to make an informed decision on whether it is appropriate to use the AI system in the deployment environment. It is therefore essential to have a global approach to AI ethics and not just focus on a single issue."
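As a rough illustration of Rossi's point, a system can look fair when audited on one protected attribute while remaining skewed on another. The toy check below measures "selection rates" (the share of favourable decisions per group) across two attributes; the data, attribute names and numbers are entirely made up and are not from IBM.

```python
from collections import defaultdict

def selection_rates(records, attribute):
    """Share of favourable decisions (decision == 1) per group of `attribute`."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[attribute]] += 1
        positives[r[attribute]] += r["decision"]
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical model decisions for six people.
records = [
    {"gender": "female", "age": "under_40", "decision": 1},
    {"gender": "female", "age": "over_40",  "decision": 0},
    {"gender": "female", "age": "under_40", "decision": 1},
    {"gender": "male",   "age": "under_40", "decision": 1},
    {"gender": "male",   "age": "over_40",  "decision": 1},
    {"gender": "male",   "age": "over_40",  "decision": 0},
]

for attr in ("gender", "age"):
    rates = selection_rates(records, attr)
    gap = max(rates.values()) - min(rates.values())
    print(attr, rates, f"gap={gap:.2f}")
```

On this invented data the gender gap is 0.00 while the age gap is 0.67: auditing a single protected variable is not enough, which is why Rossi argues for transparency about what bias remains.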
Human judgement and processes are still needed to ensure AI-supported decision-making or prediction is fair and unbiased, says Accenture's Tripathi.
She concludes: "A confluence of humans and machines working together offers many prospects that may well lead to a common language and standardisation in how best AI could operate in multiple contexts, while reducing bias."