AI Magazine October 2022 | Page 79

TECHNOLOGY

“WHILE ANALYSTS AND DATA SCIENTISTS OFTEN BUILD MACHINE LEARNING (ML) MODELS, THOSE IN EXECUTIVE POSITIONS NEED TO UNDERSTAND THE RESULTS”

ALAN JACOBSON, CHIEF DATA AND ANALYTIC OFFICER, ALTERYX
A company-wide approach is overseen by a centralised governance structure, with the IBM AI Ethics board providing guidance and support throughout.
The IBM playbook goes beyond fairness and also covers other trustworthy AI pillars, such as transparency, explainability, robustness, and privacy, explains Rossi. “In this way, we ensure that the AI systems we build and deliver to our clients can be trusted and have a positive impact on the relevant communities.”

Alteryx says executives need to know what AI and ML mean

Although AI can be trained to perform many tasks without human interaction, it’s essential those designing the system fully understand any possible defects before AI amplifies them, says Alan Jacobson, Chief Data and Analytic Officer at Alteryx.

“While analysts and data scientists often build machine learning (ML) models, those in executive positions and other leadership roles frequently need to understand the results.” Explainable AI (XAI) offers significant transparency and trust advantages over black-box models.

“These interpretable models ensure that ML, AI algorithms and the reasoning behind a specific result are more understandable to people who don’t have a data science background.”

For example, an ML model trained using a collection of financial data to help approve or deny a loan applicant could use XAI to provide not only the answer but also detail how and why it arrived at its response, explains Jacobson.

“Rather than believing AI will simply deliver the correct autonomous insights, it’s imperative to fully understand how and why it arrived at the answer it did,” says Jacobson. “The importance of explainable AI goes beyond making the wrong decision.”
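Jacobson’s loan example can be sketched in a few lines of code. The features, weights, and threshold below are invented purely for illustration (this is not Alteryx’s or any lender’s actual model): a transparent linear scorer whose per-feature contributions show not just the decision, but how each input pushed it up or down.

```python
import math

# Hypothetical weights for an illustrative loan model — assumed values,
# not drawn from any real lender or from Alteryx.
WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "years_employed": 0.4}
BIAS = -0.2

def score(applicant):
    # Linear score passed through a sigmoid to yield an approval probability.
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def explain(applicant):
    # Per-feature contribution to the linear score: a simple, inherently
    # interpretable attribution showing why the model leaned the way it did.
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

applicant = {"income": 1.2, "debt_ratio": 0.9, "years_employed": 3.0}
p = score(applicant)          # approval probability
contrib = explain(applicant)  # e.g. debt_ratio contributes negatively
```

Because the model is linear, the explanation is exact: the contributions sum (with the bias) to the score itself. Black-box models need post-hoc attribution methods to approximate this kind of breakdown.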