
Indeed, OpenAI's and even Google's models have been criticised for generating biased or discriminatory content and enabling the creation of misinformation.
Understanding this matters not only for avoiding litigation, but for staying on the right side of regulations. As the use of LLMs becomes more widespread, regulatory frameworks are evolving to address transparency issues.
"Regulations like the GDPR emphasise data protection and the right to explanation, requiring organisations to provide understandable information about automated decision-making processes," says Pramod. "Also, the EU AI Act aims to set stricter transparency and accountability standards for high-risk AI systems."
This underscores the need for greater transparency and accountability in LLMs as they become increasingly integral to operations around the globe. But how can you get a window into these internal machinations and make these models inherently more understandable?
Busting open black boxes

To make deep learning models like LLMs more understandable, researchers are developing tools that enhance their transparency.
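One widely used family of such techniques inspects a model's attention weights to see which input tokens the model is focusing on. The sketch below uses the open-source Hugging Face transformers library with a generic BERT model; the model choice and example sentence are illustrative assumptions, not a description of any specific tool Pramod or Hexaware endorses.

# A minimal sketch of one common transparency technique: inspecting
# a transformer's attention weights. Illustrative only; the model and
# input text are assumptions for the sake of the example.
from transformers import AutoTokenizer, AutoModel
import torch

model_name = "bert-base-uncased"  # assumption: any encoder model with attentions works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_attentions=True)

text = "The loan application was rejected."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each shaped (batch, heads, seq_len, seq_len)
last_layer = outputs.attentions[-1][0]   # drop the batch dimension
avg_attention = last_layer.mean(dim=0)   # average across attention heads

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, row in zip(tokens, avg_attention):
    strongest = row.argmax().item()
    print(f"{token:>12} attends most to {tokens[strongest]}")

Visualising which tokens a model attends to is only a partial window into its reasoning, which is why researchers pair it with other attribution methods, but it illustrates the kind of inspection such tools make possible.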
"These efforts contribute directly to the UN's Sustainable Development Goals

PRAMOD BELIGERE

TITLE: VICE PRESIDENT OF GENERATIVE AI PRACTICE HEAD
COMPANY: HEXAWARE
INDUSTRY: IT
LOCATION: INDIA
Pramod Beligere is a seasoned technology executive with over 25 years of experience in the IT industry. As Vice President of Generative AI Practice Head, he leads Hexaware's Gen AI practices to find the optimal application of the technology within the company's operations.