impact on human values. Paraphrasing Steve Jobs, he says: “Technology alone is not enough, it’s technology married to the liberal arts and with the humanities that make our hearts sing.”
He adds: “I’m not worried about AI giving computers the ability to think like humans, I’m more concerned about people thinking like computers.
“Without value or compassion, without concern for consequences – that is what we need people to guard against.”
This perspective stands in tension with the positioning of Meta’s Llama and xAI’s Grok as models that answer questions others refuse – a stance that worries AI ethics experts, who see content moderation and refusals as essential safety features.
Meta claims Llama 4 “refuses less on debated political and social topics overall (from 7% in Llama 3.3 to below 2%).”
Allen Institute for AI senior researcher Jesse Dodge questions this: “Refusals are an important part of building a model and having a model that’s usable to lots of people,” he says.
“I don’t know why they would advertise that it refuses a lot less.”
The technical minefield of fixing algorithmic bias
The technical challenge of addressing AI bias proves far more complex than political rhetoric suggests.
With billions of parameters, getting models to answer in particular ways isn’t straightforward – and clumsy attempts can backfire. Vaibhav Srivastav,
Head of Community and Collaborations at Hugging Face, explains that model creators can influence outputs at different stages.
Before training, they decide what data gets included and how sources are weighted. During post-training, techniques like reinforcement learning from human feedback guide models toward preferred responses.
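To make those two levers concrete, here is a minimal, purely illustrative Python sketch. The source names, mixture weights, response “styles” and the preference_update helper are all invented for this example and bear no relation to any lab’s actual pipeline. Stage one shows how reweighting a corpus source changes what the model ever sees; stage two shows the direction of an RLHF-style update, where an answer a human rater prefers is nudged up and the rejected one down.

import random

# --- Stage 1: corpus construction (before training) ---
# Hypothetical source mixture: raising or lowering a weight changes
# how often that source appears in the training data at all.
sources = {
    "news_archive":   0.5,
    "web_forums":     0.3,
    "opinion_essays": 0.2,  # halve this and its influence halves too
}

def sample_source(weights):
    """Draw one document source according to the mixture weights."""
    names, probs = zip(*weights.items())
    return random.choices(names, weights=probs, k=1)[0]

# --- Stage 2: preference feedback (post-training, RLHF-style) ---
# A rater compares two candidate answers; the model is nudged toward
# the preferred one. Scores per response "style" stand in for model
# parameters here, purely to show the direction of the update.
style_score = {"hedged": 0.0, "blunt": 0.0}

def preference_update(preferred, rejected, lr=0.1):
    """Toy update: raise the preferred style, lower the rejected one."""
    style_score[preferred] += lr
    style_score[rejected] -= lr

for _ in range(100):
    preference_update(preferred="hedged", rejected="blunt")

print(sample_source(sources), style_score)

After many such updates the “hedged” style dominates – and the same mechanism, pointed the other way, is how a lab could train a model to refuse less.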
“Besides anecdotal evidence, little public knowledge exists about what goes into post-training these models,” Vaibhav says. Meanwhile, the system-level prompt is a particularly blunt tool that risks unintended consequences – a single blanket instruction applies to every query, as the sketch below illustrates. Both Meta and Google have stumbled here,
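Why is the system prompt so blunt? Every conversation is silently prefixed with the same fixed instruction, so it fires on every query whether or not it fits. A short sketch, using the widely used OpenAI-style message schema; the instruction text and the build_messages helper are hypothetical:

# A fixed "system" message is prepended to every conversation the
# model sees, before the user has typed anything.
SYSTEM_PROMPT = (
    "Always present multiple viewpoints on any topic "  # hypothetical
    "and avoid taking sides."
)

def build_messages(user_query):
    """Prepend the fixed system prompt to one user turn."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]

# The blanket rule fires on a contested question, as intended...
print(build_messages("Who should I vote for?"))
# ...and just as forcefully on a settled factual one, which is how
# clumsy steering produces unintended consequences.
print(build_messages("Is the Earth round?"))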