AI Magazine January 2026 | Page 31

THE AI INTERVIEW

systems and verifying AI agents, including whether they are conscious. The company's first product tackles a problem organisations are facing right now – checking whether AI agents are effective at the role they've been engineered to do.

DANIEL HULME

CHIEF AI OFFICER
Daniel is a globally recognised expert in AI. He is the CEO of the AI company Satalia, which joined WPP in 2021, where Daniel is now the Chief AI Officer. He has been recognised as one of the top 10 Chief AI Officers globally.
Having received a Master's and a Doctorate in AI from UCL, Daniel is also UCL's Computer Science Entrepreneur in Residence and a lecturer at LSE's Marshall Institute, focused on using AI to solve business and social problems.
Daniel is a serial TEDx speaker and is a faculty member of SingularityU. He has advisory and executive positions across companies and governments, and actively promotes purposeful entrepreneurship and technology innovation across the globe.
In 2024, Daniel also founded Conscium, an AI research lab dedicated to the understanding of conscious AI and its implications for developing safe, efficient neuromorphic models.
Neuromorphic systems use spikes, not numbers

Large language models burn through training data and learn slowly, while human brains operate on about 20 watts – roughly the power of a light bulb – and can pick things up from single examples.
“LLMs are crude representations of our brain. They require nuclear power stations to run. They require lots and lots of data to learn, and they’re not adaptive at all,” Daniel says. “Your brain operates on the power of a light bulb and you learn incredibly quickly. I don’t have to say ‘that’s a phone’ to you more than once – once you know what phones are, you don’t need to see millions of examples of phones.”
Neuromorphic computing takes a different approach by copying how biological brains function. Rather than passing numbers around networks, neurons fire spikes at different frequencies. The technology has promise: spiking neural networks pack more information into fewer neurons. But they are complicated to build. GPUs excel at propagating numbers but struggle with spikes, and that technical hurdle has kept neuromorphics confined to research labs for two decades. Now that large language models are well understood, academics are moving on to harder problems.
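To make the spiking idea concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the basic unit of most spiking neural networks. This is an illustrative toy, not Conscium's implementation: the membrane potential integrates an input current, leaks back toward rest, and emits a discrete spike when it crosses a threshold. All parameter values are made up for the example.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Information is carried as the timing/frequency of binary spikes,
# not as floating-point activations. Parameters are illustrative.

def simulate_lif(currents, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return the spike train (0 or 1 per time step) for an input current series."""
    v = v_rest
    spikes = []
    for i_in in currents:
        # Leak toward the resting potential plus integration of input current
        v += dt * ((v_rest - v) / tau + i_in)
        if v >= v_thresh:      # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset        # reset the membrane potential after firing
        else:
            spikes.append(0)
    return spikes

# A constant drive produces a regular spike train whose rate encodes
# the input strength; zero input produces no spikes at all.
train = simulate_lif([0.15] * 50)
```

Because each neuron communicates with single-bit events rather than dense multiplications, hardware built around this model can stay silent (and consume little power) whenever nothing is spiking, which is one source of the energy savings neuromorphic systems pursue.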
“Conscium is investing in neuromorphic systems, and we’re already showing some improvements in terms of energy reduction and adaptivity,” Daniel says.