“Mustafa’s taking some time out right now after ten hectic years,” the company said, describing the leave as a mutual decision and adding that the unit expects him back by the end of the year.
Bloomberg first reported news of Suleyman's leave, citing controversy over some of the projects he led.
Google acquired DeepMind in 2014, four years after Suleyman cofounded the London-based lab with CEO Demis Hassabis. The reported price tag, 400 million pounds (about $650 million at the time), signaled Google's big ambitions in artificial intelligence and the value it placed on DeepMind's technical expertise.
Suleyman has been running the group's "applied AI" division, which aims to find real-world uses for its scientific research. He previously led the group's health efforts, including an extensively criticized 2016 partnership with the U.K.'s National Health Service. The collaboration gave DeepMind access to 1.6 million patient records for a kidney-monitoring app called Streams, and its data-sharing practices were ultimately deemed illegal in 2017, prompting an apology from Suleyman and the company.
Late last year, Google announced that Streams and the team working on it would be absorbed into Google proper, essentially dissolving the DeepMind Health group.
Meanwhile, the applied AI team has worked on projects like a text-to-speech service for Google Cloud and cutting Google’s data center cooling costs. Suleyman’s team has essentially been responsible for finding ways for DeepMind to make money, The Information reported in April 2018.
Google parent company Alphabet doesn't break out financial results for its individual businesses in its earnings reports, but recent documents filed with the U.K.'s Companies House registry show that in 2018 DeepMind's pretax losses grew to $570 million on revenues of $125 million.
Suleyman himself is a former social activist who believes “capitalism is failing society,” according to Business Insider, and has been personally outspoken about ethics in AI.
“We need to do the hard, practical and messy work of finding out what ethical AI really means,” he wrote in a Wired op-ed in 2018, predicting that the study of the safety and societal impact of AI was going to be “one of the most pressing areas of enquiry.”
As part of Google's acquisition of DeepMind, he and Hassabis even required that the company set up an internal ethics board to oversee AI work across all divisions.
Earlier this year, Google tried to create an external artificial intelligence ethics council, but quickly dissolved the project in the wake of internal and external protest over its members.