DeepMind’s New Research Team Is Diving into AI Ethics
Google-owned DeepMind has launched a new research team tasked with examining the ethical and social impacts of AI.
DeepMind Ethics and Society (DMES) will be co-led by Verity Harding and Sean Legassick. The London-based initiative was launched with two goals in mind: to help technologists put ethics into practice, and to help society anticipate and direct the effects of AI.
“We believe AI can be of extraordinary benefit to the world, but only if held to the highest ethical standards. Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work,” write Harding and Legassick in a blog post.
Starting in 2018, the ethics-focused team will publish original interdisciplinary research online that it hopes will contribute meaningfully to debate about the real-world impacts of AI.
The unit will comprise eight full-time DeepMind technologists as well as six fellows acting as independent advisors who will provide oversight, feedback, and guidance. It has outlined six research areas, including managing AI risk and AI’s economic impact.
The team will try to answer ethical quandaries such as: what new societal risks emerge when different AI systems begin to interact with each other and with existing human systems, and how can we ensure that people remain in control?
The co-leads said they have a responsibility to conduct and support open research and investigation into the wider implications of their work. While there is no universal ethical framework governing the use of AI, many leaders and companies have called for AI to be used for good instead of evil.
“At DeepMind, we start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes,” write Harding and Legassick. “This isn’t a quest for closed solutions but rather an attempt to scrutinize and help design collective responses to the future impacts of AI technologies.”
DeepMind co-founder Mustafa Suleyman tweeted about the new unit, writing, “Understanding the social impact of AI [and] putting ethics into practice is essential.” He was one of 116 tech luminaries from 26 countries who signed an open letter calling on the UN to ban the use of lethal autonomous weapons.
The potential for AI to be harnessed for weapons, warfare, and even killer robots may sound like a Hollywood tale, but that concern has been voiced by many within the technology industry itself, including Elon Musk.
With AI hysteria circulating, Montreal-based Element AI has made clear that tech companies need to be responsible in how they apply the algorithm-driven technology.
“There are ethical concerns around what AI models are being used for… And we’ve publicly committed that we are going to use AI for good,” Megh Gupta, Element AI’s director of strategy and solutions, told Techvibes last month.
To form the new ethics unit, DeepMind has partnered with nine academic institutions, non-profits, and charities, including the University of Bath’s Institute for Policy Research and the Institute for Public Policy Research.