DeepMind's Mustafa Suleyman: In 2018, AI will gain a moral compass

Humanity faces a wide range of challenges that are characterised by extreme complexity, from climate change to feeding and providing healthcare for an ever-expanding global population. Left unchecked, these phenomena have the potential to cause devastation on an unprecedented scale. Fortunately, developments in AI could play an important role in helping us address these problems.

At the same time, the successful integration of AI technologies into our social and economic world creates its own challenges. These technologies could either help overcome economic inequality or worsen it if the benefits are not distributed widely. They could shine a light on damaging human biases and help society address them, or they could entrench patterns of discrimination and perpetuate them. Getting things right requires serious research into the social consequences of AI, and partnerships to ensure it works for the public good.

This is why I predict that the study of the ethics, safety and societal impact of AI will become one of the most pressing areas of enquiry over the coming year. Valuable work has already been done here. For example, there is an emerging consensus that those developing new technologies have a responsibility to help address the effects of inequality, injustice and bias. In 2018, we're going to see many more groups start to tackle these issues.

It won't be easy: the technology sector often falls into reductionist ways of thinking, replacing complex value judgments with a focus on simple metrics that can be tracked and optimised over time. Of course, it's far simpler to count likes than to understand what it actually means to be liked and the effect this has on confidence or self-esteem. But these social consequences matter: they contribute either to an environment in which problems can be addressed, or to a climate of resentment and fear – with citizens expressing anger that their interests are being marginalised for commercial gain. Progress in this area also requires new mechanisms for decision-making that give the public a direct voice. This would be a radical change for a sector that has often preferred to resolve problems unilaterally – or leave others to deal with them.

Nonetheless, as someone who started out as a social activist, I can see many examples of people working in tech who are genuinely driven to improve the world, and who have a natural affinity with those who have devoted their lives to understanding what that means. Inspiring people such as Tristan Harris, who founded the Time Well Spent movement, are forging new alliances by taking on the "attention economy" that distracts us so powerfully with nudges and bleeps – at the cost of our time and well-being. Kate Crawford and Meredith Whittaker, of Microsoft and Google respectively, have co-founded the AI Now Institute to research technology's social impacts. And the Partnership on AI has, for the first time, brought together many leading AI research labs (including my company, DeepMind) with renowned non-profits such as the American Civil Liberties Union, in an initiative designed to let technologists and activists take part on an equal footing.

Getting these things right is not purely a matter of having good intentions. We need to do the hard, practical and messy work of finding out what ethical AI really means. If we manage to get AI to work for people and the planet, then the effects could be transformational. Right now, there's everything to play for.

http://www.wired.co.uk/article/mustafa-suleyman-deepmind-ai-morals-ethics