DeepMind now has an AI ethics research unit. We have a few questions for it…

DeepMind, the U.K. AI company which was acquired in 2014 for $500M+ by Google, has launched a new ethics unit which it says will conduct research across six “key themes” — including ‘privacy, transparency and fairness’ and ‘economic impact: inclusion and equality’.

The XXVI/Alphabet-owned company, whose corporate parent generated almost $90BN in revenue last year, says the research will consider “open questions” such as: “How will the increasing use and sophistication of AI technologies interact with corporate power?”

It will be helped in this important work by a number of “independent advisors” (DeepMind also calls them “fellows”) to, it says, “help provide oversight, critical feedback and guidance for our research strategy and work program”; and also by a group of partners, aka existing research institutions, which it says it will work with “over time in an effort to include the broadest possible viewpoints”.

Although it really shouldn’t need a roster of learned academics and institutions to point out the gigantic conflict of interest in a commercial AI giant researching the ethics of its own technology’s societal impacts.

(Meanwhile, the issue of AI-savvy academics not already being attached, in some consulting form or other, to one tech giant or another is another ethical dilemma for the AI field that we’ve highlighted before.)

The DeepMind ethics research unit is in addition to an internal ethics board apparently established by DeepMind at the point of the Google acquisition because of the founders’ own concerns about corporate power getting its hands on powerful AI.

However, the names of the people who sit on that board have never been made public — and are not, apparently, being made public now. Even as DeepMind makes a big show of wanting to research AI ethics and transparency. So you do have to wonder how mirrored the insides of the filter bubbles with which tech giants surround themselves really are.

One thing is becoming amply clear where AI and tech platform power is concerned: Algorithmic automation at scale is having all sorts of unpleasant societal consequences — which, if we’re being charitable, can be put down to corporates optimizing AI for scale and business growth. Ergo: ‘we make money, not social responsibility’.

But it turns out that if AI engineers don’t think about ethics and potential negative effects and impact before they get to work moving fast and breaking stuff, those hyper scalable algorithms aren’t going to identify the problem on their own and route around the damage. Au contraire. They’re going to amplify, accelerate and exacerbate the damage.

Witness fake news. Witness rampant online abuse. Witness the total lack of oversight that lets anyone pay to conduct targeted manipulation of public opinion and screw the socially divisive consequences.

Given the dawning political and public realization that AI can cause all sorts of societal problems because its makers just ‘didn’t think of that’ — and have thus allowed their platforms to be weaponized by entities intent on targeted harm — the need for tech platform giants to control the narrative around AI is surely becoming all too clear to them. Or they face their favorite tool being regulated in ways they really don’t like.

The penny may be dropping from ‘we just didn’t think of that’ to ‘we really need to think of that — and control how the public and policymakers think of that’.

And so we arrive at DeepMind launching an ethics research unit that’ll be putting out ## pieces of AI-related research per year — hoping to influence public opinion and policymakers on areas of critical concern to its business interests, such as governance and accountability.

This from the same company whose 2015 data-sharing agreement with a London NHS Trust led to an investigation by the UK’s privacy watchdog — which this summer judged that UK privacy law had been broken after DeepMind’s health division was handed the fully identifiable medical records of some 1.6M people without their knowledge or consent.

And now DeepMind wants to research governance and accountability ethics? Full marks for hindsight, guys.

Now it’s possible DeepMind’s internal ethics research unit is going to publish thoughtful papers interrogating the full spectrum of societal risks of concentrating AI in the hands of massive corporate power, say.

But given its vested commercial interests in shaping how AI (inevitably) gets regulated, a fully impartial research unit staffed by DeepMind staff does seem rather difficult to imagine.

“We believe AI can be of extraordinary benefit to the world, but only if held to the highest ethical standards,” writes DeepMind in a carefully worded blog post announcing the launch of the unit.

“Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work,” it adds, before going on to say: “As scientists developing AI technologies, we have a responsibility to conduct and support open research and investigation into the wider implications of our work.”

The key phrase there is of course “open research and investigation”. And the key question is whether DeepMind itself can realistically deliver open research and investigation into itself.

There’s a reason no one trusts a survey touting the amazing health benefits of a particular foodstuff when that survey has been carried out by the makers of said foodstuff.

Related: Google was recently fingered by a US watchdog for spending millions funding academic research to influence opinion and policymaking. (It rebutted the charge with a GIF.)

“To guarantee the rigour, transparency and social accountability of our work, we’ve developed a set of principles together with our Fellows, other academics and civil society. We welcome feedback on these and on the key ethical challenges we have identified. Please get in touch if you have any thoughts, ideas or contributions,” DeepMind adds in the blog.

The website for the ethics unit sets out five core principles it says will be underpinning its research. I’ve copy-pasted the principles below so you don’t have to go hunting through multiple link trees* to find them, given that DeepMind does not include ‘Principles’ as a tab on the main page and you really do have to go digging through its FAQ links.

(If you do manage to find them, at the bottom of the page it also notes: “We welcome all feedback on our principles, and as a result we may add new commitments to this page over the coming months.”)

Update: A DeepMind spokeswoman has been in touch to point out the principles can also be located by scrolling down below the fold on the ethics homepage.

*Someone should really count how many clicks it takes to extract all the information from DeepMind’s Ethics & Society website, which, per the DeepMind Health website design (and indeed the Google Privacy website), makes a point of snipping text up into smaller chunks and snippets and distributing these inside boxes/subheadings that each have to be clicked open to get to the relevant information. Transparency? Looks rather a lot more like obfuscation of information to me, guys…

So here are those principles that DeepMind has lodged behind multiple links on its Ethics & Society website:

Social benefit
We believe AI should be developed in ways that serve the global social and environmental good, helping to build fairer and more equal societies. Our research will focus directly on ways in which AI can be used to improve people’s lives, placing their rights and well-being at its very heart.

Rigorous and evidence-based
Our technical research has long conformed to the highest academic standards, and we’re committed to maintaining these standards when studying the impact of AI on society. We will conduct intellectually rigorous, evidence-based research that explores the opportunities and challenges posed by these technologies. The academic tradition of peer review opens up research to critical feedback and is crucial for this kind of work.

Transparent and open
We will always be open about who we work with and what projects we fund. All of our research grants will be unrestricted and we will never attempt to influence or pre-determine the outcome of studies we commission. When we collaborate or co-publish with external researchers, we will disclose whether they have received funding from us. Any published academic papers produced by the Ethics & Society team will be made available through open access schemes.

Diverse and interdisciplinary
We will strive to involve the broadest possible range of voices in our work, bringing different disciplines together so as to include diverse viewpoints. We recognize that questions raised by AI extend well beyond the technical domain, and can only be answered if we make deliberate efforts to involve different sources of expertise and knowledge.

Collaborative and inclusive
We believe a technology that has the potential to impact all of society must be shaped by and accountable to all of society. We are therefore committed to supporting a range of public and academic dialogues about AI. By establishing ongoing collaboration between our researchers and the people affected by these new technologies, we seek to ensure that AI works for the benefit of all.

And here are some questions we’ve put to DeepMind in light of the launch of the ethics research unit. We’ll include responses when/if they reply. Update: Now with DeepMind’s responses below in italics:

    • Is DeepMind going to release the names of the people on its internal ethics board now? Or is it still withholding that information from the public?
      DeepMind declined to provide an on-the-record comment in response to this question.
    • If it will not be publishing the names, why not?
      DeepMind declined to provide an on-the-record comment in response to this question.
    • Does DeepMind see any contradiction in funding research into ethics of a technology it is also seeking to benefit from commercially?
      A DeepMind spokesperson said: “We believe companies building AI have the responsibility to conduct and support open research and investigation into the wider implications of their work. At DeepMind, we start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes. Understanding what this means in practice requires rigorous scientific inquiry into the most sensitive challenges we face.”
    • How will impartiality be ensured given the research is being funded by DeepMind?
      A DeepMind spokesperson said: “We’re fortunate to be advised by independent Fellows who are known for their expertise, integrity and independence. They, our internal staff, and other collaborators are free to explore the real impacts of AI in the near and future term. If their research leads them to explore potentially negative consequences of AI, that is within their remit as fellows and independent researchers. While criticism may be uncomfortable, this level of openness is the only way we can ensure progress for the benefit of all. To ensure the rigour, transparency and social accountability of our work, we’ve developed a set of public principles with our Fellows, other academics and civil society that guide everything we do—and we welcome feedback on them.”
    • How many people are staffing the unit? Are any existing DeepMind staff joining the unit or is it being staffed with entirely new hires?
      A DeepMind spokesperson said: “Today there are about 8 people on the Ethics & Society team and in the next year we plan to grow to ~25.”
    • How were the fellows selected? Was there an open application process?
      A DeepMind spokesperson said: “Our initial fellows were selected for their integrity and academic profile or achievements, as well as their reputation for asking challenging questions. Typically they come from the research community and specialise in a variety of areas relating to AI and its intersection with society. They have an advisory function, to help shape and determine the work and direction of DeepMind Ethics & Society as it grows. In future we will also accept proposals for research and other partnerships from people and organisations that want to work with us.”
    • Will the ethics unit publish all the research it conducts? If not, how will it select which research is and is not published? How many pieces of research will the unit aim to publish per year? Is the intention to publish equally across the six key research themes? And will all research published by the unit have been peer reviewed first?
      A DeepMind spokesperson said: “As with all things research, it’s hard to say, especially in the early days. But we hope to begin publishing papers early next year, and any papers produced by the Ethics & Society team will be made available through open access schemes. In general we are big believers in the academic tradition of peer review. Any subjects covered in the six ‘research themes’ are within scope for study by DMES Fellows and academics, and we are also open to ideas and suggestions for projects that may fall outside the scope we’ve identified here. This is a rapidly evolving field and we realize we may not have anticipated everything that’s worth researching at this stage.”
    • What’s the unit’s budget for funding research? Is this budget coming entirely from Alphabet? Are there any other financial backers?
      A DeepMind spokesperson said: “We’re just getting started so it’s too early to put a number on it. We are not taking contributions from any external organisations.”