Responsible AI: If Alphabet can’t do it, who can? asks Dr Christine Chow, a Director of Hermes EOS, the engagement arm of the British investment company.
Speaking ahead of Alphabet’s Annual Meeting of Stockholders, Dr Chow warned that “there are clearly many areas of concern for investors with Alphabet’s use of artificial intelligence.” She added: “While we have seen the company make progress in some areas, we encourage the Board to be accountable for the responsible use of AI, including its impact on society, and to establish internal governance mechanisms. If Alphabet cannot do it, with all the resources and intellectual capital at its disposal, investors will question whether any company can.”
She says that Alphabet needs to:
- Establish a Societal Risk Oversight Committee of the Board
- Improve the internal governance structure overseeing AI technologies to harness employee/stakeholder ethical insights
- Regularly monitor and report on the human rights impact on content reviewers, and provide sufficient support to staff and contractors
Ethical AI – the answer is clear
Explaining further, she added: “The power Alphabet possesses has never been greater, and its responsibilities have never been heavier. Investors are looking to the company and its Board to display leadership in the responsible use of AI and the minimisation of societal risks.”
Dr Chow did acknowledge, however, that Alphabet’s Google has taken steps to improve responsible AI, including publishing a set of AI principles and a white paper, introducing machine learning fairness education, and even pre-announcing search algorithm changes for the first time.
She said that the company is also strengthening interpretability (defined as the degree to which people can understand the cause of a decision). “However, when expert opinions and human judgement are introduced into AI’s non-linear systems, unconscious bias is not necessarily resolved and may even increase without careful monitoring and oversight.”
Dr Chow said that she supports ‘Stockholder Proposal 6’ regarding the establishment of an independent Societal Risk Oversight Committee of the Board. The committee would be required “to assess the potential societal consequences of the company’s products and services and should offer guidance on strategic decisions.”
Explaining the decision to support this, she said: “We have long been concerned about public access to violent or extremist online content, which was sadly highlighted by the terrorist attack in Christchurch. The establishment of this Committee will ensure that the company’s technology and its impact on society is considered and focused on at the very top of the organisation.
“In our view, there is currently a gap in the necessary skills on the Alphabet Board to provide the required societal risk oversight. We ask the Board to consider director candidates with experience in statistical analysis, neuroscience and social sciences to ensure the probabilistic nature of AI systems is adequately explained and the social impact of technology is properly considered. To the extent sufficient expertise is not present on the Board, this Committee should consider convening an advisory board of external stakeholders to access the necessary expertise to oversee the complex risks associated with AI. The short-lived Advanced Technology External Advisory Council teaches us that candidate selection for the Committee should be transparent.
“In addition, we are concerned that the Audit Committee’s mandate includes the social impact of technology. We consider this Committee to be fully occupied with audit issues and believe it would therefore not have enough time for material non-audit risks.”
Ethical consciousness of employees and stakeholders can guide implementation
Active employee movements can help Alphabet to address controversial issues such as sexual harassment, gender inequality and workplace practices. Hermes EOS believes that the ethical consciousness of employees is a real asset to the company. However, it is unclear how such wide-ranging feedback is incorporated into the current internal governance structure.
Hermes EOS said: “We recommend a formal and inclusive feedback system from employees as well as other stakeholders from the AI ecosystem — covering contract developers and test users — to ensure that technology deployment is subject to robust product design and impact assessment.”
Human rights impact on the front line
In addition to machine-led monitoring, Alphabet employs thousands of human moderators on the ‘front line’ who are required to review sensitive, disturbing or violent content and make content assessment decisions.
Dr Chow explained: “We call on Alphabet to give greater disclosure on the working practices and support, both psychological and financial, given to staff and contractors globally, not only in the US, in these demanding roles. Whilst Alphabet is, in some ways, an exemplary employer, the human rights impact of jobs of this nature potentially, and inadvertently, exposes the company to risks. The company therefore needs to review the level of support given to staff and contractors to ensure it is sufficient.”