Elizabeth Renieris – ‘Our robot overlords aren’t quite here just yet’

Elizabeth Renieris is a renowned artificial intelligence ethics expert who believes that Big Tech is being disingenuous when it calls for a global AI super-regulator. Existing laws cover AI, she says; we just need to leverage them

Elizabeth Renieris is a renowned privacy expert, lawyer and author focused on the ethical and human rights implications of new technology, with a specific emphasis on artificial intelligence.

A senior research associate at Oxford’s Institute for Ethics in AI, she has also held fellowships at Harvard and Stanford and was recently named as one of the Brilliant Women in AI Ethics.

Her book Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse was published by MIT Press in February 2023. She has also contributed to Wired, Slate, NPR, Forbes and The New York Times.

Elizabeth Renieris is sceptical about why Big Tech is calling for a global super-regulator, which, she argues, is a way to kick artificial intelligence regulation down the road when there are perfectly applicable and powerful existing laws covering technology.

She talks to Information Age about how we must not let Big Tech pull the wool over our eyes when it comes to regulation, how Big Tech uses start-ups as cover for what it is doing when it comes to government scrutiny, and how her indignation at witnessing the birth of what would become Facebook while she was at Harvard set her on the path to becoming a tech regulation expert and privacy lawyer.

We’re being told that AI is the adrenaline shot the world economy needs, boosting GDP by 7 per cent over a decade. Or it’s going to make one third of all jobs – or any job that involves a keyboard – redundant, with an estimated 300 million job losses worldwide. Which is it – or is it both?

I don’t particularly like making predictions about the future, but if I had to, I would guess it could potentially be both.

We’re already seeing a significant degree of FOMO – fear of missing out – on the part of industry, which is looking to deploy this tech in some way – any way – it really doesn’t matter; they just want to be able to tell their boards and their shareholders that they’re using AI in some fashion. So, I can certainly see it being a catalyst for growth in that sense.

On the job front, I am not entirely convinced that white-collar jobs are going to be replaced. I certainly think they’re going to be transformed. As with previous technological revolutions, whether it’s the PC or the laptop or the mobile phone, we have seen shifts and transformations and changes in the way that we work, but ultimately, we have not yet been replaced by technology. I don’t think our robot overlords are quite here just yet.

On a scale of one to ten, how worried are you about AI being used malevolently?

Ten. It’s already being used for deepfakes, such as non-consensual pornography. We’re seeing it being used for voice clones to impersonate people, perpetrating fraud and scams. We are seeing all kinds of discriminatory and exclusionary uses. I think the harms are real and present. Certainly, there’s a lot of conversation about future harms and where this might all go, but I’m personally very concerned about how it’s already being used.

Google CEO Sundar Pichai believes that AI will help achieve world climate-change goals and turbocharge medical research. What do you think the best use cases are for this technology?

Those would be some of the best uses of this technology. Unfortunately, I think that AI, like any other technology, operates in a hyper-capitalist system, which means that it’s probably going to be deployed in the service of maximising profits – which, like many of the technologies that have come before it, has largely meant maximising clicks and scrolls and attention.

While I was initially drawn to AI because of the potential in things such as drug discovery or molecular research – the more hard-science applications – the incentives created by the for-profit companies that largely control these technologies mean that the best and brightest minds are going to be put towards the other, less noble uses.

Of all the sectors, the one which seems most at risk is the arts, which is why writers are striking in Hollywood right now.

Yes, the creative industries are perhaps the sector that is going to suffer the most. I am deeply concerned about this. The arts have always been at risk because of copyright and things like that, but it’s the speed and scale with which artificial intelligence is being deployed that is a unique threat and a unique challenge.

Maybe I’m being naive, but it seems to me that all AI can do is scrape and remix from stuff that’s already out there. It’s not in the business of making anything original.

The Copyright Office in the US has already said that there is not enough of a creative act in AI-generated visual content; only works of art mostly created by humans are copyrightable, and you could extend that to music or audio-visual content. However, originality is less important when it comes to maximising profits.

The creators of ChatGPT have said that AI should be regulated like nuclear power or nuclear weapons, and that AI may need a global regulatory body similar to the International Atomic Energy Agency (IAEA). Are they right?

A degree of global cooperation is certainly necessary when it comes to AI. That said, I think that level of global cooperation has been happening for at least a decade in the form of different multi-stakeholder, multinational groups, including through the OECD. A lot of the recent arguments for global coordination seem to ignore the fact that we’ve already been working on this.

This push to start from a clean slate is a way to defer and deflect from the fact that we have existing laws and regulations at the local, regional and national level, which apply to parts of the AI development life cycle – both to the companies and to those who are designing, developing and deploying these tools.

Once again, it’s an attempt to kick the regulation can down the road and sort of shift focus away from potentially powerful existing laws and regulations, which can be leveraged.

Are you saying that we have a plethora of laws and regulations covering any area that AI can operate in? Is there any need for new regulation, or is it already covered by existing laws?

I have been quite vocal in arguing for focusing on existing laws and regulations, at least in the first instance. Existing laws and regulations can capture potentially 90 per cent or more of AI-related use cases. Whether it’s healthcare use cases or financial services, we have sector-specific frameworks, and context matters here. When it comes to consumer protection, the Federal Trade Commission has come out very strongly to say that it has the legal jurisdiction and the existing laws and regulations to act.

When it comes to the data being used in these AI systems, we have lots of data protection and privacy laws and regulations around the world. The data involved in these systems, the corporations, and the people who are designing and building them are all subject to existing laws and regulations.

The challenge is primarily one of resources. Certainly, the nature of some of the new advances in AI and machine learning, particularly generative AI use cases, requires upskilling or reskilling of existing regulatory authorities, and that requires sufficient resources.

My concern is that the introduction of new governance mechanisms, new agencies and suchlike would divert resources from those who already have the institutional memory and expertise, and who have applied these laws and regulations to other technologies. That knowledge is incredibly useful, and we potentially risk losing all that institutional knowledge if we start from scratch.

So, you think there’s actually a kind of subtle play being made here, by saying, ‘Oh, we need a new agency, new regulation’ as a way of kicking the can down the road?

Absolutely. And this is not new to AI. This is the argument Google made in the 1990s, and it’s the argument that John Perry Barlow made in A Declaration of the Independence of Cyberspace. It’s the age-old playbook. And I’m growing tired of it, because the risks are just that high now; we need to use what we have before we introduce anything new.

China, the US and the EU are already drafting their own AI regulation. Is it naive to think that countries are going to work together in what could be a new arms race?

We’re in a geopolitical climate where there is increased fracturing. There has been a push towards a lot of localisation when it comes to things like data sovereignty. Tech sovereignty pushes towards additional fragmentation and a real clash of values. Some who are pushing this global oversight, global governance model are well aware of that. And again, it’s sort of advantageous because it’s less feasible, and therefore there’s a potential regulatory vacuum they can purport to operate in.

Personally, I think the human rights framework is really relevant. I just authored a book on this a couple of months ago, about leveraging existing human rights laws to regulate emerging technologies. But what I acknowledge in my book is that we are in this fragile sort of geopolitical moment where we’re not in the same position we were in when a lot of these human rights instruments were born in the wake of world wars. When it comes to global cohesion and a willingness to work together, we’re in a very different situation.

Tell me about your new book, Beyond Data. What is your central argument?

For about 50 years we’ve been so focused on protecting data that we’ve largely forgotten to protect people. I look at the evolution of data protection law as our primary vehicle for technology governance. I explore the limitations and shortcomings of that type of approach and how it’s left us extremely exposed and vulnerable. And then I propose a broader human rights framework that incorporates more than just privacy and free expression, but really looks at the full array of individual civil and political rights, as well as economic, social and cultural rights which are relevant to emerging technologies.

Am I right in thinking that what set you down the path of becoming an expert on tech regulation was your indignation at having witnessed the birth of Facebook when you were at Harvard?

Yes, I had a classmate at university by the name of Mark Zuckerberg, who decided to break into our residential house directories, scrape the photos of female undergraduates and pit them against each other in a contest of attractiveness called Facemash.com, and that was the genesis of what would become Facebook. So, it definitely planted a seed in me at the time as to what could be done about these types of violations and set me on a path to become a data protection and privacy lawyer. Then, as I explore in the book, I grew frustrated with the limitations of those tools. And now I advocate for a broader human rights-based framework.

If you were Emperor of the World, how would you go about regulating AI? What would be the one thing you would do?

I would start with the low-hanging fruit, which is to enforce the laws on the books related to competition and antitrust.

One strategy Big Tech uses is to invest in start-ups, which are unmonitored. OpenAI, for example, was under the radar and largely unregulated before it was absorbed by Microsoft. We’ve been very focused on Big Tech, and there’s been this strategy on the part of Big Tech to have these things researched and developed in smaller entities, which they then acquire or reintegrate, and suddenly these things scale to a level that seems impossible to govern.

I think those types of regulations related to companies and their market power are really important. Consumer protection regulations are definitely important. And the privacy and data protection laws that exist need to be leveraged. Yes, these systems run on data, but they need to be leveraged in a way that honours the original aims of those laws, which have to do with respecting the human rights of people, not with this sort of technocratic protection of the security and privacy of data itself.



Tim Adler

Tim Adler is group editor of Small Business, Growth Business and Information Age. He is a former commissioning editor at the Daily Telegraph who has written for the Financial Times and The Times.