How generative AI regulation is shaping up around the world

With generative AI developments heating up globally, we take a look at the state of play for regulation in regions across the world

Discussions over the regulation of generative artificial intelligence (AI) are heating up globally, with data science leaders and tech stakeholders alike raising concerns over societal risks including misinformation and job displacement.

OpenAI chief executive Sam Altman recently warned the US Senate of the possibility of “existential risk” from generative AI, while AI pioneer Geoffrey Hinton made similar warnings upon leaving Google, and leaders including Elon Musk have called for a pause in development.

Regions across the world are at various stages of drafting generative AI regulation. Here, we explore how that regulation is taking shape in each.

Australia

In the 2023 Federal Budget, the Australian government announced a Responsible AI Network, along with AU$41.2m ($26.9m) of funding for the responsible roll-out of AI technologies across the country. Regulators aim to quell concerns shared by Australia’s Human Rights Commissioner Lorraine Finlay, who stated that AI chatbots including ChatGPT and Bard are already harming society.

In addition, talks are ongoing among regulators regarding possible changes to the more general Privacy Act, to address the lack of transparency that can arise when AI models are trained via feedback loops without human supervision. There are also discussions around the use of analytics and biometric data to train models, which could call for additional privacy rules to be put in place.

Tech Council of Australia board member Victor Dominello has called for a “tech reg vanguard” that could analyse emerging AI developments and advise government bodies on how to monitor them for possible risks. Australia has already experienced such risks at national scale through the “Robodebt” scheme, an automated federal government programme for calculating welfare debts, which was found to be unlawful in 2020 on the grounds that averaged income data had been used to calculate welfare overpayments.


Brazil

The legislative process regarding AI in Brazil was advanced by a legal framework approved by the country’s government in September 2022, but widely criticised for being too vague.

Following the release of ChatGPT in November 2022, discussions among lobbyists led to a report being sent to the Brazilian government, detailing recommendations on how to regulate artificial intelligence. The study, written by legal and academic experts along with company leaders and members of national data protection watchdog ANPD, covered three main areas:

  • citizen rights — including “non-discrimination and correction of direct, indirect, illegal, or abusive discriminatory biases”, as well as clarity over when users are interacting with AI
  • categorisation of risks — establishing and recognising levels of risk to citizens, with “high risk” including essential services, biometric verification and job recruitment; and “excessive risk” including exploitation of vulnerable people and social scoring
  • governance measures and administrative sanctions — how exactly businesses that breach the rules will be penalised, with suggested fines of two per cent of revenue for moderate non-compliance and $9m for serious harms

This document is currently being debated across the Brazilian government, with dates for further drafting yet to be announced.

California, USA

Regulating AI innovation in the hotbed that is Silicon Valley is set to be an ever-present challenge, with the likes of OpenAI and Google headquartered in the state and major investor Microsoft heavily involved. To meet this challenge, state regulators are planning a sweeping AI proposal.

Drawing from the national AI Bill of Rights framework, the pending legislation looks to prevent discrimination and harms across private sectors including education, utilities, healthcare and financial services. Suggested safeguards include annual impact assessments, submitted to the California Civil Rights Department by developers and users, which would detail the types of automated tools involved and be made publicly accessible. Alongside this, there are plans to ask developers to implement a governance framework detailing how the tech is being used and its possible impacts.

Additionally, the California Civil Rights Council (CRC) has proposed updates over the use of AI in employment, including more specific definitions of “adverse impact” and of which tasks constitute automated-decision systems (excluding word processing and spreadsheet software), as well as replacing the term “machine-learning data” with “automated-decision system data”.

Google CEO Sundar Pichai recently commented that building AI responsibly is the only race that concerns him, ensuring that “as a society we get it right”.

Canada

Regulation of AI in Canada currently falls under a mixture of data privacy, human rights and intellectual property legislation on a province-by-province basis. However, an Artificial Intelligence and Data Act (AIDA) is planned for 2025 at the earliest, with drafting having begun under Bill C-27, the Digital Charter Implementation Act, 2022.

An in-progress framework for managing the risks and pitfalls of generative AI, as well as other areas of this technology across Canada, aims to encourage responsible adoption, with consultations reportedly planned with stakeholders.

A risk-based approach, according to the Canadian government, aligns with similar regulations in the US and the EU, with plans to build on existing Canadian consumer protection and human rights law and to recognise the need for “high-impact” AI systems to meet human rights and safety legislation. Additionally, the Minister of Innovation, Science, and Industry would be responsible for ensuring that regulation keeps pace with evolving technology, and new provisions could be created to address malicious use.

Six main obligation areas have been identified for high-impact systems to adhere to:

  • accountability
  • fairness and equity
  • human oversight and monitoring
  • safety
  • transparency
  • validity and robustness

China

Regulation of AI-powered services available to citizens across mainland China, including chatbots, is currently being drafted by the Cyberspace Administration of China (CAC). A draft proposal would require Chinese tech companies to register generative AI models with the CAC before releasing products publicly.

Evaluation of these products will reportedly assess the “legitimacy of the source of pre-training data”, with developers needing to demonstrate alignment of products with the “core value of socialism” under the draft law. Products will be restricted from using personal data for training, and will need to require users to verify their true identities. Additionally, AI models that share extremist, violent or pornographic content, or messages that call for the “subversion of state power”, would be in breach of the regulations.

Currently, violations are set to be subject to fines of between 10,000 yuan ($1,454) and 100,000 yuan ($14,545), along with service suspension and possible criminal investigation. Additionally, vendors found to be sharing content deemed inappropriate will need to update their systems within three months to ensure prevention of repeat offences. The legislation is planned for finalisation by the end of the year.

UPDATE: Regulators in Beijing are looking to formalise regulation of AI development across China, with draft laws currently set to call on tech companies to register artificial intelligence products with the government within 10 working days of release. The Chinese government aims to have the laws finalised by the end of July 2023.

European Union

The European Union (EU) AI Act is currently making its way through the European Parliament, with a plenary vote in June 2023 seeing 499 MEPs out of a total of 620 voting in favour of the new law. Regulation of artificial intelligence in the region has been in the offing for several years, with the European Commission submitting a proposal in April 2021.

Italy, notably, was the first country to temporarily ban the use of ChatGPT, with discussions around following suit taking place in other EU countries over the last few months.

A framework for AI regulation proposed by the European Commission details four levels of risk:

  • Minimal or no risk — including systems such as AI-enabled video games or spam filters, which can involve generative AI
  • Limited risk — including use of chatbots, with users needing to be clearly informed that they are interacting with such systems from the outset
  • High risk — use of generative AI in critical infrastructure such as transport; educational contexts; law enforcement; hiring and recruitment; and healthcare robotics
  • Unacceptable risk — including all systems that pose a clear threat to the safety and rights of citizens, such as social scoring and voice assistants that encourage harm

India

The Indian government announced in March 2021 that it would apply a “light touch” to AI regulation, with the aim of maintaining innovation across the country, and has no immediate plans for specific regulation. While opting against regulating AI growth, the Ministry of Electronics and IT identified this area of tech as “significant and strategic”, stating that it would put in place policies and infrastructure measures to help combat bias, discrimination and ethical concerns.

Voluntary frameworks have been proposed by the Indian government for the management of AI. Its 2018 National Strategy for Artificial Intelligence considered five key areas of AI development: agriculture, education, healthcare, smart cities, and smart mobility. Then in 2020, principles for the ethical use of AI were detailed in a draft of the National Artificial Intelligence Strategy, calling for all systems to be transparent, accountable and unbiased.

South Korea

South Korea’s AI Act is currently in its final phases of drafting, with votes to be held within the National Assembly. The law, as it stands, looks to clarify that any user can create new models without needing to obtain government pre-approval, while systems considered “high-risk” to the lives of citizens will be required to earn long-term trust.

The pending bill holds a prominent focus on national innovation with ethics in mind, with businesses using generative AI set to receive governmental support on how to responsibly develop systems.

Additionally, the country’s Personal Information Protection Commission has announced plans to create a taskforce dedicated to rethinking biometric data protection, in light of generative AI developments.

United Kingdom

As it stands, regulation of generative AI in the UK is set to be kept in the hands of the sector regulators where AI is being used, with no general law planned beyond the UK GDPR. The Government has opted for a “pro-innovation approach” in official announcements around this topic, with the country looking to take the lead in the global AI race. However, questions remain around how generative AI risks such as system breaches, misinformation and bias will be addressed.

To help mitigate this, an Impact Assessment has been published by the UK government, which aims to determine suitable and fair regulation of AI developers. This measure comes as part of the wider National AI Strategy, with its summary stating: “A number of market failures exist (information asymmetry, misaligned incentives, negative externalities, regulatory failure), meaning AI risks are not being adequately addressed.

“The UK government is best placed to put forward a suitable cross-sectoral regulatory regime to achieve these goals.”

Objectives laid out include driving AI SME growth, increasing public trust, and maintaining or improving the UK’s position in the Stanford Global AI Index.

The Competition and Markets Authority (CMA), meanwhile, has launched a review into AI foundation models, examining the development of tools including ChatGPT for competition and consumer protection considerations. AI developers are being called on to demonstrate alignment with five overarching principles:

  • safety, security and robustness
  • appropriate transparency and explainability
  • fairness
  • accountability and governance
  • contestability and redress

According to the body, the review results will be published in early September 2023.

Aaron Hurst

Aaron Hurst is Information Age's senior reporter, providing news and features around the hottest trends across the tech industry.
