Capability testing of GPT-4 revealed, as EU regulatory pressure persists

A ‘red team’ dedicated to testing the capabilities of GPT-4 has revealed its findings, as scrutiny from EU authorities continues

OpenAI hired 50 data science researchers, largely based across the US and Europe, last year to “qualitatively probe [and] adversarially test” GPT-4 — the AI system underpinning ChatGPT — amid concerns over its possible societal impacts, the Financial Times reported.

For the past six months, the ‘red team’ — ranging from academics and teachers to lawyers and risk analysts — has been attempting to find ways to effectively ‘break’ the system and explore the true limitations of the technology.

The team has been asking ChatGPT probing and potentially dangerous questions, testing for misinformation, verbal manipulation and other harmful behaviours.

Additionally, Microsoft-backed OpenAI, through its team of testers, aims to assess issues including linguistic bias, prejudice and plagiarism.

Each team member reportedly spent between 10 and 40 hours testing GPT-4, with most of those who spoke to the FT saying they were paid around $100 per hour.

Research findings

Andrew White, a chemical engineering professor at the University of Rochester, told the FT that he used GPT-4 to suggest how to create a new nerve agent, providing prompts as he interacted with the chatbot until the tool identified a location where the compound could be made.

Following this finding, White said he believed ChatGPT will “equip everyone with a tool to do chemistry faster and more accurately”, but warned of a significant risk of its use for widespread harm.

The risks of using external plug-ins to extend the system’s capabilities were also explored; José Hernández-Orallo, a professor at the Valencian Research Institute for Artificial Intelligence, said that while the system “does not learn anymore, or have memory”, it could be “very powerful” if given access to the Internet.

Meanwhile, technology and human rights researcher Roya Pakzad posed queries in English and Farsi to test for racial, religious and gender biases, acknowledging that while such a tool could benefit non-native English speakers, GPT-4 was susceptible to stereotyping marginalised communities.

In addition, Pakzad found that hallucinations — factually flawed or irrelevant responses — occurred more frequently in Farsi than in English, in the form of fabricated names, numbers and events.

The only African tester in the group, Nairobi-based lawyer Boru Gollu, also noted cultural discrimination within the system, stating: “There was a moment when I was testing the model when it acted like a white person talking to me. You would ask about a particular group and it would give you a biased opinion or a very prejudicial kind of response.”

Once the red team’s findings were fed back to OpenAI, the development start-up got to work retraining the system before launching it more widely.

With regulatory bodies in the US and EU moving towards rules governing the widespread use of ChatGPT and competing generative AI products, the need for increased explainability and transparency is reportedly being taken into account.

EU task force and compliance deadlines announced

The testing of GPT-4 over the past six months comes amid increasing scrutiny from regulatory watchdogs across the EU, particularly in Italy and Spain.

Spain’s data protection authority, the AEPD, recently asked the European Union’s privacy watchdog to evaluate privacy concerns around ChatGPT, a request that has led to the creation of a new EU task force dedicated to setting and enforcing AI privacy rules.

According to a statement from the European Data Protection Board (EDPB), the task force aims to “foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities”.

The watchdog previously told Reuters that responsibility for investigations will remain with national authorities.

Meanwhile, Italy’s data protection regulator, the Garante, set OpenAI a deadline of 30th April for the following steps to be taken towards compliance:

  • An information notice published on its website explaining the logic of the data processing required to operate ChatGPT, accessible to users and non-users before they sign up;
  • Implementation of a gateway asking users to confirm they are 18 or over;
  • Removal of all references to contractual performance, to be replaced by either consent or legitimate interest as the legal basis for processing;
  • Delivery of easily accessible tools allowing non-users to exercise their right to object to the processing of personal data — a right that will also need to be extended to existing users if legitimate interest is chosen as the legal basis.

Additionally, the Garante has ordered OpenAI to run an information campaign across radio, TV, newspapers and the Internet by 15th May, as well as to implement a more robust age verification system by 30th September.

OpenAI is yet to comment on the regulatory developments across the EU.

However, commenting on the identified risk of plug-ins enabling harmful behaviour, an OpenAI spokesperson said the company had tested plug-ins for safety prior to GPT-4’s launch, and will continue to update the system as usage becomes more widespread.

Related:

ChatGPT vs GDPR – what AI chatbots mean for data privacy: While OpenAI’s ChatGPT is taking the large language model space by storm, there is much to consider when it comes to data privacy.

Alibaba joins generative AI race, as calls to stop development continue: While Chinese tech corporation Alibaba has launched its new chatbot Tongyi Qianwen, developers and the public are calling for a temporary cool-off on AI R&D.


Aaron Hurst

Aaron Hurst is Information Age's senior reporter, providing news and features around the hottest trends across the tech industry.