April 18, 2023
Since its release in November 2022, ChatGPT has reached the masses like no AI before it. The system’s user-friendly interface, combined with its promptness and the sheer breadth of its knowledge, makes it a tempting short-term solution to all sorts of tasks – from solving a maths problem to writing an entire essay. It’s impressive, and it will doubtlessly take its place among history’s most iconic technological advances. But there is no reward without risk, and the question ChatGPT keeps raising is: exactly what, or who, is at risk in its development?

What Is the General Data Protection Regulation?

The General Data Protection Regulation (GDPR) came into effect in May 2018, in a bid to give individuals greater control over their personal data and how it is used. It prioritises the data subject’s consent and regulates how their details are processed by businesses, in the interest of personal security. GDPR is the reason you are constantly being asked to consent to a website’s cookies: even very minimal data exchanges, such as cookies, could put one’s personal details at risk if leaked.

GDPR and ChatGPT

One of the ways in which ChatGPT ‘learns’ is through a technique called Reinforcement Learning From Human Feedback, or RLHF. Human agents rank the AI’s responses, and those rankings are fed into a scoring system used to train a ‘reward model’. The AI’s responses are then adjusted based on the reward model’s scores. Basically, the AI does much of its learning through human mediators, but these mediators do not have complete control over the machine’s processes – essentially, they are only ranking the results of its ‘thinking’.
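To make the ranking step concrete, here is a minimal, illustrative sketch in Python of a toy ‘reward model’: a human prefers one response over another, and a simple scoring function is nudged until preferred responses score higher. All responses, features and numbers here are invented for illustration – real reward models are large neural networks trained on vast amounts of comparison data, and this is not OpenAI’s actual implementation.

```python
import math
import random

# Hypothetical human preference data: (preferred_response, rejected_response).
preferences = [
    ("The capital of France is Paris.", "France capital big city maybe."),
    ("2 + 2 equals 4.", "2 + 2 is probably 5 or so."),
]

def features(text):
    # Trivial stand-in features (word count, hedging words);
    # a real reward model computes these with a neural network.
    return [len(text.split()), text.count("probably") + text.count("maybe")]

weights = [0.0, 0.0]

def reward(text):
    # The 'reward model': a score for how good a response is.
    return sum(w * f for w, f in zip(weights, features(text)))

# Training loop: nudge the weights so that human-preferred responses
# out-score rejected ones (a pairwise, Bradley-Terry-style update).
lr = 0.1
for _ in range(200):
    chosen, rejected = random.choice(preferences)
    # Probability the model currently agrees with the human ranking.
    p = 1 / (1 + math.exp(-(reward(chosen) - reward(rejected))))
    grad = 1 - p  # gradient of -log(p) with respect to the score gap
    for i, (fc, fr) in enumerate(zip(features(chosen), features(rejected))):
        weights[i] += lr * grad * (fc - fr)

print(reward("2 + 2 equals 4."))
print(reward("2 + 2 is probably 5 or so."))
```

After a few hundred updates, the hedging answer should end up with a lower score than the confident, correct one – the same behaviour a real reward model learns at a vastly larger scale, which is then used to steer the chatbot’s responses.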

There are also concerns regarding RLHF more generally. The AI’s responses are ultimately ranked according to an individual’s own set of values or preferences, and this becomes a problem for more complex tasks whose quality is difficult to define or measure.

As a result, ChatGPT has been steadily improving its responses since its release, based on the data it processes. But some of this data consists of the conversations people have had with the AI, including names, IP addresses and more.

Although OpenAI, the company behind ChatGPT, claims to be compliant with data protection laws, governing bodies are growing increasingly concerned over the chatbot’s popularity and its lack of internal regulation.

On the 20th of March 2023, ChatGPT suffered a data breach that exposed user conversations and personal data, including payment information. Italy’s data protection watchdog responded by saying there was no legal basis to justify ‘the mass collection and storage of personal data for the purpose of training’ the AI, regardless of the bot’s RLHF system.

For a company to collect and use its users’ data under GDPR, it must rely on one of six lawful bases. The two most relevant here are obtaining the user’s consent, or arguing that there are ‘legitimate interests’ in collecting the data. OpenAI did not ask for consent and was notably vague about its ‘legitimate interests’, explaining only that the data used for training the AI might include ‘publicly available personal information’. But therein lies a catch – just because data is public does not mean companies are free to use it for their own purposes.

The other issue at hand is how little regulation currently governs data usage in the development of artificial intelligence. There is, for instance, no way to verify a user’s age on ChatGPT, an absence that could potentially affect the development of underage users.

Why Did Italy Ban ChatGPT? 

Following the 20th of March data breach and the EU’s mounting concerns over AI regulation, Italy banned the use of ChatGPT on its territory from the 31st of March. OpenAI now faces a potential fine of up to 20 million euros, or 4% of its global annual turnover, whichever is higher.

Italy also published a list of demands for OpenAI. In this list, OpenAI is asked to:

  • Publish an information notice detailing how it processes data
  • Adopt a system that only allows users of an appropriate age to access the AI
  • Clarify the legitimate legal basis for processing data for training purposes
  • Give users ways to exercise their rights over their personal data, including the right to have that data deleted
  • Give users the option to object to their data being processed for training purposes
  • Run a local campaign to inform Italians that their data is being processed for training purposes

OpenAI has been given until the 30th of April to implement most of these changes, until the 31st of May to present a plan for implementing age verification technology, and until the 30th of September to have a robust age verification system in place.

Provided that the required measures are put in place, Italy will lift its ban, though it may act again if further measures prove necessary.

Following in Italy’s Footsteps

Since Italy’s ban on ChatGPT, OpenAI has faced unprecedented international scrutiny, including from Elon Musk, who co-founded OpenAI but left its board in 2018. In an open letter, Musk and several other experts called for a six-month pause in the development of the most powerful AI systems, citing privacy concerns and the potential risk of ‘losing control’ over their expansion. Such a pause, the letter argues, would allow for deeper analysis of the systems built so far, and of the potential dangers at hand.

Regulators in France and Canada have since launched their own investigations into ChatGPT’s data usage, and Spain has urged the EU’s privacy watchdog to investigate further. Indeed, Italy is widely seen as having set a data regulation precedent for other countries to follow, though it currently remains the sole Western country to have taken a step towards further regulating AI usage. ChatGPT is also unavailable in Russia, China, Iran, North Korea, Cuba and Syria.

Times will keep changing and technology will keep evolving at a rapid pace, whether ChatGPT is banned and regulated or not. The major takeaway from this episode – and from the consequences faced by OpenAI – is not that we will soon be unable to control technology, but that its accessibility and power, if left unregulated, may fall into the wrong hands.

Check out our Commercial Awareness Guide to AI to learn more.

By Ariana Serafinceanu 
