With many countries in Europe integrating artificial intelligence into their social security and welfare systems, issues and concerns have arisen in each case that were perhaps not considered at the time of proposal. The growing use of AI in social security, and its rising profile as a commercial awareness topic, means more people need to be aware of its influence.
An Automated Welfare System (AWS) integrates the government's calculation of social security benefits with automated technology. Whilst a standard welfare system would require human approval to decide how much an applicant may receive in welfare payments, an AWS bypasses that process: an algorithm automatically decides how much each person is entitled to every month based on changes in their earnings.
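As a rough illustration, here is a minimal Python sketch of the kind of automated means-test calculation such a system might run each month. The standard allowance and taper rate below are invented for illustration and do not correspond to any real scheme's rules.

```python
def monthly_entitlement(standard_allowance: float,
                        reported_earnings: float,
                        taper_rate: float = 0.55) -> float:
    """Reduce the award by a fixed proportion of the month's reported
    earnings. Figures are illustrative, not any real scheme's rules."""
    return max(0.0, standard_allowance - taper_rate * reported_earnings)

# Higher reported earnings automatically mean a lower award, with no
# human checking whether the reported figure reflects real income.
print(monthly_entitlement(400.0, 500.0))   # 125.0
print(monthly_entitlement(400.0, 1000.0))  # 0.0
```

The point of the sketch is the absence of any human step: whatever earnings figure the system ingests flows straight through to the payment.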
A recent example of this is the UK's automation of its welfare system under the 'Universal Credit' scheme. The scheme was introduced as a major overhaul of the previous system, with the intention of making benefits more accessible whilst also cutting administrative costs, with a single monthly lump sum to be managed mainly online.
The key issue identified by Human Rights Watch lies in the means-testing algorithm that Universal Credit uses. The amount people are entitled to each month should depend on changes in their earnings, but the data used to measure these changes captures the wages people receive within a calendar month, not how regularly they are paid. The algorithm therefore does not account for those who may receive multiple pay cheques in one month, as is common in irregular or low-paid work: someone paid every four weeks, for instance, receives thirteen pay cheques a year, so at least one calendar month will inevitably contain two of them. This can result in an overestimation of earnings and therefore a reduced benefit payment. The automation of the welfare system has resulted in many vulnerable people being overlooked, despite the system being brought in for their protection.
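The following Python sketch (using hypothetical figures, not the actual Universal Credit code) shows how summing wages by the calendar month they land in penalises a worker paid every four weeks.

```python
from datetime import date, timedelta

def earnings_by_calendar_month(paydays, amount):
    """Sum wages by the calendar month they arrive in, as the means
    test is described as doing, ignoring how often people are paid."""
    totals = {}
    for payday in paydays:
        key = (payday.year, payday.month)
        totals[key] = totals.get(key, 0) + amount
    return totals

# Hypothetical worker paid 1,000 every four weeks: 13 paydays a year,
# so at least one calendar month must contain two of them.
paydays = [date(2024, 1, 5) + timedelta(weeks=4 * i) for i in range(13)]
for month, total in sorted(earnings_by_calendar_month(paydays, 1000).items()):
    print(month, total)
# The month with two paydays shows 2,000 of "earnings", which a taper
# applied month by month would treat as a genuine spike in income.
```

Run as written, March 2024 contains two paydays (1 March and 29 March), so that month's measured earnings double even though the worker's actual income never changed.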
The Artificial Intelligence Act is a proposed piece of legislation to regulate the use and integration of AI, which could make it one of the first acts of its kind. The Act sets out different risk categories: applications that create an 'unacceptable risk', 'high-risk' applications, and applications that are neither banned nor listed as high-risk, which are left unregulated.
Whilst many pieces of legislation are in place to preserve social security rights within the EU, the growing use of AI to monitor and control benefits and social security programmes, combined with the lack of concrete regulation of its applications, is a major cause for concern.
Beyond the UK’s Universal Credit, the Netherlands went a step further: until April 2020 it automated the prevention of benefit fraud through the Systeem Risico Indicatie (SyRI), which calculated the likelihood that an individual would commit such fraud. The system used data such as education, employment, and personal debt. Before being discontinued, it was heavily criticised for targeting individuals from lower-income areas and minority ethnic groups.
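SyRI's actual model was never made public, but a sketch of a generic risk-scoring calculation shows how such criticism arises. The feature names and weights below are entirely invented for illustration.

```python
import math

def fraud_risk_score(features: dict, weights: dict) -> float:
    """Hypothetical linear risk score of the kind a SyRI-style system
    might compute; squashed to a 0-1 'likelihood' with a sigmoid."""
    z = sum(weights[name] * features.get(name, 0.0) for name in weights)
    return 1 / (1 + math.exp(-z))

# Inputs such as personal debt or neighbourhood correlate strongly with
# income and ethnicity, so a score built on them can reproduce exactly
# the profiling the system's designers never explicitly coded in.
weights = {"personal_debt": 0.8, "employment_gaps": 0.6, "low_income_area": 0.5}
print(fraud_risk_score({"personal_debt": 1.0, "low_income_area": 1.0}, weights))
```

Even in this toy version, someone flagged purely for living in a low-income area and carrying debt scores as "high risk" before any fraud has occurred.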
The major concern surrounding AI is not just that it overlooks those it is intended to protect, but that it could go a step further and do more harm than good to social security by enforcing the stereotypes and discrimination already seen in human-operated systems.
This legislation is designed to limit these risks, including:
Algorithmic Discrimination arises when the data used to develop 'high-risk' systems encodes the biases and profiling already present in human-led systems. Overcoming this is an essential hurdle if AI is to have a place in modern society, given the need to reduce discrimination against vulnerable members of society. If such systems were put in place without corrections, the same biases considered problematic in human-led systems might be seen as more 'objective' given the mathematical nature of the AI's results.
There are a few ways the Artificial Intelligence Act could be improved, including: