May 28, 2022
To enhance your commercial awareness, take a look below at how AI is impacting welfare systems and what can be done to keep vulnerable users protected. By Deepak Chopra.

With many countries in Europe integrating artificial intelligence into social security and welfare benefits, issues and concerns have arisen in each case that were perhaps not considered at the time of proposal. The growing use of AI in social security means more people need to be aware of its influence.

What Is an Automated Welfare System?

An Automated Welfare System (AWS) is the integration of automated technology into how the government calculates social security benefits. Whilst a standard welfare system would require human approval to decide how much an applicant may receive in welfare payments, an AWS bypasses that process, using an algorithm to decide how much each person is entitled to every month based on changes in their earnings.
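
To make this concrete, below is a minimal sketch of such an automated, means-tested calculation. The standard allowance, taper rate, and the monthly_entitlement function are illustrative assumptions for this article, not the parameters of any real scheme.

```python
# Minimal sketch of an automated means-tested entitlement calculation.
# The standard allowance and taper rate below are illustrative
# assumptions, not the parameters of any real welfare scheme.

STANDARD_ALLOWANCE = 400.00  # hypothetical base monthly award
TAPER_RATE = 0.55            # hypothetical reduction per pound earned

def monthly_entitlement(earnings_in_period: float) -> float:
    """Recalculate the award each month from reported earnings,
    with no human approval step in the loop."""
    reduction = earnings_in_period * TAPER_RATE
    return max(STANDARD_ALLOWANCE - reduction, 0.0)

print(monthly_entitlement(300.00))  # 235.0
print(monthly_entitlement(800.00))  # 0.0
```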

A recent example of this is the UK’s automation of its welfare system under the ‘Universal Credit’ scheme. The scheme was introduced as a major overhaul of the previous system, intended to make benefits more accessible whilst cutting administrative costs, with a single monthly lump sum managed mainly online.

The key issue discovered by Human Rights Watch lies in the means-testing algorithm that Universal Credit uses. The amount people are entitled to per month should depend on changes in their earnings, but the data used to measure these changes shows the wages people receive within the calendar month, not how regularly people are paid. This means the algorithm does not account for those who may receive multiple paycheques in one month, as is common in irregular or low-paid jobs, which can result in an overestimation of earnings and therefore a reduced benefit payment. This automation of the welfare system has left many vulnerable people overlooked, despite the system being brought in for their protection.
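
The flaw is easiest to see with dates. The sketch below, using an invented four-weekly wage and invented pay dates, shows how summing payments by calendar month makes one assessment period appear to contain double earnings.

```python
from datetime import date, timedelta

# Sketch of the assessment-period flaw described above. A worker paid
# every four weeks receives 13 paycheques a year, so one calendar month
# inevitably contains two paydays. An algorithm that simply sums the
# payments dated within each month overstates that month's earnings.
# The wage and pay dates are illustrative assumptions.

PAY = 900.00  # hypothetical four-weekly wage
paydays = [date(2022, 1, 7) + timedelta(weeks=4 * i) for i in range(13)]

for month in range(1, 13):
    counted = sum(PAY for d in paydays if d.month == month)
    print(f"2022-{month:02d}: earnings counted = {counted:7.2f}")

# Every month shows 900.00 except April, which contains two paydays and
# shows 1800.00. Feeding that doubled figure into a means test like the
# earlier sketch would sharply cut or wipe out that month's award.
```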

Key Artificial Intelligence Act Concerns

The Artificial Intelligence Act is a proposed EU regulation to govern the use and integration of AI, and could be one of the first acts of its kind. The Act cites several major risk categories: applications that create an ‘unacceptable risk’, ‘high-risk’ applications, and applications that are neither banned nor listed as high-risk, which are largely left unregulated.

Whilst there are many pieces of legislation in place to preserve social security rights within the EU, the growing use of AI to monitor and control benefits and social security programmes, combined with the lack of concrete regulation of its applications, is a major cause for concern.

Outside of the UK’s Universal Credit, until April 2020 the Netherlands had gone one step further, automating the prevention of benefits fraud through the Systeem Risico Indicatie (SyRI), which calculated the likelihood of someone committing such acts. The system used data such as education, employment, and personal debt. Before being discontinued, it was heavily criticised for targeting individuals from lower-income areas and minority ethnic groups.
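
SyRI’s actual model was never publicly disclosed, so the sketch below is purely hypothetical: invented features and weights that illustrate the criticism, namely that inputs like living in a low-income area act as proxies for protected characteristics.

```python
# Purely hypothetical sketch of a fraud "risk indication" score of the
# kind SyRI is reported to have produced. The features and weights are
# invented to illustrate the criticism: inputs such as neighbourhood
# and debt act as proxies for income and ethnicity, so low-income and
# minority groups end up flagged disproportionately.

WEIGHTS = {"personal_debt": 0.4, "low_income_area": 0.35, "benefits_history": 0.25}

def risk_score(person: dict) -> float:
    """Weighted sum of normalised (0-1) feature values."""
    return sum(WEIGHTS[f] * person.get(f, 0.0) for f in WEIGHTS)

flagged = risk_score({"personal_debt": 0.8, "low_income_area": 1.0, "benefits_history": 0.6})
print(round(flagged, 2))  # 0.82 - investigated first, regardless of actual conduct
```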

The major concern surrounding AI is not just that these systems overlook those they are intended to protect, but that they could go one step further, causing more harm to social security than protection by enforcing stereotypes and discrimination already seen in human-operated systems.

What Are The Key Risks Of AI?

The legislation is designed to limit these risks, including:

  • A ban on ‘unacceptable risk’ systems (Article 5). This includes certain types of ‘social scoring’ and biometric surveillance that could put privacy at risk; it is deemed unjustifiable to ‘score’ someone’s trustworthiness using unrelated metrics such as finances or employment
  • ‘High-risk’ AI systems (Articles 6 and 7). Systems that require extra safeguards to deploy, such as biometric identification or systems that affect access to social security benefits. This category would also cover the ‘fraud-prevention’ SyRI scheme discontinued in the Netherlands
  • Limited-risk AI systems (Article 52). This covers systems such as emotion recognition and deepfakes. Providers of such systems have fewer obligations, the core one being to inform users that the technology is being used
  • Minimal-risk AI systems. All other systems, which fall outside the regulation’s requirements and safeguards

What Is Algorithmic Discrimination?

Algorithmic discrimination occurs when the data used to develop higher-risk systems carries over the biases and profiling already present in human-led systems. Overcoming it is an essential hurdle if AI is to have a place in modern society, given the need to reduce discrimination against vulnerable members of society. If such systems were put in place without correction, the same biases considered problematic in human-led systems might be treated as more ‘objective’ because of the mathematical nature of the AI’s results.
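
A minimal sketch of how this happens, using invented historical decisions: a ‘model’ that simply learns the past approval rate per postcode area reproduces the human bias baked into those decisions, while presenting it as a neutral statistic.

```python
# Minimal sketch of algorithmic discrimination. The training data is
# hypothetical, and the "model" is just an approval rate learned per
# postcode area. Because past human decisions approved area "B"
# applicants less often, the algorithm reproduces that pattern while
# appearing to be an objective, mathematical rule.

history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def learned_approval_rate(area: str) -> float:
    decisions = [approved for a, approved in history if a == area]
    return sum(decisions) / len(decisions)

for area in ("A", "B"):
    print(area, learned_approval_rate(area))  # A 0.75, B 0.25
```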

How Can The Artificial Intelligence Act Be Improved?

There are a few ways the Artificial Intelligence Act could be improved, including:

  • Increase specificity in the AI Act proposal. The current proposal contains several examples of broad jargon that could lead to further confusion. Replacing these terms with an absolute prohibition on any type of behavioural scoring and discrimination would bring it into line with relevant human rights standards
  • Implement a flexible mechanism for additional prohibited uses. AI technology is constantly growing and evolving, and the law should evolve with it. A mechanism in the proposal that allows additional systems to be added to the regulated list would mean protection against new threats can be implemented in a faster, more specific manner
  • Mandate a human rights assessment across the ‘entire life cycle’. Requiring a human rights assessment throughout the entire life cycle of a ‘high-risk’ system would allow risks to be identified and rectified immediately, ensuring no breaches of human rights occur. Human Rights Watch has also published its own list of amendments to the proposal, including consideration for vulnerable people, minorities, and those with limited digital literacy.
