The use of AI recruitment tools and the Equality Act 2010

7th September, 2023

Isabel Baylis looks at how AI is being used in recruitment and the impact it may have on future Equality Act 2010 cases.

On 11 August 2023, the House of Commons (“HOC”) Library published a research paper on AI in employment law. The research focussed on a range of current and potential uses of AI in the workplace, including recruitment, line management, and monitoring and surveillance.

AI is defined in the report as technologies that enable computers to simulate elements of human intelligence, such as perception, learning and reasoning. To achieve this, AI systems rely upon large data sets from which they can decipher patterns and correlations, thereby enabling the system to ‘learn’ how to predict future events.
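To make the idea of ‘learning’ from data concrete, the sketch below (in Python, using the scikit-learn library and entirely invented figures) shows a model inferring a pattern from past recruitment outcomes and applying it to a new candidate. It is purely illustrative and not a description of any real recruitment system.

# Purely illustrative: a model 'learns' a pattern from invented past hiring data.
from sklearn.linear_model import LogisticRegression

# Each past candidate is described by [years_of_experience, test_score].
past_candidates = [[1, 55], [2, 60], [3, 70], [5, 80], [7, 85], [8, 90]]
past_outcomes = [0, 0, 0, 1, 1, 1]  # 1 = hired, 0 = not hired

model = LogisticRegression()
model.fit(past_candidates, past_outcomes)   # infer a pattern from the historical data

new_candidate = [[6, 82]]
print(model.predict(new_candidate))         # apply the learned pattern to predict an outcome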

This post looks at how AI is being used in recruitment and the impact it may have on future Equality Act 2010 cases. Specifically, this post will focus on potential claims for the failure to make reasonable adjustments, direct discrimination and indirect discrimination.

Current use of AI tools

The HOC research identifies AI being used in the following key recruitment tasks:

1. Building job descriptions

2. Chatbots guiding candidates through the application and interview processes

3. Sifting through application forms and CVs by extracting relevant information and ranking it against the job requirements (a simplified sketch of this appears after this list)

4. Designing and marking assessments (eg analytical thinking, problem solving) and psychometric tests

5. Evaluating online interview performance by analysing biometric data such as speech patterns, tone of voice and facial movements
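As a deliberately simplified illustration of the third task, the Python sketch below ranks invented CVs by how many of a job’s required keywords they mention. Real sifting tools use far more sophisticated extraction and ranking, so this is only a rough analogy.

# Hypothetical sketch: rank CVs by overlap with a set of job requirements.
job_requirements = {"python", "sql", "project management"}

cvs = {
    "candidate_a": "Five years of Python and SQL development.",
    "candidate_b": "Project management experience and SQL reporting.",
    "candidate_c": "Retail and customer service background.",
}

def score(cv_text):
    text = cv_text.lower()
    return sum(1 for requirement in job_requirements if requirement in text)

ranking = sorted(cvs, key=lambda name: score(cvs[name]), reverse=True)
print(ranking)  # candidates ordered by how many requirements their CV mentions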

AI and Reasonable Adjustments

All of the above uses suggest a system that is developed and applied to all candidates. However, the Equality Act 2010 places a positive duty on employers to make reasonable adjustments to their systems where those systems put disabled people at a substantial disadvantage.

To some extent this is not a new issue. Most employers use at least partially digitalised systems for recruitment, and some people may have difficulties using those systems as a result of their disabilities. Companies have also long used psychometric tests, which may well require adjustments. However, the issues AI presents have the potential to be more extensive. For example, chatbots can have rigid communication capabilities, which might cause difficulties for someone whose condition means they do not communicate in the way the chatbot has been trained to understand.

As technologies develop, more intelligent AI systems could be taught to make the adjustments themselves. For example, they could adjust the length of time allowed for questions for candidates with slower processing speeds, or propose the most effective adjustments to their own tests based on an understanding of the particular disadvantage faced.
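A minimal sketch of the kind of automatic adjustment described above, assuming a hypothetical assessment platform that records an agreed extra-time multiplier for a candidate. The field names and figures are invented purely for illustration.

# Hypothetical sketch: extending a question's time limit as an adjustment.
BASE_TIME_LIMIT_SECONDS = 120

def time_limit_for(candidate):
    # 'extra_time_multiplier' is an invented field recording an agreed adjustment,
    # e.g. 1.5 for 50% additional time.
    return BASE_TIME_LIMIT_SECONDS * candidate.get("extra_time_multiplier", 1.0)

candidate = {"name": "A", "extra_time_multiplier": 1.5}
print(time_limit_for(candidate))  # 180.0 seconds rather than the standard 120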

One problem with automating such processes is that disabilities affect people in different ways, so the adjustment process needs to be tailored to the individual. If an adjustment is challenged, a human will (presumably) have to give evidence in the courtroom explaining why that adjustment was considered proportionate and effective, despite not having reached that decision themselves.

Direct discrimination

Some argue that AI is not subject to the emotional responses that influence human decision makers, who are informed by their likes, dislikes and political beliefs, and that it is therefore less likely to be directly discriminatory.

The problem is that AI machines learn from real-world data and, unfortunately, that data may itself contain biases that the machines inadvertently learn. Those training the AI may also have their own biases.

One use that could become prevalent, given its potential to save time, is AI as a sifting tool to identify the candidates with the best experience for the job. The HOC research identifies an example of where this went wrong in a way that employment lawyers would view as directly discriminatory. Amazon had to scrap a tool that sifted CVs for the best candidates because it had been trained on 10 years of previous data, most of which happened to come from men. The tool began to teach itself to penalise CVs that contained the word ‘women’s’, as in ‘women’s chess team’.
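The mechanism can be illustrated with a crude sketch (invented data, and not a description of Amazon’s actual tool): where most past hires came from CVs that did not mention ‘women’s’, a naive model that scores words by their historical hiring rate ends up penalising that word.

# Hypothetical sketch: a sifting tool absorbing bias from historical data.
# The 'historical' records are invented; most past hires happen to be from men.
historical_cvs = [
    ("captain of chess team", True),
    ("chess team and coding club", True),
    ("women's chess team captain", False),
    ("women's coding society and chess", False),
    ("coding club president", True),
]

def word_score(word):
    # Fraction of past CVs containing the word that led to a hire.
    matches = [hired for text, hired in historical_cvs if word in text]
    return sum(matches) / len(matches) if matches else None

print(word_score("chess"))    # 0.5 -> treated as neutral
print(word_score("women's"))  # 0.0 -> a naive model 'learns' to penalise this word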

If a claimant could point to facts from which discriminatory decision making could be inferred, for example a statistically disproportionate number of black applicants not getting jobs, it could shift the burden of proof onto the decision maker to explain why that was. Again, this might be a difficult task for a human who did not make the decision, and it may require expert evidence on the machine-learning process and the algorithms involved.
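As a rough illustration of the kind of statistical pattern a claimant might rely on, the sketch below compares selection rates between two groups using invented figures. It is not a statement of the legal test, only of how such a disparity might be quantified.

# Hypothetical sketch: quantifying a disparity in selection rates (invented figures).
applicants = {"group_a": 200, "group_b": 200}
offers     = {"group_a": 40,  "group_b": 10}

selection_rates = {group: offers[group] / applicants[group] for group in applicants}
print(selection_rates)                                          # {'group_a': 0.2, 'group_b': 0.05}
print(selection_rates["group_b"] / selection_rates["group_a"])  # 0.25: group_b selected at a quarter of group_a's rate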

Indirect discrimination

If AI sifting processes are trained to select people with the most recent, relevant experience, they could be subject to challenges of indirect discrimination, for example, from women who have had career breaks to look after children, or people who have had time off because of a disability.

However, one area in which new software has already developed to combat such potential biases, in both human and digital recruitment, is ‘contextual recruitment’. Platforms such as the graduate recruitment programme Rare measure performance against contextual data, for example the average grade at the particular school a candidate attended. Such contextual recruitment programmes could become more common and could, for instance, use contextual information to flag that someone has had a career break and ensure that their previous work experience, or their potential in other areas, is given more weight.
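As a purely hypothetical sketch of what ‘contextual’ scoring can mean (the figures are invented, and this is not a description of how Rare or any real platform works), the code below compares a candidate’s grade with the average achieved at their school.

# Hypothetical sketch: interpret a grade relative to the candidate's school average.
school_average_grades = {"school_x": 52.0, "school_y": 75.0}

def contextual_score(grade, school):
    return grade - school_average_grades[school]

print(contextual_score(70, "school_x"))  # +18.0: well above that school's average
print(contextual_score(70, "school_y"))  # -5.0: below that school's average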

Conclusions

Why employers make the decisions they make is key to Equality Act claims. One of the most interesting ways in which AI might influence cases is by creating the need for humans to be able to explain and understand increasingly complex decision making by non-humans. Employers should bear in mind the need to oversee, understand and approve the decisions made.


The commentary and information contained within this blog is for general information only. It does not constitute legal advice. This blog should not be used as a substitute for properly obtained legal advice from a qualified practitioner with appropriate expertise. No warranty, whether express or implied, is given in relation to the accuracy of any matters published herein. Neither the barristers of 9 St John Street nor any of them accept responsibility for any technical, editorial, typographical or other errors or omissions within the information provided on this website, whether such errors are the result of negligence or otherwise, nor shall they be responsible for the content of any web images or information linked to from this website.