Lucie Mitchell explores whether AI will make the recruitment process less biased or risks making it more so.
Artificial Intelligence (AI) has the ability to revolutionise recruitment by streamlining and automating the sourcing and selection stages of the hiring process. It can potentially save recruiters huge amounts of time by identifying and matching the best candidate for each job role, based on a set of predefined traits.
However, the use of this kind of technology in recruitment is still at an early stage and is not without its own challenges and risks, with many concerned that it could reinforce discriminatory hiring practices.
“As it stands, there are significant limitations to using AI and machine learning in the hiring process,” remarks Stuart Lewis, founder of jobs and volunteering site Rest Less. “The biggest challenge is that AI and machine learning work through the optimisation of historical data, which is inherently biased as it’s based on prejudices of the past. This means that the outcomes on which any AI or machine learning algorithms are based may also be subject to significant second-order bias.”
This was a problem Amazon encountered when it adopted an AI recruitment tool to automate its selection process, only to find that the algorithm discriminated against women. The system had been trained on CVs submitted by applicants over a 10-year period – and most of these came from men in technical roles. The tool consequently began to penalise CVs that included the word ‘women’s’.
“Not only are algorithms programmed by humans, but many of them operate by analysing data from your existing workforce or the use of ‘Big Data’ more generally,” comments Tom McLaughlin, senior associate at employment law firm BDBF. “The risk, therefore, is that the tool is fed biased data and will consequently repeat biased decision-making.”
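The mechanism McLaughlin describes can be made concrete with a deliberately simplified sketch (this is illustrative only, not Amazon’s actual system): a screener that learns word weights from historical hiring decisions. Because the historical decisions are skewed, the learned weights inherit the skew and penalise a term that has nothing to do with ability.

```python
from collections import Counter

# Toy historical data with a built-in skew: CVs mentioning "women's"
# were disproportionately rejected in the past.
hired = ["chess club captain", "software engineer", "robotics team lead"]
rejected = ["women's chess club captain", "women's coding society founder"]

def word_weights(hired, rejected):
    # Weight each word by how often it appears among hired CVs
    # minus how often it appears among rejected ones.
    pos = Counter(w for cv in hired for w in cv.split())
    neg = Counter(w for cv in rejected for w in cv.split())
    return {w: pos[w] - neg[w] for w in set(pos) | set(neg)}

def score(cv, weights):
    # Sum the learned weights of the words in a new CV.
    return sum(weights.get(w, 0) for w in cv.split())

weights = word_weights(hired, rejected)
print(score("chess club captain", weights))          # prints 0
print(score("women's chess club captain", weights))  # prints -2: "women's" alone is penalised
```

The two candidate CVs differ only by one word, yet the second scores lower – the model has learned the prejudice in its training data, exactly the second-order bias Lewis warns about.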
Charles Hipps, CEO and founder of recruitment tech firm Oleeo, stresses that it’s important that an AI tool never works based on historical data alone. “Efforts must be made to constantly improve the robustness of any tool, to help employers benefit from the best possible early evaluation of applicants, based on responses given within online application forms,” he says.
However, the use of AI in recruitment selection can also risk further marginalising anyone who is not ‘mainstream’, warns Lewis. “For example, it may place people from disadvantaged backgrounds at risk of being permanently excluded if they have not been offered the same training and education on how to build a CV that fits a machine’s view of success. When we optimise for the statistically proven ‘best outcome’ every time, we significantly risk ignoring people who don’t fit the mould.”
Yet Hipps believes that AI is not about the name on a CV or cultural background of an interviewee. “AI reveals which applicants are a better fit for positions within the company, correlating skills and work values to numbers and percentages. Companies can therefore focus on the candidates with the right expertise, experience and potential to be productive within their already established teams, provided the humans in the equation eschew their own biases. As long as skills and competences are aptly demonstrated, being non-traditional is not a detrimental characteristic.”
Even so, problems can arise for those workers who have taken career breaks and are returning to work – or anyone who is not considered a ‘traditional’ job applicant in that sense – as they could risk being sifted out by automated systems.
“AI and machine learning have some catching up to do when non-traditional applicants apply for a role,” remarks Lewis. “In the short term, offering non-traditional applicants a filter on the application form, so that their applications are screened by a human rather than a machine, is one way to deal with the challenge. In the longer term, to be truly unbiased, AI and machine learning programming will need to incorporate weighting factors and other methods to fairly assess less traditional applicants in the screening and selection process.”
According to a recent global survey by CEIPAL, 71% of respondents believe that AI will eliminate human bias from the recruitment process.
It is also thought that AI can reduce certain types of discrimination in recruitment because personal information can be removed, allowing a more unbiased and equal selection process.
“AI and automated screening can be programmed so that applicants with protected characteristics, such as age, gender or sexual preference, are not declined for these markers alone,” explains Lewis.
“Though this isn’t a full solution, it is beneficial in creating equality within workplaces, and allows the best candidate to be chosen without bias or judgement,” adds Barbara Jamieson, lawyer and owner of Protect Your Empire.
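The blind-screening approach Lewis and Jamieson describe can be sketched in a few lines. This is a minimal illustration, not any real applicant-tracking system; the field names are assumptions for the example. The idea is simply to strip fields recording protected characteristics before the application reaches the scoring step, so those markers cannot influence the outcome.

```python
# Illustrative only: field names are hypothetical, not from any real ATS.
PROTECTED_FIELDS = {"name", "age", "gender", "date_of_birth", "sexual_orientation"}

def anonymise(application: dict) -> dict:
    # Drop protected characteristics before screening sees the application.
    return {k: v for k, v in application.items() if k not in PROTECTED_FIELDS}

application = {
    "name": "A. Candidate",
    "age": 54,
    "gender": "female",
    "skills": "python, sql",
    "experience_years": 12,
}
print(anonymise(application))
# prints {'skills': 'python, sql', 'experience_years': 12}
```

As the article notes, this is a partial measure: it removes explicit markers, but proxies for those characteristics (career gaps, school names, dates) can still leak through, which is why it is not “a full solution”.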
Hipps says that constant machine learning will work to reduce unconscious biases and enhance diversity, by uncovering strong candidates who may have gone unnoticed in a non-intelligent or manual process. “In turn, recruiters gain insight and reasoning into which characteristics score the strongest. Data in itself will not get you a decision – you need to have data, you need to be able to generate insight and you also need to be able to link that to action,” he says. “Humans should therefore never be taken out of the equation as the technology itself is just an enabler.”
Jamieson agrees that AI should be viewed as a supportive tool – not the only tool – in the recruitment process. “Training around diversity and equality is vital, as is ensuring that equality goes beyond the recruitment process,” she advises. “AI can select the best candidates, but if diversity and equality aren’t understood at interview stage, companies are simply delaying discrimination. AI needs to work alongside other strategies in order to meet diversity targets.”