A recent article in Harvard Business Review (HBR) raises an interesting question: do the hiring algorithms companies use to recruit staff prevent bias or amplify it?[i] Its conclusion is equivocal. The article warns that the technology has to be “proactively built and tested” to remove any intentional or unintentional bias.[ii] In this article, I want to make the case for why the business analyst (BA) is the organization’s best hope for ensuring that AI technology is built and tested to avoid this bias.
But first, a little background on how organizations are using AI/machine learning at various stages of the recruitment process.[iii] Companies are already using AI in recruiting, and they want it to help them:
- Reduce recruiting budgets
- Score resumes
- Find candidates who will fit the job description
- Advertise jobs in venues apt to draw the best candidates
- Assess candidates’ qualifications
- Add consistency to the recruiting process
However, these benefits can easily backfire. Let’s look at a couple of examples.
- Reduce recruiting budgets. With machines taking over some of the functions formerly performed by people, organizations hope that in the long run the cost of the recruiting process will be reduced. However, the long run is very long and the road is rife with pitfalls, so the expected cost savings may not be realized. Not only are there technical challenges, but the organizational culture will likely need to change as well.
- Score resumes. When scoring is based on historical data that contains built-in biases, the machine learning algorithms can learn those biased patterns and apply them going forward. Data such as the candidate’s name (Susan vs. Sujata, for example) or the sports played in school (hockey vs. basketball, perhaps) might produce unforeseen results (a short illustrative sketch follows this list).
- Find candidates who will fit the job description. Again, let’s say that historical data has shown that a certain type of candidate has traditionally been successful in the organization. It might be natural to program the algorithms to look for candidates with those same characteristics, thus replicating institutional biases.
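To make the resume-scoring risk concrete, here is a minimal sketch of how a model trained on past hiring decisions can latch onto a proxy feature. Everything in it is invented for illustration: the column names, the data, and the idea of a “played_hockey” flag standing in for a demographic signal.

```python
# Hypothetical illustration: a resume-scoring model trained on past hiring
# decisions. The column names, data, and the "played_hockey" proxy feature
# are all invented for this sketch.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Historical outcomes: "hired" reflects past human decisions, biases included.
history = pd.DataFrame({
    "years_experience": [5, 3, 6, 2, 4, 7, 3, 5],
    "played_hockey":    [1, 0, 1, 0, 1, 1, 0, 0],   # proxy for demographics
    "hired":            [1, 0, 1, 0, 1, 1, 0, 0],   # biased historical labels
})

model = LogisticRegression().fit(
    history[["years_experience", "played_hockey"]], history["hired"]
)

# With data like this, the proxy feature picks up a large positive weight
# because it tracks the old decisions; the model memorizes the old pattern.
print(dict(zip(["years_experience", "played_hockey"], model.coef_[0])))
```

A BA would not normally write this code, but asking the data scientist to show which features carry the most weight, and why, is exactly this kind of inspection.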
In addition, AI can increase bias in unforeseen ways.
- Predictive algorithms help advertise job openings and play the role of headhunter; that is, they can find both candidates who are actively seeking jobs and those who are not. On the surface, this sounds good. But if the algorithms suggest advertising the job in venues that cater to a certain class of candidates, such as men, chances are only men will apply. The organization might then be able to say, “Well, we looked for a woman, but none applied.” (A simple audit sketch appears after this list.)
- Candidate pools skewed in this way may not reflect the diversity of the company’s customers. This is a particular problem for large organizations with diverse customer bases and for global companies.
- Some algorithms have been known to predict who will click on an ad rather than who’s apt to be the most successful candidate.
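One way to surface this kind of skew early is simply to measure it. The sketch below is hypothetical (the venue names, gender labels, and records are invented); it tallies who actually applies through each advertising channel so that a lopsided pipeline is visible before anyone can claim that no women applied.

```python
# Hypothetical illustration: audit of applicants by advertising venue.
# The venue names, gender labels, and records are invented for this sketch.
import pandas as pd

applicants = pd.DataFrame({
    "venue":  ["tech_forum", "tech_forum", "alumni_board", "job_fair",
               "tech_forum", "job_fair", "alumni_board", "tech_forum"],
    "gender": ["M", "M", "F", "F", "M", "M", "M", "M"],
})

# Share of applicants by gender within each venue; a venue that delivers
# only one group is a red flag for where the job is being advertised.
breakdown = (
    applicants.groupby("venue")["gender"]
    .value_counts(normalize=True)
    .unstack(fill_value=0)
)
print(breakdown)
```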
One way organizations can avoid some of these pitfalls is to ensure that business analysts are included on these AI projects.
A business analyst can help in many ways. Here are just a few examples:
- Evaluate software options. BAs can help evaluate AI tools and recommend only those that do not promote the kinds of biases discussed above. Helping with commercial software selection and implementation has always been something BAs do well. This assumes, of course, that the BA has done their homework and is familiar not only with the various options available but also with how AI is being or will be used throughout the organization.
- Examine the algorithms. This means the BA has to actively engage with the data scientist (or whoever creates the algorithms) to understand the type of algorithm being used and why. The BA needs to ensure that the algorithms promote the goals and objectives of the organization and that the AI effort is meeting a real business need. Part of examining the algorithms is looking at how the success of potential candidates will be measured. BAs need to look at the end-to-end recruitment process, and at where AI is used in each part of it, in order to detect where built-in biases may creep in.
- Clean the data. It is well known that cleaning the data is one of the aspects of AI that most people dread. Yet data cleansing has to be done if the results of the machine’s predictions are to be trusted. Part of this cleansing process is examining the historical data to ensure it doesn’t contain underlying biases (see the screening sketch after this list).
- Test the software. BAs can help proactively test these tools with the goal of removing bias. The BA can review test cases to ensure that potential biases are thoroughly tested for and that anomalies are called out and removed (see the paired-test sketch below).
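For the data-cleansing step above, one simple screen the BA can request is a comparison of historical selection rates across groups, in the spirit of the “four-fifths rule” used in employment analysis. This is a minimal sketch with an invented history table and column names.

```python
# Hypothetical illustration: screening historical hiring data for a large
# selection-rate gap before it is used to train a model. The "group" labels
# and records are invented for this sketch.
import pandas as pd

history = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

rates = history.groupby("group")["hired"].mean()
impact_ratio = rates.min() / rates.max()

print(rates.to_dict(), round(impact_ratio, 2))

# The common "four-fifths" screen flags a ratio below 0.8 for closer review.
if impact_ratio < 0.8:
    print("Warning: large selection-rate gap in the historical data; "
          "investigate before training on these labels.")
```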
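For the testing step, one concrete style of test case is a paired (“counterfactual”) check: the same resume submitted twice, with only a demographic signal changed, should receive essentially the same score. The sketch below uses a toy score_resume() stand-in; in practice it would call whatever scoring interface the vendor’s tool actually exposes.

```python
# Hypothetical illustration: a paired ("counterfactual") test case for a
# resume-scoring tool. score_resume() is a toy stand-in; in practice it would
# call whatever scoring interface the tool under test exposes.
def score_resume(resume: dict) -> float:
    # Toy scorer used only so this sketch runs end to end.
    return resume["years_experience"] + 0.5 * len(resume["skills"])

def test_name_swap_does_not_change_score():
    base = {"name": "Susan", "years_experience": 5, "skills": ["SQL", "Python"]}
    swapped = dict(base, name="Sujata")  # identical resume, different name only

    # The two scores should be essentially identical; a gap would signal that
    # the model reacts to the name rather than to the qualifications.
    assert abs(score_resume(base) - score_resume(swapped)) < 0.01

test_name_swap_does_not_change_score()
print("paired name-swap test passed")
```

Running a battery of such paired cases across names, schools, and hobbies gives the BA a repeatable way to call out the anomalies described above.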
To summarize, there are many ways for bias to find its way into AI recruiting technology. Business analysts can add tremendous value to organizations by helping them recognize and remove biases from these applications.
[i] Miranda Bogen, “All the Ways Hiring Algorithms Can Introduce Bias,” Harvard Business Review, May 6, 2019, https://hbr.org/2019/05/all-the-ways-hiring-algorithms-can-introduce-bias
[ii] Ibid.
[iii] In this article I’m going to use the terms AI and machine learning interchangeably although there is a distinction.
Elizabeth Larson, PMP, CBAP, PMI-PBA, CSM, is a consultant and advisor for Watermark Learning/PMA. She has over 35 years of experience in project management and business analysis.
Elizabeth has co-authored four books and chapters published in five additional books, as well as articles that appear regularly in BA Times, Project Times, and Modern Analyst. Elizabeth was a lead author/expert reviewer on all editions of the BABOK® Guide, as well as several of the PMI standards.
Elizabeth also enjoys giving presentations, and her speaking history includes repeat keynotes and presentations for national and international conferences on five continents. Elizabeth enjoys traveling, hiking, reading, theater, and spending time with her 7 grandkids.