The Continuity of Barriers: How AI Job Recruiters Discriminate
AI is here to stay. Ever since the groundbreaking release of ChatGPT in 2022, the growth of artificial intelligence has skyrocketed. The promise of generative AI technologies that could improve user accessibility, content creation, and workplace productivity quickly led tech giants to invest billions of dollars in the field. This rapid advancement is reflected in the AI market’s staggering trajectory: valued at just $40 billion in 2022, it is now projected to reach $1.3 trillion over the next decade.
This immense growth underscores not only AI’s current influence but also the vast potential yet to unfold. However, as we continue heading toward an AI-driven future, one pressing question emerges: “Are we prepared for the rapid advancements ahead?”
Several studies claim that AI tools are the future because they can take over repetitive tasks, freeing up workers’ time for more meaningful work. Concerns arise, however, when AI is used to make decisions that deeply impact people’s lives.
For instance, companies have started using AI recruiting tools to filter applicants’ resumes. At first this seems like a reasonable system: recruiters can tell the AI which objective qualities they want in applicants. Problems arise, however, when the underlying algorithm is biased.
According to a Bloomberg study, OpenAI’s GPT-3.5 displayed biased preferences against job applicants based on their names. Bloomberg created eight versions of an otherwise identical resume, changing only the names to signal particular ethnicities and genders, and ran the experiment 1,000 times to observe the model’s behavior. It found that 32% of the top-ranked resumes carried names distinctive to Asian Americans, while only 18% were associated with Black Americans. Resumes with Black-associated names were also the most likely to be bottom-ranked, accounting for 29% of the lowest-rated entries.
A similar case occurred with Amazon’s recruiting engine in 2015, when machine-learning specialists discovered that their new hiring system was applying gender bias when assessing applicants. The algorithm penalized resumes that included the word “women’s,” such as in references to leadership roles or sports teams, and downgraded graduates of all-women’s colleges.
The company explained that the recruiting engine had been trained on resumes submitted over a 10-year period, most of which came from men. This kind of underrepresentation in the training data is one of the main ways algorithmic bias develops: if the collected data fails to adequately represent a particular gender or race, the resulting system learns patterns that effectively disregard or disadvantage those groups.
As AI takes on a bigger role in the hiring process, the risk of deepening existing inequalities can’t be ignored. While these tools can genuinely improve efficiency, they can also unintentionally perpetuate discrimination. Much of the danger lies in how hard these biases are to detect: hiring decisions are often made behind the curtain, so it is nearly impossible for applicants to learn why they were rejected. Even recruiters themselves may be unaware of a tool’s bias and treat its output as an accurate evaluation of each candidate.
Without proper oversight, AI could become more of a barrier than a bridge to fair treatment in the workplace, which highlights the pressing need for preventative measures. Fortunately, progress has been made on several approaches to mitigating bias in AI. One is pre-processing the data used to train AI models so that it is representative of the entire population, often through techniques such as undersampling, oversampling, or generating synthetic data. Pre-processing makes it possible to identify and address biases before the model is ever trained, and helps the resulting model be more resilient to specific types of bias.
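To make this concrete, here is a minimal sketch of one common pre-processing step: oversampling an underrepresented group so that every group appears equally often in the training data. The DataFrame, column names, and sample records below are purely illustrative assumptions, not drawn from any real hiring dataset.

```python
# A minimal sketch of pre-processing by oversampling, assuming a pandas
# DataFrame of historical resumes with a protected-attribute column
# ("gender") and a hiring-outcome label ("hired"). All names and data
# here are hypothetical.
import pandas as pd

def oversample_to_balance(df: pd.DataFrame, protected_col: str,
                          random_state: int = 0) -> pd.DataFrame:
    """Resample rows so every group in `protected_col` appears equally often."""
    counts = df[protected_col].value_counts()
    target = counts.max()  # bring every group up to the size of the largest
    balanced_parts = []
    for group, _ in counts.items():
        group_rows = df[df[protected_col] == group]
        # Sample with replacement so underrepresented groups can be duplicated
        balanced_parts.append(
            group_rows.sample(n=target, replace=True, random_state=random_state)
        )
    # Shuffle the combined result so groups are interleaved
    return pd.concat(balanced_parts).sample(frac=1, random_state=random_state)

# Example: a toy dataset where women are heavily underrepresented
resumes = pd.DataFrame({
    "years_experience": [5, 3, 8, 2, 6, 4, 7, 1],
    "gender": ["M", "M", "M", "M", "M", "M", "F", "F"],
    "hired": [1, 0, 1, 0, 1, 0, 1, 0],
})
balanced = oversample_to_balance(resumes, "gender")
print(balanced["gender"].value_counts())  # both groups now appear equally often
```

Oversampling is the simplest of the three techniques; undersampling and synthetic data generation pursue the same goal of giving every group comparable weight before training begins.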
Other approaches intervene during training itself, such as regularization terms that penalize a model for making discriminatory predictions. In a hiring algorithm, for instance, regularization can reduce the model’s reliance on variables that correlate with protected attributes like race or gender, discouraging it from making biased decisions. By adjusting the regularization strength, the algorithm can be steered toward fairer criteria, promoting equity in its outcomes.
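As a rough illustration of the idea, the sketch below adds a fairness penalty to a simple logistic model’s loss: the extra term grows when the model’s average predicted score differs between two protected groups, so minimizing the loss discourages reliance on group-correlated variables. The model, synthetic data, and penalty form are illustrative assumptions, not a description of any production hiring system.

```python
# A minimal sketch of fairness regularization: cross-entropy loss plus a
# penalty on the gap in mean predicted scores between two protected groups
# (a demographic-parity-style regularizer). All data here is synthetic.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def regularized_loss(w, X, y, group, lam):
    """Cross-entropy plus lam * (mean-score gap between groups) squared."""
    p = sigmoid(X @ w)
    eps = 1e-9
    ce = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    gap = p[group == 0].mean() - p[group == 1].mean()
    return ce + lam * gap ** 2

def train(X, y, group, lam, lr=0.5, steps=2000):
    """Plain gradient descent using numerical gradients, for clarity."""
    w = np.zeros(X.shape[1])
    h = 1e-5
    for _ in range(steps):
        grad = np.zeros_like(w)
        for j in range(len(w)):
            wp, wm = w.copy(), w.copy()
            wp[j] += h
            wm[j] -= h
            grad[j] = (regularized_loss(wp, X, y, group, lam)
                       - regularized_loss(wm, X, y, group, lam)) / (2 * h)
        w -= lr * grad
    return w

# Synthetic example: historical outcomes were biased against group 1, and one
# feature ("proxy") correlates with the protected attribute.
rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, n)                     # protected attribute (e.g. gender)
skill = rng.normal(0, 1, n)                       # legitimate qualification
proxy = group + rng.normal(0, 0.3, n)             # variable correlated with the group
X = np.column_stack([np.ones(n), skill, proxy])
y = (skill - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(float)

w_fair = train(X, y, group, lam=5.0)
w_plain = train(X, y, group, lam=0.0)
print("weight on proxy feature with penalty:   ", round(w_fair[2], 3))
print("weight on proxy feature without penalty:", round(w_plain[2], 3))
```

Raising the penalty strength trades some predictive accuracy against historical labels for a smaller gap in scores between groups; choosing that balance is as much a policy decision as a technical one.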
In a world where AI is becoming increasingly integral to hiring, we must prevent technologies from reinforcing existing inequalities. While significant strides have been made in developing techniques like data preprocessing and regularization to mitigate bias, these efforts are just the beginning. Moving forward, companies must prioritize the ethical development of AI to ensure that it serves as a bridge to fairness in the workplace rather than a barrier.