How AI in hiring amplifies gender and racial bias

Artificial intelligence is increasingly used in hiring at major corporations, but a new study reveals alarming biases in how these systems evaluate job candidates by gender and race. The study shows that AI resume-screening tools significantly favor men over women and white candidates over Black candidates, with Black men experiencing the most severe discrimination. These findings raise urgent questions about fairness in AI hiring systems as their adoption accelerates across corporate America.

The big picture: AI-powered hiring tools have become ubiquitous in corporate recruitment, with 98.4% of Fortune 500 companies now employing these systems in their hiring processes.

Key findings: The research uncovered significant discrimination patterns in AI resume screening based on both gender and race.

  • Men’s names were favored 51.9% of the time, while women’s names were preferred in only 11.1% of cases.
  • White-associated names were preferred in 85.1% of evaluations, while Black-associated names led in just 8.6% of cases.

Intersectional bias: Black men faced the most severe discrimination when AI systems evaluated their resumes.

  • When compared directly to white men’s resumes, Black men’s resumes were selected 0% of the time.
  • Even when compared to Black women’s resumes, Black men’s were selected only 14.8% of the time.

Methodology: Researchers conducted a comprehensive analysis using diverse test cases across multiple occupational categories.

  • The study utilized 554 resumes with names signaling different racial and gender identities.
  • Testing spanned nine occupations and employed three Massive Text Embedding (MTE) models; a minimal sketch of this retrieval approach appears after this list.
  • Nearly 40,000 resume-job description comparisons were analyzed to establish the patterns.
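To make the retrieval setup concrete, here is a minimal sketch of embedding-based resume screening, the general technique the study evaluates. The embedding model, candidate names, and example texts are illustrative stand-ins, not the specific Massive Text Embedding models or resumes used by the researchers.

```python
# Minimal sketch of resume screening via language model retrieval: embed the job
# description and each resume, then rank resumes by cosine similarity.
# "all-MiniLM-L6-v2" is a placeholder embedding model, not one from the study.
import numpy as np
from sentence_transformers import SentenceTransformer

def rank_resumes(job_description: str, resumes: list[str]) -> list[tuple[int, float]]:
    """Return (resume_index, similarity) pairs, highest-scoring first."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed stand-in model
    job_vec = model.encode([job_description])[0]
    resume_vecs = model.encode(resumes)

    # Cosine similarity between the job posting and each resume.
    sims = resume_vecs @ job_vec / (
        np.linalg.norm(resume_vecs, axis=1) * np.linalg.norm(job_vec)
    )
    # Any name-related signal the embeddings encode will shift these rankings.
    return sorted(enumerate(sims), key=lambda pair: pair[1], reverse=True)

# Example: resumes that differ only in the candidate's name should, in an
# unbiased system, score essentially the same against the job description.
ranking = rank_resumes(
    "Seeking a software engineer with Python and cloud experience.",
    ["Emily Baker\nSoftware engineer, 5 years of Python and AWS experience.",
     "Darnell Washington\nSoftware engineer, 5 years of Python and AWS experience."],
)
print(ranking)
```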

Recommendations: The researchers proposed several measures to address these biases in AI hiring systems.

  • Implement more rigorous auditing practices for AI tools used in recruitment (a simple audit sketch appears after this list).
  • Develop a deeper understanding of how intersectional identities affect algorithmic assessment.
  • Increase transparency in AI-driven hiring processes.
  • Create policies specifically designed to monitor and regulate AI systems in employment contexts.
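As one illustration of what the auditing recommendation could look like in practice, the sketch below computes selection rates by demographic group and the adverse-impact ("four-fifths") ratio, a common benchmark in US employment-selection guidance, from a hypothetical screening log. The data format, group labels, and example values are assumptions for demonstration only, not part of the study.

```python
# Illustrative audit sketch: selection rates by group and the adverse-impact
# ("four-fifths") ratio, computed from a hypothetical log of screening decisions.
from collections import defaultdict

def selection_rates(decisions: list[dict]) -> dict[str, float]:
    """decisions: [{'group': 'Black men', 'selected': True}, ...] (hypothetical format)."""
    totals, selected = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        selected[d["group"]] += int(d["selected"])
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate.
    Ratios below 0.8 are commonly flagged under the four-fifths guideline."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical screening log for illustration only.
log = [
    {"group": "white men", "selected": True},  {"group": "white men", "selected": True},
    {"group": "white men", "selected": False}, {"group": "Black men", "selected": True},
    {"group": "Black men", "selected": False}, {"group": "Black men", "selected": False},
]
rates = selection_rates(log)
print(rates)
print(adverse_impact_ratios(rates))
```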

Why this matters: As AI becomes the gatekeeper to employment opportunities, these biases could systematically exclude qualified candidates and perpetuate workplace inequalities while potentially violating employment discrimination laws.

Source: Gender, race, and intersectional bias in AI resume screening via language model retrieval
