Oklahoma State University Scholar Says Artificial Intelligence Can Eliminate Bias in the Hiring Process

A new study by Kimberly Houser, an assistant professor of legal studies in the Department of Management at Oklahoma State University's College of Business, finds that five years after the tech industry began paying attention to the lack of women in its workforce, neither the number nor the percentage of women has changed.

Silicon Valley has tried to address the problem with widespread employee diversity training designed to make workers aware of their biases, but the training has had little impact, Houser notes. Google, for example, has provided such training to more than 70,000 employees since 2014, yet the ratio of male to female tech workers there has not changed. “The system for increasing diversity in the tech industry is broken,” Houser argues.

Professor Houser argues that machine decision-making through artificial intelligence (AI) can remove unconscious bias and “noise” from the hiring and promotion process and begin making the workplace reflect a diverse society. Her research shows that AI could make the hiring process blind to gender and race, resulting in the best people being hired for jobs and in greater diversity.

“Research has shown that if you take race and gender off of resumes, more women and minorities get interviews and are hired,” Houser said. In a study she cites, researchers found that simply replacing a woman’s name on a resume with a man’s improved the odds of being hired by 61 percent.
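The blind-screening idea Houser describes can be sketched in a few lines: strip identity-revealing fields from a resume before any reviewer or scoring model sees it. The field names and the toy scoring rule below are illustrative assumptions, not part of Houser's study.

```python
# Minimal sketch of "blind" resume screening: remove fields that could
# reveal gender or race before the resume is scored.
# (Field names and the scorer are hypothetical, for illustration only.)

REDACTED_FIELDS = {"name", "gender", "race", "photo_url"}

def blind(resume: dict) -> dict:
    """Return a copy of the resume with identity-revealing fields removed."""
    return {k: v for k, v in resume.items() if k not in REDACTED_FIELDS}

def score(resume: dict) -> int:
    """Toy scorer: counts matching skills; it sees only what it is given."""
    wanted = {"python", "sql", "statistics"}
    return len(wanted & set(resume.get("skills", [])))

resume = {
    "name": "Jane Doe",
    "gender": "F",
    "skills": ["python", "sql"],
    "years_experience": 5,
}

anonymized = blind(resume)
print(anonymized)          # identity fields are gone
print(score(anonymized))   # score depends only on qualifications
```

Because the scorer never receives the redacted fields, swapping the name on the resume cannot change its score, which is the effect the cited study was testing for.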

“We have an industry dominated by White males from universities like Stanford, MIT, Harvard, Yale and Cornell,” Houser said. “When you have a male from Stanford interviewing a group of people, he tends to like males who graduated from Stanford. It’s called affinity bias, and it’s unconscious. You’re not aware of it as a bias and you’re not sure why, but you think the male Stanford graduate is best for the job. It is not a conscious effort to ignore everyone else.”

But AI alone cannot totally eliminate bias without other fixes, Houser found, because the predominantly White, male programmers who write the programs introduce their own biases into the algorithms, and the machines then “learn” that those biases are the accepted norm. The key is to make sure both the data sets used and the humans involved in creating the AI are diverse to begin with.
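The data problem Houser points to can be illustrated with a toy example: a model fit to historical hiring decisions inherits whatever bias those decisions contained. The records below are fabricated purely for illustration.

```python
# Toy illustration of bias in training data: equally qualified candidates
# from two groups were historically hired at different rates. A naive
# model that predicts "hired" at each group's historical rate would carry
# the gap forward instead of correcting it.
# (The data is fabricated for illustration only.)

history = [
    {"group": "A", "qualified": True, "hired": True},
    {"group": "A", "qualified": True, "hired": True},
    {"group": "B", "qualified": True, "hired": False},
    {"group": "B", "qualified": True, "hired": True},
]

def hire_rate(records, group):
    """Fraction of qualified candidates in `group` who were hired."""
    sub = [r for r in records if r["group"] == group and r["qualified"]]
    return sum(r["hired"] for r in sub) / len(sub)

print(hire_rate(history, "A"))  # 1.0
print(hire_rate(history, "B"))  # 0.5
```

Auditing training data for gaps like this, and diversifying the teams that curate it, is the kind of fix Houser argues must accompany the algorithms themselves.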

Professor Houser is a graduate of the University of Texas at Austin, where she majored in international business. She earned a Juris Doctor at the University of Illinois.

The study, “Can AI Solve the Diversity Problem in the Tech Industry? Mitigating Noise and Bias in Employment Decision-Making,” will be published in the Stanford Technology Law Review.


