Yikes — Researchers Have Accidentally Been Making Their Software Sexist

Male computer programmers. © Pavel Timofeev / Adobe Stock

Alex Wilson
April 18, 2024 at 2:09 AM UTC
As our workplaces grow more reliant on technology, artificial intelligence is increasingly what keeps those tools evolving and useful. But how do we make sure these technologies develop the right kind of perspective and avoid the unconscious biases of the humans who build them?
Computer science professors at the University of Virginia recently tested for unconscious bias in software they were building. They trained machine-learning systems on standard photo collections and quickly discovered that those materials were inadvertently teaching the machines sexist views of women.
The researchers found that major research-image collections, including one supported by Microsoft and Facebook, displayed a consistent gender bias. For example, the collections associated images of coaching with men, while tying women to images of shopping and washing.
Professor Vicente Ordóñez, who spearheaded the study, told Wired how the software magnified its bias in other functions. “It would see a picture of a kitchen and more often than not associate it with women, not men,” he said. In other words, the software would see a photo of a person in a kitchen and, based on the setting alone, assume that person was a woman.
Ordóñez realized that the software didn’t develop its sexist views on its own; the biases it displayed had been unconsciously injected by the researchers who built it and by the data it learned from. Mark Yatskar, a researcher who also worked on the project, stressed that these unconscious biases must be actively guarded against.
“This could work to not only reinforce existing social biases,” he said, “but actually make them worse.”
To his point, machine-learning software doesn’t just mirror existing biases; it amplifies them. If the software analyzed a photo set that generally associated women with cooking, it learned an even stronger link between women and kitchens than the data itself contained. As major companies rely on this software to train consumer-facing technology in how to view people, the biases buried in the data are deeply concerning.
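To make that amplification effect concrete, here is a minimal sketch of how one might compare the gender split in a training set with the gender split in a model’s predictions for the same activity. The numbers and labels are hypothetical, purely for illustration, and this is not the study’s actual data or code:

```python
from collections import Counter

# Hypothetical (activity, gender) labels -- illustrative only, not the study's data.
training_labels = [("cooking", "woman")] * 66 + [("cooking", "man")] * 34
model_predictions = [("cooking", "woman")] * 84 + [("cooking", "man")] * 16

def female_ratio(labels, activity):
    """Fraction of a given activity's images that are labeled 'woman'."""
    genders = [g for a, g in labels if a == activity]
    return Counter(genders)["woman"] / len(genders)

train_ratio = female_ratio(training_labels, "cooking")        # 0.66 in this toy data
predicted_ratio = female_ratio(model_predictions, "cooking")  # 0.84 in this toy data

# Amplification: the model's skew exceeds the skew already present in the data.
print(f"training set: {train_ratio:.0%} women; model output: {predicted_ratio:.0%} women")
print(f"amplification: {predicted_ratio - train_ratio:+.0%}")
```

In this toy example, a dataset in which 66% of the cooking photos show women produces a model that labels 84% of cooking photos as women, widening the gap that the data already contained.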
“A system that takes action that can be clearly attributed to gender bias cannot effectively function with people,” Yatskar said.
Fortunately, these biases can be addressed. Researchers can prevent (and de-program) unconscious biases, but to do so they must actively seek out the specific prejudices lurking in the software and its training data. That neutralizes the bias, but as large tech companies like Microsoft have shown, it is a Herculean task.
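For readers curious what “seeking out and neutralizing” a bias can look like in practice, below is a minimal, hypothetical sketch of one common remedy: reweighting training examples so that each activity contributes a balanced gender split. It illustrates the general idea only and is not the University of Virginia team’s actual method:

```python
from collections import Counter

# Hypothetical (activity, gender) training labels -- illustrative only.
examples = [("cooking", "woman")] * 66 + [("cooking", "man")] * 34

def balancing_weights(labels):
    """Weight each example so both genders contribute equally within each activity."""
    pair_counts = Counter(labels)                    # e.g. {("cooking", "woman"): 66, ...}
    activity_totals = Counter(a for a, _ in labels)  # e.g. {"cooking": 100}
    weights = []
    for activity, gender in labels:
        # Each gender should supply half of the activity's total weight.
        target = activity_totals[activity] / 2
        weights.append(target / pair_counts[(activity, gender)])
    return weights

weights = balancing_weights(examples)
women_total = sum(w for w, (_, g) in zip(weights, examples) if g == "woman")
men_total = sum(w for w, (_, g) in zip(weights, examples) if g == "man")
print(women_total, men_total)  # 50.0 and 50.0: the weighted data is now balanced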
“I and Microsoft as a whole celebrate efforts identifying and addressing bias and gaps in data sets and systems created out of them,” Eric Horvitz, director of Microsoft Research, told Wired. To that end, Horvitz’s team has developed an ethics code for all of its consumer-facing technology; if a product doesn’t meet those standards, it does not move further in development.
If this strategy sounds vaguely similar to your company’s diversity training program, that’s because it is. Diversity training requires employees to do a lot of self-analysis to determine what their biases are. When trying to eliminate unconscious bias in machine learning, researchers must do the same thing.
Sheryl Sandberg, Facebook COO and author of “Lean In,” acknowledges that technology used as the foundation for consumer products needs to be held to a higher standard. “At Facebook, I think about the role marketing plays in all this, because marketing is both reflective of our stereotypes and reinforces stereotypes,” she told The New York Times. “Do we partner into sexism or do we partner against sexism?”
Sandberg’s decision to partner against sexism is one of the reasons her nonprofit, Lean In, partnered with Getty Images to create the “Lean In Collection” — a series of stock photos that feature diverse women in a multitude of different careers.
“You can’t be what you can’t see,” Sandberg said in reference to the photo collection.
Sandberg’s efforts are great steps for the immediate future, but keeping the data used to train new technology free of bias is still a pressing issue. If replicated in larger products, these biases could entrench distorted digital portrayals of women and erase much of the progress we have made.
 
 
