How algorithms encode and reveal our biases

Kodak’s “Shirley Card” was given to photo processors to judge coloring in photos (Photo: the Nick DeWolf Foundation / Susan Etlinger)

Is artificial intelligence bound to its makers’ prejudices? At TEDxBerlin, data analyst Susan Etlinger turns to the past to investigate the future of AI.

In the 1950s, photography in the U.S. was dominated by the Kodak company and by its staff’s opinions of what was normal, Etlinger says. The company sent photo processors color-correction cards based on a single model named Shirley — a white woman — and Shirley became the poster woman for “normal” coloring in photos. “If Shirley looked good, the prints looked good,” Etlinger says, “…and this was terrible for photographs of people of color.”

With Shirley cards, people of color were essentially written out of the algorithm for color photo prints, Etlinger says, and it wasn’t until the 1970s that non-white models were included in Kodak color-correction cards, decades after color photography came to prominence.

Computer vision is the next big frontier in imaging technology, Etlinger says, and she warns that we must not let it become a repeat of the Shirley cards. “Computers have to [analyze a photo] pixel by pixel by pixel,” she says, “[and have] to figure things like color and shape and sometimes weather; and things like, ‘Is it a photograph [that’s] safe for work or that’s not safe for work?’ and ‘Is it a concert; is it a tennis match; is it a wedding?’” There’s a lot of room for error, bias and confusion, Etlinger says, and so every piece of software must be considered carefully.
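To make that concrete, here is a minimal sketch of what such a tagging pipeline can look like, assuming an off-the-shelf pretrained classifier. The model (ResNet-18 trained on ImageNet), the labels and the file name are illustrative stand-ins; this is not the commercial tagger shown in Etlinger’s examples.

```python
# A sketch of automated image tagging with a pretrained classifier.
# Assumptions: the ResNet-18/ImageNet model and the file "photo.jpg"
# are illustrative, not the system from Etlinger's talk.
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()

preprocess = weights.transforms()      # resize, center-crop, normalize
labels = weights.meta["categories"]    # the 1,000 ImageNet class names

image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

# The model can only answer with labels it was trained on: its
# "world view" is exactly its training data.
top5 = probs.topk(5)
for p, idx in zip(top5.values, top5.indices):
    print(f"{labels[idx]:20s} {p.item():.1%}")
```

The last step is the crux: the classifier can only answer with labels drawn from its training data, which is exactly where Shirley-card-style blind spots creep in.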

Etlinger points to a computer’s analysis of Édouard Manet’s painting Olympia, a historically scandalous and controversial work. The computer does well in identifying many elements of the painting, Etlinger says, but — curiously — it labels Olympia with the tag “religion” and chooses religious paintings as the related images. “If you know anything about this painting, you know that religion was about the furthest thing from Manet’s mind when he painted it,” Etlinger says. “What does this tell us? It tells us that a lot of the images this particular [computer] had seen before it saw the Manet were images of religious paintings. In artificial intelligence, if all you have is religious paintings, every painting is a religious painting.”
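Etlinger’s “every painting is a religious painting” line describes a concrete failure mode: a model’s answers are bounded by the data it was trained on. A toy sketch of that dynamic, with entirely made-up data and a deliberately degenerate classifier, might look like this:

```python
# A toy illustration (not Etlinger's experiment) of the failure mode
# she describes: a tagger fit on skewed data can only echo the labels
# it has seen. All data below is made up.
from collections import Counter

# Hypothetical training corpus: overwhelmingly tagged "religion".
training_labels = ["religion"] * 95 + ["portrait"] * 5

class MajorityTagger:
    """Degenerate classifier: predicts whatever label dominated training."""
    def fit(self, labels):
        self.majority = Counter(labels).most_common(1)[0][0]
        return self

    def predict(self, _image):
        # No matter what image arrives, the answer comes from the data.
        return self.majority

tagger = MajorityTagger().fit(training_labels)
print(tagger.predict("manet_olympia.jpg"))  # -> religion
```

A real vision model is far more sophisticated, but the underlying constraint is the same: it cannot produce a concept its training set never taught it.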

Even more outrageous, says Etlinger, is the computer’s inspection of the famous Kim Kardashian Paper Magazine cover. Here, the related images are objects — a corkscrew, a lamp, a hookah pipe, a candlestick. “What does this tell us?” she asks. “It tells us that the data set did not contain anything that would help us to understand Ms. Kardashian, or … the algorithm had never seen a naked woman before.”

A computer attempts to classify a picture of Kim Kardashian (Photo: Paper Magazine / Susan Etlinger / Clarifai)

What we teach computers matters, Etlinger says: “[Artificial intelligence] doesn’t just encode our biases, it actually reveals them. We have this incredible ability to stop and take stock of the digital future that we are building and ensure that it’s a future that we and our children and our children’s children will want to live in.”

To learn more, watch Etlinger’s whole talk below:
