AI is worse at identifying household items from lower-income countries
An example of AI bias reflecting global inequalities.
Illustration by Alex Castro / The Verge
Object recognition algorithms sold by tech companies, including Google, Microsoft, and Amazon, perform worse when asked to identify items from lower-income countries.
These are the findings of a new study conducted by Facebook’s AI lab, which shows that AI bias can reproduce inequalities not only within countries but also between them.
In the study (which we spotted via Jack Clark’s Import AI newsletter), researchers tested five popular off-the-shelf object recognition algorithms — Microsoft Azure, Clarifai, Google Cloud Vision, Amazon Rekognition, and IBM Watson — to see how well each program identified household items collected from a global dataset.
The dataset included 117 categories (everything from shoes to soap to sofas) and a diverse array of household incomes and geographic locations (from a family in Burundi making $27 a month to a family in Ukraine with a monthly income of $10,090).
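The paper doesn’t reproduce its test harness, but the basic check is straightforward: send a dataset image to a commercial vision API and see whether the ground-truth category shows up among its label guesses. Below is a minimal sketch of that step using the Google Cloud Vision Python client as one example service; the image path and target category are hypothetical placeholders, not the study’s actual pipeline.

```python
# Sketch: ask one commercial object recognition service to label an image,
# then check whether the dataset's ground-truth category appears among its
# guesses. Assumes the google-cloud-vision client is installed and
# credentials are configured; the path and category below are illustrative.
from google.cloud import vision


def recognizes_item(image_path: str, target_category: str) -> bool:
    """Return True if the service's label guesses include the target category."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        content = f.read()
    response = client.label_detection(image=vision.Image(content=content))
    guesses = [label.description.lower() for label in response.label_annotations]
    return target_category.lower() in guesses


# e.g. recognizes_item("household_items/soap_01.jpg", "soap")
```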
The researchers found that the object recognition algorithms made around 10 percent more errors when asked to identify items from a household with a $50 monthly income compared to those from a household making more than $3,500. The absolute difference in accuracy was even greater: the algorithms were 15 to 20 percent better at identifying items from the US compared to items from Somalia and Burkina Faso.
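The reported gaps boil down to per-group accuracy: score each image as a hit or miss, bucket the results by household income (or country), and compare the buckets. A rough sketch of that arithmetic, with illustrative numbers rather than the study’s data:

```python
# Sketch: compute per-group accuracy from (group, correct?) pairs and take
# the difference between groups. The sample results are made up for
# illustration; they are not the study's data.
from collections import defaultdict


def accuracy_by_group(results):
    """results: iterable of (group_label, correct: bool) pairs."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        hits[group] += int(correct)
    return {group: hits[group] / totals[group] for group in totals}


results = [
    ("under_$50/month", True), ("under_$50/month", False), ("under_$50/month", False),
    ("over_$3500/month", True), ("over_$3500/month", True), ("over_$3500/month", False),
]
accuracy = accuracy_by_group(results)
gap = accuracy["over_$3500/month"] - accuracy["under_$50/month"]
print(f"accuracy gap: {gap:.1%}")
```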
These findings were “consistent across a range of commercial cloud services for image recognition,” write the authors.
Sample images of “soap” from the dataset and the guesses from different commercial object recognition algorithms.
This sort of bias is a well-known problem in AI and has a number of root causes. One of the most common is that the training data used to create algorithms often reflects the lives and backgrounds of the engineers responsible. Because these individuals are often white men from high-income countries, the world they teach their programs to identify skews the same way.
One of the most well-known examples of AI bias is with facial recognition algorithms, which regularly perform worse when identifying female faces, particularly those of women of color. This sort of bias can worm its way into all sorts of systems, from algorithms designed to calculate parole to those assessing your CV ahead of an upcoming job interview.
In the case of object recognition algorithms, the authors of this study say there are a few likely causes for the errors: first, the training data used to create the systems is geographically constrained, and second, the systems fail to recognize cultural differences.
Training data for vision algorithms, write the authors, is taken largely from Europe and North America and “severely undersample[s] visual scenes in a range of geographical regions with large populations, in particular, in Africa, India, China, and South-East Asia.”
Similarly, most image datasets use English nouns as their starting point and collect data accordingly. This might mean entire categories of items are missing, or that the same items simply look different in different countries. The authors give the example of dish soap, which is a bar of soap in some countries and a container of liquid in others, and weddings, which look very different in the US and India.
Why is this important? Well, for a start, it means that any system created using these algorithms is going to perform worse for people from lower-income and non-Western countries. Because US tech companies are world leaders in AI, that could affect everything from photo storage services and image search functionality to more important systems like automated security cameras and self-driving cars.
But this is probably only the tip of the iceberg. Vision algorithms are relatively easy to evaluate for these sorts of biases, but the pipeline that creates these programs is also feeding an entire industry full of algorithms that will never receive the same scrutiny.
Silicon Valley often promotes its products — and, particularly in recent years, its AI products — as egalitarian and accessible to all. Studies like this show that tech companies continue to evaluate, define, and shape the world in their own image.
Source: James Vincent (The Verge)