Facial-Recognition Software Suffers From Racial Bias, U.S. Study Finds


By Asa Fitch
Sunday, December 22, 2019

A far-reaching government analysis of the most widely used facial-recognition algorithms found that most of them appeared to suffer from racial bias, misidentifying Asian- and African-Americans far more often than Caucasians.

The study, released Thursday, is the largest ever of its kind. It amplifies concerns that artificial-intelligence algorithms don't treat individuals equally. Other research by academic and government investigators has shown that facial-recognition algorithms sold by numerous tech companies fail to identify minorities and women at higher rates than they do white men.

The research, conducted by the National Institute of Standards and Technology (a laboratory affiliated with the Commerce Department), found significant differences in accuracy when an algorithm compares two photos to determine whether they show the same person. Such a check might, for instance, be performed by an immigration officer trying to match a traveler against a passport photo.
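
In practice, a one-to-one check of this kind is typically implemented by converting each photo into a numeric embedding and comparing the two vectors against a tuned threshold. The sketch below is a minimal illustration of that pattern, not NIST's test harness or any vendor's pipeline; the embed_face placeholder and the 0.6 threshold are purely hypothetical.

    import numpy as np

    def embed_face(photo: np.ndarray) -> np.ndarray:
        # Hypothetical stand-in for a real face-embedding model.
        # A deployed system would run a trained neural network here; this
        # placeholder just flattens and normalizes pixels so the sketch runs,
        # and its scores carry no real meaning about identity.
        v = photo.astype(np.float64).ravel()
        return v / (np.linalg.norm(v) + 1e-12)

    def same_person(photo_a: np.ndarray, photo_b: np.ndarray,
                    threshold: float = 0.6) -> bool:
        # One-to-one verification: embed both photos and compare.
        # The 0.6 cosine-similarity threshold is an arbitrary example value;
        # real systems tune it to balance false matches against false rejects.
        a, b = embed_face(photo_a), embed_face(photo_b)
        return float(np.dot(a, b)) >= threshold  # both vectors are unit-length

    # Example with synthetic "photos" (real input would be aligned face crops).
    rng = np.random.default_rng(0)
    print(same_person(rng.random((112, 112)), rng.random((112, 112))))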

The study also found bias when algorithms are used to pick a person out of an image of a crowd, in instances such as when police are looking for a person of interest.
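
The one-to-many case works the same way, except the probe embedding is searched against a gallery of enrolled faces. A minimal sketch, reusing the hypothetical embed_face placeholder and example threshold from the previous snippet:

    def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.6):
        # One-to-many search: return the gallery ID whose pre-computed
        # embedding is most similar to the probe photo, or None if no
        # candidate clears the (example) threshold.
        probe_emb = embed_face(probe)
        best_id, best_score = None, threshold
        for person_id, emb in gallery.items():
            score = float(np.dot(probe_emb, emb))
            if score >= best_score:
                best_id, best_score = person_id, score
        return best_id

Because every gallery entry is one more chance for a false match, error rates that look small in one-to-one testing can compound in this mode, which is part of why crowd searches draw particular concern.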

“We found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied,” said Patrick Grother, a scientist at NIST and the report’s primary author. The study examined 189 facial-recognition algorithms made by 99 companies, which together it called a majority of the industry.

Rep. Bennie Thompson (D., Miss.), chairman of the House Homeland Security Committee, said the study raised troubling questions, and he urged the Trump administration to rethink its approach to rolling out facial-recognition technology.

“This report not only confirms these concerns, but shows facial recognition systems are even more unreliable and racially biased than we feared,” he said in a statement. “It is clear these systems have systemic design flaws that have not been fixed and may well negate their effectiveness.”

Academics have warned repeatedly about racial and gender bias in facial-recognition systems sold by tech companies large and small. Joy Buolamwini, a Massachusetts Institute of Technology researcher and the founder of the Algorithmic Justice League, which seeks to combat bias in algorithms, found in recent studies that facial-recognition technology sold by Microsoft Corp. and International Business Machines Corp. was less accurate for women and darker-skinned people.

IBM said in June it would release a collection of images of people of different races and genders to help train algorithms to eradicate biases. Microsoft executives have said the company is careful about who can buy its facial-recognition software. Microsoft President Brad Smith said in April that the company had turned down a proposal by a police department in California that wanted to use the technology to scan people’s faces when they were pulled over.

Researchers believe the bias in such artificially intelligent systems stems from the data used to train the software: if more images of men and Caucasians are fed in as the models are built, the resulting systems are likely to be more accurate at identifying those groups. The NIST study found that algorithms developed in China tended to do better at identifying Asian faces.
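
The demographic differentials NIST reports come down to per-group error rates: run many same-person and different-person comparisons, then tabulate how often the algorithm falsely matches or falsely rejects within each group. A minimal sketch of that bookkeeping follows, using an illustrative record format rather than NIST's actual schema.

    from collections import defaultdict

    def error_rates_by_group(trials):
        # `trials` is an iterable of (group, same_person, matched) tuples:
        # the demographic group, the ground truth, and the algorithm's
        # decision for one comparison. This record format is illustrative only.
        counts = defaultdict(lambda: {"fm": 0, "diff": 0, "fnm": 0, "same": 0})
        for group, same_person, matched in trials:
            c = counts[group]
            if same_person:
                c["same"] += 1
                if not matched:
                    c["fnm"] += 1  # false non-match: missed a true pair
            else:
                c["diff"] += 1
                if matched:
                    c["fm"] += 1   # false match: matched two strangers
        return {g: {"FMR": c["fm"] / max(c["diff"], 1),    # false match rate
                    "FNMR": c["fnm"] / max(c["same"], 1)}  # false non-match rate
                for g, c in counts.items()}

A demographic differential of the kind the study describes would show up here as one group's FMR or FNMR running well above the others'.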

The issue of racial bias in AI has grown in political and social significance as facial-recognition technology becomes omnipresent, appearing in people's phones, home security systems and police cameras. Numerous jurisdictions have been weighing curbs on the technology. San Francisco in May became the first U.S. city to ban its use by local agencies.