In a letter sent to Congress on June 8th, IBM’s CEO Arvind Krishna made a bold statement regarding the company’s policy toward facial recognition. “IBM no longer offers general purpose IBM facial recognition or analysis software,” wrote Krishna. “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.” The company has halted all facial recognition development and disapproves of any technology that could enable racial profiling.
The Problem with Face Recognition
The ethics of facial recognition have been in question for years. However, there has been little to no movement toward official laws barring the technology. In fact, numerous law enforcement agencies actively use face recognition tech today. As an example, INTERPOL has an entire section on its website detailing how it uses facial recognition to catch fugitives via a global database of images from 160 countries. However, according to IBM, this technology may not yet be ready for use in law enforcement. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” says Krishna. “Artificial Intelligence is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.”
Bias in Modern Face Recognition Algorithms
In December 2019, the National Institute of Standards and Technology (NIST) published a study, Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects, which found large variations in accuracy across numerous contemporary face recognition algorithms.
False Positives
A false positive occurs when the system reports that a face matches an entry in the database when the two images are not actually of the same person. The most glaring issue was that false positive rates were much higher for people of African and Asian descent. Conversely, they were lowest for Eastern Europeans. The study found that “This effect is generally large, with a factor of 100 more false positives between countries.”
Additionally, they found that false positives were higher in faces of women, the elderly, and young children.
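To make the mechanics of a false positive concrete, here is a minimal Python sketch of a threshold-based one-to-many search, assuming hypothetical 128-dimensional face embeddings and a cosine-similarity threshold; it is not the implementation used by IBM, NIST, or any vendor discussed here.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy gallery of enrolled identities, represented by random embeddings
# (purely illustrative; a real system would use a trained face encoder).
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(5)}

# A probe face that is NOT enrolled in the gallery.
probe = rng.normal(size=128)
threshold = 0.1  # hypothetical decision threshold

for identity, embedding in gallery.items():
    score = cosine_similarity(probe, embedding)
    decision = "MATCH" if score >= threshold else "no match"
    print(f"{identity}: score={score:+.3f} -> {decision}")

# Because the probe is not enrolled, every "MATCH" printed above is a
# false positive. Raising the threshold suppresses these errors, but at
# the cost of more false negatives (missed genuine matches).
```

The key point is that wherever the decision threshold sits, it trades one error type against the other, which is why both are examined separately below.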
False Negatives
A false negative occurs when the system reports that a face does not match any entry in the database when, in fact, the person is enrolled in it. This is especially dangerous at airports or border crossings: if the system checks individual faces against a criminal database, a miss could allow a dangerous individual past a security check.
The study found that false negatives were highest among Asians and American Indians and lowest among Caucasians and African Americans.
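To show how the two error rates behind these findings are actually tabulated, here is a minimal sketch of a per-group bias audit; the demographic group names and match records are entirely hypothetical and are not data from the NIST study.

```python
from collections import defaultdict

# Hypothetical audit records: (demographic_group, system_said_match, same_person).
records = [
    ("group_a", True,  False),  # false positive
    ("group_a", False, False),  # true negative
    ("group_a", True,  True),   # true positive
    ("group_b", True,  False),  # false positive
    ("group_b", False, True),   # false negative
    ("group_b", False, False),  # true negative
]

counts = defaultdict(lambda: {"fp": 0, "tn": 0, "fn": 0, "tp": 0})
for group, predicted_match, actually_same in records:
    if predicted_match and not actually_same:
        counts[group]["fp"] += 1      # wrong match (impostor accepted)
    elif not predicted_match and not actually_same:
        counts[group]["tn"] += 1      # correct rejection
    elif not predicted_match and actually_same:
        counts[group]["fn"] += 1      # missed genuine match
    else:
        counts[group]["tp"] += 1      # correct match

for group, c in counts.items():
    # False positive rate: wrong matches among all impostor comparisons.
    fpr = c["fp"] / (c["fp"] + c["tn"]) if (c["fp"] + c["tn"]) else 0.0
    # False negative rate: misses among all genuine comparisons.
    fnr = c["fn"] / (c["fn"] + c["tp"]) if (c["fn"] + c["tp"]) else 0.0
    print(f"{group}: FPR={fpr:.2f}, FNR={fnr:.2f}")
```

Comparing FPR and FNR across groups in this way is how disparities like the ones NIST reported become visible; the bias testing and auditing Krishna calls for amounts to running this kind of breakdown, at scale, on real evaluation data.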
How Will IBM’s Boycott on Face Recognition Tech Affect the Industry?
IBM has been at the forefront of machine learning development for a long time. Their natural language processing system, IBM Watson, is regarded as one of the best question-and-answer systems in the world. Aside from NLP, IBM has also done impressive work in computer vision. Their advancements have garnered much respect in the AI community, and as a result their voice carries weight.
Positive Impact of IBM’s Stance on Facial Recognition
When such a large company takes a strong position on facial recognition, it is bound to make waves. In fact, IBM’s decision already seems to be causing a domino effect in the industry.
Following IBM’s announcement, other major tech companies issued statements of their own saying they are halting sales of facial recognition technology to American police departments. On the other hand, some companies may take advantage of IBM halting facial recognition development: there is now more room in the market for competitors and startups to move in. The worst-case scenario is that nothing changes, and companies continue developing face recognition algorithms to sell to law enforcement agencies without worrying about the consequences.
Final Thoughts
IBM’s CEO believes that the biases present in modern facial recognition algorithms warrant a complete halt to their use. The company is calling on governments and vendors to ensure that algorithms are free of bias before they are deployed in vital areas of society, such as law enforcement.
Ideally, other developers will start investing more time and money into ensuring that their algorithms are free of bias, and more companies will take a step back to study the effects of emerging technologies before releasing them. Many AI technologies are emerging faster than governments can regulate them. Hopefully, IBM’s stance paves the way for other large tech companies and startups to think about ethics before advancement.