Microsoft’s decision to ban police departments from using its Azure OpenAI Service for facial recognition reflects the technology industry’s struggle to balance the promises and perils of artificial intelligence (AI).
The move signals that Big Tech companies are increasingly enforcing guardrails around AI systems amid concerns about potential societal harms. Experts say it illustrates why businesses need to be careful about how they use AI.
“Facial recognition technology carries immense privacy harms, which is why the EU AI Act has denominated it as an unacceptable risk and severely limited its usage,” Gal Ringel, co-founder and CEO at data privacy firm Mine, told PYMNTS. “Even without a federal AI law in place in the U.S., companies need to be aware of the AI capabilities they are developing and restrict unnecessary usage accordingly.”
Last year, the Federal Trade Commission (FTC) warned that the increased use of biometrics raises serious concerns about security, privacy, and discrimination.
In an email to PYMNTS, a Microsoft spokesperson confirmed that the company had updated its code of conduct on Thursday (May 2). The new language prohibits Microsoft's Azure OpenAI Service from being used for facial recognition purposes by or for law enforcement agencies within the United States.
“Microsoft is banning it because, as a whole, there is still some hesitancy and trepidation regarding the use of facial recognition by police,” Bob Eckel, the CEO of Aware, a biometric solutions provider, told PYMNTS. “Some argue that facial recognition fosters discrimination by being less accurate for certain races, nationalities and ethnicities. However, this is not true.”
Emphasizing the advancements in facial recognition accuracy, Eckel said, “Today’s facial recognition tools are tested and validated by trustworthy third parties, and certain states require police agencies using facial recognition to only use software deemed to be at least 98 percent accurate across all demographics.”
Ringel said the U.S. government has not always supported tech companies’ efforts to protect user privacy. He noted that in the past, the government has pressured companies like Apple to unlock phones to assist law enforcement investigations.
“I hope, given these are only police departments and thus this is more of a local issue, that Microsoft won’t face any retribution for trying to safeguard its AI usage,” he said.
Tech consultant John Bambenek emphasized the global implications of Microsoft’s ban, telling PYMNTS, “It’s important to note that this applies to law enforcement everywhere, so this couldn’t be used by governments that have a different idea of civil rights to identify, for instance, members of persecuted groups or political opposition.”
The ban raises questions about the broader implications for enterprises utilizing facial recognition technology. “So far, facial recognition in law enforcement seems to be the only area where the risks have slowed down adoption,” Bambenek said. “But Microsoft is essentially saying they can’t solve the problem, which begs the question … what other risks are there (or will there be), and can they be prevented before harm is done.”