A research group affiliated with two tech giants is calling for more regulation of facial recognition and other artificial intelligence (AI) tools.
According to Bloomberg, AI Now, a group run by employees of technology companies including Google and Microsoft, released a report warning about the dangers of AI and facial recognition technology used in law enforcement, finance and education. The group raised particular concerns about AI applications that claim to read people's emotions and mental well-being, a practice known as affect recognition.
“These tools are very suspect and based on faulty science,” said Kate Crawford, a co-founder of the group who works for Microsoft Research. “You cannot have black box systems in core social services.”
The group has called for outside auditors to review government AI services, including automated decision-making software and facial recognition systems. To make such audits possible, AI Now is also asking government vendors to waive claims to trade secrecy.
“There is no longer a question of whether there are issues with accountability,” said AI Now co-founder Meredith Whittaker, who works at Google. “It’s what we do about it.”
The report comes after it was revealed earlier this week that the U.S. Secret Service plans to test the use of facial recognition in and around the White House. The agency wants to test whether its system can identify certain volunteer staff members by scanning video feeds from existing cameras “from two separate locations on the White House Complex”; the test “will include images of individuals passing by on public streets and parks adjacent to the White House Complex.” The ultimate goal is to identify persons of interest.
But the ACLU has spoken out against the plan, saying there is no clear guidance on how the Secret Service will decide whether a person is a “subject of interest.” The Secret Service has noted that individuals could be flagged through means including “social media posts made in public forums,” as well as suspicious activity reports and media reporting.