Algorithms and artificial intelligence — artfully applied, but for less-than-ideal purposes?
With that question in mind, a U.S. Senate panel on Tuesday examined how social media giants, including marquee names such as Google and Facebook, interact with users.
The hearing, held by the Senate Commerce subcommittee on Communications, Technology and Innovation, queried researchers and others who were critical of using AI to suggest content to end users.
The hearing came amid continuing debate over privacy concerns for online users, as Reuters noted. Proposed protections would curtail at least some uses of data to recommend content.
In one example of criticism, Senator Brian Schatz, ranking Democrat on the Senate Commerce subcommittee, said the algorithms “feed a constant stream of increasingly more extreme” content. The social media firms, he said, need to be held accountable for the impact of those algorithms. “If YouTube, Facebook or Twitter employees, rather than computers, were making the recommendations, would they have recommended these awful videos in the first place?” Schatz asked. “Companies are letting algorithms run wild and only using humans to clean up the mess.”
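To see why engagement-driven recommendation can drift toward ever more extreme material, consider a minimal sketch, purely illustrative and not any platform's actual system (the Candidate class, field names and figures below are assumptions), in which candidate videos are ranked solely by predicted watch time:

```python
# Illustrative sketch: a recommender that ranks candidates purely by a
# hypothetical engagement estimate. Nothing here penalizes sensationalism,
# so whatever maximizes predicted engagement keeps rising to the top.

from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    video_id: str
    predicted_watch_minutes: float  # hypothetical model output

def recommend(candidates: List[Candidate], k: int = 5) -> List[str]:
    """Return the k video IDs with the highest predicted engagement."""
    ranked = sorted(candidates, key=lambda c: c.predicted_watch_minutes, reverse=True)
    return [c.video_id for c in ranked[:k]]

if __name__ == "__main__":
    pool = [
        Candidate("calm_tutorial", 3.2),
        Candidate("outrage_clip", 9.7),   # sensational items often score high on raw engagement
        Candidate("news_summary", 4.1),
    ]
    print(recommend(pool, k=2))  # -> ['outrage_clip', 'news_summary']
```

If extreme content tends to earn longer watch times, a loop like this keeps surfacing it, which is the dynamic Schatz described.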
In an appearance before the panel, Maggie Stanphill, Google’s director of user experience, said “No, we do not use persuasive technology at Google.” She went on to say that “dark patterns and persuasive technology are not core to our design.”
As quoted by The Hill, Sen. Richard Blumenthal, Democrat from Connecticut, said that “On the issue of persuasive technology, I find, Ms. Stanphill, your contention that Google does not build systems with the idea of persuasive technology in mind somewhat difficult to believe, because I think Google tries to keep people glued to its screens, at the very least.”
Also appearing before the panel was Tristan Harris, a former Google programmer known for his criticism of social media practices. Harris said that at least some platforms can predict behavior and “things about us that we don’t know about ourselves.”
The June hearing followed a bill proposed by U.S. lawmakers in April that would require tech companies to detect and remove discriminatory bias in technologies such as algorithms.
The Algorithmic Accountability Act of 2019, introduced by Democratic Sens. Ron Wyden and Cory Booker, would give new power to the U.S. Federal Trade Commission (FTC). Among the bill's other provisions, tech companies with annual revenue above $50 million would be required to study whether race, gender or other biases are embedded in their computer models. The rules, as noted in this space back in April, would also apply to data brokers and to businesses holding data on more than one million consumers.
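What such a study could look like in practice can be sketched in a few lines, as an assumed workflow for illustration only (the audit_outcome_rates function, group labels and threshold below are hypothetical, not language from the bill or an FTC procedure): compare a model's favorable-outcome rates across demographic groups and flag large gaps.

```python
# Illustrative bias-audit sketch: tally a model's favorable decisions per
# demographic group and flag the audit if the gap between the best- and
# worst-treated groups exceeds a chosen threshold.

from collections import defaultdict
from typing import Iterable, Tuple, Dict

def audit_outcome_rates(records: Iterable[Tuple[str, int]],
                        threshold: float = 0.1) -> Tuple[Dict[str, float], float, bool]:
    """records: (group_label, decision) pairs, where decision 1 = favorable.
    Returns per-group favorable rates, the max gap, and whether it exceeds threshold."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for group, decision in records:
        counts[group][0] += decision
        counts[group][1] += 1
    rates = {g: fav / total for g, (fav, total) in counts.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > threshold

if __name__ == "__main__":
    decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
                 ("group_b", 1), ("group_b", 0), ("group_b", 0)]
    rates, gap, flagged = audit_outcome_rates(decisions)
    print(rates, round(gap, 2), flagged)  # group_a ~0.67, group_b ~0.33, gap 0.33, flagged True
```

A real audit under the bill would presumably go further, covering data provenance and remediation, but the core question it asks is the one shown here: do outcomes differ materially by group?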