In a Sunday opinion column in The Financial Times (FT), Google and Alphabet chief executive officer (CEO) Sundar Pichai wrote about the importance of government oversight of artificial intelligence (AI).
“Growing up in India, I was fascinated by technology. Each new invention changed my family’s life in meaningful ways,” he wrote.
Now that he is in a position to shape new technological advances, he said international cooperation on oversight is vital to establishing workable global standards.
“Now there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it,” he wrote, pointing to historical examples showing that “technology’s virtues aren’t guaranteed.”
He said people “need to be clear-eyed” about the many possible negative consequences of technology, particularly when it comes to AI.
The market should not dictate how technology is used, and big tech firms like Google have a responsibility to make sure “technology is harnessed for good and available to everyone,” he wrote.
In 2018, Google published its own AI principles to provide guidance, along with open-source tools and code, for developing AI ethically in ways that avoid bias and protect privacy. The principles also outlined Google’s opposition to mass surveillance and infringements of human rights.
“We believe that any company developing new AI tools should also adopt guiding principles and rigorous review processes,” he said in the article. “Government regulation will also play an important role.”
He pointed to Europe’s General Data Protection Regulation (GDPR) as being a good start for a “strong foundation.”
Google wants to partner with regulators to extend its own expertise and tools and “navigate these issues together.”
Earlier this month, the Trump administration proposed new rules governing future federal regulation of AI. The rules will not affect how federal agencies, including law enforcement, use “facial recognition and other forms of AI.”
AI watchdogs point to the current lack of accountability as computer infrastructure and software systems replace human workers across a growing range of professions.