In a shift toward ethical technology use, companies across the globe are intensifying their efforts to develop responsible artificial intelligence (AI) systems, aiming to ensure fairness, transparency and accountability in AI applications.
OpenAI, Salesforce and other tech companies recently signed an open letter highlighting a “collective responsibility” to “maximize AI’s benefits and mitigate the risks” to society. It’s the tech industry’s latest effort to call for building AI responsibly.
The concept of responsible AI is gaining attention following Elon Musk’s recent lawsuit against OpenAI. He accuses the ChatGPT creator of breaking its original promise to operate as a nonprofit, alleging a breach of contract. Musk’s concern was that the potential dangers of AI should not be managed by profit-driven giants like Google.
OpenAI has responded aggressively to the lawsuit, releasing a sequence of emails between Musk and top executives that reveals his early support for the startup's transition to a profit-making model. Musk's suit contends that OpenAI's partnership with Microsoft violated their founding agreement and betrayed the startup's original commitment to nonprofit AI research. When Musk helped launch OpenAI in 2015, his aim was to create a nonprofit organization that could counterbalance Google's dominance in AI, especially after its acquisition of DeepMind.
The AI firm said in a blog post that it remains committed to a mission to “ensure AGI [artificial general intelligence] benefits all of humanity.” The company’s mission includes building safe and beneficial AI and helping to create broadly distributed benefits.
The goals of responsible AI are ambitious but vague. Mistral AI, one of the letter's signatories, wrote that the company strives "to democratize data and AI to all organizations and users" and talks about "… ethical use, accelerating data-driven decision making and unlocking possibilities across industries …."
Some observers say there is a long way to go before the goals of responsible AI are broadly achieved.
“Unfortunately, companies will not attain it by adopting many of the ‘responsible AI’ frameworks available today,” Kjell Carlsson, head of AI strategy at Domino Data Lab, told PYMNTS in an interview.
“Most of these provide idealistic language but little else. They are frequently disconnected from real-world AI projects, often flawed, and typically devoid of implementable advice.”
Carlsson said that building responsible AI involves developing and improving AI models to ensure that they perform accurately and safely and comply with relevant data and AI regulations. The process entails appointing leaders in AI responsibility and training team members on ethical AI practices, including model validation, bias mitigation, and change monitoring.
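Practices like model validation and bias mitigation are abstract in the telling but concrete in the doing. As a hedged illustration (plain Python with invented data, not any framework or vendor tool named in this article), a validation step might automate a fairness measurement such as the demographic parity difference, the gap in positive-prediction rates across groups:

```python
# Illustrative sketch of one bias check a model-validation gate might run:
# demographic parity difference across groups. All data here is invented.

def demographic_parity_difference(predictions, groups):
    """Return the max gap in positive-prediction rate between groups."""
    rates = {}
    for pred, group in zip(predictions, groups):
        total, positives = rates.get(group, (0, 0))
        rates[group] = (total + 1, positives + (1 if pred == 1 else 0))
    positive_rates = [pos / total for total, pos in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Hypothetical model outputs (1 = approved) and group labels.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap = demographic_parity_difference(predictions, groups)
print(f"Demographic parity difference: {gap:.2f}")

# A validation gate might fail the model if the gap exceeds a threshold;
# the 0.2 cutoff below is an illustrative assumption, not a standard.
THRESHOLD = 0.2
print("PASS" if gap <= THRESHOLD else "FAIL: review for bias")
```

In practice, which metric and which threshold count as "fair" are themselves governance decisions of the kind Carlsson describes, made and documented by the responsible AI leads he says companies should appoint.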
“It involves establishing processes for governing data, models and other artifacts and ensuring that appropriate steps are taken and approved at each stage of the AI lifecycle,” he added. “And critically, it involves implementing the technology capabilities that enable practitioners to leverage responsible AI tools and automate the necessary governance, monitoring and process orchestration at scale.”
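The monitoring piece of that pipeline is the most mechanical, and the easiest to picture in code. As a minimal sketch (plain Python with synthetic data; the 0.2 alert threshold is a common rule of thumb, not something Carlsson or any vendor specifies here), a change-monitoring job might compute a population stability index comparing a feature's live distribution against its training baseline:

```python
import math
import random

# Illustrative sketch of automated "change monitoring": a population
# stability index (PSI) flags when live inputs drift from the training
# baseline. Data and threshold are assumptions for demonstration only.

def psi(baseline, live, bins=10, eps=1e-6):
    """Population stability index between two samples of one feature."""
    lo = min(min(baseline), min(live))
    hi = max(max(baseline), max(live))
    width = (hi - lo) / bins or 1.0  # guard against zero-width bins

    def bucket_fractions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # eps keeps the log well-defined when a bucket is empty
        return [c / len(values) + eps for c in counts]

    expected = bucket_fractions(baseline)
    actual = bucket_fractions(live)
    return sum((a - e) * math.log(a / e) for a, e in zip(actual, expected))

random.seed(0)
baseline = [random.gauss(0.0, 1.0) for _ in range(1000)]  # training data
live = [random.gauss(0.8, 1.0) for _ in range(1000)]      # shifted inputs

score = psi(baseline, live)
print(f"PSI: {score:.3f}")
print("ALERT: distribution shift" if score > 0.2 else "OK")
```

Running a check like this on every model, every feature and every deployment is where the "at scale" in Carlsson's formulation bites: the arithmetic is trivial, but the orchestration and governance around it are not.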
While the aims of responsible AI can be a bit fuzzy, the technology can have a tangible impact on lives, Kate Kalcevich of the digital accessibility company Fable pointed out in an interview with PYMNTS.
She said that if not used responsibly and ethically, AI technologies could create barriers for people with disabilities. For example, she questioned whether it would be ethical to use a nondisabled video avatar to represent a person with a disability.
“My biggest concern would be access to critical services such as healthcare, education and employment,” she added. “For example, if AI-based chat or phone programs are used to book medical appointments or for job interviews, people with communication disabilities could be excluded if the AI tools aren’t designed with access needs in mind.”