IBM has teamed with Casper Labs to help companies gain more insight into their AI systems.
The partnership, announced Thursday (Jan. 11), will see Casper Labs and IBM Consulting develop a solution that establishes an “additional analytics and policy enforcement layer” for governing artificial intelligence (AI) training data across organizations.
“The process of training, developing and deploying generative AI models happens across multiple organizations, from the original model creator to the end user organization,” the companies said in a news release. “As different organizations integrate new data sets or modify the models, their outputs change accordingly, and many organizations need to be able to track and audit those changes as well as accurately diagnose and remediate issues.”
The solution will be built on Casper, a tamper-resistant and highly serialized ledger, and use IBM watsonx.governance and watsonx.ai to monitor and measure “highly serialized input and output data for training generative AI systems across organizations,” the release said.
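Neither company has published the technical interface behind that description. As a rough illustration only, the Python sketch below shows the general idea of a tamper-evident, serialized record of training inputs: each record commits to a hash of the data and to the previous record, so later edits to the history are detectable. The `AuditLog` class and its fields are hypothetical, not Casper’s or watsonx’s actual API.

```python
import hashlib
import json
import time

class AuditLog:
    """Hypothetical append-only log mimicking a tamper-evident ledger."""

    def __init__(self):
        self.records = []

    def append(self, org, model_version, dataset_bytes):
        """Record which organization supplied which data to which model version."""
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        record = {
            "org": org,
            "model_version": model_version,
            "dataset_digest": hashlib.sha256(dataset_bytes).hexdigest(),
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        # The record's hash covers its contents plus the previous record's
        # hash, so rewriting history invalidates every later record.
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(record)
        return record["hash"]

    def verify(self):
        """Recompute the chain; returns False if any record was altered."""
        prev_hash = "0" * 64
        for record in self.records:
            body = {k: v for k, v in record.items() if k != "hash"}
            if record["prev_hash"] != prev_hash:
                return False
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if expected != record["hash"]:
                return False
            prev_hash = record["hash"]
        return True

log = AuditLog()
log.append("model-creator", "base-v1", b"original training corpus")
log.append("end-user-org", "fine-tuned-v2", b"proprietary fine-tuning set")
assert log.verify()  # tampering with any past record breaks this check
```

A chained structure like this is what lets the downstream parties the release describes audit how a model’s inputs changed hands across organizations, and trace a problematic output back to the data revision that introduced it.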
The companies say the hybrid nature of Casper’s blockchain will help organizations better protect sensitive data from external actors and control who can access which data.
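The companies have not detailed how that access control works. One common pattern in hybrid (public/permissioned) designs, sketched below purely as an assumption, is to keep only hashes public while gating each record’s payload behind an explicit reader list; the `PermissionedStore` class here is illustrative, not part of Casper’s product.

```python
from typing import Dict, Set

class PermissionedStore:
    """Hypothetical payload store keyed by ledger record hash."""

    def __init__(self):
        self._payloads: Dict[str, bytes] = {}
        self._readers: Dict[str, Set[str]] = {}

    def write(self, record_hash: str, payload: bytes, readers: Set[str]):
        # In a real hybrid deployment the payload might sit off-chain or
        # encrypted on-chain; only the record hash is broadly visible.
        self._payloads[record_hash] = payload
        self._readers[record_hash] = set(readers)

    def read(self, record_hash: str, org: str) -> bytes:
        # Deny by default: only explicitly granted organizations may read.
        if org not in self._readers.get(record_hash, set()):
            raise PermissionError(f"{org} may not read {record_hash}")
        return self._payloads[record_hash]

store = PermissionedStore()
store.write("abc123", b"sensitive fine-tuning data", {"model-creator", "auditor"})
store.read("abc123", "auditor")      # allowed
# store.read("abc123", "outsider")   # raises PermissionError
```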
“An AI system’s efficacy is ultimately as good as an organization’s ability to govern it,” said Shyam Nagarajan, global partner, blockchain and responsible AI leader at IBM Consulting. “Companies need solutions that foster trust, enhance explainability, and mitigate risk.”
As noted here last year, companies’ embrace of AI has made it increasingly important for IT, data and security teams to understand the exposure risks each stage of the AI development process can introduce.
“Open data sharing is a key component of AI training, with researchers collecting and sharing massive amounts of external and internal data to build out the required training information for their AI models,” that report said. “But sharing larger amounts of data leaves companies exposed to larger risks if that data is shared incorrectly.”
After all, as PYMNTS has written, AI is one of the first technologies that can violate nearly all of a company’s internal policies in one fell swoop.
“At the center of many business use concerns around the integration of generative AI solutions lies ongoing questions around the integrity of data and information fed to the AI models, as well as the provenance and security of those data inputs,” PYMNTS wrote. “In order to effectively and securely leverage AI tools, businesses must ensure first that they have the appropriate data infrastructure in place to avoid AI’s foundational pitfalls.”