Microsoft Launches Measures to Keep Users From Tricking AI Chatbots

Microsoft

Microsoft unveiled tools to prevent users from tricking artificial intelligence chatbots for malevolent purposes.

The tech giant rolled out a series of offerings for its Azure AI system, including a tool to block so-called “prompt injection” attacks, according to a Thursday (March 28) blog post.

“Prompt injection attacks have emerged as a significant challenge, where malicious actors try to manipulate an AI system into doing something outside its intended purpose, such as producing harmful content or exfiltrating confidential data,” Sarah Bird, chief product officer of Responsible AI at Microsoft, wrote in the post.

“In addition to mitigating these security risks, organizations are also concerned about quality and reliability,” added Bird in the post. “They want to ensure that their AI systems are not generating errors or adding information that isn’t substantiated in the application’s data sources, which can erode user trust.”

Among the tools now available or soon to be released are prompt shields to detect and block prompt injection attacks, as well as “groundedness” detection to spot AI “hallucinations,” per the post

Microsoft will also soon launch safety system messages “to steer your model’s behavior toward safe, responsible outputs,” and is now previewing safety evaluations to determine an application’s vulnerability to jailbreak attacks and to generating content risks, the post said.

PYMNTS earlier this week looked at Microsoft’s role in the “battle for generative AI” that was kicked off by the success of ChatGPT, developed by Microsoft partner OpenAI.

Although top tech companies like Microsoft and Google have an edge over their competitors, the contest for the AI crown involves more than Big Tech.

Open-source projects, collaborations and a focus on ethics and accessibility have emerged as factors in the fight to dethrone OpenAI. Stretching the boundaries of AI frequently requires investments in computational power and research talent.

“The hurdle for building a broad foundational model is that training on increasingly large data sets is extraordinarily expensive,” Gil Luria, a senior software analyst at D.A. Davidson & Co., said in an interview with PYMNTS. “The only reason OpenAI can afford to do so is the backing of Microsoft and the Azure resources it makes available to OpenAI. The broad models, such as the ones leveraged by ChatGPT, have ingested huge portions of human knowledge and continue to train on new content, which is what makes them so versatile in many domains of expertise.”

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.