Reducing unfair bias is one of the areas where companies involved in artificial intelligence (AI) are focusing most of their efforts, Matteo Quattrocchi, director for policy at BSA, The Software Alliance, told PYMNTS, and working with regulators to design sound guidelines and frameworks is essential to that effort.
Quattrocchi, who works with global leaders in enterprise software, highlighted that for these companies, reducing or eliminating unfair bias is key not only to avoiding regulatory concerns, but also as a business proposition: ensuring that results don’t favor one specific group over another.
Regulators around the globe are trying to design rules that minimize this concern, among them the European Commission. Last year, the EC proposed the Artificial Intelligence Act, the first comprehensive piece of legislation pertaining to AI. The proposal still needs to be debated in the European Parliament, but it offers a good indication of which areas will be regulated, and how.
The AI Act takes a risk-based approach aimed at specific high-risk uses of AI. Facial recognition and the use of biometric information, for instance, are among the high-risk practices. According to Quattrocchi, the act is structured to identify specific use cases and then attach compliance requirements and obligations to them.
What does this mean in practice? The requirements a company must comply with will depend on the type of activity it carries out (high risk or low risk). The idea behind the AI Act is that AI is a horizontal technology: it can be deployed in almost any sector we can think of, from public to private. “Therefore, to regulate every possible aspect of AI will be fairly impossible in one piece of legislation, but taking a risk-based approach, which assigns specific requirements to those categories of AI that are considered at risk, is a better way to go,” Quattrocchi said.
But not everything in the AI Act is perfect, he said, and there is room for improvement. Perhaps the first item is the definition of AI itself: there is no strong consensus in the scientific community on a single definition, and Quattrocchi would like to see the act adopt a precise one.
“There are a number of processes that are traditionally considered software that, through some readings of the original proposal, might fall within the scope of the AI Act. That is probably less than desirable, because this is a piece of legislation that pertains to AI. We want to make sure that this applies to AI,” he said.
Another important aspect is how to allocate responsibility. A number of entities are involved in AI processes, and the obligations of the AI Act should be distributed among them as evenly as possible. In other words, responsibility needs to be allocated to the entity best positioned to mitigate a given risk: the developer at certain stages, the designer or another party at others. The important point is to make sure that each risk, and the party best able to mitigate it, are clearly identified.
While the current structure of the legislation follows this risk-based approach, it is still too early to know what the final version will look like, although it is likely to retain the same principles. It is difficult to say when the AI Act will be approved, but Quattrocchi predicted that the European Parliament may have a position by the end of 2022, which could lead to approval in 2023.
Read more: EU Parliament Committee Urges Member States to Design a Roadmap for AI