Europe’s proposed artificial intelligence (AI) rules should seek to capture the technology’s benefits rather than hinder it, an EU lawmaker told Reuters this week.
“There should be a general positive approach towards artificial intelligence,” European Parliament member Svenja Hahn said in an interview published Wednesday (March 23).
Last year, the European Commission proposed rules governing AI that include fines of up to 6% of global turnover for violations and stringent safeguards for high-risk applications in areas such as recruitment, infrastructure, migration, credit scoring and law enforcement.
The European Parliament and EU countries are due to arrive at negotiating positions by the end of the year, with Hahn and other lawmakers charged with coming up with a compromise.
At issue is the potential use of facial recognition applications, which pits EU countries hoping to deploy them for law enforcement and security against concerned civil rights groups.
Read more: EU Parliament Committee Urges Member States to Design a Roadmap for AI
Hahn, who sits on a cross-party parliamentary committee that this week accepted a report outlining a long-term AI strategy for the EU, said the proposed legislation would benefit from adopting some of the report’s ideas.
For example, the report says regulators should counter fears about AI by pointing to the technology’s role in combating climate change, driving healthcare innovations, boosting the EU’s competitiveness worldwide and bolstering its democratic systems.
Hahn told Reuters the definition of AI and the types of risks set out by the Commission still need some work.
“The whole regulation needs to be innovation friendly. It should not bring other aspects, for example GDPR aspects, into it,” she said, referring to EU privacy rules.
See also: AI in Financial Services in 2022: US, EU and UK Regulation
As PYMNTS reported earlier this month, the AI Act would have an impact on the financial sector. For example, it would add mandatory requirements for “high-risk” activities, such as AI systems used to assess creditworthiness or establish credit records.
These include obligations like monitoring the operation of high-risk AI and keeping the logs it generates. The law also says companies must provide human oversight when using AI for recruitment or for making decisions on promotion and termination of employment.
In addition, the law would create transparency requirements for specific types of AI, requiring companies to notify customers that they are interacting with a chatbot.