AI Regulation and Human Rights: Building Trust through Multi-Stakeholder Collaboration

June 3, 2024

The EU Delegation, in collaboration with the Office of the High Commissioner for Human Rights (OHCHR), the Global Network Initiative (GNI), and Humane Intelligence, organized a pivotal event on the human rights implications of increasingly powerful and widespread digital technologies. Over 70 experts from international organizations, diplomatic missions, private tech companies, and NGOs convened to explore the intersection of human rights and technology.

The event underscored the importance of establishing regulatory frameworks that not only address the potential harms of AI but also harness its capabilities to empower individuals. Ambassador Lotte Knudsen, Head of the EU Delegation, emphasized, “It’s through this multi-stakeholder approach that we can most effectively not just address the potential harm of these new technologies, but also make sure that they truly empower individuals. We heard today how important it is to establish AI guardrails, and that we don’t have to choose between safety and innovation. They should go hand in hand! Only when society trusts AI and other new technologies, can these be scaled up.”

The EU’s Digital Services Act (DSA) and the newly adopted EU AI Act are at the forefront of these regulatory efforts. The DSA focuses on risk assessment, mitigation, auditing, and data transparency to hold large digital services accountable while protecting fundamental rights. The EU AI Act, the world’s first comprehensive legal framework on AI, aims to ensure that AI systems respect fundamental rights, safety, and ethical principles by addressing the risks posed by powerful AI models.

Similar regulatory initiatives are emerging globally. Latin American countries are preparing their own AI regulations, and the African Union Commission is actively working on AI governance. These efforts are expected to build on voluntary practices like transparency reporting, human rights risk assessments, and auditing developed under the UN Guiding Principles on Business and Human Rights (UNGPs).

However, there remains a need for guidance on how companies and assessors can implement risk assessments and auditing mechanisms aligned with the UNGPs. Additionally, meaningful engagement from civil society and academia is crucial for these processes to be robust and comprehensive.

The UN Human Rights B-Tech project, in collaboration with BSR, GNI, and Shift, has developed several papers to guide approaches to risk management related to generative AI. These documents emphasize the need for business and human rights practices to inform AI risk assessments, especially in the context of regulations like the DSA and the EU AI Act. There is also a pressing need to engage the technical community on these implications.

Read more: New Report Says AI Regulations Lag Behind Industry Advances

The event delved into key questions surrounding AI and human rights, including:

  • What are the key global trends regarding regulation requiring tech companies to assess human rights risks?
  • How can stakeholders, including engineers, encourage comparable AI risk assessment and auditing benchmarks?
  • What might appropriate methodologies for AI auditing look like, and what data is needed to perform accountable AI audits?
  • What is the role of enforcement and supervisory mechanisms?
  • How can civil society and academia most meaningfully engage around these processes?
  • How can AI risk assessments and audits be used by companies and external stakeholders to ensure accountability and catalyze change?

Notable speakers at the event included Juha Heikkila, Adviser for AI in the European Commission Directorate-General for Communications Networks, Content and Technology (CNECT); Rumman Chowdhury, CEO of Humane Intelligence; Lene Wendland, Chief of Business and Human Rights at the UN Human Rights Office; Mariana Valente, Deputy Director of Internet Lab Brazil and Professor of Law at the University of St. Gallen; Alex Walden, Global Head of Human Rights at Google; and Jason Pielemeier, Executive Director of the Global Network Initiative.

Source: EEAS