Global Multi-Stakeholder Consultation for the Global Dialogue on AI Governance
By Nate Edwards
This statement was delivered during the Global Multi-Stakeholder Consultation for the Global Dialogue on AI Governance.
Thank you, co-chairs, for the opportunity to share civil society perspectives at the first Global Dialogue on AI Governance, and congratulations on making this such an inclusive process.
I am speaking on behalf of the Secretariat of Pathfinders for Peaceful, Just and Inclusive Societies at the Center on International Cooperation at New York University.
We would like to focus on two modalities of the global dialogue: capacity building and human rights, from the perspective of equality, justice, and peace.
First, we believe AI governance should be oriented towards the public interest, treating AI as a public good. This means increasing technological capacity, closing the digital divide, and designing AI governance that avoids perpetuating entrenched inequalities.
Governments must commit to working with those further behind on the digital curve: sharing knowledge, building digital public infrastructure and capacity, and ensuring that technology benefits everyone equally.
In an era of disruption and fragility, AI tools should be used to strengthen social contracts and to deliver better, more inclusive public services for all.
As we look towards the implementation of AI governance, a people-centered justice approach, one that starts by understanding people's problems and needs in order to inform innovative, cost-effective justice solutions, can support the design of meaningful accountability, redress, and access to remedy for AI-related harms.
Co-chairs,
Effective and inclusive AI governance can only exist in a world where everyone has equal access to its enforcement.
Domestic justice actors will be the first to see where AI governance policy fails in practice and where rights violations occur as a result of AI use.
Therefore, we must bring them upstream in the design of AI governance, and we must build their capacity to respond to AI-related harms.
Finally, let me stress that AI use in early-warning, risk-monitoring, and surveillance systems can be an important tool in preventing violence and crime.
However, if these tools are implemented without proper safeguards, they risk replicating and exacerbating inequalities, stigmatizing populations, and directly contributing to the violation of human rights.
In addition to being inherently problematic, this can actually exacerbate the risks of violence and conflict.
While security and human rights are sometimes framed as a tradeoff, this is a false dichotomy. Security and human rights actually reinforce one another.
As member states invest in the implementation of these technologies, it is crucial that they invest equally in building human-centered accountability systems and guardrails.
To conclude, at the Global Dialogue on AI Governance, discussions on capacity building must include a focus on enabling our world to adopt technology in an equal, inclusive, and just manner, guided by human rights frameworks, the rule of law, and with due consideration for security and public safety.
Thank you.