The accelerating use of artificial intelligence (AI) in immigration and visa systems, especially for criminal history scoring, poses a critical global governance challenge. Without a multinational, privacy-preserving, and interoperable framework, AI-driven criminal scoring risks violating human rights, eroding international trust, and creating unequal, opaque immigration outcomes.
While banning such systems outright may hinder national security interests and technological progress, the absence of harmonized legal standards, privacy protocols, and oversight mechanisms could result in fragmented, unfair, and potentially discriminatory practices across countries.
This policy brief recommends the creation of a legally binding multilateral treaty that establishes:
1. An International Oversight Framework: Including a Legal Design Commission, an AI Engineers Working Group, and a Legal Oversight Committee with dispute resolution powers modeled on the WTO's dispute settlement mechanism.
2. A Three-Tiered Criminal Scoring System: Combining Domestic, International, and Comparative Crime Scores to ensure legal contextualization, fairness, and transparency in cross-border visa decisions.
3. Interoperable Data Standards and Privacy Protections: Using pseudonymization, encryption, access controls, and centralized auditing to safeguard sensitive information.
4. Training, Transparency, and Appeals Mechanisms: Mandating explainable AI, independent audits, and applicant rights to contest or appeal scores.
5. Strong Human Rights Commitments: Preventing the misuse of scores for surveillance or discrimination, while ensuring due process and anti-bias protections.
6. Integration with Existing Governance Models: Aligning with GDPR, the EU AI Act, OECD AI Principles, and INTERPOL protocols for regulatory coherence and legitimacy.
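To make recommendation 2 concrete, the combination of the three tiers can be sketched as a weighted composite. This is a minimal illustration only: the `CrimeScores` structure, the `composite_score` function, and the weights are all hypothetical placeholders, since the brief leaves the actual weighting to the treaty bodies (e.g., the Legal Design Commission).

```python
from dataclasses import dataclass

@dataclass
class CrimeScores:
    domestic: float       # score under the origin country's legal definitions
    international: float  # score against treaty-defined offence categories
    comparative: float    # score normalized to the destination country's law


def composite_score(s: CrimeScores,
                    weights: tuple[float, float, float] = (0.3, 0.4, 0.3)) -> float:
    """Combine the three tiers into one visa-decision input.

    The weights here are illustrative defaults, not treaty values; in
    practice they would be set and audited by the oversight bodies.
    """
    wd, wi, wc = weights
    if abs(wd + wi + wc - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return wd * s.domestic + wi * s.international + wc * s.comparative
```

Keeping the three tiers as separate fields, rather than a single opaque number, is what makes the explainability and appeals requirements in recommendation 4 feasible: an applicant can see which tier drove the outcome.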
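The pseudonymization called for in recommendation 3 can be sketched with a keyed hash: a per-jurisdiction secret key lets one authority link its own records consistently while preventing other parties from reversing identifiers by brute force. The function name and key-handling scheme below are assumptions for illustration, not a prescribed protocol; a production design would also specify key rotation, escrow, and access controls.

```python
import hashlib
import hmac


def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym from a sensitive identifier
    (e.g., a passport number) using HMAC-SHA256.

    The same (identifier, key) pair always yields the same pseudonym,
    so records can be matched across databases within one jurisdiction
    without ever storing or transmitting the raw identifier.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()
```

Because each jurisdiction holds a different key, the same passport number maps to different pseudonyms in different national systems, which limits cross-border linkage to channels the treaty explicitly authorizes.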
The implementation plan covers treaty drafting, early adoption by an initial group of states, and a phased rollout of the legal and technical structures within 12 months. By proactively establishing ethical, interoperable AI systems, the international community can protect human mobility rights while maintaining national and global security.
Without robust policy frameworks and international cooperation, AI-driven criminal scoring tools risk amplifying discrimination, violating privacy rights, and generating opaque, unaccountable decisions.
This policy brief proposes an international treaty-based, cooperative framework to govern the development, deployment, and oversight of AI criminal scoring systems. It outlines technical safeguards, human rights protections, and mechanisms for cross-border data sharing, transparency, and appeal. We advocate an adaptive, treaty-backed governance framework shaped by input from national governments, legal experts, technologists, and civil society. The aim is to balance security and mobility interests while preventing misuse of algorithmic tools.