Brazil hosts the G20 Leaders' Summit in Rio de Janeiro (November 18 and 19), and Artificial Intelligence is perhaps the most intersectional topic among the engagement groups. A constant item on the 2024 agenda, the exponential interest in AI goes beyond investments and commercial interests, with a strong movement towards its regulation. Reaching a global, binding common denominator on the subject may be too great an ambition for the G20, but that is what the first international AI treaty, driven by the European Union within the Council of Europe, is trying to do.
Throughout its current G20 presidency, Brazil has successfully embraced digital economy issues. It is the first time, for example, that the fight against disinformation has been mentioned - with its importance for democratic processes explicitly signaled - in the final interministerial declaration of the Digital Economy Working Group. A quick analysis of the document points to strong alignment with what the rest of the G20 ecosystem - especially the final policy brief of the Digitalization and Technology group of the C20 (civil society) and the T20 (think tanks) - has discussed in the areas of information integrity, meaningful connectivity, digital public infrastructure and artificial intelligence.1
A summary list of documents from the G20 ecosystem on the subject includes:
- Digital Economy Working Group (Maceió Interministerial Declaration);
- C20 (Civil Society 20);
- T20 (Think Tanks);
- S20 (Science 20);
- W20 (Women 20);
- L20 (Labor 20);
- B20 (Business 20);
- Y20 (Youth 20);
- Among statements involving more than one group, the C20/T20 Convergence Dialogue and the São Luís Declaration on Artificial Intelligence are worth mentioning.
The inclusion of these issues in the common agenda recognized by the group of the world's largest economies is positive, even if the final declaration that crowns the process has no binding effect. Not long ago, however, the first international convention on artificial intelligence received its first signatures. Negotiated within the Council of Europe and strongly shaped by the European Union, the treaty constitutes a significant step forward in international discussions on the governance of artificial intelligence. Or so, at least, it seems.
In May 2024, a historic milestone was announced in the debate on Artificial Intelligence: after two years of negotiations, the first international treaty on artificial intelligence was adopted by the Council of Europe. In September, the European Union, the United States of America and the United Kingdom signed the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. Negotiated by 57 countries, it is the first international treaty to establish a framework of obligations aimed at reconciling the mitigation of the risks associated with the technology, the protection of human rights and the promotion of responsible innovation.
The convention imports many concepts from the European Union's Artificial Intelligence Regulation (the AI Act) - such as the definitions of artificial intelligence and of the AI lifecycle - and sets out seven principles that should guide the signatory parties in adopting and maintaining measures related to the AI lifecycle, always in accordance with domestic law and applicable international obligations. The convention also points to the need for measures that enable redress for human rights violations caused by AI systems and the establishment of procedural safeguards, as well as mechanisms for assessing and mitigating risks and adverse impacts.
The Conference of the Parties is to be the body responsible for monitoring the implementation of the treaty, including through mandatory reports to be provided by the parties within the first two years of joining the commitment. A final highlight is the call for international cooperation on AI, including the exchange of relevant information on technologies that can have considerable positive or negative effects on the enjoyment of human rights.
There is a notable convergence between the treaty and what has been discussed in other international forums, such as the Global Digital Compact and the G20 ecosystem. Despite the milestone of being the first international convention on the subject, criticism is circulating that the agreement is eminently declaratory in nature. The negotiation of the agreement went through a process of weakening: after pressure from the US, which was participating in the discussions as an observer, one of the most advanced drafts of the treaty contemplated an automatic exemption of private companies from its provisions, leaving the application of the treaty to this sector to the discretion of the signatory countries.
In an open letter published in January, more than a hundred civil society organizations expressed concern about the exclusion of the private sector and the exemption of the national security and defense fields from the convention's application. In March, the European Data Protection Supervisor (EDPS) issued a statement sounding the alarm about the course of the negotiations. The authority went so far as to say that the agreement could end up being a “missed opportunity” to establish an effective regulatory framework for the development of trustworthy artificial intelligence. It also disagreed with the exclusion of the private sector and pointed out that the eminently declaratory and general nature of the convention could lead to inconsistent application of the agreed obligations.
In May, the adoption of the final version of the treaty by the Council of Europe was followed by criticism. The exclusion of the private sector was not maintained in the final version, but the exclusion of AI systems developed in national security and defense contexts from the agreement's scope remained. The European Center for Not-for-Profit Law (ECNL) maintained its criticism on this point, also echoing what the EDPS had already indicated: the imprecise language used in the obligations could call into question legal certainty as to the application and effectiveness of the convention.
Known for its protective system of rights, Europe was at the forefront in 2021, when the Council of Europe began negotiations to approve the first international instrument with legal effects involving AI, democracy, human rights and the rule of law. The multilateral process also included the participation, as observers, of the private sector, academia and civil society.
Faced with the intense pace of development of AI systems, and in a scenario in which regulatory efforts at the national level are gradually taking shape, the framework convention is intended to be an instrument guaranteeing that the development and application of the technology do not jeopardize the protection of rights and the public interest. The parallel with Convention 108, the first international instrument with legal effect in the field of personal data protection and likewise a Council of Europe treaty, is striking. Opened for signature in 1981, that convention became a paradigm in the field of personal data protection, helping to establish a basic protective standard for regulations on the subject in different countries. The framework convention on AI, however, suffers from the problems mentioned above. It is right to provide for a mechanism to monitor its implementation (the Conference of the Parties), but its general wording and the imprecise framing of the obligations imposed on signatory countries threaten the convention's real effectiveness.
The current acceleration in the development of technological systems so disruptive to the global balance of power has been described as a race similar to the nuclear one. Decades after the nuclear race, the scenario is more complex, with several countries and stakeholders occupying the same board. Other spheres of discussion - such as the G20 and the Global Digital Compact - may even indicate alignment in terms of governance, regulation and the development of AI systems, but the treaty is the only document with legal effects on its signatories. Amendments to the framework convention could be useful to ensure the successful implementation of the agreement. Convention 108, for example, was amended in 1999 and again in 2018 to reflect developments in the field of data protection. The problem is that we don't have that much time.
Footnotes
1. These were the topics chosen by Brazil as priorities in the discussions involving the digital economy. In the ecosystem of the G20 engagement groups, the working groups and task forces dedicated to discussions involving digital technologies have also looked into these issues.