In a new declaration on the impact of the use of algorithms on democracy, human rights, and the rule of law, the Council of Europe’s Committee of Ministers warns that artificial intelligence and machine-learning technologies must not be used to unduly influence or manipulate individuals’ thoughts and behavior. See Council of Europe Committee of Ministers, Declaration by the Committee of Ministers on the manipulative capabilities of algorithmic processes, Decl(13/02/2019)1, 13 February 2019. The first of its kind, the Declaration calls on States to take steps to ensure that technologies facilitating algorithmic persuasion, particularly those that “micro-target” individuals, do not interfere with people’s ability to enjoy their human rights and to make independent political, personal, and purchasing decisions. See id. at paras. 8, 9. The Declaration, which builds on ongoing study and analysis by Council of Europe organs, adds to the growing body of guidance and recommendations on regulating machine learning to safeguard human rights, including from the United Nations Special Rapporteur on freedom of expression.
The Declaration
At the heart of the Declaration is the concern that technology that seeks to shape our preferences and alter information flows is becoming an “ever growing presence in our daily lives,” and the finding that digital forms of targeted persuasion pose a threat to democracy, human rights, and the rule of law. See Declaration by the Committee of Ministers on the manipulative capabilities of algorithmic processes, at paras. 4, 9. Specifically, the Committee of Ministers states that the use of “fine grained, sub-conscious and personalized levels of algorithmic persuasion” risks significantly interfering with the principles of individual autonomy and the right to form opinions and take independent decisions. See id. at para. 9. Undermining the autonomy and independence of individuals, according to the Committee, threatens the fundamental belief “in the equality and dignity of all humans as independent moral agents.” See id. at para. 9.
The Committee of Ministers recognizes that digital services have become an essential tool for modern communication, including political communication, and that advanced technologies provide the opportunity to enhance human rights. See id. at paras. 2-3. However, the Committee also notes that public awareness “remains limited regarding the extent to which everyday devices collect and generate vast amounts of data” that “are used to train machine-learning technologies . . . to predict and shape personal preferences, to alter information flows, and, sometimes, to subject individuals to behavioral experimentation.” See id. at para. 4. Additionally, the Committee warns that the ability to infer “intimate and detailed information” from the data collected “supports the sorting of individuals into categories,” which allows companies to reinforce discrimination along cultural, religious, legal, and economic lines. See id. at para. 6.
The Committee of Ministers concludes that machine-learning technologies, when coupled with the mass collection of data, pose a danger for democratic societies if these technologies are used—by either public or private entities—to “manipulate and control not only economic choices but also social and political behaviours.” See id. at para. 8. Moreover, the Committee notes that machine-learning tools are demonstrating an increasing ability “not only to predict choices but also influence emotions and thoughts and alter an anticipated course of action, sometimes subliminally.” See id. at para. 8. The Committee emphasizes that the 47 Member States of the Council of Europe have an obligation to protect democracy, human rights, and the rule of law during the rapid social transformation occurring as a result of recent advances in technology. See id. at para. 1.
Recommendations
The Declaration makes five specific recommendations about how States should address the risks to democracy and human rights posed by machine learning. The first is that States should attend to the interdisciplinary nature of these risks and ensure that they do not fall between the mandates of existing administrative agencies. See id. at para. 9(a). Second, States should consider enacting laws that regulate the collection and use of personal data and that go beyond preexisting privacy protections. See id. at para. 9(b). In addition, States should specifically legislate against forms of “illegitimate interference,” which would include forms of persuasion and interference that compromise democratic principles. See id. at para. 9(d). Lastly, the Committee asserts that States should both promote public debate on what counts as permissible persuasion and empower users of digital technologies by promoting digital literacy, including awareness of data collection practices. See id. at paras. 9(c), 9(e).
Recent Developments under Human Rights Law Concerning AI
International human rights experts and civil society have recently sought to address the human rights impact of machine learning and algorithmic decision-making. [OHCHR; UNDP; AccessNow; Data&Society] In a groundbreaking 2018 report, the Special Rapporteur on freedom of opinion and expression found that both States and private companies have obligations under international human rights law that constrain their use of machine learning technologies. See Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, UN Doc. A/73/348, 29 August 2018, para. 61. Drawing primarily on the rights to freedom of opinion, expression, privacy, non-discrimination, and an effective remedy, the Special Rapporteur found that States should ensure that human rights are at the core of private sector design of these technologies. See id. at para. 62. This requires that States update and enforce data protection regulations with respect to machine learning technologies. See id. at para. 63. Moreover, States should enact policies that foster a diverse and pluralistic information environment, which may include regulating technology monopolies in the area of artificial intelligence. See id. at para. 64.
With respect to the responsibility of private companies, the Special Rapporteur found that companies should create and apply guidelines for the deployment of artificial intelligence that are grounded in human rights principles. See id. at para. 65. Moreover, companies should be transparent about, and open to audits of, their use of artificial intelligence. See id. at paras. 66, 69. Companies should also prevent discrimination at both the input and output levels of artificial intelligence systems. See id. at para. 67.
Additional Information
The Council of Europe, based in Strasbourg, France, is an intergovernmental organization with 47 Member States. The Committee of Ministers, a decision-making body charged with monitoring the implementation of several human rights treaties, including the European Convention on Human Rights, is composed of the ministers of foreign affairs of the Member States. Additionally, the European Court of Human Rights, the European Committee of Social Rights, and the Commissioner for Human Rights all operate under the auspices of the Council of Europe.
For more information about the European Human Rights System or the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, visit IJRC’s Online Resource Hub. To stay up-to-date on international human rights law news, visit IJRC’s News Room or subscribe to the IJRC Daily.