Artificial Intelligence, algorithms and freedom of expression

Bibliographic Details
Main authors: Larrondo, Manuel Ernesto; Grandi, Nicolas Mario
Format: Journal article
Language: Spanish, English
Published: Universidad Politécnica Salesiana (Ecuador), 2021
Online access: https://universitas.ups.edu.ec/index.php/universitas/article/view/34.2021.08
Description
Summary: Artificial Intelligence can be presented as an ally in moderating violent content or apparent news, but its use without human intervention that contextualizes and adequately interprets the expression leaves open the risk of prior censorship. This is currently under debate in the international arena: because Artificial Intelligence lacks the ability to contextualize what it moderates, it functions more as a tool for indiscriminate prior censorship than as moderation that protects freedom of expression. Therefore, after analyzing international legislation, reports from international organizations, and the terms and conditions of Twitter and Facebook, we offer five proposals aimed at improving algorithmic content moderation. First, we propose that States bring their internal laws into line with international standards of freedom of expression. We also urge them to develop public policies and enact legislation that protect the working conditions of the human supervisors who review automated content-removal decisions. For their part, social networks must present clear and consistent terms and conditions; adopt internal transparency and accountability policies about how AI operates in the dissemination and removal of online content; and, finally, carry out prior assessments of the impact of their AI on human rights.