Artificial Intelligence, algorithms and freedom of expression
Artificial Intelligence can be presented as an ally when moderating violent content or fake news, but its use without human intervention to contextualize and adequately interpret the expression leaves open the risk of prior censorship. At present this is under debate in the international arena.
Main authors: | Larrondo, Manuel Ernesto; Grandi, Nicolas Mario |
---|---|
Format: | Journals |
Language: | Spanish; English |
Published: | Universidad Politécnica Salesiana (Ecuador), 2021 |
Online access: | https://universitas.ups.edu.ec/index.php/universitas/article/view/34.2021.08 |
_version_ | 1782340257740488704 |
---|---|
author | Larrondo, Manuel Ernesto; Grandi, Nicolas Mario |
author_facet | Larrondo, Manuel Ernesto; Grandi, Nicolas Mario |
author_sort | Larrondo, Manuel Ernesto |
collection | Revista |
description | Artificial Intelligence can be presented as an ally when moderating violent content or fake news, but its use without human intervention to contextualize and adequately interpret the expression leaves open the risk of prior censorship.
At present this is under debate in the international arena given that, since Artificial Intelligence lacks the ability to contextualize what it moderates, it operates more as a tool of indiscriminate prior censorship than as moderation that protects freedom of expression.
Therefore, after analyzing international legislation, reports from international organizations, and the terms and conditions of Twitter and Facebook, we offer five proposals aimed at improving algorithmic content moderation.
First, we propose that States harmonize their domestic laws with international standards of freedom of expression. We also urge them to develop public policies that implement legislation protecting the working conditions of the human supervisors of automated content-removal decisions.
For their part, social networks must present clear and consistent terms and conditions, adopt internal policies of transparency and accountability regarding how AI operates in the dissemination and removal of online content, and, finally, carry out prior assessments of the impact of their AI on human rights. |
format | Journals |
id | oai:revistas.ups.edu.ec:article-4535 |
institution | Universitas |
language | Spanish; English |
publishDate | 2021 |
publisher | Universidad Politécnica Salesiana (Ecuador) |
record_format | ojs |
spelling | oai:revistas.ups.edu.ec:article-4535 2021-03-09T15:14:24Z Artificial Intelligence, algorithms and freedom of expression Inteligencia Artificial, algoritmos y libertad de expresión Larrondo, Manuel Ernesto; Grandi, Nicolas Mario Keywords: Artificial Intelligence; automatic content moderation; fake news; freedom of expression; social networks Universidad Politécnica Salesiana (Ecuador) 2021-02-22 info:eu-repo/semantics/article info:eu-repo/semantics/publishedVersion application/pdf text/html application/zip https://universitas.ups.edu.ec/index.php/universitas/article/view/34.2021.08 10.17163/uni.n34.2021.08 Universitas; No. 34 (2021): (March 2021 - August 2021): Fake News, Communication and Politics; 177-194 1390-8634 1390-3837 10.17163/uni.n34 spa eng https://universitas.ups.edu.ec/index.php/universitas/article/view/34.2021.08/4344 https://universitas.ups.edu.ec/index.php/universitas/article/view/34.2021.08/4383 https://universitas.ups.edu.ec/index.php/universitas/article/view/34.2021.08/4384 https://universitas.ups.edu.ec/index.php/universitas/article/view/34.2021.08/4397 Copyright 2021 Universidad Politécnica Salesiana |
spellingShingle | Larrondo, Manuel Ernesto; Grandi, Nicolas Mario Artificial Intelligence, algorithms and freedom of expression |
title | Artificial Intelligence, algorithms and freedom of expression |
title_full | Artificial Intelligence, algorithms and freedom of expression |
title_fullStr | Artificial Intelligence, algorithms and freedom of expression |
title_full_unstemmed | Artificial Intelligence, algorithms and freedom of expression |
title_short | Artificial Intelligence, algorithms and freedom of expression |
title_sort | artificial intelligence, algorithms and freedom of expression |
url | https://universitas.ups.edu.ec/index.php/universitas/article/view/34.2021.08 |