Upload-Filters: Bypassing Classical Concepts of Censorship?

Authors

  • Amélie Pia Heldt

Keywords

Freedom of expression, censorship, democratic legitimation, prior restraint, upload-filters

Abstract

Protecting human rights in the context of automated decision-making might not be limited to the relationship between intermediaries and their users. In fact, to adequately address human rights issues vis-à-vis social media platforms, we need to include the state as an actor as well. In the German and European human rights frameworks, fundamental rights are in principle only applicable vertically, that is, between the state and the citizen. Where does that leave the right to freedom of expression when user-generated content is deleted by intermediaries on the basis of an agreement with a public authority? This question must be addressed in light of the use of artificial intelligence to moderate online speech and its regulatory framework, which has so far been lacking. When states create incentives for private actors to delete user-generated content proactively, is it still accurate to examine only the relationship between platforms and users? Are we facing an expansion of collateral censorship? Does the use of soft-law instruments, such as codes of conduct, enhance the protection of third parties, or is it rather an opaque practice that tends to be conflated with policy laundering? This paper analyses the different layers of platforms' use of artificial intelligence when it is triggered by a non-regulatory mode of governance. Given the ongoing struggle in content moderation to balance freedom of speech against other legal interests, it is necessary to analyse whether intelligent technologies can meet the requirements of freedom of speech and information to a sufficient degree.

Published

2019-05-05
