Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/235968 
Year of Publication: 
2021
Citation: 
[Journal:] Internet Policy Review [ISSN:] 2197-6775 [Volume:] 10 [Issue:] 2 [Publisher:] Alexander von Humboldt Institute for Internet and Society [Place:] Berlin [Year:] 2021 [Pages:] 1-29
Publisher: 
Alexander von Humboldt Institute for Internet and Society, Berlin
Abstract: 
Policymakers have recently expressed concerns over the role of recommendation algorithms in forming "filter bubbles". This is a particularly pressing concern in the context of extremist content online: these algorithms may promote extremist content at the expense of more moderate voices. In this article, we make two contributions to this debate. Firstly, we provide a novel empirical analysis of three platforms' recommendation systems when they interact with far-right content. We find that one platform, YouTube, does amplify extreme and fringe content, while two, Reddit and Gab, do not. Secondly, we situate these findings within the regulatory debate. There are currently few policy instruments for dealing with algorithmic amplification, and those that do exist largely focus on transparency. We argue that policymakers have yet to fully understand the problems inherent in "de-amplifying" legal, borderline content, and that a co-regulatory approach may offer a route towards tackling many of these challenges.
Subjects: 
Filter bubble
Online radicalisation
Algorithms
Extremism
Regulation
Persistent Identifier of the first edition: 
Creative Commons License: 
cc-by
Document Type: 
Article
