
In chapter 6, we turn to social media, where so-called recommendation algorithms are used to create the personalized feeds we scroll through. AI is also used to determine which content violates policies and must be taken down, a process called content moderation. The chapter is primarily about content moderation AI, with a brief discussion of recommendation algorithms. The central question we examine is whether AI can remove harmful content such as hate speech from social media without curbing free expression, as tech companies have often promised.
In this debate, much attention has been paid to the inevitable errors of enforcement, such as a piece of content being mistakenly flagged as unacceptable and taken down. But even if these errors could be eliminated, the more fundamental issue is that platforms hold the power to regulate speech in the first place, with little accountability. We lack a democratic process for deciding the rules by which online speech should be governed and for striking a balance between values such as free speech and safety. Given this reality, AI will remain powerless to ease our frustrations with social media.