When Racial Violence Goes Viral

Algorithmic racism causes trauma and mental health challenges for Black youth, while platforms profit.


First it was Eric Garner. Then Sandra Bland. Then Philando Castile.

I, like many other people, accidentally came across these gruesome videos of Black people dead and dying because social media platforms are set to autoplay popular video content without censorship or content warnings. As a graduate student, I regularly turned to social media to stand up against racial injustice, get involved in community activism, and learn more about Black histories that I wasn't taught in the standard K–12 curriculum. Social media offered me a wealth of resources, a sense of community, and a platform where my voice could be heard.

But seeing these videos was deeply traumatizing, and it drastically changed my experience with social media. For weeks on end, my social media timelines were rife with the gruesome images, and to make matters worse, the commentary that followed often included racial slurs, insensitive jokes, and accusations that the victims deserved to die.

Try as I might, I could not escape images of Black people being killed by police, and these images eventually took a profound emotional and psychological toll on me. As I struggled to manage symptoms of digitally mediated PTSD, I began to worry about the impact these images were having on young people, particularly Black young people, who spend more time online than any other group.

Concerned about the mental health impacts, I decided to interview Black teens about their experiences with social media, paying close attention to the social, emotional, and educational consequences of racially traumatizing content. New research from Common Sense reveals that roughly two-thirds of girls of color who use TikTok (66%) and Instagram (64%) report having ever come across racist content on these platforms, with one in five saying they come across it daily or more (20% on TikTok; 21% on Instagram).

After interviewing nearly 20 Black girls from the United States and Canada, I found my suspicions confirmed: Viewing viral videos of police killings was linked to a number of mental health concerns, including anxiety, depression, insomnia, numbness, desensitization, and chronic worry. Not surprisingly, these PTSD-like symptoms affect their academic pursuits, with Black students reporting increased difficulty studying, doing homework, attending class, and completing college applications following a high-profile police killing.

Research on viral police killings has done more than expose the social, emotional, and educational harms of racially traumatizing digital content. It has also shed light on the racist underpinnings of content virality: Gruesome images of Black people dead and dying go viral not because they bring about racial justice or political reform, but because they are highly profitable.

According to Google Trends, searches for images of Black people being killed by police are among the most popular queries in the company's history. When videos of Black people dead and dying can garner nearly 2.4 million clicks in 24 hours, and the average "cost per click" for related content reaches $6, the virality of Black death is not only incentivized but nearly guaranteed.

While videos of Black people being choked, shot, and killed can trend across platforms for hours, even days, on end with no algorithmic intervention, posts that call for systemic reform, express racialized grief or anguish, or call attention to issues of race and racism in America are regularly flagged and deleted as hate speech by biased content moderation systems. Leaked training documents for Facebook's content moderation program highlight how internet systems are designed in ways that harm Black young people, noting that while "White men" are a protected category, "Black children" are not.

This means that comments on a post admonishing White men for murdering Black people would immediately be flagged as hate speech, while racial slurs and threats aimed at Black children would be upheld as "legitimate political expression." In the context of my study, these algorithmic biases left Black young people overexposed and hypervulnerable to racially violent images, comments, and direct messages, while simultaneously punishing them for grieving or daring to speak out.

These and other revelations highlight the profound effects that algorithmic racism has on youth mental health and educational well-being, and the subsequent need to foster young people's critical algorithmic literacies. We must prepare young people to use social media and digital technologies in ways that protect their social, emotional, and mental health, and equip them to "fight back" against digitally mediated harms and algorithmic racism.

Tiera Tanksley, Ph.D.

Dr. Tiera Tanksley is an assistant professor of equity, diversity, and justice in education at the University of Colorado Boulder. Her scholarship, which theorizes a critical race technology theory (CRTT) in education, extends conventional education research to include sociotechnical and technostructural analyses of artificially intelligent technologies. Specifically, Dr. Tanksley's research examines anti-Blackness as "the default setting" of digital technology, as well as the socioemotional and academic consequences of algorithmic racism in the lives and schooling experiences of Black youth. Her work simultaneously recognizes Black youth as digital activists and civic agitators, exploring the complex ways they subvert, resist, and rewrite racially biased technologies to produce more just and joyous digital experiences for communities of color across the diaspora.