
Emotional Labour Offsetting: Unpacking the racial capitalist fauxtomation behind algorithmic content moderation

Author: Kat Zhou, Guest Blog

In this blog, designer Kat Zhou recounts how her experience of online hate speech led her to delve into the world of content moderation and ‘fauxtomation’, and how content moderators in the Global South are fighting back.

15 September 2023

In October of 2022, I experienced what it was like to get swept up in a misinformation maelstrom on social media. I had tweeted about my own encounters with racism as an Asian-American woman, not expecting my thread to go viral. While my thread was initially met with overwhelming support, the solidarity was soon overshadowed. As my words gained traction, hate speech proliferated across multiple social media platforms in response. Reflecting on the entire ordeal, what truly stood out to me was the emotional trauma I felt at how inadequately these platforms curtailed blatant disinformation. Many of the posts that were eventually removed were taken down only because concerned friends, family, and netizens manually reported them. It was through my own terrifying encounter with the insufficiencies of online content moderation that I began to wonder about its operationalization. What safeguards were in place to moderate problematic content?

Content moderation, defined by Roberts (2016) as the “organized practice of screening user-generated content posted to Internet sites, social media and other online outlets, in order to determine the appropriateness of the content,” is a laborious task that involves continuously watching and flagging images and videos depicting self-harm, death, sexual violence, racism, and other problematic visuals (p. 35). It can be an incredibly isolating, underpaid, and traumatic job, with content moderators reporting increased depression as a result of the work (Steiger et al., 2021). Surveying the available reportage on the topic, I found that while automated processes did exist, humans were still heavily involved, and their work as moderators often left them traumatized. In 2018, Selena Scola, a white American content moderator based in California, sued Facebook for exposure to “highly toxic, unsafe, and injurious content during her employment as a content moderator” (Koebler & Cox, 2018). Facebook settled the case for $52 million (Scola, n.d.). Four years later, Facebook (now called Meta) was sued once more by another content moderator. This time, the lawsuit came from Daniel Motaung, a Black South African content moderator employed in Kenya. Motaung also sued Sama, the Meta contractor at which he was directly employed (Perrigo, 2023).

The period between these two lawsuits was ripe with new developments in content moderation. Social media platforms saw an international proliferation of user-generated content (UGC), catalyzing a shift towards algorithmic content moderation. In an effort to cut costs, scale to growing online communities, and reduce their reliance on human content moderators, many companies pivoted away from purely manual content moderation and began incorporating algorithmic tools.

Gorwa et al. (2020) define algorithmic content moderation as “systems that classify user-generated content based on either matching or prediction, leading to a decision and governance outcome (e.g. removal, geoblocking, account takedown)” (p. 3). This foray into automated alternatives for content moderation would primarily manifest in two ways: companies would either develop and train their own in-house algorithmic tools, or they would contract third-party companies to handle algorithmic content moderation. Third-party intermediary artificial intelligence (AI) companies offering algorithmic content moderation have proliferated in the last few years.
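To make the matching/prediction distinction in Gorwa et al.’s definition concrete, here is a minimal, purely illustrative sketch in Python. The hash-based blocklist, the toxicity_score input, and the threshold values are assumptions for illustration only, not a description of any platform’s actual pipeline; real systems typically use perceptual hashing and far more elaborate models.

```python
import hashlib

# Hypothetical blocklist of fingerprints of content already judged violating.
# Real systems use perceptual hashes (e.g. PhotoDNA); SHA-256 here is only illustrative.
KNOWN_VIOLATING_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def moderate(content: bytes, toxicity_score: float) -> str:
    """Return a governance outcome for a piece of user-generated content.

    Matching: compare a fingerprint of the content against known violating items.
    Prediction: fall back on a classifier's toxicity score (assumed to be
    produced upstream by some model) and apply illustrative thresholds.
    """
    fingerprint = hashlib.sha256(content).hexdigest()

    # 1. Matching: exact re-uploads of known violating content are removed outright.
    if fingerprint in KNOWN_VIOLATING_HASHES:
        return "remove"

    # 2. Prediction: route by the model's confidence that the content violates policy.
    if toxicity_score >= 0.9:
        return "remove"
    if toxicity_score >= 0.5:
        return "send_to_human_moderator"  # the human labor this piece is about
    return "allow"

# Example: borderline content is escalated to a human rather than auto-removed.
print(moderate(b"some user post", toxicity_score=0.6))
```

Even in this toy version, the middle branch makes visible where human moderators, and the data labelers whose judgments train the classifier in the first place, remain indispensable.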

While these algorithmic content moderation companies tout their abilities to replace human content moderators with AI, their claims often obscure the amount of human labor that is still needed to train their models. By constructing such fantastical representations of the data labeler roles they outsource to workers, these companies are not only erasing the trauma of performing commercialized care work for these social media platforms, but also reinforcing the supposed inevitability of an automated future for moderation. This type of obfuscation has been described by Astra Taylor (2018) as fauxtomation, a term that captures the gap between the corporate marketing of automated tools and the realities of what those tools can accomplish.

This erasure is dangerous, especially when one considers the psychologically traumatic nature of the work itself. Increasingly, moderating content and labeling problematic UGC have been classified as forms of emotional labor, a term originally coined by Arlie Hochschild (1983) to describe the regulation of one’s own emotions in order to maintain a particular emotional state for others. Initially used to describe face-to-face roles (such as a cashier at a grocery store), the term has gradually expanded and shifted, gaining traction in industries beyond the ones mentioned in Hochschild’s writing. The psychological side effects of this type of digital emotional labor cannot be ignored. Paired with stringent and unrelenting quotas on the job, it is no surprise that symptoms such as burnout and vicarious trauma (a severe form of post-traumatic stress disorder) flourish amongst content moderators and data labelers (Steiger et al., 2021).

Furthermore, there is a spatial distinction between where these algorithmic content moderation companies are headquartered and where they recruit data labelers and content moderators. These companies are primarily headquartered in the Global North, while many of the data labelers they contract are sourced from countries in the Global South, such as the Philippines and Kenya.1 This racialized and spatial power imbalance echoes historical projects of imperial exploitation of material resources and human labor. Racial capitalism, the intersection between our systems of exploitation and our societal constructions of race, provides a helpful lens through which we can trace the flow of care work that is provided and received (Robinson, 1983). It is via the logic of racial capitalism that these Global North companies glorify and legitimize their offsetting of emotional labor onto underpaid workers in the Global South.

Problematizing the fauxtomation of Global North technology companies is crucial for illuminating the tensions and contradictions behind the phenomenon of these companies offsetting the emotional labor of content moderation onto workers in the Global South. I did not realize how much I took content moderation for granted until my own traumatic encounter with abuse on the Internet. While my experience as a victim of digital harassment was horrible, I certainly do not have to consume hours of the most violent content imaginable on a daily basis. One wonders: what can we do to improve this process – for end consumers like myself and the content moderators who protect us? Content moderation is not the only way to mitigate the overwhelming deluge of UGC that exists today on the Internet. Bemoaning the inclinations of our capitalist marketplace, Roberts (2019) notes that “one obvious solution might seem to be to limit the amount of user-generated content being solicited by social media platforms…[but] the content is just too valuable a commodity to the platforms” (p. 208).

Thus, content moderation work remains a necessary evil. However hard these companies market their AI programs’ capacity to take on the care work of content moderation, as the technology currently stands there remains a dependence on human emotional labor. It is this dependence on human emotional labor that provided the starting point for my dissertation.

If the premise holds that content moderation is absolutely necessary, what can we collectively do to confront the discourse employed by Global North corporations and mitigate the exploitation of workers in the Global South? Although I do not have a panacea, I recognize there are many places from which to draw inspiration. One particularly galvanizing event is currently unfolding in Nairobi. On May 1, 2023, TIME reported that over 150 workers in the Kenyan capital had established the African Content Moderators Union, setting a historic precedent for tech workers in the Global South (Perrigo, 2023). Four days later, a group of content moderators in Nairobi led a passionate protest outside the Sama office, chanting for Wendy Gonzales, the CEO of Sama, to meet them for discussions. In a video of the protest posted to Twitter by Siasa Place (@siasaplace), a local NGO, workers can be heard demanding their money. The caption accompanying the video is moving: “We are not machines, we are human beings” (Siasa Place, 2023). In an industry that enforces the instrumentalization of workers in the Global South, this eight-word declaration underscores the needed resistance to the dehumanizing discourse issued by algorithmic content moderation companies in the Global North.

1 I use the terms “Global North” and “Global South” while acknowledging that they are imperfect conceptual apparatuses for describing not only the complex, geopolitical webs of social media companies but also the locales where content moderators work to benefit said companies (Levander & Mignolo, 2011). For example, while China might have qualified as a “Global South” country throughout most of the 20th century, does its current accumulation of capital and technological advancement disqualify it from such an identifier? Chinese companies such as ByteDance, the owner of TikTok, depend on content moderation efforts sourced from regions in Latin America and Africa. However, because, to my knowledge, no terminology exists that sufficiently encompasses both the nations that churn out massive technology companies and the nations to which content moderation is outsourced, I will use the terms “Global North” and “Global South,” albeit with the caveat that they are imperfect constructs.

References:

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1). https://doi.org/10.1177/2053951719897945

Hochschild, A. R. (1983). The Managed Heart: Commercialization of Human Feeling. Berkeley, CA: University of California Press.

Koebler, J., & Cox, J. (2018, September 24). Content Moderator Sues Facebook, Says Job Gave Her PTSD. Vice. https://www.vice.com/en/article/zm5mw5/facebook-content-moderation-lawsuit-ptsd

Levander, C., & Mignolo, W. (2011). Introduction: The Global South and World Dis/Order. The Global South, 5(1), 1-11. https://doi.org/10.2979/globalsouth.5.1.1

Perrigo, B. (2023, May 1). 150 African Workers for AI Companies Vote to Unionize. Time. https://time.com/6275995/chatgpt-facebook-african-workers-union/

Roberts, S. (2016). Commercial Content Moderation: Digital Laborers’ Dirty Work. Media Studies Publications. https://ir.lib.uwo.ca/commpub/12

Roberts, S. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. Yale University Press.

Robinson, C. (1983). Black Marxism: the Making of the Black Radical Tradition. Penguin Books.

Scola, S. (n.d.). Selena Scola, et al. v. Facebook, Inc. Retrieved April 22, 2023, from https://selenascola.com/overview-scola-v-facebook

Siasa Place [@siasaplace]. (2023, May 5). They are demanding to see Wendy Gonzales Samasource CEO. “We are not machines, we are human beings.” https://t.co/SYAzVxPGOA [Tweet]. Twitter. https://twitter.com/siasaplace/status/1654384909715161089

Steiger, M., Bharucha, T. J., Venkatagiri, S., Riedl, M. J., & Lease, M. (2021). The Psychological Well-Being of Content Moderators: The Emotional Labor of Commercial Moderation and Avenues for Improving Support. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1-14. https://doi.org/10.1145/3411764.3445092

Taylor, A. (2018, August 1). The Automation Charade. Logic(s) Magazine, 5. https://www.logicmag.io/failure/the-automation-charade/

Kat Zhou (she/her) is the creator of the <Design Ethically> project, which started out as a framework for applying ethics to the design process and has now grown into a toolkit of speculative activities that help teams forecast the consequences of their products.

Through her work with <Design Ethically>, she has spoken at events hosted by the European Parliament (2022) and the US Federal Trade Commission (2021), as well as at an assortment of tech conferences. Kat has been quoted in the BBC, WIRED, Fast Company, Protocol, and Tech Policy Press.

Outside of <Design Ethically>, Kat has worked as a designer in the industry for years. She also recently completed a master’s in AI Ethics and Society at the University of Cambridge, and this blog is an adapted excerpt from her dissertation submitted for that program.
