Adio Dinika is a research fellow at the Distributed Artificial Intelligence Research Institute (DAIR).
A blurred screen flashes before our eyes, accompanied by a deceptively innocuous “sensitive content” message with a crossed-out eye emoji. The warning’s soft design and playful icon belie the gravity of what lies beneath. With a casual flick of our fingers, we scroll past, our feeds refreshing with cat videos and vacation photos. But in the shadows of our digital utopia, a different reality unfolds.
In cramped, poorly lit warehouses around the world, an army of invisible workers hunches over flickering screens. Their eyes strain, fingers hovering over keyboards, as they confront humanity’s darkest impulses — some darker than their wildest nightmares. They cannot look away. They cannot scroll past. For these workers, there is no trigger warning.
Tech giants trumpet the power of AI in content moderation, painting pictures of omniscient algorithms keeping our digital spaces safe. They suggest a utopian vision of machines tirelessly sifting through digital detritus, protecting us from the worst of the web.
But this is a comforting lie.
The reality is far more human and far more troubling. This narrative serves multiple purposes: it assuages user concerns about online safety, justifies the enormous profits these companies reap and deflects responsibility — after all, how can you blame an algorithm?
However, current AI systems are nowhere near capable of understanding the nuances of human communication, let alone making complex ethical judgments about content. Sarcasm, cultural context and subtle forms of hate speech often slip through the cracks of even the most sophisticated algorithms.
And while automated content moderation can, to a degree, be implemented for more mainstream languages, content in low-resourced languages typically requires recruiting content moderators from the countries where those languages are spoken, precisely for their language abilities.
Behind almost every AI decision, a human is tasked with making the final call and bearing the burden of judgment — not some silicon-based savior. AI is often a crude first filter. Take Amazon’s supposedly automated stores, for instance: The Information reported that, instead of advanced AI systems, Amazon relied on around 1,000 workers, primarily based in India, to manually track customers and record their purchases.
Amazon told the AP and others that it did hire workers to watch videos to validate purchases, but denied that it had hired 1,000 of them or that workers monitored shoppers live. Similarly, Facebook’s “AI-powered” M assistant was more human than software. And so, the illusion of AI capability is often maintained at the cost of hidden human labor.
“We were the janitors of the internet,” Botlhokwa Ranta, 29, a former content moderator from South Africa now living in Nairobi, Kenya, told me two years after her Sama contract was terminated. Speaking from her home, her voice was heavy as she continued. “We cleaned up the mess so everyone else can enjoy a sanitized online world.”
And so, while we sleep, many toil. While we share, these workers shield. While we connect, they confront the disconnect between our curated online experience and the reality of raw, unfiltered human nature.
The glossy veneer of the tech industry conceals a raw, human reality that spans the globe. From the outskirts of Nairobi to the crowded apartments of Manila, from Syrian refugee communities in Lebanon to immigrant communities in Germany and the call centers of Casablanca, a vast network of unseen workers powers our digital world. Their stories weave a tapestry of trauma, exploitation and resilience, one that reveals the true cost of our AI-driven future.
We may marvel at the chatbots and automated systems that Sam Altman and his ilk extol, but this belies the urgent question below the surface: Will our godlike AI systems serve merely as a smokescreen, concealing a harrowing human reality?
In our relentless pursuit of technological advancement, we must ask: What price are we willing to pay for our digital convenience? And in this race towards an automated future, are we leaving our humanity in the dust?
Abrha’s Story
In February 2021, Abrha’s world shattered as his town in Tigray came under fire from both Ethiopian and Eritrean defense forces in the Tigray war, one of the deadliest modern-day conflicts, which a report by the U.S.-based New Lines Institute has rightly described as a genocide.
With just a small backpack and whatever cash he could grab, Abrha, then 26, fled to Nairobi, Kenya, leaving behind a thriving business, family and friends who couldn’t escape. As Tigray suffered under a more than two-year internet shutdown imposed by Ethiopia’s government, he spent months in agonizing uncertainty about his family’s fate.
Then, in a cruel twist of irony, Abrha was recruited by the Kenyan branch of Sama, a San Francisco-based company that presents itself as an ethical AI training data provider, to moderate content mostly originating from that same conflict. The company needed people fluent in Tigrinya and Amharic, the languages of the conflict he had just fled.
Five days a week, eight hours a day, Abrha sat in the Sama warehouse in Nairobi, moderating content from the very conflict he had escaped — sometimes even a bombing in his hometown. Each day brought a deluge of hate speech directed at Tigrayans, and the dread that the next dead body might be his father’s, the next rape victim his sister.
An ethical dilemma also weighed heavily on him: How could he remain neutral in a conflict where he and his people were the victims? How could he label retaliatory content generated by his people as hate speech? The pressure became unbearable.
Though Abrha once abhorred smoking, he became a chain smoker who always had a cigarette in hand as he navigated this digital minefield of trauma — each puff a futile attempt to soothe the pain of his people’s suffering.
The horror of his work reached a devastating peak when Abrha came across his cousin’s body while moderating content. It was a brutal reminder of the very real and personal stakes of the conflict he was being forced to witness daily through a computer screen.
After he and other content moderators had their contracts terminated by Sama, Abrha found himself in a dire situation. Unable to secure another job in Nairobi, he was left to grapple with his trauma alone, without the support or resources he desperately needed. The weight of his experiences as a content moderator, coupled with the lingering effects of fleeing conflict, took a heavy toll on his mental health and financial stability.
Despite the situation in Tigray remaining precarious in the aftermath of the war, Abrha felt he had no choice but to return to his homeland. He made the difficult journey back a few months ago, hoping to rebuild his life from the ashes of conflict and exploitation. His story serves as a stark reminder of the long-lasting impact of content moderation work and the vulnerability of those who perform it, often far from home and support systems.
Kings’ Nightmarish Reality
Growing up in Kibera, one of the world’s largest slums, Kings, 34, who insisted Noema use only his first name so he could freely discuss personal health matters, dreamed of a better life for his young family. Like many young people raised in the Nairobi slum, he was unemployed.
When Sama came calling, Kings saw it as his chance to break into the tech world. Starting as a data annotator, labeling and categorizing data to train AI systems, he was thrilled despite the small pay. When the company offered to promote him to content moderator with a slight pay increase, he jumped at the opportunity, unaware of the implications of the decision.
Kings soon found himself confronting content that haunted him day and night. The worst was what they coded as CSAM, or child sexual abuse material. Day after day, he sifted through texts, pictures and videos vividly depicting the violation of children. “I saw videos of children’s vaginas tearing from the abuse,” he recounted, his voice hollow. “Each time I closed my eyes at home, that’s all I could see.”
The trauma infected every aspect of Kings’ life. At the age of 32, he had trouble being intimate with his wife; images of abused children plagued his mind. The company’s mental health support was grossly inadequate, Kings said. Counselors were seemingly ill-equipped to handle the depth of his trauma.
Eventually, the strain became too much. Kings’ wife, unable to cope with the sexual deprivation and the changes in his behavior, left him. By the time Kings left Sama, he was a shell of his former self — broken both mentally and financially — his dreams of a better life shattered by a job he thought would be his salvation.
Losing Faith In Humanity
Ranta’s story begins in the small South African township of Diepkloof, where life moves in predictable cycles. A mother at 21, she was 27 years old when we spoke, and she reflected on the harsh reality faced by many young women in her community: six out of ten girls become pregnant by 21, entering a world where job prospects are already scarce and single motherhood makes them even more elusive.
When Sama came recruiting, promising a better life for her and her child, Ranta saw it as her ticket to a brighter future. She applied and soon found herself in Nairobi, far from everything familiar. The promises quickly unraveled upon her arrival. Support for reuniting with her child, whom she had left behind in South Africa, never materialized as promised.
When she inquired about this, company representatives told her that they could no longer cover the full cost as initially promised, and offered only partial support, to be deducted from her pay. Attempts to get an official response from Sama were unsuccessful; unofficial sources cited the company’s ongoing legal proceedings with workers as the reason.
When Ranta’s sister died, she said her boss gave her a few days off but wouldn’t let her switch to less traumatic content streams when she returned to moderating content — even though there was an opening. It was as if they expected her and other workers to operate like machines, capable of switching off one program and booting up another at will.
Things came to a head during a complicated pregnancy. She wasn’t allowed to stay on bed rest as her doctor ordered, and then, just four months after she gave birth to her second daughter, the infant was hospitalized.
She then learned that the company had stopped making health insurance contributions shortly after she started working, despite continued deductions from her paycheck. Now she was saddled with bills she couldn’t afford to pay.
Ranta’s role involved moderating content related to female sexual abuse, xenophobia, hate speech, racism and domestic violence, mostly from her native South Africa and Nigeria. While she appreciated the importance of her job, she lamented the lack of adequate psychological counseling, training and support.
Ranta found herself losing faith in humanity. “I saw things that I never thought possible,” she told me. “How can human beings claim to be the intelligent species after what I’ve seen?”
Sama’s CEO has expressed regret over signing the content moderation contract with Meta. A Meta spokesperson said they require all partner companies to provide “24/7 on-site support with trained practitioners, an on-call service, and access to private healthcare from the first day of employment.”
The spokesperson also said Meta offered “technical solutions to limit exposure to graphic material as much as possible.” However, the experiences shared by workers like Abrha, Kings and Ranta paint a starkly different picture, suggesting a significant gap between Meta’s stated policies and the lived realities of content moderators.
Global Perspectives: Similar Struggles Across Borders
The experiences of Abrha, Kings and Ranta are not isolated incidents. In Kenya alone, I spoke to more than 20 workers who shared similar stories. Across the globe, in countries like Germany, Venezuela, Colombia, Syria and Lebanon, data workers we spoke to as part of our Data Workers Inquiry project told us they faced similar challenges.
In Germany, despite all its programs to help new arrivals, immigrants with uncertain status still end up in roles like Abrha’s, reviewing content from their home countries. These workers’ precarious visa situations add a layer of vulnerability. Many told us that despite facing exploitation, they felt unable to speak out publicly. Because their employment is tied to their visas, the risk of being fired and deported looms.
In Venezuela and Colombia, economic instability drives many to seek work in the data industry. While not always directly involved in content moderation, data annotators there often work with challenging datasets that can negatively impact their mental well-being.
Reality often doesn’t match what was advertised. Even if data workers in Syria and Syrian refugees in Lebanon aren’t moderating content, their work often intersects with digital remnants of the conflict they’ve experienced or fled, adding a layer of emotional strain to their already demanding jobs.
The widespread use of non-disclosure agreements (NDAs) is yet another layer in the uneven power dynamic involving such vulnerable individuals. These agreements, required as part of workers’ employment contracts, silence workers and keep their struggles hidden from public view.
The implied threat of these NDAs often extends beyond the period of employment, casting a long shadow over the workers’ lives even after they leave their jobs. Many workers who spoke to us insisted on anonymity out of fear of legal repercussions.
These workers, in places like Bogotá, Berlin, Caracas and Damascus, reported feeling abandoned by the companies profiting off their labor. The so-called “wellness programs” offered by Sama were often ill-equipped to address the deep-seated trauma these workers were experiencing, employees told me.
Their stories make clear that behind the sleek facade of our digital world lies a hidden workforce that bears immense emotional burdens, so we don’t have to. Their experiences raise urgent questions about the ethical implications of data work and the human cost of maintaining our digital infrastructure. The global nature of this issue underscores a troubling truth: The exploitation of data workers is not a bug, it’s a systemic feature of the industry.
It’s a global web of struggle, spun by tech giants and maintained by the silence of those trapped within it, as documented by Mophat Okinyi and Richard Mathenge, former content moderators and now co-researchers in our Data Workers’ Inquiry project. The two have seen these patterns repeat across a slew of different companies in multiple countries. Their experiences, both as workers and now as advocates, underscore the global nature of this exploitation.
The Trauma Behind the Screen
Before I traveled to Kenya, I thought I understood the challenges data workers face through my conversations with some online. However, upon arrival, I was confronted with stories of individual and institutional depravity that left me with secondary trauma and nightmares for weeks. But for the data workers themselves, their trauma manifests in two primary ways: direct trauma from the job itself and systemic issues that compound the trauma.
1. Direct Trauma
Every day, content moderators are forced to confront the darkest corners of humanity. They wade through a toxic swamp of violence, hate speech, sexual abuse and graphic imagery.
This constant exposure to disturbing content takes a toll. “It goes beyond what makes people human,” Kings told me. “It’s like being forced to drink poison every day, knowing it’s killing you, but you can’t stop because it’s your job.” The images and videos linger after work, haunting their dreams and infiltrating their personal lives.
Many moderators report symptoms of post-traumatic stress and vicarious trauma: nightmares, flashbacks and severe anxiety are common. Some develop a deep-seated mistrust of the world around them, forever changed by the constant exposure to human cruelty. As one worker told me, “I came into this job believing in the basic goodness of people. Now, I’m not sure I believe in anything anymore. If people can do this, then what’s there to believe?”
When the shift ends, trauma follows these workers home. For Kings and Okinyi, like so many others, their relationships crumbled under the weight of what they saw but could not speak of. Children grow up with emotionally distant parents, partners become estranged, and the worker is left isolated in their pain.
Many moderators report a fundamental shift in their worldview. They become hypervigilant, seeing potential threats everywhere. Okinyi mentioned how one of his former colleagues had to move from the city to the less crowded countryside due to paranoia over potential outbursts of violence. In a zine she created for the Data Workers Inquiry about Sama’s female content moderators, one of Ranta’s interviewees spoke of how the job made her constantly question her worth and ability to mother her children.
2. Systemic Issues
Beyond the immediate trauma of the content itself, moderators face a barrage of systemic issues that exacerbate their suffering:
- Job Insecurity: Many moderators, especially those in precarious living situations like refugees or economic migrants, live in constant fear of losing their jobs. This fear often prevents them from speaking out about their working conditions or seeking help. Companies often exploit this vulnerability.
- Lack of Mental Health Support: While companies tout their wellness programs, the reality falls far short. As Kings experienced, the counseling provided is often inadequate, with therapists ill-equipped to handle the unique trauma of content moderation. Sessions are often brief and fail to address more underlying, deep-seated trauma.
- Unrealistic Performance Metrics: Moderators often must review hundreds of pieces of content per hour. This relentless pace leaves no time to process the disturbing material they’ve seen, forcing them to bottle up their emotions. The focus on quantity over quality not only affects the accuracy of moderation but also exacerbates the psychological toll of the work. As Abrha told me: “Imagine being expected to watch a video of someone being killed, and then immediately move on to the next post. There’s no time to breathe, let alone process what we’ve seen.”
- Constant Surveillance: As if the content itself weren’t stressful enough, moderators are constantly monitored. Practically every decision and every second of their shift is scrutinized, adding another layer of pressure to an already overwhelming job. This surveillance extends to bathroom breaks, idle time between tasks and even facial expressions while reviewing content. Supervisors monitor workers through computer tracking software, cameras and, in some cases, physical observation. They pay attention to facial expressions to gauge workers’ reactions and to ensure they maintain a level of detachment or “professionalism” while reviewing disturbing content. As a result, workers told me they felt they couldn’t even react naturally to the disturbing content they were viewing. Workers were given an hour of break time daily for all their extraneous needs (eating, stretching, using the bathroom); any additional time spent on those or other non-work activities was scrutinized and added to their shifts. Abrha also mentioned that workers had to put their phones in lockers, further isolating them and limiting their ability to communicate with the outside world during their shifts.
And the ripples extend beyond the family: Friends drift away, unable to relate to the moderator’s new, darker perspective on life; social interactions become strained, as workers struggle to engage in “normal” conversations after spending their days immersed in the worst of human behavior.
In essence, the trauma of content moderation reshapes entire family dynamics and social networks, creating a cycle of isolation and suffering that extends far beyond the individual.
Traumatizing Humans To Create “Intelligent” Systems
Perhaps the cruelest irony is that we’re traumatizing people to create the illusion of machine intelligence. The trauma inflicted on human moderators is justified by the promise of future AI systems that will not require human intervention. Yet, their development requires more human labor and often the sacrifice of workers’ mental health.
Moreover, the focus on AI development often diverts resources and attention from improving conditions for human workers. Companies invest billions in machine learning algorithms while neglecting the basic mental health needs of their human moderators.
The AI illusion distances users from the reality of content moderation, much like factory farming distances us from the treatment of egg-laying chickens. This collective willful ignorance allows exploitation to continue unchecked. The AI narrative is a smokescreen that obscures a deeply unethical labor practice that trades human well-being for a facade of technological progress.
Digital Workers Of The World Rise!
In the face of exploitation and trauma, data workers have not been passive. Across the globe, workers have attempted to unionize, but their efforts have often been hindered by various actors. In Kenya, workers formed the African Content Moderators Union, an ambitious effort to unite workers from different African countries.
Mathenge, who is also part of the union’s leadership, told me he believes he was dismissed from his role as a team lead due to his union activities. This retaliation sent a chilling message to other workers who were considering organizing.
The struggle for workers’ rights recently gained significant legal traction. On Sept. 20, a Kenyan court ruled that Meta could be sued there over the dismissal of dozens of content moderators by its contractor, Sama. The court upheld earlier rulings that Meta could face trial over these dismissals and could be sued in Kenya over alleged poor working conditions.
The latest ruling has potentially far-reaching implications for how the tech giant works with its content moderators globally. It also marks a significant step forward in the ongoing battle for fair treatment and recognition of data workers’ rights.
The obstacles extend beyond the company level. Organizations employ union-busting tactics, often firing workers who agitate for unionization, Mathenge said. In conversations with workers, journalists and civil society officials in the Kenyan digital labor space, I heard whispers of senior government officials demanding bribes to formally register the union, adding another layer of complexity to the unionization process.
Perhaps most bizarrely, according to an official from the youth-led civic organization Siasa Place, when workers in Kenya attempted to form their own union, they were instead told to join the postal and telecommunication union, a suggestion that ignores the vast differences between these industries and the unique challenges faced by today’s data workers.
Despite these setbacks, workers have continued to find innovative ways to organize and advocate for their rights. Okinyi, together with Mathenge and Kings, formed Techworker Community Africa, a non-governmental organization focused on lobbying against harmful tech practices like labor exploitation.
Other organizations, like Siasa Place, have also stepped up to help the workers, and digital rights lawyers like Mercy Mutemi have petitioned the Kenyan parliament to investigate working conditions at AI firms.
A Path To Ethical AI & Fair Labor Practices
Industry-wide Mental Health Protocols
We need a comprehensive, industry-wide approach to mental health support. Based on my research and conversations with workers, I propose a multi-faceted approach not offered by existing support systems.
Many existing company programs are superficial “wellness programs” that fail to address the deep-seated trauma experienced by data workers. These may include occasional group sessions or access to general counseling services, but they are typically insufficient and not tailored to the specific traumas of data work.
My proposed approach includes mandatory, regular counseling sessions with therapists trained specifically in trauma related to data work. Additionally, companies should implement regular mental health check-ins, provide access to 24/7 crisis support, and offer long-term therapy services, which are largely absent in current setups.
Crucially, these services must be culturally competent, recognizing the diverse backgrounds of data workers globally. This is a significant departure from the current one-size-fits-all approach that often fails to consider the cultural contexts of workers in places like Nairobi, Manila or Bogotá. The proposed system would offer support in workers’ native languages and be sensitive to cultural nuances surrounding mental health — aspects sorely lacking in many existing programs.
Moreover, unlike the current system where mental health support often ends with employment, this new approach would extend support beyond the tenure of the job, acknowledging the long-lasting impacts of this work. This comprehensive, long-term and culturally sensitive approach represents a fundamental shift from the current tokenistic and often ineffective mental health support offered to data workers.
“Trauma Cap” Implementation
Just as we have radiation exposure limits for nuclear workers, we need trauma exposure limits for data workers. This “trauma cap” would set strict limits on the amount and type of disturbing content a worker can be exposed to within a given timeframe.
Implementation could involve rotating workers between high-impact and low-impact content, mandatory breaks after exposure to particularly traumatic material, limits on consecutive days working with disturbing content and the allocation of annual “trauma leave” for mental health recovery.
We need a system that tracks not just the quantity of content reviewed but also its emotional impact. For example, a video of extreme violence should count more toward a worker’s cap than a spam post.
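To make the idea concrete, here is a minimal sketch of how such a weighted cap could be tracked. It is purely illustrative: the content categories, severity weights and shift limit below are hypothetical values chosen for the example, not figures drawn from any existing moderation system.

```python
# Illustrative sketch of a weighted "trauma cap" tracker.
# All categories, weights and limits are hypothetical example values.
from dataclasses import dataclass, field

SEVERITY_WEIGHTS = {
    "spam": 0.0,               # routine content adds nothing to the cap
    "hate_speech": 1.0,
    "graphic_violence": 5.0,
    "csam": 10.0,              # the most severe material exhausts the cap fastest
}

@dataclass
class TraumaCapTracker:
    shift_limit: float = 50.0                         # maximum weighted exposure per shift
    exposure: float = field(default=0.0, init=False)  # running weighted total

    def review(self, category: str) -> bool:
        """Record one reviewed item; return False once the cap is reached."""
        self.exposure += SEVERITY_WEIGHTS.get(category, 1.0)
        return self.exposure < self.shift_limit

# Example: rotate the worker out as soon as the weighted cap is hit.
tracker = TraumaCapTracker()
for item in ["spam", "graphic_violence", "hate_speech", "graphic_violence"]:
    if not tracker.review(item):
        print("Cap reached: rotate to a low-impact queue and mandate a break.")
        break
```

The point of the sketch is the weighting itself: a single severe item consumes far more of the cap than dozens of routine ones, which is exactly the asymmetry that a simple per-item quota ignores.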
Independent Oversight Body
Self-regulation by tech companies has proven insufficient; it’s essentially entrusting a jackal with the chicken coop. We need an independent body with the power to audit, enforce standards and impose penalties when necessary.
This oversight body should consist of ethicists, former data workers, mental health professionals and human rights experts. It should have the authority to conduct unannounced inspections of data work facilities, set and enforce industry-wide standards for working conditions and mental health support, and provide a safe channel for workers to report violations without fear of retaliation. Crucially, any oversight body must include the voices of current and former data workers who truly understand the challenges of such work.
The Role Of Consumers & The Public In Demanding Change
While industry reforms and regulatory oversight are crucial, the power of public pressure cannot be overstated. As consumers of digital content and participants in online spaces, we all have a role to play in demanding more ethical practices. This begins with informed consumption: educating ourselves about the human cost behind content moderation.
Before sharing content, especially potentially disturbing material, we should consider the moderator who might have to review it. This awareness might influence our decisions about what we post or share. We must demand transparency from tech companies about their content moderation practices.
We can use companies’ own platforms to hold them accountable by publicly asking questions about worker conditions and mental health support. We should support companies that prioritize ethical labor practices and consider boycotting those that don’t.
Moreover, as AI tools become increasingly prevalent in our digital landscape, we must also educate ourselves about the hidden costs behind these seemingly miraculous technologies. Tools like ChatGPT and DALL-E are the product of immense human labor and ethical compromises.
These AI systems are built on the backs of countless invisible individuals: content moderators exposed to traumatic material, data labelers working long hours for low wages and artists whose creative works have been exploited without consent or compensation. In addition to the staggering human cost, the environmental toll of these technologies is alarming and often overlooked.
From the massive energy consumption of data centers to the mountains of electronic waste generated, the ecological footprint of AI is a critical issue that demands our immediate attention and action. By understanding these realities, we can make more informed choices about the AI tools we use and advocate for fair compensation and recognition of the human labor that makes them possible.
Political action is equally important. We need to advocate for legislation that protects data workers, urge our political representatives to regulate the tech industry, and support political candidates who prioritize digital ethics and fair labor practices.
It’s crucial to use our platforms to spread awareness about the realities of data work, to share the stories of people like Abrha, Kings and Ranta, and to encourage discussion about the ethical implications of our digital consumption.
We can follow and support organizations like the African Content Moderators Union and NGOs focused on digital labor rights and amplify the voices of data workers speaking out about their experiences to help bring about meaningful change.
Most people have no idea what goes on behind their sanitized social media feeds and the AI tools they use daily. If they knew, I believe they would demand change. Public support is necessary to ensure the voices of data workers are heard.
By implementing these solutions and harnessing the power of public demand, we can work toward a future where the digital world we enjoy doesn’t come at the cost of human dignity and mental health. It’s a challenging path, but one we must traverse if we are to create a truly ethical digital ecosystem.
This article is based on interviews conducted with data workers from Kenya, Syria, Lebanon, Germany, Colombia, Venezuela and Brazil as part of the Data Workers Inquiry project, a community action research project born out of a collaboration between the Distributed Artificial Intelligence Research Institute and the Weizenbaum Institute.