BehanBox Talkies: ‘Data Work Under Big Tech Is A Form Of Modern Slavery’
Activists Joan Kinyua and Michael Geoffrey Asia speak about mental health and PTSD, the challenges of unionising in Kenya, and the racial logic that sustains the modern tech economy

Joan Kinyua once loved technology, but she now feels betrayed by its promise. Enticed by the pay and glimmering future of Africa’s rising data annotation industry, she found disillusionment setting in swiftly: the pay didn’t justify the work, there was no social security, and the job and environment often led to burnout and trauma. Worse, the manual labour powering the AI revolution went unrecognised, and these digital futures replicated the imperial power structures of the past.
She, along with her colleagues Michael Geoffrey Asia and Ephantus Kanyugi, started the Data Labellers Association, which gives workers a shared space, vocabulary, and bargaining power against global tech companies. “It is African Intelligence powering European intelligence, which they are now calling Artificial Intelligence,” Michael says.
In the last two years, the DLA has mobilised workers across the country, developed standards for dignified work conditions, and sparked a conversation on mental health, an occupational hazard in this field.
In a conversation with Saumya Kalia, Joan and Michael talk about the painstaking labour of mobilising data workers and the fight against racial discrimination. There is a need for cross border organising with the Philippines and India, which house the majority of data workers, they say. And the world needs to see beyond the manufactured hype sustained by Big Tech, governments, and media. “Big Tech needs to understand this is someone’s job, someone’s economy. They need to understand that these are human beings who are making them billionaires…It’s not a crime to have a win-win situation. It’s practical, it’s been proven, and it can still work,” Michael says.
Edited excerpts below.
How did you hear about data labelling jobs, how did you get into this space?
Joan: I started in an organisation called Sama (back then it was called Sama Source). I am professionally trained as a business administrator and was working in a school earlier, but I did not like that work. When my sister got a job at Sama, she would tell me about what this work is and how they have an open office and are working eight hours and how everyone is really friendly and how they’re working on this technology of self-driving cars. I felt this is what I wanted to do.
I soon resigned from my job and applied there; within a week or two, they called me for a data labelling job. The training was supposed to happen in Kibera, the slum where I live. The companies wanted candidates to present themselves as having only a school education, regardless of whether they had degrees, professional training, or other expertise. If you mentioned additional qualifications, you risked being labelled “overqualified”. I followed the expectation and only stated that I had finished school in order to secure the job. I love technology and I love computers, and I was extremely good at the training, which lasted for two weeks.
Only three days later, I was informed of an opportunity. The conversations at the start were very welcoming. They told us about working on cutting-edge technologies like self-driving cars, but we were also told not to communicate with anybody in case we saw something bad or if something wasn’t working. As a newbie, you’re excited and grateful, so I obeyed the rules.
Fast forward to around eight months later, I started noticing things. Firstly, the salary did not make sense. It was only facilitating your conditions of work, but not doing anything to make you independent in the long run. It’s like you’re just fueling the system but you’re not benefiting from the system.
Things became very bad when the company started directly targeting me when I became vocal about issues.
What was the consequence of speaking up?
Once our team leader warned they were going to box [isolate] us. I asked: if we are grown-ups, why would you box anybody for bad quality of work? I was called out for “speaking too much”. It got bad. I was targeted for my work; I used to get a quality [rating] of 100% on my tasks but they started removing chunks and finding mistakes to the point I was getting 56%. Then I was given a PIP, a Performance Improvement Plan. I moved from having good quality work to a PIP stage four, which meant that I was seen as a liability to the organisation.
[PIP is an example of manufacturing performance decline to justify dismissing workers who raise voices. Employers also routinely employ surveillance systems that alert management when people are talking in a bathroom or if they say the word ‘union’, to track “troublemakers” and manipulate task allocation to hinder organising, per this research.]
I did not want that bad reputation to affect my career prospects so I started looking for another job in the same sector. I was lucky enough to get a job at another organisation called CloudFactory; I transitioned from a night shift at Sama and started at CloudFactory the very same morning. I thought I had found a better opportunity: CloudFactory was marketed as a top-ranked BPO with higher pay and better projects. However, the reality was quite different. At Sama, the organisation was bad but you had medical cover and a contract. At the new organisation, you sign monthly contracts and are expected to earn 24,000 Kenyan Shillings or below, because anything above that is taxable. There was no medical cover, no social protections, no other benefits – you were essentially on your own.
Here, I also learned about Remotasks, a separate data annotation platform owned by Scale AI, where individual tasks fetched $30–40, significantly more than what I was earning.
Back then, I didn’t have a laptop or Wi-Fi at home, so I would wake up extremely early, around 4 am, so that by 5.30 am, I’d have reported to the office. During the day, I focused on CloudFactory work, and later in the evening, I would pretend to leave, hide in the washroom, and access work for Remotasks once the scene was clear.
Eventually, I was found out and fired. The official reason given was that I had breached my contract by working for another organisation performing the same type of tasks. But I continued with the formation of the association and talking to people.
Is this when the Data Labellers Association came up?
Somebody reached out to me for a research project in this field. My role as a research assistant was to interview 50 to 100 people of the same background as me and speak about the landscape. People spoke to us about many different challenges. For instance, the skilling issue, there is no way we can take the skills that we’ve got here to other spaces; once you’re fired from this organisation, you look for another bad organisation and then just continue the cycle because you do not have the [other] skills for the market.
When it comes to mental health, we are constantly living on the edge. People are asked to watch damaging, devastating content. These jobs are also not predictable… When it comes to the salary, back when we started doing it, we were being paid very well. As more people started working on this platform, now you’re being paid a fraction of a dollar for a task you’re working on for five hours. Things are starting to get bad.
When I was working, I just thought these work conditions were normal and this is how people are supposed to be treated. But every day, as we discussed the interviews with the head of the research, we realised more and more the exploitation built within the system. These things were not supposed to be happening.
I sat my friends down, Michael was one of them. We wanted to come up with an institution that can address the skilling risk. But the researcher noted this is a bigger issue than just skilling, and encouraged us to form a body that speaks about worker challenges.
We researched for a year and spent the whole of 2024 just speaking to people. Back then, it was difficult to get people to understand what data labelling was. There was conversation about gig workers but people were focused on ride-hailing sectors. We had to establish a space for ourselves. We formalised the association at the end of the year. It was recognising these systemic challenges that pushed us to form the association.
How did you connect with other workers? In India, unions have started using digital methods to mobilise gig workers but their reach is still low among data workers, who work remotely and include a lot of women with carework responsibilities.
Joan: Workers sign NDAs that explicitly state they’re not supposed to be part of unions so we had to be careful. People do not want to lose jobs so we have to come up with mechanisms to protect the workers. That meant no online interaction, because you do not know who is going to be part of the conversation and they might face retaliation from the organisation. [Platforms like Urban Company and Blinkit in India have employed similar union-busting tactics.]
Because we are very protective of the workers, there is no direct retaliation from the employer because our numbers are not made public. We do not give anybody information about who is or who isn’t part of the association.
In this field, we both know and don’t know each other. I knew Michael personally from earlier but I first saw his face in 2024. We have worked with these people for more than five years, but we do not really know them. These groups – on Telegram, WhatsApp, Signal – give us a new opportunity to reach workers.
Also, besides the three of us, the rest of the members are still actively working their data labelling jobs. That makes it easier for us to have conversations because we haven’t lost touch with people on the ground. We all started in the same place so there is that trust.
Michael: It also depends on how you’re related to them. Personally, I was very vocal at work and so someone meets you somewhere and is like, “I think I’ve seen you. You’ve been working on this and this space.” That trust and familiarity is how it started.
What is the perception about this work among data workers themselves? What did you want to demystify for the public?
Michael: People understand that these conditions are exploitative but they do not have an alternative, so they remain in the jobs out of desperation. We try to tell them a job is supposed to be a source of satisfaction, not just something to survive on. These jobs strategically have very poor working conditions, to ensure that we are always in survival mode. They target people between the ages of 18 and 23 – young people who do not have responsibilities. If someone gives you $200 per month and you are still living with your parents, you do not feel like it is a burden. But someone like me, who has a family, who has to pay rent and school fees, will realise that $200 is not enough. It opened my eyes.
It is hard, trying to bring these people to the table; most people do not know they are in survival mode. But they are beginning to understand what we mean by lack of adequate income.
Joan: We are also seeing people self-harming because of these jobs – that also makes people join associations or unions.
There is a knowledge lacuna about this work among policymakers and the general public too. Most people did not know what data labelling is – we had to establish its definition and how important it is in the Artificial Intelligence field. We took it upon ourselves to define these roles. We took it upon ourselves to be part of conversations where we were not invited. And we took it upon ourselves to speak about how people are being treated, to name and shame these organisations.
Big Tech organisations are very clever in squashing our voices. An organisation like Sama, for instance, is present in Kenya and other countries for the exact same job. Once they figure out that Kenyans are mobilising or are making noises or want more say in deciding the final product, they shift the work to another country. People are afraid these jobs will move so they end up being defensive of tech companies and these jobs. But we try to explain to them the companies are only concerned about the job, not the workers.
It is important for us to speak about the reality of this work not only on a regional but a national level. These Big Tech companies deliberately do not want workers to speak up or to be known – we are creating a space to challenge this.
Can you tell us more about the mental health aspects of this work? What makes it an occupational hazard for data workers?
Michael: You’re right, mental health is not recognised as a professional issue. We need to address it from a sector point of view, not a general point of view. Right now people believe content moderators have it easy – they are working online, facing a computer and earning a dollar. That is not the case.
For example, people are being killed in the ongoing war between Iran and Israel, but there’s someone who’s filtering this information so that it doesn’t reach consumers on the platform. What a content moderator is seeing online is not what a doctor sees in an emergency unit of the hospital, but both can cause mental health issues.
I have personally worked on moderating pornographic content. You cannot expect someone to work on pornography for eight hours everyday, for months, and still be the same. It was not easy.
Joan: Mental health is a very touchy subject but you can see the effects of this work. You can see how desensitised people have become due to the nature of the content they are exposed to, much of it harmful or disturbing. Many workers are affected, yet they either do not recognise the impact on their own mental health or are reluctant to acknowledge it. For example, in Kenya in 2024, when the platform Remotasks shut down, many workers were suddenly left without income, and the stress and pressure they had experienced over time were clearly visible. There is a hidden emotional toll of this work, even though it is rarely discussed openly.
[Workers at Sama described their work as “torture”. They found the sessions with ‘wellness counsellors’ unhelpful and rare because the focus was always on worker productivity.]
It’s not only about content but the environment itself. The 23-year-olds being targeted will potentially have families in the future and they will realise $200 is not enough. What happens to your family adds to your stress; you’re constantly living on the edge.
What triggered my panic attack and anxiety was this moment, around my daughter’s birthday. My account was deactivated because I violated community guidelines I was not aware of, and I did not get any money. I tried to appeal but nobody responded to my emails. That caused me to panic, I had already planned what I needed to do with that money, and then I didn’t even have money for diapers. It makes you feel so vulnerable – you’re vulnerable to the platform, to the system.
Mental health is manifesting in very different ways, but some people do not want to address it because of the stigma. Especially as Africans, we do not want to associate with mental health in any way, shape, or form – people are afraid to be labelled. When I suffered from anxiety attacks, I could not speak about it because I was afraid of so many things. How are my friends going to view me, is anything I’m going to say taken into consideration, or are they going to think I’m ‘psycho’ or something like that?
How do you weave mental health into your advocacy if the stigma contributes to silence?
Joan: This has been the hardest issue to surface. We didn’t want to start with mental health at first. Even the mental health department [Taskforce on Mental Health, Ministry of Health] has not been part of the conversation. But now when we speak to policymakers, we do not sugarcoat – we tell them this is what these jobs are doing, this is how desensitised people are.
Platforms say they have mental health practitioners but usually it’s wellness coaches, and as content moderators you’re not supposed to speak about the content you’re working on. It’s quite unhelpful.
We are trying to look for mental health practitioners but we do not have any funding, so we want the mental health department to be part of this conversation, to be able to pay for mental health workshops or individual counselling sessions.
We have also found that community engagement, just having a community of people, can ease workers’ mental tension. It’s like walking on eggshells in this sector. You do not want to harm the workers, but you also do not want to portray them as victims.
How important is cross-border solidarity or international organising for you? What would you tell data workers in India?
Joan: We want everybody on board because if it’s not Kenya providing the manpower, it’s India or Pakistan or the Philippines. Data workers should focus on building strong, cross-border solidarities. Collective organising, shared knowledge, and mutual support across regions are key to ensuring that their voices are heard and that they are not left invisible in a system that depends so heavily on them.
People are currently settling for less than what they deserve. For Kenya, platforms impose quality standards requiring work accuracy of 98% and above [workers whose output falls below a certain quality score get paid less, deprioritised for tasks, or removed]. Most people are doing work that is almost perfect. The quality standards are so high, which only shows how skilled and valuable workers are. But the irony is we are still treated as disposable.
Workers should not feel that they need to despair or overextend themselves, breaking their backs and sacrificing their health, just to deliver for a technology that was never designed with their well-being or interests in mind. Instead, they should recognise the value of the work they do and use that as a foundation to demand better pay, stronger protections, and proper recognition for their contributions.
Researchers have found that colonial power dynamics also show up in India – platforms employed ‘time zone hierarchies’ (reserving higher-paying jobs for workers in the Global North) and racial wage codes: having an Indian accent meant lower pay. Can you talk more about the racial dynamics at play in Kenya?
Joan: There is also a lot of gatekeeping in payment. People in Kenya are paid $2 an hour but those in the US or EU regions are paid close to $30 to $60 an hour. We were discussing this recently at a session when somebody from the audience said, because you’re Black, you should accept being paid $2. That’s the mentality we’re working with — I’m Black, they’re white. But aren’t we all human, aren’t we putting the same effort to do this job? We need to get rid of that self-discrimination. Arguably yes, these are different economies but the pay rate difference should not be this much.
Michael: I would call this a form of discrimination slavery. In their own country, training tasks earn people as much as $30. In Kenya, someone is working for two weeks without being paid even a single penny on training tasks. The act of someone not paying you or paying you poorly, that is slavery, and it has never had a good track record.
We need to put these conversations into perspective. Today, most Kenyans are struggling to apply for jobs because of what this work has created. When you submit a job application, it gets flagged as AI-generated because of the Kenyan English used. We fed a lion, but today the same lion that was supposed to protect us is coming back to eat us. It’s pathetic.
[Kenyan English has its own syntax, rhythm, and phrasing, and those same patterns were fed into AI training datasets by workers. So the detection software reads what a Kenyan writes naturally as machine output, as ‘AI-generated content’. AI detection tools have been shown to produce higher false positive rates for Black writers than for white ones.]
After the Second World War, Europe had to survive and the European Marshall Plan was put in place but this post-war recovery assumed continued extraction and access to resources [land, labour] in colonies like Africa. It has happened earlier and it’s still happening. I see AI in this manner: we are still using Africa, for example, to sustain Europe and the West.
I came across an article someone wrote. Why do we have white cars with black tires? Why is it that when you block someone from accessing you, they are put on a ‘black list’? On a battlefield, a ‘white flag’ denotes peace but a ‘black flag’ means war. We have regions where people die, in most cases through drugs, and it’s called a ‘black spot’. Why black? We call it ‘talent’ in Europe and the US, but when you come to Africa, they call it witchcraft.
Unless we stop manufacturing cars with white bodies and black tires, discrimination will never leave our society. Until we get to a point where this philosophy of one race being superior to the other is permanently abolished, where the colour of a man’s eyes is of no significance, then this space is going to continue being messy.
People like the tech, the glitz of AI tools, but we often fail to see what labour and effort goes into making it. What would you want people to know?
Joan: It is very important for people to feel secure and appreciated for the work they do. We only know the importance of our work or feel like we did a good job when we see self-driving cars work, knowing they are not causing accidents. We can see the robot vacuum is doing a good job. But in making that possible, we also had to see people sitting on toilets and doing very weird things which this robot took photos of.
[A recent investigation found Meta’s smart glasses recorded people’s intimate and private lives, collecting sweeping footage which was reviewed by data annotators.]
My point is: where am I in this conversation? Most people rely on basic research out there but they’re not willing to interact with the people who do this work. I don’t know what they’re afraid of.
When we see cutting edge technology, let’s ask the questions that nobody wants to ask. How did this come to be? How was it made? Let’s ask questions; let’s not just embrace AI.
Michael: I work on self-driving cars, but I am not sure if I will ever see self-driving cars here in Kenya before I die. I’ve never even seen the streets of Kenya being pictured for the same. This is purely African intelligence powering European intelligence, which they are calling artificial intelligence.
Rights need to be centered in these conversations; people need to stop being blind to what is so obvious and just address the elephant in the room. The elephant in the room is Big Tech and human rights violations. People need to open their eyes.