Hard labour conditions of online moderators directly affect how well the internet is policed – new study

By Guest Author
July 27, 2025
in Opinion
Reading Time: 6 mins read

Image: Getty Images/GCShutter


Tania Chatterjee, The University of Queensland; Agam Gupta, The Indian Institute of Technology Delhi, and Pradip Ninan Thomas, The University of Queensland

Big tech platforms often present content moderation as a seamless, tech‑driven system. But human labour, often outsourced to countries such as India and the Philippines, plays a pivotal role in making judgements that involve understanding context. Technology alone can’t do this.

Behind closed doors, hidden human moderators are tasked with filtering some of the internet’s most harmful material. They often do so with minimal mental health support and under strict non-disclosure agreements.

After receiving vague training, moderators are expected to make decisions within seconds, keeping in mind a platform’s constantly changing content policies and ensuring at least 95% accuracy.

Do these working conditions affect moderating decisions? To date, we don’t have much data on this. In a new study published in New Media & Society, we examined the everyday decision-making process of commercial content moderators in India.

Our results shed light on how the employment conditions of moderators shape the outcomes of their work. Three key arguments emerged from our interviews.

Efficiency over appropriateness

“Would never recommend de-ranking content as it would take time.”

—A 28-year-old audio moderator working for an Indian social media platform

Because moderators work under high productivity targets, they are compelled to prioritise content that can be handled quickly without drawing attention from supervisors.

In the excerpt above, the moderator explained that she avoided content and processes that required more time, in order to maintain her pace. While observing her work over a screen-share session, we noticed that reducing the visibility of content (de-ranking) involved four steps, while ending live streams or removing posts required only two.

To save time, she skipped content flagged for de-ranking. As a result, content marked for reduced visibility, such as impersonations, often remained on the platform until another moderator intervened.

This shows how productivity pressures in the moderation industry easily lead to problematic content staying online.

Decontextualised decisions

“Ensure that none of the highlighted yellow words remained on the profile.”

—Instructions received by a text/image moderator

Moderation work often includes automation tools that can detect certain words in text, transcribe speech, or use image recognition to scan the contents of pictures.

These tools are supposed to assist moderators by flagging potential violations for further judgement that takes context into account. For example, is the potentially offensive language simply a joke, or does it actually violate any policies?

In practice we found that under tight timelines, moderators frequently follow the tools’ cues mechanically rather than exercising independent judgement.

The moderator quoted above described instructions from her supervisor to simply remove text detected by the software. During a screen-share, we observed her removing flagged words without evaluating their context.

Often the automation tools that queue content and organise it for human moderators also detach it from the broader conversational context. This makes it even harder for moderators to make context-based judgements about content that is flagged but actually innocent – despite such judgement being one of the reasons human moderators are hired in the first place.

Impossibility of thorough judgements

“If you guys can’t do the work and complete the targets, you may leave”

—Work group message received by a freelance content moderator

Precarious employment compels moderators to mould their decision‑making processes around job security.

They adopt strategies that allow them to decide quickly and appropriately. In turn, these strategies shape their future decisions.

For instance, we found that over time, moderators develop a list of “dos and don’ts”. They may dilute expansive moderation guidelines into an easily remembered list of ethically unambiguous violations which they can quickly follow.

These strategies reveal how the very structure of the moderation industry impedes thoughtful decisions and makes thorough judgement impossible.

What should we take away from this?

Our findings show that moderation decisions aren’t just shaped by platform policies. The precarious working conditions of moderators play a crucial role in how content gets moderated.

Online platforms can’t put consistent and thorough moderation policies in place unless the moderation industry’s employment practices improve too. We argue that the effectiveness of content moderation is as much a labour issue as it is a policy challenge.

For truly effective moderation, online platforms must address the economic pressures on moderators, such as strict performance targets and insecure employment.

We need greater transparency around how much platforms spend on human labour in trust and safety, both in‑house and outsourced. Currently, it’s not clear whether their investment in human resources is truly proportionate to the volume of content flowing through their platforms.

Beyond employment conditions, platforms should also redesign their moderation tools. For example, integrating quick‑access rulebooks, implementing violation‑specific content queues, and standardising the steps required for different enforcement actions would streamline decision-making, so that moderators don’t default to faster options just to save time.

Tania Chatterjee, Joint PhD Candidate, Indian Institute of Technology Delhi and The University of Queensland; Agam Gupta, Associate Professor, Technology and Society, Indian Institute of Technology Delhi; and Pradip Ninan Thomas, Associate Professor in Communication & Media, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.
