Aranya Sahay's Humans in the Loop
Whose Knowledge Counts?
Introduction
In the closing decades of the twentieth century, postcolonial theorists argued that the most durable forms of power were not those exercised through physical coercion but those embedded in systems of knowledge: in the quiet authority to decide what counts as truth, expertise, and legitimate understanding. Today, that argument has acquired an urgency its originators could not have anticipated, because the most consequential knowledge systems of the twenty-first century are not libraries or universities but algorithms. Aranya Sahay's Humans in the Loop (2024) intervenes in this landscape with remarkable precision, placing an Adivasi woman named Nehma at the intersection of two radically different epistemological worlds: the living, relational, ecologically embedded knowledge of indigenous Jharkhand, and the rigid, taxonomic, statistically driven logic of machine learning.
The film's central argument, dramatized through narrative, cinematic form, and cultural juxtaposition, is that algorithmic bias is not a technical error waiting to be corrected by better engineers. It is a structural condition, produced and reproduced by epistemic hierarchies that determine whose ways of knowing are permitted to shape technological systems and whose are silently consumed by them. This blog engages with the film across three stages of encounter: a contextual grounding in the film's core themes, a set of analytical frameworks for active viewing, and a critical post-screening essay examining how the film represents the relationship between artificial intelligence and human knowledge. Together, these sections aim to equip readers with the conceptual tools to engage with Humans in the Loop not merely as cinema but as a political text.
Background and Core Themes
Aranya Sahay's Humans in the Loop (2024) arrives at a critical moment in global conversations about artificial intelligence, labour, and cultural representation. The film follows Nehma, an Adivasi woman from Jharkhand whose entry into AI data-labelling work opens a window onto the deeply human and deeply unequal infrastructures that underpin machine learning systems. The Wikipedia summary establishes the narrative premise immediately: an Adivasi woman thrust into the mechanics of machine learning. That premise signals the film's political intent: to render visible the human infrastructure that underwrites artificial intelligence, and to ask what is lost when that infrastructure is built on the erasure of the very communities that sustain it.
A central concern running through the film's critical reception is the tension between indigenous, lived experience and the rigid, reductive categories that machine learning systems rely on. When Nehma's knowledge, shaped by Adivasi culture, by her relationship with the land, and by her ways of understanding the world, is forced into algorithmic frameworks, the film reveals just how much gets lost or distorted in that translation. The Federal's review emphasizes this clash explicitly, noting that the film foregrounds invisible labour and cultural representation, and interrogates what happens when the epistemological richness of indigenous life collides with the binary logic of data annotation (Anjum). This clash, the film suggests, is not incidental but structural: it is built into the design of AI systems that treat all human input as equivalent and interchangeable, erasing the contextual depth that gives that input its meaning.
At a broader level, the film raises fundamental questions about who gets to define knowledge, whose experiences are considered valid data, and what it means for AI to learn from human input when that input is filtered through unequal power structures. The Indian Express identifies the full constellation of themes at work: bias, identity, knowledge hierarchies, representation, invisible labour, nature versus technology, and cultural epistemologies, which together constitute an argument that the ethics of artificial intelligence cannot be separated from the politics of labour, culture, and recognition (Indian Express Editorial). In sum, Humans in the Loop is not merely a film about one woman's employment; it is a meditation on the conditions under which certain knowledges are made to serve systems that ultimately do not serve them.
Pre-Viewing Tasks
1. AI Bias and Indigenous Knowledge Systems
Artificial intelligence is frequently imagined as objective: a neutral system of pattern recognition operating beyond human prejudice. This assumption collapses under scrutiny. AI bias refers to the systematic errors embedded in machine learning systems arising from skewed, incomplete, or culturally homogenous training data. When the categories used to label data reflect dominant cultural assumptions, the resulting system encodes those assumptions as universal truth. As Bordwell and Thompson note in their foundational analysis of meaning-making in film and cultural systems, representation is never innocent; it always carries ideological freight (Bordwell and Thompson). Applied to AI, this means that the very act of classifying and labelling data is a political act, not a technical one.
Indigenous knowledge systems present a particularly sharp challenge to the taxonomic logic of machine learning. Adivasi ecological knowledge is relational, contextual, and embedded in oral tradition and lived practice, qualities that resist reduction into binary categorical structures. When such knowledge is forced into algorithmic frameworks, what is lost is not merely cultural nuance but entire epistemological worlds. Goker's research on AI representation in Indian cinema underscores this tension, observing that cultural narratives surrounding artificial intelligence frequently erase non-Western ways of knowing in favour of technocratic universalism (Goker). Humans in the Loop dramatizes precisely this erasure, asking viewers to consider whose knowledge counts as valid data and whose does not.
2. Labour and Digital Economies
The concept of invisible labour is central to understanding the political economy of artificial intelligence. Behind every seemingly autonomous AI system lies an enormous, largely hidden workforce of human annotators, data cleaners, and content moderators: people whose cognitive and emotional effort is systematically obscured by the mythology of machine intelligence. This phenomenon is what scholars of digital capitalism identify as the ghost work that sustains technological systems while remaining deliberately invisible to end users and consumers (Alonso). The invisibility is not accidental: it is structurally produced by platforms and corporations that depend on this labour remaining cheap, disposable, and unorganized.
The significance of making such labour visible in cultural narratives cannot be overstated. Marxist film theory has long argued that cinema possesses a unique capacity to either naturalize or denaturalize the conditions of labour under capitalism (Vighi). A film that centres a data-labeller's experience challenges the dominant narrative of AI as a clean, frictionless technology produced by algorithms alone. It restores the human body, in this case a marginalized Adivasi woman's body, to a process from which it has been systematically erased. Anjum's critical review in The Federal emphasizes that the film's focus on invisible labour is precisely what makes it politically urgent, connecting the micro-experience of one worker to the macro-structures of global digital economies (Anjum).
3. Politics of Representation
Representation in cinema operates on multiple levels simultaneously: it concerns not only what is shown but how, by whom, and for what audience. From its critical reception, Humans in the Loop appears to engage self-consciously with the politics of representing Adivasi culture, resisting the ethnographic gaze that has historically reduced indigenous communities to spectacle or stereotype. The Indian Express notes that the film interrogates knowledge hierarchies and cultural epistemologies, suggesting a representational strategy that centres indigenous subjectivity rather than observing it from the outside (Indian Express Editorial).
Apparatus theory, as developed in film studies, argues that cinema as a technological apparatus is itself ideologically loaded: the camera, the frame, and the editing process all participate in constructing and naturalizing particular worldviews (Shepherdson, Simpson, and Utterson). When a film about AI technology deploys that same apparatus to represent Adivasi culture, it enters into a complex negotiation between form and content. The film's representational politics extend to the question of who speaks: whether the narrative grants Nehma genuine interiority and critical agency, or whether she is positioned merely as symbol. Before viewing, students are encouraged to reflect on how these representational stakes operate in the film's trailers, posters, and reviews, paratexts that are themselves sites of ideological construction.
Points to Ponder While Watching
1. Narrative and Storytelling
Pay close attention to the structural relationship the film establishes between Nehma's intimate personal world and the vast, impersonal systems of algorithmic labour she is drawn into. Effective political cinema rarely presents systemic critique in the abstract; it anchors ideology in human experience. Attend to the specific narrative moments (scenes of domestic life, moments of cultural practice, sequences of labelling work) and ask what formal devices the film uses to move between these registers.
- How does the film situate Nehma's personal life within larger algorithmic structures? What narrative turns foreground labour, family, and knowledge systems most vividly?
- When Nehma teaches AI by labelling data, what does this suggest about the human-machine learning loop beyond its technical definition? Is her role experienced as empowerment, exploitation, or something more ambiguous?
- At what narrative moments does the connection between the personal and the structural become most palpable? How does the film move between the intimate and the systemic without losing the weight of either?
2. Representation and Cultural Context
Observe carefully how the film represents Adivasi culture: its language, its rituals, its relationship to forest and land. The danger in films engaging with marginalized communities is what postcolonial theorists call the burden of representation: the tendency to flatten complex communities into singular symbolic figures. Consider whether Nehma functions as a fully realized character with interiority and contradiction, or whether she risks becoming a symbol of noble indigenous resistance. Goker's work on Indian cinema and AI narratives suggests that films in this space often struggle to balance cultural specificity with narrative accessibility (Goker).
- How are Adivasi culture, language, tradition, and ecological knowledge represented? Does the film grant them epistemological weight, positioning indigenous knowledge as a sophisticated and legitimate system, or does it treat them as atmospheric background?
- Does the film challenge or reinforce dominant media stereotypes about tribal communities and modern technology? Look for moments where the film complicates easy binaries: primitive/modern, natural/digital, local/global.
- Who controls the gaze? Does the camera observe Adivasi life from the outside, or does the film's perspective align with Nehma's own subjectivity? This question of focalization carries significant ideological consequences.
3. Cinematic Style and Meaning
Engage actively with the film's formal vocabulary. As Bordwell and Thompson argue, cinematographic choices are not merely aesthetic but ideological; they direct viewer sympathy and shape understanding of power relations (Bordwell and Thompson). Mise-en-scene, the arrangement of everything within the frame, is a primary site of meaning-making. Attend to how the film visually differentiates between the forest environment and the digital workspace, and what values are attached to each through framing, lighting, and composition.
- Mise-en-scene and Cinematography: How are the forest, computer screens, workspace, and rituals framed visually? Are natural spaces filmed with wide, open compositions suggesting freedom, while digital spaces are rendered through close, constrictive shots, or does the film complicate this binary?
- Sound Design: Does the soundscape of forest life (birdsong, wind, ritual music) contrast sharply with the ambient hum of computers and the mechanical rhythm of typing? Sound is frequently where the most politically charged meanings reside; attend carefully to what the film makes audible and what it silences.
- Editing Rhythms: What is the film's pace across its different registers? A film that lingers on the tedium of data-labelling makes a very different argument than one that renders it dynamic. What does editing rhythm suggest about the subjective experience of invisible digital labour?
- Visual Semiotics: The juxtaposition of natural imagery and digital spaces constitutes a system of signification that generates meaning through contrast (Number Analytics). Does the forest appear as primitive backdrop to the modernity of computer screens, or as a counter-world with its own depth and intelligence?
4. Ethical and Political Questions
The film's title is itself a provocation: humans in the loop is an optimistic phrase in AI ethics, suggesting that human judgment remains central to machine decision-making. But the film appears to ask whether being in the loop constitutes genuine agency or merely a more sophisticated form of exploitation. Hold the following questions actively in mind as you watch, not as abstract puzzles but as questions the specific scenes and characters are engaging.
- What ethical dilemmas are depicted when training AI with culturally specific data? What happens when Nehma's contextual, relational knowledge is forced into universal algorithmic categories? Who bears the cost of that translation?
- How does the human-in-the-loop metaphor operate beyond its technical meaning: politically, socially, and culturally? Is being in the loop a position of agency or of capture?
- Alonso's research argues that mainstream films typically domesticate the political stakes of AI, rendering systemic inequality as individual drama (Alonso). Does Humans in the Loop resist or participate in this tendency? Does it suggest structural transformation, individual resilience, or something more ambivalent?
- Attend to moments of friction: scenes where the system fails to accommodate Nehma's input, or where her knowledge visibly exceeds the categories available to her. These are the moments where the film's political argument is most concentrated.
Post-Viewing Critical Analysis
Prompt: Critically analyze how Humans in the Loop represents the relationship between technology (AI) and human knowledge.
Algorithmic Bias as Cultural Condition, Not Technical Glitch
The dominant public understanding of AI bias treats it as an engineering problem: a correctable flaw arising from insufficient data diversity or poorly designed training pipelines. This framing, while not entirely wrong, is politically convenient because it locates the problem within the technology rather than within the social structures that produce and govern it. Humans in the Loop refuses this convenience from its opening sequences. By situating Nehma's data-labelling work within the specific cultural landscape of Adivasi Jharkhand, a region with its own languages, cosmologies, and ecological relationships, the film insists that what AI systems learn is inseparable from the cultural assumptions of those who design the categories of learning.
This is not merely a narrative choice; it is a theoretical claim borne out by scholarship. Haris et al., in their study of gender bias in machine learning applied to cinema, demonstrate that the categories used to classify and evaluate human behaviour in datasets reflect the social biases of their creators, embedding discrimination into systems that subsequently present themselves as objective (Haris et al. 3). The same logic applies to cultural bias. When Nehma is asked to label images, sounds, or concepts according to categories developed by technologists operating within Western or upper-caste urban frameworks, the act of labelling becomes an act of translation; and translation, as postcolonial theory has long insisted, is never neutral. Something is always lost, distorted, or subordinated in the crossing.
Cave et al., writing on the cultural construction of the AI engineer in popular film, argue that dominant cinematic representations of artificial intelligence systematically marginalize non-Western, non-male, non-expert figures, rendering them invisible in the very systems they help to build (Cave et al.). Humans in the Loop reverses this representational logic by placing precisely such a figure, an Adivasi woman with no formal technological training, at the centre of the narrative. In doing so, it makes visible what the mythology of AI as a Western technocratic achievement conceals: that the system depends, at its most fundamental level, on the cognitive and cultural labour of people like Nehma, whose contributions are extracted, anonymized, and absorbed into an infrastructure that does not recognize them as knowledge producers.
Epistemic Hierarchies and the Politics of What Counts as Knowledge
The concept of epistemic hierarchy refers to the unequal valuation of different knowledge systems: the processes by which certain forms of knowing are elevated as universal, rigorous, and authoritative while others are dismissed as local, anecdotal, or pre-scientific. In the context of AI, epistemic hierarchy operates through the design of training datasets and classification systems. The categories used to label data encode assumptions about what entities, relationships, and distinctions are meaningful, and these assumptions are never culturally neutral.
Nehma's Adivasi knowledge presents the film with its most philosophically rich material. Indigenous ecological knowledge systems are fundamentally relational rather than taxonomic. They understand the natural world not as a collection of discrete, classifiable objects but as a web of relationships, obligations, and meanings: a living system in which human beings are participants rather than observers (Goker). When Nehma attempts to bring this understanding to the task of data labelling, when her perception of a forest, an animal, or a seasonal change is shaped by cultural categories that do not map onto the binary logic of machine learning, the system has no framework for accommodating her knowledge. It can only register her input as correct or incorrect according to criteria she had no role in defining.
This dynamic illustrates what philosophers of science call epistemic injustice: the harm done to a person specifically in their capacity as a knower (Frias). Nehma is not merely economically exploited by the data-labelling economy; she is epistemically exploited. Her knowledge is extracted and instrumentalized, but the framework within which it is used systematically devalues the very qualities (its contextuality, its relationality, its embeddedness in living culture) that make it distinctive and valuable. The film renders this injustice not through didactic dialogue but through the accumulating frustration of scenes in which Nehma's instinctive responses conflict with the system's expected outputs. The audience is positioned to feel, viscerally, the violence of a categorization system that cannot hear what she is trying to tell it.
Bordwell and Thompson's framework for analyzing narrative point of view is instructive here. They argue that a film's alignment of the viewer's perspective with a particular character's experience is one of cinema's most powerful ideological tools (Bordwell and Thompson). By aligning the viewer with Nehma, granting access to her frustration, her intelligence, and the richness of the knowledge she carries, Humans in the Loop performs a representational act of considerable political weight. It does not ask the viewer to observe Adivasi knowledge from the outside, as exotic or picturesque. It positions that knowledge as the evaluative standard against which the AI system is judged, and finds the system wanting.
Apparatus Theory and Ideological Representation
Apparatus theory argues that the film medium itself participates in naturalizing particular worldviews: the camera, the edit, and the frame are not neutral windows onto reality but ideological instruments (Shepherdson, Simpson, and Utterson). Applied to Humans in the Loop, this raises a productive tension: a film that critiques the ideological dimensions of one technological apparatus (AI) must reckon with the ideological dimensions of its own apparatus (cinema). Sahay appears acutely aware of this. The film's visual grammar works deliberately against the conventions of technological triumphalism that dominate mainstream AI cinema.
Where films like Ex Machina or Her frame artificial intelligence through sleek, minimalist aesthetics that signify progress and sophistication, Humans in the Loop consistently returns to the textures of embodied, natural life: forest canopies, handmade objects, ritual practices, the physical fatigue of repetitive digital labour. This visual strategy enacts at the level of form what the narrative argues at the level of content: that the natural, the cultural, and the human are not superseded by the digital but persist alongside it, and at considerable cost. The forest does not appear as primitive backdrop to the modernity of computer screens; it appears as a counter-world with its own depth, complexity, and intelligence, a system of meaning that precedes and exceeds the algorithmic (Number Analytics).
Alonso's research on socio-technical imaginaries in AI cinema is pertinent here. He argues that films about artificial intelligence typically reproduce one of two dominant imaginaries: AI as existential threat (the dystopian mode) or AI as liberatory tool (the utopian mode). Both imaginaries tend to be individualist and Western in their framing, centering the concerns of technologists and consumers rather than the workers who build and maintain the systems (Alonso). Humans in the Loop departs from both modes by centering a collective, culturally specific experience of AI as an extractive economic structure: a film not about whether AI will destroy or save humanity in the abstract, but about what it is already doing to particular communities in the concrete.
Power Relations, Ideology, and the Question of Resistance
Any critical analysis of a film engaging with marginalized communities must grapple with the question of agency. There is a risk, in films that foreground exploitation and epistemic injustice, of producing what Vighi calls a cinema of victimhood: representations that elicit empathy for the oppressed without suggesting the possibility of transformation or resistance (Vighi). Whether Humans in the Loop avoids this risk depends significantly on how it handles Nehma's interiority and capacity for critical reflection.
From its critical reception, the film appears to grant Nehma genuine subjectivity: not merely suffering but understanding. The narrative conceit of data labelling becomes a double-edged metaphor. Nehma labels the world for the machine, but the film suggests that she is also labelling the machine, developing, through her work, a critical understanding of how the system operates, what it cannot perceive, and where its categorical logic breaks down. This interpretive possibility, that the human-in-the-loop retains the capacity to read the loop from the inside, is where the film's most politically hopeful dimension resides.
Goker's analysis of posthuman perspectives in Indian cinema suggests that films engaging with AI and indigenous identity have a unique opportunity to articulate counter-imaginaries: alternative visions of the human-technology relationship that draw on non-Western philosophical traditions (Goker). If Humans in the Loop achieves this, it does so not by rejecting technology but by insisting that the terms on which human knowledge enters technological systems must be renegotiated. The epistemic hierarchy undergirding AI cannot be reformed from within its own logic; it must be challenged by the knowledges it currently excludes.
Conclusion
Humans in the Loop is, at its core, a film about the politics of knowing. Across its narrative, its cinematic form, and the critical conversation it has generated, the film argues consistently that algorithmic bias is not a technical anomaly but a cultural condition, one produced by the same epistemic hierarchies that have historically determined whose knowledge counts, whose labour is visible, and whose ways of understanding the world are permitted to shape shared systems of meaning. From the contextual grounding of its core themes, through the analytical frameworks active viewing demands, to the critical scrutiny the post-screening essay applies, the film rewards sustained, theoretically informed engagement at every stage.
By centering Nehma's experience, the film performs a representational intervention: it returns the Adivasi woman from the margins of the AI narrative to its very centre, revealing that the loop of human-in-the-loop is not a circle of equal participants but a structure of extraction in which some humans do the knowing and others do the being known. Whether this revelation translates into transformative political critique or remains a powerful but contained act of cultural visibility is a question the film leaves, productively, open. What it does not leave open is whether the current terms of AI development are just. On that question, its answer is unambiguous, delivered not in abstract argument but in the specific, irreducible face of one Adivasi woman, labouring at a screen, teaching a machine that will never acknowledge what she knows.
Works Cited
Alonso, D. V. "Imagining AI Futures in Mainstream Cinema: Socio-Technical Narratives and Social Imaginaries." AI & Society, 2026, https://doi.org/10.1007/s00146-026-02880-7.
Anjum, N. "Aranya Sahay's Humans in the Loop and the Politics of AI Data Labelling." The Federal, 2026, https://thefederal.com/films/aranya-sahay-humans-in-the-loop-oscar-adivasi-data-labelling-jharkhand-ai-tribal-216946.
Bordwell, David, and Kristin Thompson. Film Art: An Introduction. 12th ed., McGraw-Hill Education, 2019.
Cave, S., et al. "Shuri in the Sea of Dudes: The Cultural Construction of the AI Engineer in Popular Film, 1920–2020." Feminist AI: Critical Perspectives on Algorithms, Data, and Intelligent Machines, Oxford University Press, 2023, pp. 65–82.
Frias, C. L. "The Paradox of Artificial Intelligence in Cinema." Cultura Digital, vol. 2, no. 1, 2024, pp. 5–25, https://doi.org/10.23882/cdig.240999.
Goker, D. "Human-Like Artificial Intelligence in Indian Cinema: Cultural Narratives, Ethical Dimensions, and Posthuman Perspectives." International Journal of Cultural and Social Studies, vol. 11, no. 2, 2025, pp. 1–10, https://doi.org/10.46442/intjcss.1799907.
Haris, M. J., et al. "Identifying Gender Bias in Blockbuster Movies through the Lens of Machine Learning." Humanities and Social Sciences Communications, vol. 10, 2023, article 94, https://doi.org/10.1057/s41599-023-01576-3.
"Humans in the Loop (film)." Wikipedia, https://en.wikipedia.org/wiki/Humans_in_the_Loop_(film). Accessed 15 Feb. 2026.
Indian Express Editorial. "Humans in the Loop: Technology, AI and Digital Lives." The Indian Express, 2026, https://indianexpress.com/article/opinion/columns/humans-in-the-loop-aranya-sahay-technology-ai-digital-10391699/.
McDonald, Kevin. Film Theory: The Basics. 2nd ed., Routledge, 2023.
Number Analytics. "Film Theory Essentials: Key Concepts and Frameworks." Number Analytics, 2023, https://www.numberanalytics.com/blog/film-theory-essentials.
Sahay, Aranya, director. Humans in the Loop. India, 2024.
Shepherdson, C., J. Simpson, and A. Utterson, editors. Film Theory: Critical Concepts in Media and Cultural Studies. Vols. 1–4, Routledge, 2004.
Vighi, Fabio. Critical Theory and Film: Rethinking Ideology through Film Noir. Bloomsbury Academic India, 2019.