Author: Denis Avetisyan
New research exposes the hidden human effort powering artificial intelligence systems, revealing the precarious conditions faced by those performing this essential ‘ghost work’.

A study of Bangladeshi platform workers demonstrates how ‘ghostcrafting’ sustains AI, highlighting issues of algorithmic fairness and the need for more equitable labor practices in platform economies.
Despite the increasing ubiquity of artificial intelligence, the human labor underpinning its development remains largely obscured. This research, ‘Ghostcrafting AI: Under the Rug of Platform Labor’, investigates the hidden contributions of Bangladeshi platform workers who materially enable AI systems through improvised, often precarious labor practices. We demonstrate that these workers engage in ‘ghostcrafting’ – resourceful strategies for navigating platform economies – while facing exploitative conditions and algorithmic biases. How can design, policy, and governance interventions recognize and sustain this vital, yet invisible, workforce powering the future of AI?
The Hidden Labor of Intelligent Machines
The perception of artificial intelligence as wholly autonomous often overshadows the significant human effort embedded within its creation and upkeep, a phenomenon researchers term ‘Ghostcrafting AI’. This labor, frequently unseen by end-users, involves tasks like labeling data, moderating content, and refining algorithms – processes essential for AI systems to function effectively. Recent investigation into the work of Bangladeshi platform laborers reveals the scale of this contribution; these workers, operating within the digital ‘gig’ economy, are critical in sustaining the very AI applications that are increasingly integrated into daily life. The research underscores that AI is not simply created by machines, but rather sustained by a largely invisible global workforce, demanding a re-evaluation of how AI development is understood and attributed.
The foundation of many artificial intelligence systems relies heavily on a geographically concentrated workforce in the Global South, where large-scale data annotation and content moderation are commonplace. This research reveals that workers in regions like Bangladesh are crucial in ‘ghostcrafting’ the AI that increasingly shapes modern life, performing the repetitive, yet vital, tasks that allow algorithms to ‘learn’ and function. These individuals meticulously label images, transcribe audio, and filter harmful content, effectively training AI to recognize patterns and make decisions. The study demonstrates this labor isn’t merely supplementary; it is integral to sustaining the performance and scalability of contemporary AI systems, highlighting a critical dependency often obscured by the narrative of autonomous technology.
The rise of platform work, while offering some flexibility, frequently subjects digital laborers to precarious conditions that border on exploitation. These workers, essential for training and maintaining artificial intelligence, often face inconsistent earnings, a lack of benefits like healthcare or paid leave, and limited legal protections. Competition for tasks on these platforms drives down wages, while algorithmic management systems exert intense control over work processes, often without transparency or due process. This creates a power imbalance in which workers are vulnerable to unfair treatment and have little recourse for addressing grievances. The very structure of platform work – characterized by short-term contracts and the absence of a formal employer-employee relationship – effectively externalizes the costs of labor onto the workers themselves, leaving them bearing the brunt of economic instability and lacking the safeguards typically afforded to traditional employees.
A truly equitable and responsible artificial intelligence future hinges on acknowledging and addressing the contributions of a largely unseen workforce. Current AI development often overlooks the human labor – data annotation, content moderation, and algorithmic training – predominantly performed by workers in the Global South. Without transparent accounting for this ‘ghost work’, issues of fairness and accountability remain obscured, potentially perpetuating biases embedded within algorithms and creating exploitative labor conditions. Recognizing this hidden workforce is not merely an ethical consideration, but a fundamental requirement for building AI systems that are demonstrably just, reliable, and reflective of diverse human values; ignoring this crucial component risks reinforcing existing inequalities and undermining the potential benefits of artificial intelligence for all.
Algorithmic Oversight: The Mechanics of Control
Algorithmic Management systems on digital labor platforms utilize both Visibility Algorithms and Rating Systems to regulate worker access to opportunities and, ultimately, their earnings. Visibility Algorithms determine the order in which workers appear in search results or are presented with task requests, effectively controlling their exposure to potential work. Simultaneously, Rating Systems – often based on client feedback – assign scores to workers, which directly impact their visibility; lower ratings typically result in reduced access to tasks and, potentially, account deactivation. This dual system creates a dynamic where both algorithmic ranking and performance metrics jointly influence a worker’s ability to secure income, shifting control away from traditional employer-employee relationships and towards platform-defined criteria.
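To make this dual mechanism concrete, here is a minimal, hypothetical sketch of how a visibility algorithm might fold a rating system’s output into a ranking score. The weights, field names, and scales below are illustrative assumptions, not details from the study or any real platform.

```python
# Hypothetical sketch of a platform visibility algorithm.
# All weights, fields, and scales are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Worker:
    worker_id: str
    avg_rating: float       # mean client feedback score, 0.0-5.0
    completion_rate: float  # fraction of accepted tasks completed, 0.0-1.0
    recent_tasks: int       # tasks completed in the last 30 days

def visibility_score(w: Worker) -> float:
    """Higher scores surface earlier in search results and task feeds."""
    # Ratings dominate the assumed weighting, so a dip in client feedback
    # demotes a worker regardless of effort or availability.
    return (0.6 * (w.avg_rating / 5.0)
            + 0.3 * w.completion_rate
            + 0.1 * min(w.recent_tasks, 50) / 50)

workers = [
    Worker("a", avg_rating=4.9, completion_rate=0.95, recent_tasks=40),
    Worker("b", avg_rating=3.8, completion_rate=0.99, recent_tasks=60),
]
for w in sorted(workers, key=visibility_score, reverse=True):
    print(w.worker_id, round(visibility_score(w), 3))
# Worker "b" completes more tasks, yet ranks below "a" on rating alone.
```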
Algorithmic Management systems frequently function as ‘black boxes’ due to a lack of transparency regarding the specific metrics used to evaluate platform worker performance. The criteria for assessment – encompassing factors like completion rates, customer feedback scores, response times, and adherence to often-unspecified quality guidelines – are typically not fully disclosed to workers. This opacity fosters intense competition as workers attempt to decipher the algorithmic logic and optimize their behavior to achieve favorable ratings. Without clear insight into the evaluation process, workers are compelled to engage in strategies aimed at ‘gaming’ the system, often prioritizing metrics visible to the algorithm over other considerations, and creating a competitive environment where relative performance, rather than absolute standards, determines access to opportunities and income.
Platform workers experiencing consistently low ratings face a demonstrable reduction in task visibility and subsequent access to work opportunities. This effect is algorithmically driven; visibility algorithms prioritize workers with higher average ratings, effectively demoting those with lower scores in search results and task allocation. The resulting decrease in available tasks leads to reduced earnings, creating a cycle of financial instability – or ‘precarity’ – as workers strive to improve their ratings to regain access to sufficient work. This dynamic is often self-reinforcing, as reduced earnings can impact a worker’s ability to invest in resources – such as better equipment or training – that might improve performance and, consequently, their rating.
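A toy simulation makes this self-reinforcing loop explicit: rating drives visibility, visibility drives earnings, and a low-rated worker gets few chances to recover. Every number below – the rating-to-tasks mapping, the flat per-task rate, the recovery increment – is an assumption chosen for illustration, not a figure from the research.

```python
# Toy model of the rating -> visibility -> earnings feedback loop.
# All numeric parameters are illustrative assumptions.
def simulate(rating: float, weeks: int = 8) -> list[int]:
    earnings = []
    for _ in range(weeks):
        # Assumed mapping: tasks offered scale with rating above 3.0.
        tasks = max(0, int((rating - 3.0) * 10))
        earnings.append(tasks * 5)  # assumed flat $5 per task
        # Fewer tasks also means fewer chances to earn the rating back.
        rating = min(5.0, rating + 0.01 * tasks)
    return earnings

print("high-rated worker:", simulate(4.8))  # steady access to work
print("low-rated worker: ", simulate(3.2))  # visibility stays collapsed
```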
Platform workers frequently modify their behavior in response to algorithmic management systems, demonstrating adaptive strategies to maintain or improve their performance metrics. This includes practices such as ‘gaming’ the system by accepting only tasks they are confident of completing (thereby protecting their completion-rate metric), optimizing profile information to appeal to customer preferences, and engaging in proactive communication to preemptively address potential negative feedback. Workers also utilize techniques like ‘batching’ – rapidly accepting multiple tasks to increase overall volume – or employing secondary accounts to mitigate the impact of low ratings. These adaptations are not necessarily indicative of increased productivity, but rather represent calculated efforts to navigate the opaque criteria of visibility algorithms and secure continued access to work opportunities, highlighting a shift in labor practices driven by algorithmic control.
Adaptive Strategies: Workers Navigating Algorithmic Constraints
Workers operating within algorithmically managed labor platforms frequently utilize ‘Tactical Repertoires’, which encompass a range of improvised strategies designed to navigate platform constraints and optimize earnings. These tactics are not pre-planned but rather emerge as responses to specific algorithmic controls, such as rating systems, fee structures, or task allocation methods. Common examples include techniques for manipulating visibility to gain preferential task access, methods for circumventing platform fees, and strategic acceptance or rejection of tasks based on predicted profitability. The application of these tactics is often context-dependent, varying based on platform policies, worker experience, and the specific demands of the task at hand. These repertoires represent an ongoing process of adaptation as workers react to and attempt to influence the algorithmic systems governing their labor.
Workers on algorithmic platforms frequently utilize strategies to manipulate system constraints, including identity masking through the use of multiple accounts or anonymizing personal information, fee avoidance via techniques like utilizing different payment methods or exploiting geographical discrepancies in pricing, and strategic task acceptance focused on maximizing earnings relative to time or effort expended. These tactics are not necessarily violations of platform terms of service, but rather represent worker adaptations to optimize outcomes within the established system. The specific methods employed vary considerably depending on the platform, the nature of the work, and the worker’s individual circumstances, but consistently aim to increase earnings or reduce operational costs.
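As one hedged illustration of the ‘strategic task acceptance’ described above, the sketch below ranks offered tasks by expected pay per hour and drops those below a minimum hourly floor. The Task fields, sample offers, and the $3/hour floor are invented for the example, not drawn from the study.

```python
# Sketch of a strategic task-acceptance heuristic: maximize earnings
# relative to time spent. Fields and figures are invented examples.
from typing import NamedTuple

class Task(NamedTuple):
    name: str
    pay_usd: float
    est_hours: float

def accept(tasks: list[Task], floor_usd_per_hour: float = 3.0) -> list[Task]:
    """Keep only tasks clearing the hourly floor, best rate first."""
    viable = [t for t in tasks if t.pay_usd / t.est_hours >= floor_usd_per_hour]
    return sorted(viable, key=lambda t: t.pay_usd / t.est_hours, reverse=True)

offers = [
    Task("label 500 images", pay_usd=6.0, est_hours=2.5),       # 2.4/hr
    Task("transcribe audio clip", pay_usd=4.0, est_hours=0.5),  # 8.0/hr
    Task("moderate content queue", pay_usd=2.0, est_hours=1.0), # 2.0/hr
]
print(accept(offers))  # only the transcription task clears the floor
```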
Situated learning is central to the development of worker tactical repertoires within platform economies. This process involves the acquisition of knowledge and skills through active participation in specific communities of practice, rather than solely through formal training or platform-provided resources. Workers learn from observing the strategies of peers, exchanging information regarding successful task completion and avoidance of penalties, and collectively identifying loopholes or inefficiencies in platform algorithms. This decentralized, experiential learning is particularly important due to the rapidly evolving nature of platform work and the limited availability of standardized guidance, allowing workers to adapt quickly to changing conditions and optimize their performance through shared best practices and localized knowledge.
Peer networks are critical for workers navigating the challenges presented by digital labor platforms. These networks function as informal support systems where individuals share information regarding platform policies, algorithm changes, and effective strategies for maximizing earnings or circumventing restrictions. This knowledge transfer frequently occurs through online forums, messaging applications, and localized in-person interactions. Resources exchanged within these networks include tips on gaming the system, identifying profitable tasks, avoiding penalties, and collectively addressing issues with platform management. The decentralized nature of these peer-to-peer learning environments allows for rapid adaptation to evolving platform conditions, providing workers with a competitive advantage and a means of collective agency.
Bridging the Divide: Infrastructure and Access
The digital divide significantly restricts participation in platform work by creating unequal access to necessary resources. Individuals lacking consistent access to reliable internet connections, or personal computing devices such as smartphones, laptops, or tablets, are effectively excluded from opportunities available through online labor platforms. This disparity impacts not only access to work but also the ability to search for jobs, complete tasks requiring digital tools, and receive payments. Data indicates a strong correlation between socioeconomic status and digital access, meaning that vulnerable populations are disproportionately affected, reinforcing existing inequalities and limiting their potential to benefit from the growth of the digital economy.
Cyber cafes function as vital on-ramps to the digital economy for individuals lacking personal devices or consistent internet access. These establishments provide not only computer hardware and internet connectivity, but also frequently offer ancillary services such as printing, scanning, and digital literacy training. For platform workers, particularly those in developing nations, cyber cafes enable task acceptance, communication with clients, and completion of work deliverables. The availability of these shared access points directly impacts participation rates in the gig economy, mitigating barriers to entry for populations otherwise excluded due to infrastructural limitations. While mobile internet access is increasing, the reliability and data costs associated with it can still make cyber cafes a more viable and affordable option for consistent access.
Messaging applications such as WhatsApp function as key communication and collaboration tools for platform workers, particularly in contexts where dedicated work platforms lack integrated communication features. These applications facilitate real-time problem-solving, information sharing regarding work opportunities, and the dissemination of best practices amongst peers. Beyond task-related communication, these platforms also foster the development of informal support networks and communities, enabling workers to share experiences, provide mutual assistance, and collectively navigate the challenges of platform work. This peer-to-peer learning and support is especially crucial for those lacking formal training or access to traditional employee resources.
Equitable access to opportunities within the digital economy necessitates addressing the digital divide, as disparities in internet access and digital literacy create systemic barriers to participation. Individuals lacking reliable connectivity or the necessary devices are effectively excluded from online labor markets, educational resources, and essential services. This exclusion disproportionately affects low-income populations, rural communities, and marginalized groups, exacerbating existing inequalities. Closing this gap requires investment in affordable broadband infrastructure, digital skills training programs, and accessible devices to ensure all individuals have the means to benefit from the expanding digital economy and avoid further marginalization.
Towards Accountable AI and Decolonizing Computation
The development of truly Accountable AI necessitates a thorough understanding of the social and economic realities underpinning digital labor practices. Algorithms aren’t built in a vacuum; they are trained on data generated by human workers operating within specific economic constraints and power dynamics. Ignoring these contexts risks perpetuating, and even amplifying, existing inequalities. Current AI development often overlooks the precarious conditions faced by many digital laborers – issues like low wages, lack of benefits, and algorithmic management – leading to systems that prioritize efficiency gains over worker well-being. Consequently, an accountable approach requires actively investigating the livelihoods, working conditions, and survival strategies of those who generate the data that fuels these technologies, ensuring that AI benefits all stakeholders, not just those at the top of the value chain.
The pursuit of fairness in artificial intelligence necessitates a critical examination of deeply rooted power imbalances that permeate the entire lifecycle of algorithmic systems. These inequities aren’t simply technical glitches; they are reflections of historical and ongoing societal structures that privilege certain groups while marginalizing others. Algorithmic design, data collection, and deployment processes often encode existing biases, leading to discriminatory outcomes in areas like loan applications, hiring practices, and even criminal justice. Addressing this requires moving beyond purely technical solutions and actively challenging the assumptions and power dynamics embedded within these systems. A truly equitable AI demands a concerted effort to decolonize data, diversify development teams, and ensure algorithmic transparency and accountability, acknowledging that fairness is not a neutral concept but one actively constructed and maintained through conscious effort.
Postcolonial Computing emerges as a vital framework for dissecting the inherent power dynamics woven into the fabric of technological advancement. This approach doesn’t merely seek technical fixes for biased algorithms; instead, it critically examines how historical and ongoing colonial structures influence the design, deployment, and impact of computing technologies. By acknowledging that technology isn’t neutral, but rather a product of specific socio-political contexts, Postcolonial Computing challenges the assumption of universal applicability and encourages researchers to consider the localized consequences of technological interventions. It proposes a shift from simply optimizing for efficiency or accuracy to prioritizing equity, justice, and the empowerment of marginalized communities, ultimately advocating for a more inclusive and responsible future for technological development that actively decolonizes digital spaces.
A truly beneficial artificial intelligence necessitates a fundamental shift in development, prioritizing the lived experiences of marginalized workers often obscured by technological progress. Recent research, employing qualitative and ethnographic methods focused on Bangladeshi platform workers, illuminates the complex realities of digital labor. This analysis details not merely the challenges faced – precarious income, algorithmic management, and limited social protections – but also the remarkable strategies these workers employ for survival and continuous learning. By documenting their resourcefulness and the informal ‘learning ecologies’ that emerge within the digital marketplace, the study demonstrates that centering these voices is not simply an ethical imperative, but a crucial step towards building AI systems that genuinely address human needs and foster equitable outcomes, moving beyond theoretical fairness to practical benefit.
The study illuminates a critical paradox within platform economies: the reliance on ‘ghostcrafting’ to maintain artificial intelligence systems. This improvised labor, largely unseen and undervalued, sustains the very technologies promising automation. It echoes Marvin Minsky’s assertion: “The more we learn about intelligence, the more we realize how much of it is just clever hacking.” The Bangladeshi workers detailed in the study aren’t building sophisticated algorithms; they’re enacting a form of pragmatic, low-level ‘hacking’ to resolve the shortcomings of these systems, a necessary intervention masked by the platform’s design. This reinforces the core argument concerning precarity: systemic shortcomings necessitate individualized, often exploitative, solutions.
What Remains to Be Seen
The practice of ‘ghostcrafting’ – the unacknowledged labor sustaining automated systems – does not present itself as a novel phenomenon. Rather, the research illuminates a persistent condition. Automation, it appears, does not eliminate labor so much as redistribute it, often obscuring the work and the worker in the process. The question is not whether such work exists, but how readily it can be rendered invisible, and to what end. Further inquiry must move beyond documentation of precarity, toward a systematic analysis of the architectures that necessitate it.
The focus on Bangladesh, while crucial, represents a single node in a vast, globally distributed network. Comparative studies, examining diverse contexts of platform labor, are essential. Such research should not merely catalogue instances of ‘ghost work’, but probe the shared logic underpinning these arrangements. What incentives, technical and economic, consistently prioritize invisibility? What assumptions about labor – its value, its source, its deservingness – are embedded within these systems?
Ultimately, the challenge lies not in perfecting the tools of automation, but in confronting the uncomfortable truth that every intelligence, artificial or otherwise, is built upon a foundation of human effort. To pretend otherwise is not progress, but a particularly efficient form of self-deception.
Original article: https://arxiv.org/pdf/2512.21649.pdf
Contact the author: https://www.linkedin.com/in/avetisyan/