By: Leonida Mutuku
Introduction
As a researcher and practitioner building Artificial Intelligence (AI) on the African continent and contributing to its governance and policy, I have observed several concerning practices over the last few years that, if not addressed, threaten the self-determination of African communities in the AI age and inadvertently extend neo-colonial values. These practices fall under three themes that I analyze later through the artifacts presented:
- Participation in AI infrastructure and value chains;
- Representation, Cultural Context and Indigenous Knowledge in AI; and
- Funding of AI infrastructure.
These observations have sparked important conversations about which actors truly constitute the African AI ecosystem and who is meaningfully participating in its development. In its current setup, the ecosystem perpetuates colonial power dynamics: it extracts from marginalized communities without delivering equitable outcomes.
This essay is an exposition of approaches that collectively are resisting the status quo. Researchers, AI developers, legal scholars, activists and communities working on and from the African continent are decolonizing and crafting new narratives for African AI and the underlying knowledge structures. African AI here means AI that is developed, governed, and deployed in ways that reflect the unique geographical, cultural, and political contexts of African communities, not simply AI that happens to be used in Africa (Wairegi, Omino, and Rutenberg, 2021). Further, my use of the term “decolonizing” in this essay refers to the use of knowledge, systems and actions to actively overturn what Aníbal Quijano termed “the colonial matrix of power that still endures after formal colonialism through global capitalism”. This colonial matrix of power operates through “control of economy (in the form of labor exploitation and resource extraction), control of authority (through institutions and governance mechanisms), control of gender and sexuality (social reproduction and representation in systems), and control of subjectivity and knowledge (through epistemology or education)” (Quijano, 2000; Mignolo and Walsh, 2018). It is racist because it centers modernism and acceptable practices on European cultures, thereby legitimizing current economic systems, science, and knowledge through Eurocentric norms and mechanisms while delegitimizing other ways of knowing and organizing.
By decolonial AI, I mean AI that decenters western knowledge structures through practical action rather than critique alone i.e. what Mignolo calls “delinking” from the colonial matrix of power, and what Walsh grounds in praxis as the fundamental mode of decolonial work (Mignolo, 2007; Walsh, in Mignolo and Walsh, 2018). My argument in this article is that the colonial matrix of power is playing out in real-time in how big tech and funding mechanisms operate on the continent under the guise of ensuring that the AI systems they are selling are inclusive. If this feels like a harsh judgement of their practices, it is because big tech’s labor exploitation and knowledge extraction is obscured through discourses of the beneficence of the systems being developed and deployed; that AI is a magic pill for development and prosperity (Hao, 2022; Hao, 2025; Mohamed, Png, and Isaac, 2020).
My artifacts demonstrate this by presenting concrete examples of tokenistic participation of Africans in AI value chains, extractive data collection practices funded by big tech and philanthropy, and the perpetuation of Western knowledge hierarchies through Large Language Models (LLMs) trained predominantly on Western languages and contexts, with African representation slapped on at the end. Rather than dwelling extensively on the problems therein (many of which are deeply investigated by scholars including Ruha Benjamin (2019), Safiya Umoja Noble (2018), Joy Buolamwini (2023), Timnit Gebru and colleagues (Bender et al., 2021), Abeba Birhane (2021), and Shoshana Zuboff (2019)), or diving deeper into theoretical frameworks of decolonization, the essay and artifacts I present offer insight into a decolonial moment. This is an optimistic collection of artifacts that showcase infrastructures and approaches that are being developed and an imaginary of what AI could look like when it prioritizes equitable distribution of resources and opportunities, technology transfer that builds local capacity, funding for local AI research and development in underrepresented regions, and non-extractive access to AI data and tools.
How do we begin to build otherwise from the colonial structures that enable the current extractive model, and to imagine AI that is accountable to the communities it draws from?
I propose non-standard and, in some cases, radical approaches to AI by showcasing alternative knowledge structures and highlighting how African communities are proactively organizing themselves to self-determine their futures in the AI age. The alternatives I present are not perfect blueprints nor do they completely address the problem of extractive AI. They are given as incomplete and in some cases already fragile examples that help us think through what it might mean to build otherwise, an otherwise where AI is accountable to the communities it draws from and is developed through practices that reimagine its underlying infrastructure.
Context
Communities lie at the heart of ‘African decolonial AI’. It is through their participation in the AI value chain – their knowledge systems, their cultural identities and their resources – that infrastructures of AI can be “delinked” from the extractive colonial matrix of power and reoriented towards the communities that generate its most fundamental ingredient: knowledge!
I use the term knowledge structures to describe the ecosystems of decisions, data, models, and governance that determine whose knowledge becomes the foundation of AI.
This structure is layered, and the first layer is data. AI systems are built from and learn from data, and the datasets that currently dominate AI training are drawn from Eurocentric contexts, sources, and demographics. Most African knowledge, in the form of language, culture, history and civic identity, is either absent from AI training datasets or, where it exists, has not been meaningfully captured on its own terms. This layer determines what gets recorded, preserved, and made legible to AI systems.
The second is the models layer of AI systems: what gets built from that data. This is where choices about AI architecture, how it scales, and how it is trained determine whose knowledge defines the AI system, and whose is operationalized, distorted or excluded. The most pervasive AI systems in use today are Large Language Models, developed on the assumption that large models trained on vast quantities of data are the standard against which all other approaches should be measured. This assumption concentrates model development in institutions with massive compute resources, overwhelmingly located in the West, and treats African language and cultural knowledge as an add-on to an existing system rather than a foundation for a different one.
The third is the governance layer that defines who controls access, ownership, and the flow of value generated by data and models. This is the layer that determines whether communities that contribute knowledge to AI systems ever benefit from what is built with it. Licensing frameworks, funding conditions, platform ownership, and database architecture are all governance decisions, and they are all currently structured in ways that can route value away from the communities that generate it.
The colonial matrix of power that this essay traces through the African AI ecosystem operates across all three layers simultaneously. The artifacts I present show interventions at different layers of this knowledge infrastructure and, in some cases, attempt to reimagine what the infrastructure itself could look like if it were designed from African communities outward rather than retrofitted onto systems designed elsewhere. They are not presented here as perfect solutions or embodiments of decolonial AI: some are not complete, others are fragile, and some have already disappeared. They are presented as signals that make visible an imagination of decolonial AI. The visibility this essay aims to bring is itself a form of knowledge infrastructure.
This leads to several questions that I use to analyze my artifacts and reflect upon:
- What does meaningful participation in AI development look like, and when does it become extractive or tokenistic?
- How do well-intentioned actors (funders, researchers, open-source advocates) reproduce colonial structures without knowing it?
- What would African AI look like if it were designed without reference to what already exists?
Artifacts
Artifact 1:
Mona Sloane, Emanuel Moss, Olaitan Awomolo, and Laura Forlano. 2022. Participation Is not a Design Fix for Machine Learning. In Proceedings of the 2nd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO ’22). Association for Computing Machinery, New York, NY, USA, Article 1, 1–6. https://doi.org/10.1145/3551624.3555285
Critical Commentary: A 2022 conference paper defining modes of participation in Machine Learning and when participation becomes exploitative or extractive for the communities contributing data or labor. When community engagement is not thoughtful and centred on maximizing the benefits to the said community, there is a likelihood of colonizing their data resources, their indigenous knowledge and their time. This paper stood out to me because it names the concept of ‘participation washing’, which I interpret as the notion that because a community was consulted or contributed some data at some point while a machine learning model was developed, they automatically ‘participated’. The paper identifies three modes of participation in ML (AI): participation as work, participation as consultation and participation as justice.
Artifact 2:
Critical Commentary: This is a presentation I made based on research I have done on strengthening national-level data governance capacities for government digital service provision in Africa through community engagement on AI and data rights.
My research further classifies participation in three other ways:
- Participation in the Design of AI: Collection of Data and Annotation
- Participation as Active Developers of AI: Model Development and Training
- Participation as Active Consumers of AI: AI Deployment and Use-cases
Decolonial AI and its governance are deeply participatory, grow from grassroots communities and movements, serve as a source of economic and social rejuvenation, and fully centre the interests of the communities that will benefit from them. A community that participates only at one of the stages, e.g. data collection, or that is consulted but never governs, has not meaningfully shaped the AI that will affect it. The threshold for what I mean by participatory decolonial AI is therefore not consultation but governance: communities must have meaningful presence and power across the full value chain in the design of data collection, in model development, and in the decisions that determine who owns and benefits from what is built.
Artifact 3:
https://resource-cms.springernature.com/springer-cms/rest/v1/content/27444214/data/v1 ; https://www.nature.com/articles/d41586-024-02579-z
Critical Commentary: This is a 2024 podcast that begins with a conversation between Nick Petrić Howe and Asmelash Teka Hadgu about the shortcomings of ChatGPT as a useful tool for non-English speakers and languages. Asmelash demonstrates in the podcast how a paid version of ChatGPT (GPT-4) produced gibberish when prompted in his native language Tigrinya, spoken across Tigray in Ethiopia and in Eritrea. Out of thirty prompts, only one returned a useful response. His example of “leg mind” as a translation of football captures the problem precisely: not a mistranslation but a fabrication, a model pattern-matching on fragments of a language it was never meaningfully trained on.
I have observed how tokenistic the participation of Africans living and working on the African continent in AI currently is. This shows up in a number of ways. For instance, some ‘local’ languages are now available in the major generative models; however, the models mostly produce inaccurate outputs because context or nuance is missing. This is because the developers of these models, most often Big Tech, spend just enough money to outsource basic AI skills in the form of data labelling or low-level training of AI to local talent, work that is oftentimes not fairly compensated (Perrigo, 2023). Where independent funding exists, it is in the form of grants for data collection from communities but not for active development of new models. This work is often done by experienced researchers or multi-disciplinary teams with lived experience who rarely have the resources to actively develop models on the data they have curated (Okorie and Marivate, 2024).



Artifact 4: https://peoplesarchive.ke/; https://x.com/ChaoTayiana/status/1810989045377314845?s=20
Critical Commentary: This is a link to a web-based archive that was put together to document the Gen-Z protests of 2024 in Kenya. It is an archive of physical and digital artifacts, from t-shirts to images, video, and audio recordings, preserving a truly historical moment (or era) in Kenya of youth-led revolt against poor governance and economic oppression. This archive was curated by a collective of Kenyan cultural curators and hosted by the African Digital Heritage Foundation led by Dr. Chao Tayiana. It aims to preserve knowledge from this movement, which is unlikely to be reflected in future history books and curricula and yet is momentous, standing at the precipice, if you will, of a newly designed Kenya.

Artifact 5: Prompt.pdf
Critical Commentary: This is a screenshot I took in September 2024 of a sample prompt that queries a custom GPT, a ChatGPT (OpenAI) model fine-tuned and trained with content from Kenya’s Finance Bill 2024. The custom GPT has since been taken down.

Artifact 6:
Critical Commentary: This is a screenshot I took in June 2025 of a GPT named Corrupt Politicians GPT that was trained by Kenyan youth activists but is no longer available. Several of these GPTs have since been decommissioned and cannot be queried (e.g. https://chatgpt.com/g/g-MVyroWjCS-corrupt-politicians-gpt), which suggests this approach is not sustainable.
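The two GPT artifacts above ground a chat model in a local document, but on closed infrastructure that can disappear. A minimal, self-hosted sketch of the same idea, using simple keyword retrieval over hypothetical bill sections rather than OpenAI’s platform, might look like:

```python
# Sketch of grounding answers in a local document so the tool survives
# independently of any platform. Section texts below are hypothetical.
BILL_SECTIONS = {
    "Section 12": "Introduces a motor vehicle tax of 2.5 percent of vehicle value.",
    "Section 30": "Raises excise duty on mobile money transfer fees.",
    "Section 44": "Imposes an eco levy on locally manufactured goods.",
}

def tokenize(text):
    """Lowercase words with trailing punctuation stripped."""
    return [w.strip(".,?").lower() for w in text.split()]

def retrieve(question, sections, k=1):
    """Return the k section names sharing the most words with the question."""
    q = set(tokenize(question))
    scored = [(len(q & set(tokenize(body))), name) for name, body in sections.items()]
    scored.sort(reverse=True)
    return [name for score, name in scored[:k] if score > 0]

print(retrieve("How does the bill tax mobile money transfers?", BILL_SECTIONS))
# -> ['Section 30']
```

Because the sections and the retrieval logic live locally, decommissioning by a platform owner cannot erase the community’s work; a stronger retriever or a locally hosted language model could be layered on top without changing that ownership structure.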
Artifact 7:

Critical Commentary: These are examples of federated databases implemented to protect education data and health data from African communities. Federated databases are systems that integrate multiple, distributed databases while allowing them to remain autonomous and independently managed. Rather than centralizing all data in one location, federation creates a unified interface that lets users query and access data across multiple databases as if they were a single system.
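The federation pattern described above can be sketched in miniature. Assuming two hypothetical institutional stores, a unified query interface over autonomous in-memory SQLite databases might look like:

```python
# Minimal sketch of database federation: each institution keeps its own
# autonomous store, and a unified interface queries across them.
# Institution and record names are hypothetical.
import sqlite3

def make_local_db(records):
    """Each institution manages its own independent database."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE records (student TEXT, credential TEXT)")
    db.executemany("INSERT INTO records VALUES (?, ?)", records)
    return db

# Two autonomous nodes: the data never leaves its home institution.
nodes = {
    "university_a": make_local_db([("amina", "BSc Computer Science")]),
    "college_b": make_local_db([("kwame", "Diploma in Nursing")]),
}

def federated_query(nodes, student):
    """Fan the query out to every node and merge the results, so users
    see a single system while each database remains locally governed."""
    results = []
    for name, db in nodes.items():
        rows = db.execute(
            "SELECT credential FROM records WHERE student = ?", (student,)
        ).fetchall()
        results.extend((name, cred) for (cred,) in rows)
    return results

print(federated_query(nodes, "amina"))
# -> [('university_a', 'BSc Computer Science')]
```

The design choice is the point: because control sits at each node, data sovereignty is a property of the architecture rather than a policy bolted on afterwards.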

Artifact 8: https://lacunafund.org/
Critical Commentary: Lacuna Fund is an initiative that ran from 2020 until 2025 with the aim of supporting data scientists, researchers, and social entrepreneurs with funding to either produce new labeled datasets to address an underserved population or problem, augment existing datasets to be more representative, or update old datasets to be more sustainable. The fund was founded as a funder collaborative between The Rockefeller Foundation, Google.org, Canada’s International Development Research Centre (IDRC), and GIZ on behalf of Germany’s Federal Ministry for Economic Cooperation and Development (BMZ), and later expanded to a multi-stakeholder engagement supported by a range of development, philanthropic, and research institutions including Wellcome Trust. In July 2025, leadership of the fund was transferred to partner institutions in Africa and Latin America, including the African Centre for Technology Studies (ACTS), CENIA (Chile’s National Centre for Artificial Intelligence), Masakhane, and the University of Pretoria Data Science for Social Impact Research Group and African Institute for Data Science and AI.
Artifact 9: https://licensingafricandatasets.com/nwulite-obodo-license
Critical Commentary: The NOODL license (Nwulite Obodo Open Data License) is a tiered licensing framework developed in 2024 by researchers at the Data Science Law Lab at the University of Pretoria, led by Associate Professor Chijioke Okorie and co-investigator Melissa Omino, in collaboration with the Data Science for Social Impact group and the Centre for Intellectual Property and Information Technology Law. “Nwulite Obodo” is Igbo for “raising, reviving, and/or building the community.” The license establishes differentiated access terms based on users’ geography and development context, distinguishing between users in Africa and developing countries and those outside these regions, and incorporates benefit-sharing obligations that ensure value flows back to original contributors and communities before broader research or commercial access is granted.
Artifact 10: https://huggingface.co/lelapa/InkubaLM-0.4B

Critical Commentary: InkubaLM is a foundational small language model launched in August 2024 by Lelapa AI, a Johannesburg-based AI research and product lab. It has been trained from scratch on 1.9 billion tokens of data across five African languages: IsiZulu, Yoruba, Hausa, Swahili, and IsiXhosa. InkubaLM is a robust, compact model built to carry significant load without requiring the extensive compute resources of larger models. It is the first African multilingual small language model trained from scratch on African languages rather than adapted from existing Western models.
Analytical Questions
Question 1: What does meaningful participation in AI development actually look like, and when does it become extractive or tokenistic?
Meaningful participation, as Sloane et al. argue, is not a single act but a continuum, one that runs from participation as work (contributing data or labor), through participation as consultation (being asked), to participation as justice (having genuine power over outcomes). My own research adds a value-chain dimension to this taxonomy. Meaningful participation requires presence not just at the data collection stage but across the governance of the full AI value chain: in the design and annotation of training data, in the active development and training of models, and in the deployment and use of AI tools within communities. African participation in AI ecosystems is concentrated almost entirely at the first stage of data collection and annotation, while model development and governance remain overwhelmingly in the hands of institutions in Western contexts.
The podcast conversation between Nick Petrić Howe and Asmelash Teka Hadgu makes this problem tangible. When Tigrinya speakers interact with ChatGPT, the model produces outputs that are not merely inaccurate but linguistically incoherent. This is a participation failure because a language was added to a model without the knowledge, context, and lived experience of its community shaping how that language is represented.
The Finance Bill GPT artifact, however, offers a counterpoint. Kenyan developers with direct lived experience built an AI tool in real time, in response to an urgent community need, making Kenya’s Finance Bill 2024 legible and queryable during a moment of political crisis. This is participation as justice in practice: people with intimate knowledge of a problem building a tool on their own terms, for their own community, without waiting for institutional permission. Yet it is also limited, as we see with the second GPT, Corrupt Politicians GPT. Both tools were built on OpenAI’s closed infrastructure. When they were decommissioned, the community lost not only the tool but the value of all the labor and knowledge that had gone into building it. Authentic participation, captured by extractive architecture, still becomes tokenistic by default.
Question 2: How do well-intentioned actors (funders, researchers, open-source advocates) reproduce colonial structures without knowing it?
Work on the Corrupt Politicians GPT began as genuine community participation: it represented agency and urgency and was locally grounded, developed by youth with direct stakes in the outcome. But the infrastructure it depended on was not theirs. While we may not characterise OpenAI’s platform as malicious, its architecture extracts regardless of intent: training data flows into a commercial system, value accumulates at the platform level, and when the tools are decommissioned the community’s contributions are lost along the way. The colonial matrix of power does not require bad actors; if the underlying infrastructure goes unexamined, it becomes a tool for extraction.
Initiatives such as Lacuna Fund, on the other hand, represent an institutional example of how good intentions can reproduce extraction at scale. Motivated by genuine commitments to inclusion, the fund supported African researchers and data scientists to produce labeled datasets for underserved populations and problems. But funding conditions that mandated these datasets be open-sourced, while contributing to the commons and African scholarship, in practice meant that high-quality, community-sourced African data became freely accessible to global North institutions and companies before the researchers who collected it had extracted fair value from their own labor. A researcher who spends years curating an African language dataset, then is required by a funder to open-source it immediately, has effectively donated the value of that work to whoever has the resources to build on top of it, which may not be another African researcher.
The NOODL license is the community-generated diagnosis of, and response to, this problem. Its tiered framework, which establishes clear rules for how data can be collected, used, and monetized at different stages, ensures that value flows back to original contributors before broader research access is granted. It names the mechanism of extraction directly: premature openness. Presenting this artifact alongside Lacuna Fund makes visible that the problem is not funding itself but the governance conditions attached to it. The mandate to open-source is itself a form of colonial imposition when it ignores the economic realities of the communities whose knowledge it governs.
The Tigrinya podcast is a second illustration of the same dynamic at the model level. OpenAI’s inclusion of Tigrinya was well intentioned and a good-faith gesture toward linguistic diversity. But executing inclusion through existing eurocentric development pipelines, without native speakers shaping the model’s development, produces harm dressed as progress.
Good intentions, when operationalized through existing extractive structures, may inherently reproduce those structures. That is precisely what makes the colonial matrix of power so durable: it does not depend on malice but is woven into architecture.
Question 3: What would African AI look like if it were designed without reference to what already exists?
InkubaLM is an example of an AI model that does not attempt to be a reduced version of an existing large language model. It starts from different assumptions about what scale is necessary, what resources are available, and what a language model is actually for. The dominant paradigm in AI holds that larger language models trained on vaster data are always superior, an assumption that stems from AI development in institutions with enormous compute resources, overwhelmingly in Western contexts. InkubaLM’s logic opposes this, right from its naming: like the dung beetle, it carries significant load without requiring massive infrastructure. It is not developed as a smaller ChatGPT but as something else.
The federated databases examples of EduID Africa and VODAN Africa also answer the same question at the infrastructure level. Federation is an architectural choice, not merely a policy preference. By keeping data distributed and locally managed rather than centralized, these systems embed data sovereignty into their structure. Communities do not have to petition for control of their data because control was never ceded. This reflects a fundamentally different set of assumptions about where data should live and who should govern access to it, reflective of African governance values rather than being retrofitted onto Silicon Valley infrastructure built for different purposes.
The People’s Archive offers a speculative approach. Built by volunteers outside any institutional mandate, it preserves a historical moment, Kenya’s Gen-Z protests of 2024, that official records will likely distort or omit: the t-shirts, images, videos, and audio recordings of a youth-led revolt that changed the country. Its latent potential as a community-owned AI training dataset points toward what AI designed from social movements, rather than from research agendas or commercial incentives, might look like. Its limitation is honest too: it is curator-led rather than fully community-led, which means it replicates some of the participation dynamics it seeks to escape. But that limitation is itself an answer. African AI designed without reference to what already exists would require new models of collective stewardship that do not yet fully exist, and that will need to be built alongside the tools themselves.
Together these three artifacts build a single argument: African AI does not have to be legible to Silicon Valley norms to be valid. It does not have to be large to be powerful. It does not have to be open to be generous. And it does not have to wait for external permission to begin.
Conclusion
The artifacts gathered in this essay do not tell a single story; they sit in tension with one another. The Gen-Z developers who built the Finance Bill GPT participated meaningfully, with urgency and lived expertise, and yet the infrastructure they built on captured the value of their labor. The funders who mandated open-sourcing of African language datasets believed they were advancing inclusion, and yet they inadvertently routed community knowledge toward global North institutions before local researchers could extract fair value from it. OpenAI included Tigrinya in its model and produced gibberish. Good intentions, operationalized through existing structures, reproduce those structures.
This is what the colonial matrix of power looks like in the AI age: embedded in the architecture of systems. It is embedded in platform ownership, in funding conditions, in the assumption that openness is always progressive, in the scaling logic that treats a large model trained on Western data as the default against which all other models are measured. It does not require bad actors, only infrastructure that goes unexamined.
The artifacts in this essay intervene at different layers (data, models, governance) and with different tools. The NOODL license intervenes at the governance layer, insisting that value must flow back to communities before it flows outward. Federated databases intervene at the infrastructure layer, embedding data sovereignty into architecture rather than treating it as a policy afterthought. InkubaLM intervenes at the model layer, rejecting the assumption that African AI should be a smaller, cheaper version of what already exists. The People’s Archive intervenes at the layer of memory itself, insisting that a historical moment belongs to the people who lived it.
None of these interventions is complete. The archive is curator-led rather than community-led. The NOODL license is new and untested at scale. Lacuna Fund, which seeded much of the data infrastructure these alternatives depend on, closed in 2025. InkubaLM is ambitious but under-resourced relative to the models it must coexist with. The Gen-Z GPTs are gone. These are not failure stories, but they are honest ones. They show that decolonial AI is not a destination but a practice, constantly contested and constantly at risk of being absorbed back into the structures it is trying to displace.
What they collectively make visible, however, is a different imaginary. African AI does not have to be legible to Silicon Valley to be valid. It does not have to be large to be powerful. It does not have to be open to be generous. And it does not have to wait for external permission from funders, from Big Tech, from global governance frameworks to begin. The dung beetle does not wait for a larger creature to carry the load. It builds the capacity to carry what matters, with what it has, on its own terms.
References
Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press.
Bender, E. M., Gebru, T., McMillan-Major, A., and Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21). ACM. https://doi.org/10.1145/3442188.3445922
Birhane, A. (2021). Algorithmic Injustice: A Relational Ethics Approach. Patterns, 2(2), 100205. https://doi.org/10.1016/j.patter.2021.100205
Buolamwini, J. (2023). Unmasking AI: My Mission to Protect What Is Human in a World of Machines. Random House.
Carnegie Endowment for International Peace. (2024). How African NLP experts are navigating the challenges of copyright, innovation, and access. https://carnegieendowment.org/europe/research/2024/04/how-african-nlp-experts-are-navigating-the-challenges-of-copyright-innovation-and-access
Hao, K. (2022). Artificial intelligence is creating a new colonial world order. MIT Technology Review, April 19. https://www.technologyreview.com/2022/04/19/1049592/artificial-intelligence-colonialism/
Hao, K. (2025). Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. Penguin Press.
Mignolo, W. D. (2007). Delinking: The Rhetoric of Modernity, the Logic of Coloniality and the Grammar of Decoloniality. Cultural Studies, 21(2–3), 449–514. https://doi.org/10.1080/09502380601162647
Mignolo, W. D. and Walsh, C. E. (2018). On Decoloniality: Concepts, Analytics, Praxis. Duke University Press.
Mohamed, S., Png, M.-T., and Isaac, W. (2020). Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. Philosophy & Technology, 33, 659–684. https://doi.org/10.1007/s13347-020-00405-8
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.
Okorie, C. and Omino, M. (2024). Nwulite Obodo Open Data License (NOODL), Version 1.0. Data Science Law Lab, University of Pretoria. https://licensingafricandatasets.com/nwulite-obodo-license
Ògúnrẹ̀mí, T., Nekoto, W. O., and Samuel, S. (2023). Decolonizing NLP for “Low-resource Languages”: Applying Abebe Birhane’s Relational Ethics. GRACE: Global Review of AI Community Ethics, 1(1). https://doi.org/10.60690/q2xhtx18
Perrigo, B. (2023). Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. TIME, January 18. https://time.com/6247678/openai-chatgpt-kenya-workers/
Quijano, A. (2000). Coloniality of Power, Eurocentrism, and Latin America. Nepantla: Views from South, 1(3), 533–580.
Quijano, A. (2007). Coloniality and Modernity/Rationality. Cultural Studies, 21(2–3), 168–178. https://doi.org/10.1080/09502380601164353
Sloane, M., Moss, E., Awomolo, O., and Forlano, L. (2022). Participation Is not a Design Fix for Machine Learning. In Proceedings of the 2nd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO ’22). ACM. https://doi.org/10.1145/3551624.3555285
Tonja, A. L., Dossou, B. F. P., Ojo, J., Rajab, J., Thior, F., Wairagala, E. P., Anuoluwapo, A., Moiloa, P., Abbott, J., Marivate, V., et al. (2024). InkubaLM: A small language model for low-resource African languages. arXiv preprint arXiv:2408.17024.
Wairegi, A., Omino, M., and Rutenberg, I. (2021). AI in Africa: Framing AI through an African Lens. Communication, technologies et développement, 10. https://doi.org/10.4000/ctd.4775
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.