Do you want your technical work to matter beyond the sprint cycle?

Code for Africa (CfA), through the InclusionAI Research Network and with the A+ Alliance, is offering a 6-month fellowship for an AI Engineer, based anywhere in the world, to join our TechLab in implementing an AI Innovation Sandbox.

The AI Innovation Sandbox is funded by the International Development Research Centre (IDRC) and the Foreign, Commonwealth and Development Office (FCDO), and is designed to support women-led and women-focused organisations across Sub-Saharan Africa in building meaningful, ethical AI tools for their communities.

Through the fellowship, you will architect AI strategy and implementation. You will own how we adopt, adapt, and deploy AI across multiple products and those of our partner organisations. The core of this role is building with AI: designing agent systems that do real work, engineering the context that makes models useful, evaluating whether what you’ve built actually holds up in the real world, and iterating until it does.

This is an AI Engineer role, not an ML Engineer role. We’re looking for someone who builds systems and products using AI — with enough understanding of how models work to make good decisions, not necessarily to train them from scratch.

The successful candidate will work as part of a multinational and multilingual team, using digital collaboration tools to create content for a global audience and international media partners.

Required: Minimum requirements include:

  • 4+ years building and shipping software, with meaningful hands-on experience building AI-powered products or systems
  • Fluency in Python and TypeScript
  • Demonstrated experience designing and building agentic AI systems: multi-step task execution, tool use, memory, planning, and error recovery
  • Strong context engineering instincts: you think about the full information architecture a model needs to be useful, not just how to phrase a prompt
  • A systematic approach to evals: you design for measurability, not just intuition, and you know how to tell whether an AI feature is actually working
  • Familiarity with the broader AI ecosystem: open-source tooling alongside commercial APIs and nonprofit access programmes from leading labs
  • Strong system design instincts around AI: you think about latency, fallbacks, cost, and reliability, not just model quality
  • Sound judgement on responsible AI: bias, fairness, transparency, and the limits of what a model should be asked to do
  • The ability to communicate clearly across the room: to an engineer debugging a pipeline and to a journalist or funder asking what it all means
  • Fluency in English
  • A degree in Computer Science, Engineering, or a related field — or equivalent experience you can point to through your work and portfolio

Preferred: candidates who are able to demonstrate the following will have an advantage:

  • Experience deploying open-source LLMs in production environments
  • Existing relationships or experience working with AI lab programmes: Anthropic for Startups/Nonprofits, OpenAI for Nonprofits, Google.org AI access, or similar
  • Familiarity with vector databases, embedding models, and knowledge graph approaches
  • Experience with multimodal AI systems
  • Background in containerisation and cloud infrastructure (Docker, Kubernetes, cloud-hosted model deployment)
  • Experience in civic technology, investigative journalism, international development, or human rights contexts
  • Experience with multilingual NLP, particularly for low-resource or African languages
  • Fluency in French, Arabic, KiSwahili, or another major African language
  • Experience working across international, cross-cultural technical teams

Language and Location Requirements:

  • Location: Fully remote — open to candidates anywhere in the world, with a preference for those based in Africa
  • Languages: English required; French, Arabic, KiSwahili, or any other major African language is a significant advantage

About the Role:
The role sits within the TechLab. You will collaborate with a distributed, multidisciplinary team of engineers, designers, data journalists, and product managers. You will also engage directly with external partners ranging from investigative newsrooms to human rights defenders.

CfA takes a pragmatic, portfolio approach to the AI landscape. Open-source models such as Mistral, Qwen, Gemma and others sit at the core of our infrastructure where data sovereignty and auditability matter most. But we also engage with leading AI labs through their nonprofit programmes, when frontier capability serves a specific need. Part of this role is maintaining the relationships and technical fluency to move across that landscape intelligently.

One of your early priorities will be leading the technical architecture of our AI Innovation Sandbox, a self-contained ecosystem that gives civil society organisations access to this full range of AI infrastructure and tooling. This is a flagship initiative, but it is one of many: you will be expected to identify, shape, and lead AI work across the full breadth of CfA’s portfolio as the field evolves.

Responsibilities: Your work will include:

  • Navigate the AI model landscape
    • Make and maintain principled decisions about when to use open-source models, when to leverage frontier models through nonprofit partnerships, and how to architect systems that avoid lock-in either way
    • Cultivate relationships with leading AI labs such as Anthropic, OpenAI, and others, staying close to how their technology, access programmes, and priorities are evolving
    • Monitor the broader ecosystem continuously, bringing the right capabilities into CfA’s work so that neither we nor our partners and peers fall behind
  • Engineer context, not just prompts
    • Design the full context that makes models useful: system instructions, retrieval strategies, memory architecture, tool outputs, conversation state, and structured reasoning chains, not just individual prompts
    • Build and maintain context engineering frameworks, agent templates, and workflow orchestration tools that the wider team and partner organisations can use without deep AI expertise
    • Create domain-specific AI assistants grounded in curated, high-quality knowledge bases, making specialist knowledge accessible and actionable at scale
  • Design and run evals
    • Build evaluation frameworks that give the team genuine confidence that AI systems are working as intended, not just anecdotally, but measurably
    • Treat evals as a first-class engineering discipline: defining what good looks like before building, not after
    • Identify failure modes proactively, particularly in African linguistic and cultural contexts where standard benchmarks often fall short
  • Build agent systems that do real work
    • Design and develop AI agents capable of planning, executing multi-step tasks, using external tools and APIs, handling errors gracefully, and operating with appropriate degrees of autonomy
    • Move the team beyond single-turn interactions toward systems that can reason, retrieve, act, and self-correct across longer workflows
    • Apply agentic thinking to how the team itself works: using AI-assisted development, automated pipelines, and agent tooling to move faster and build better across the portfolio
  • Build and ship AI-powered products
    • Design and develop AI features across CfA’s platforms, from RAG systems and agentic pipelines to tool integrations and multimodal applications
    • Collaborate with product managers and designers from the start of a feature, not the end, to turn user needs into sound technical decisions and technical possibilities into experiences people can actually use
    • Own the full cycle from prototype to production, including the unglamorous parts: versioning, output testing, edge case handling, and knowing when to ship and when to go back
  • Drive responsible AI practice
    • Embed bias detection, ethical review, and human rights considerations into how CfA builds and deploys AI, particularly in African linguistic, political, and social contexts
    • Develop clear documentation and governance protocols that ensure accountability and auditability across the portfolio
    • Represent CfA’s AI thinking externally: in publications, partnerships, conferences, and peer networks
  • Build capability across the organisation and beyond
    • Grow CfA’s internal AI literacy across technical and non-technical colleagues
    • Support partner organisations, such as newsrooms, civil society groups, and researchers, through direct technical guidance and capacity building
    • Stay closely connected to the global applied AI community, bringing relevant advances back into CfA’s work

Fellowship Package:

  • Duration: 6 months
  • Stipend: A competitive monthly stipend, commensurate with experience, with opportunities for performance-based growth in both career path and public stature
  • Mentorship: Guidance from CfA’s TechLab and support from partner institutions
  • Network: Collaborate with the InclusionAI Research Network
  • Skills acquisition: Ongoing opportunities to learn cutting-edge skills, techniques, and technologies to future-proof yourself in a rapidly evolving industry
  • Showcase: A chance to shine on a global stage, writing for international audiences and collaborating with colleagues around the world

How to apply:

Please fill in this form by April 30, 2026. Applications are reviewed on a rolling basis.


About Us:

Code for Africa (CfA) is the continent’s largest network of indigenous African civic technology and investigative data journalism laboratories, with over 120 staff in 26 countries. We build digital democracy solutions that give citizens unfettered access to actionable information, empowering them to make informed decisions and strengthening civic engagement for improved public governance and accountability.

This includes building infrastructure such as the continent’s largest open data portal, open.AFRICA, the largest open source civic software portal, commons.AFRICA, and the largest repository of investigative document-based evidence, source.AFRICA. It also includes incubating initiatives as diverse as the africanDRONE network, which gives citizens their own ‘eyes in the sky’; the PesaCheck fact-checking initiative in 12 African countries; and the sensors.AFRICA remote-sensing citizen science initiative to combat air and water pollution.

CfA also incubates the African Network of Centres for Investigative Reporting (ANCIR), as an association of the continent’s best investigative newsrooms, ranging from large traditional mainstream media to smaller specialist units. ANCIR member newsrooms investigate crooked politicians, organised crime and big business. The iLAB is ANCIR’s in-house digital forensic unit, with teams in east, south and west Africa. ANCIR uses its resources to strengthen newsrooms’ own internal capacity, by providing access to the world’s best whistleblower encryption and investigative semantic analysis technologies, as well as skills development, and seed grants for cross-border collaboration.


At CfA, we don’t just accept differences – we celebrate them, support them, and thrive on them for the benefit of our employees, our products and our community. CfA is proud to be an equal opportunity workplace and is an affirmative action employer. If you have a disability or special need that requires accommodation, please let us know.

To all recruitment agencies: CfA does not accept agency resumes. Please do not forward resumes to our employment application line, CfA employees or any other CfA contact. CfA is not responsible for any fees related to unsolicited resumes.

Please note: Due to high volumes of applications, we are unable to respond to each one individually. If you are selected for an interview, we will contact you.