Governing the Ungovernable: Why Kenya's AI Bill leaves women exposed
National
By Maryann Muganda | Apr 01, 2026
The video spread in minutes. Across TikTok, Facebook, WhatsApp groups and X timelines, a clip appeared to show nominated senator Karen Nyamu in an intimate moment with Busia senator Okiya Omtatah, an image placed in a context she had never consented to. By the time questions began to surface, the damage had already been done.
In reality, it had started as an ordinary photograph taken in November 2024: two colleagues standing side by side. What followed was fabricated content, altered, recirculated, and consumed as truth.
It is a pattern that now repeats with numbing regularity across Kenya’s digital spaces. Altered photographs. Deepfake videos. Politicians placed in coffins. Women digitally undressed and coded insults layered in Sheng. Online abuse is growing faster, more sophisticated, and increasingly difficult to contain.
As AI-driven harm escalates, Kenya’s proposed Artificial Intelligence Bill, 2026, focuses on regulating systems, while largely overlooking the lived reality of those most affected—women targeted by that abuse.
Introduced by nominated Senator Karen Nyamu and now before Parliament, the Bill seeks to regulate the development and deployment of AI in Kenya, establish an Office of the AI Commissioner, and impose penalties of up to Ksh5 million for misuse. What it does not do is directly confront the gendered abuse already unfolding online.
It is, by any measure, a historic moment. If passed, Kenya would become one of the first countries on the continent to enact comprehensive AI legislation.
Kenya’s Bill borrows heavily from the European Union’s AI Act, widely seen as the first comprehensive attempt to regulate artificial intelligence. Both take a risk-based approach, applying stricter rules where the potential for harm is higher. A music recommendation app would face minimal restrictions. Systems used in hiring, policing, or even so-called “nudifier” apps would be treated as high-risk and subject to tighter oversight.
Where the EU stands apart is in how far it goes. It clearly defines what is off-limits: AI that manipulates behaviour, targets vulnerable groups, or enables mass surveillance. It also sets detailed rules for companies, from how data is handled to how AI-driven decisions are explained. At the centre of it all is a focus on rights: privacy, dignity, and equality. Breaches can attract fines of up to €35 million or 7 per cent of global turnover.
The EU also requires AI-generated content, including deepfakes, to be clearly labelled using tools such as watermarking and machine-readable metadata.
Kenya’s Bill follows a similar structure, but with less detail and fewer answers. It introduces oversight and compliance requirements, yet leaves key issues unresolved. How will systems be classified in practice? How will regulators work together? And how will rapidly evolving general-purpose AI, such as chatbots, image generators, and tools that summarise documents or draft emails, be managed?
The real gap, however, lies in how harm is addressed. Even the EU framework, which has been criticised for not going far enough, is already moving beyond labelling to tackle abuse more directly. Through separate laws, it is seeking to criminalise practices such as the creation and distribution of non-consensual deepfakes.
Kenya’s Bill takes a lighter approach, focusing on consent and labelling, for instance by requiring AI-generated content to be identified as such. But it does not clearly define or prohibit harms like the creation and spread of fake intimate images.
What this creates is a disconnect: a law that sets out how AI systems should be governed, but remains less clear on how the people harmed by those systems, especially women, can seek protection or justice.
Broad AI regulation cannot substitute for targeted criminal law.
Across the legal fraternity, civil society, and technology communities, a growing concern persists: What good is an AI law that does not define, let alone criminalise, the very harms it seeks to address?
The numbers underscore that urgency. A 2025 report by the Women Advocates Research and Documentation Centre (WARDC), UN Women and FIDA Kenya found that 99.3 per cent of women and girls in Kenya have experienced technology-facilitated violence.
According to KICTANet, women politicians, journalists and activists are the primary targets of this surge. The pattern is far from abstract. During the 2022 elections, deepfake technology was deployed to generate explicit or defamatory content aimed at discrediting women leaders, including a doctored digital card targeting a candidate. Martha Karua, who ran for Deputy President, was subjected to gendered disinformation designed to derail her campaign. The chilling effect has been significant: many women candidates avoided social media altogether during campaigns, with some withdrawing from online engagement entirely to escape harassment.
Journalists face a parallel crisis. A UNESCO study found that 73 per cent of women journalists have experienced online violence in the course of their work, with one in four reporting death threats. A UN Women study of more than 6,400 respondents further revealed that online violence disproportionately affects writers and public communicators on human rights, with 24 per cent reporting work-related attacks assisted by AI.
Globally, the scale of deepfake abuse is staggering: 98 per cent of all deepfake videos online are pornographic, and 99 per cent of those targeted are women. KICTANet estimates that 95 per cent of AI-generated deepfakes in Kenya target women with obscene content.
In Kenya, the Federation of Women Lawyers (FIDA-Kenya) identifies intimate partners, acquaintances, and individuals once trusted with personal information as the most common perpetrators. Familiarity becomes a weapon, turning private images, conversations, and digital footprints into tools of harm.
Against this backdrop, the Artificial Intelligence Bill, 2026, arrives with considerable ambition. The draft law proposes the establishment of an AI Commissioner, appointed by the President, to regulate AI innovation, conduct risk assessments, carry out audits, and maintain a public register of high-risk systems, including those deployed by government institutions.
Developers of such systems would be required to carry out human rights impact assessments, ensure transparency, and retain detailed records of training data, outputs, and performance metrics for at least five years, in line with Kenya’s Data Protection Act.
On manipulated imagery, the Bill requires explicit consent where AI is used to generate or alter a person’s image, voice, or likeness, and mandates that such content be clearly labelled as AI-generated. It also proposes penalties, including fines of up to Ksh5 million or imprisonment, for those who develop or distribute harmful AI systems or content.
However, the Bill leans heavily on consent and labelling. It does not clearly define or explicitly criminalise harms such as the creation and spread of fake intimate images—leaving a gap between what the proposed law regulates and what people, especially women, are already experiencing online.
That gap is not just theoretical. Legal and technology experts say it reflects deeper structural problems in how Kenya is approaching AI regulation.
Dr Mugambi Laibuta, an Advocate of the High Court of Kenya and data governance expert, argues that the Bill’s ambition is its biggest flaw.
"This Bill is fundamentally misguided," he says. "The Ministry of Information, Communication and the Digital Economy has already adopted a national AI strategy, and one of its deliverables is an AI policy currently being developed. You don't jump from strategy to legislation. AI is dynamic and evolving. If you legislate too quickly, you risk missing critical issues."
Dr Laibuta argues that Kenya is skipping a foundational step. Policy is where gaps should be identified, including the lack of a clear legal definition of technology-facilitated gender-based violence (TFGBV), and where decisions are made on whether to respond through law or policy.
"For TFGBV," he says, "an amendment to the Computer Misuse and Cybercrimes Act would be more efficient. You give it a very clear provision within the cybercrime law, with a gender dimension. At the moment, our law has no clear definition of TFGBV."
He is also sceptical about the institutional framework the Bill proposes. "Why create an AI Commissioner when we already have the Data Protection Commissioner, the Communications Authority, and the ICT Authority? Their mandate can be amended to accommodate AI governance rather than standing up an entirely new commission."
Others share similar concerns, particularly around how the Bill has been adapted from global models.
Bonface Asiligwa, President of the ISACA Kenya Chapter, echoed these concerns on The Situation Room morning show on Spice FM.
"The Bill has essentially been lifted from the EU Act as is, without customisation to our context. Kenya's maturity in AI utilisation is far below the EU's. We are technology adopters, not technology creators. The bill focuses on punitive mechanisms while watering down the provisions that would strategically position Kenya to develop and benefit from AI."
He points to M-Pesa as a cautionary precedent: had Kenya's mobile money revolution been met with heavy regulation at inception, the industry that helped transform financial inclusion across East Africa might never have flourished.
"If we start regulating AI punitively at this stage, we curtail it before we can leverage its strengths."
For developers, the concern is not just policy design—but how the law could be applied in practice.
Software developer Rose Njeri Tunguru raises concern over legal overreach. She notes that sections 22 and 23 of the Computer Misuse and Cybercrimes Act were declared unconstitutional shortly before this Bill appeared.
"The Bill repackages the same broad, vague language now under the label of AI regulation. It looks like a backdoor way of reintroducing the parts of the Cybercrimes Act that the courts struck down."
She is equally concerned by the powers proposed for the AI Commissioner's office. "The fine is high, Ksh5 million, and the terms are broad. That opens the door to abuse. This becomes a toll booth."
Beyond legal design, there are also questions about how AI actually functions in Kenya’s digital environment.
Michael Michie, CEO and co-founder of Everse Technology Africa, highlights a specific vulnerability: Kenyan users who code-switch between English, Swahili, Sheng, and local dialects can bypass AI safety guidelines built predominantly on English-language training data.
"The model doesn't understand the language, so it may interpret instructions incorrectly and perform malicious acts. The Global South, Kenya included, needs to come up with stronger guidelines tailored to our linguistic context and use these to pressure platforms to meet safety thresholds."
The risks extend beyond technical and legal concerns into politics—especially with the 2027 elections approaching.
Leticia Mwavishi, Legal Counsel at FIDA-Kenya, warns that AI will be used to target women in politics.
"Women who step forward to run for office during an electioneering period face violence targeted specifically to demean their confidence, lower their visibility, and reduce the numbers who make it to the ballot," she says.
"We are going to see a lot of AI-driven content deployed against women in politics. The Bill must be expanded. The definitions must be broader. And our regulatory bodies, the Office of the Data Protection Commissioner, police, and EACC need to understand the magnitude of how AI is manifesting in their spaces."
These risks are compounded by weak enforcement systems.
Dr Laibuta describes a criminal justice ecosystem that remains largely unprepared for technology-facilitated abuse.
"If someone reports TFGBV at Bamburi police station, or Meru, or Garissa, the officer there has no idea what you are talking about. Unless you go to the Directorate of Criminal Investigations' cybercrime unit, the system fails you. Capacity has not been built across the police, the Office of the Director of Public Prosecutions, or the judiciary."
Survivors who do report face a protracted, retraumatising process that many ultimately abandon.
"They give up midway and drop the case," he says.
Even where interventions exist, they remain fragmented. The Coalition on Violence against Women (COVAW), through its Transform Digital Spaces programme, has trained law enforcement officers to handle TFGBV cases and worked with technology companies to detect online abuse across Kenya’s multilingual digital landscape. Experts say this work should not be ad hoc; it should already be standard practice across law enforcement.
Speaking during The Situation Room morning show on Spice FM on March 25, 2026, Senator Karen Nyamu acknowledged that the Bill does not expressly address technology-facilitated gender-based violence.
“The Bill doesn’t expressly address technology-facilitated gender-based violence, but I think it is covered under provisions requiring consent to use someone’s image or voice,” she said. “If violated, there is a Ksh5 million fine and possible imprisonment.”
However, she admitted that the draft does not capture the gendered nature of such abuse.
“It does not address the gender aspect, but it is good feedback. At the public participation stage, it is something we could propose and enrich the Bill,” Nyamu said, describing the legislation as still in its early stages.
She defended the Bill as a necessary step in regulating a rapidly evolving space.
“We cannot be consumers of AI without regulating it. Kenyans are consuming fake information and taking it as gospel truth. Without regulation, AI could undermine trust and even replace jobs,” she said.
A central provision of the Bill focuses on protecting personal identity, requiring consent before the use of an individual’s image or voice, a safeguard Nyamu argues is critical in an era where AI can replicate human likeness.
“This is a progressive piece of legislation that protects personal rights while allowing innovation,” she said.
She also pushed back against criticism over technical gaps.
“Members of Parliament may not be technical experts, but we understand the lived realities of Kenyans,” she said.
Yet critics remain unconvinced. Privacy and data protection expert Risper Onyango argues that consent alone is not enough. “Even beyond consent, the law should be explicit in stating what is prohibited,” she said. “Technology-facilitated gender-based violence has become a serious nuisance in Kenya, and AI governance should reflect the realities we are dealing with by singling it out as a prohibited use.”
She questions the Bill’s scope. “If this law is meant to protect Kenyans, including women and children, then it must clearly define whether it is regulating the platforms, the users, or the harm itself.”
Onyango warns that without clearly defined offences and targeted regulation, digital spaces will remain unsafe.
“The reality is that digital spaces are not safe for Kenyan women and girls, and any meaningful AI law must confront that directly,” she said.
For Kenyan women, the risks are not theoretical—they are already playing out in real time. If passed in its current form, Kenya’s AI Bill may succeed in regulating technology, but fall short in protecting those most exposed to it. And as AI becomes more embedded in everyday life, that gap will only widen, faster than the law can keep up.
This analysis/article was produced as part of the Gender+AI Reporting Fellowship, with support from the Africa Women’s Journalism Project (AWJP) in partnership with DW Akademie. The journalist used AI tools as research aids to review and summarise relevant policy and research documents and extract key statistics. All analysis, editorial decisions and final wording were done by the reporter, in line with The Standard Group’s editorial standards.