
Liability and Consent in Technology Facilitated Gender Based Violence in the Age of AI

Author: Lillies Akinyi
Published January 12, 2026 by Valarie Waswa

Introduction

Technology-Facilitated Gender-Based Violence (TFGBV) is not a new phenomenon. Long before the age of social media, artificial intelligence and image generators, men and women experienced sexual abuse, harassment and coercion rooted in unequal power relations. What technology has done is reshape how that violence is perpetrated, making it faster, more accessible and larger in scale than ever.

TFGBV refers to acts of violence committed or amplified through digital technology against individuals on the basis of their gender. It mirrors traditional gender-based violence in intent and impact; the two differ primarily in reach and scale.

This blog explores TFGBV and the growing ways in which emerging technologies, particularly AI, are accelerating already existing forms of abuse. As such tools become faster and more accessible, acts that once required significant effort and skill can now be carried out in seconds and often with anonymity. In this process, consent, a foundational principle of dignity and data protection, is increasingly eroded and rendered meaningless. The discussion examines how AI-driven image manipulation challenges traditional legal approaches to harm and accountability, exposing how existing legal frameworks struggle to respond effectively, leaving survivors under fragmented protection while platforms deflect responsibility by framing such harm as misuse rather than systemic risk. Ultimately, this piece asks where accountability should lie when consent collapses in digital spaces, and whether current frameworks are equipped to protect dignity and safety in online engagement.


A persistent problem reshaped by technology

Presently, TFGBV takes various forms, such as non-consensual sharing of intimate images, online harassment campaigns, impersonation, doxxing, and cyberstalking. Digital spaces amplify their reach: a single post can reach thousands of people within a short time span. The harm is also lasting, because the internet, unlike physical space, never forgets; a private violation becomes a permanent public record.

Despite years of advocacy, survivors of TFGBV continue to encounter fragmented protection and weak enforcement. Kenya’s legal framework is not silent on these harms. The Constitution of Kenya (2010) anchors the rights to dignity under Article 28, equality under Article 27, privacy under Article 31, and children’s protection from abuse under Article 53. Statutes such as the Computer Misuse and Cybercrimes Act (2018), the Sexual Offences Act (2006), the Data Protection Act (2019), the Victim Protection Act (2014), the Children’s Act (2022), the National Gender and Equality Commission Act (2011) and the National Policy for Prevention and Response to Gender-Based Violence (2014) offer avenues to address online abuse, privacy violations, and sexual exploitation. These protections are reinforced by regional and international obligations under instruments such as the African Charter on Human and Peoples’ Rights, the Maputo Protocol, the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW), the African Charter on the Rights and Welfare of the Child (ACRWC), the Universal Declaration of Human Rights (UDHR), and the Beijing Declaration and Platform for Action (Section J), all of which recognise digital violence as a human rights concern.

However, these laws were largely designed for offline harms and struggle to respond to AI-driven abuse that is instantaneous, anonymous, and scalable. As a result, consent, though firmly protected in law, often collapses in practice. Weak enforcement allows platforms to evade responsibility by framing harm as misuse, leaving survivors to navigate legal remedies that were never designed for synthetic images and cross-platform abuse.

Framing TFGBV as an individual problem, in which survivors are encouraged to leave or withdraw from social media, shifts responsibility away from perpetrators and the platforms that enable the abuse and onto the victims. Rather than promoting accountability, this approach reinforces exclusion by effectively punishing those targeted instead of those causing harm.


AI as an accelerant of existing harm

AI is evolving daily, with newer tools and capabilities that significantly alter the landscape of TFGBV. AI tools now allow users to manipulate images and generate sexualised content, altering the appearance of individuals with minimal effort. What once required expertise can now be achieved through simple prompts.

This was evidenced in recent incidents on X, where users prompted Grok, X’s AI tool, to digitally undress women by altering images they had posted; the manipulated images were generated in seconds and shared widely without the knowledge, let alone the consent, of those depicted. Worse still, in some instances the targets appeared to be minors. This shows that AI has no inherent ability to distinguish minors from adults, which adds the risk of children’s images being misused for child pornography, alongside data protection and many other risks.

This does not necessarily introduce a new category of harm; rather, it demonstrates how AI removes friction and hassle, lowering barriers to entry, expanding access to tools of harm, and dramatically increasing the speed and volume at which such gross violations occur.

This acceleration raises an unavoidable question: if AI amplifies and simplifies abuse so efficiently, where does liability lie? Responsibility is usually placed solely on individual users, yet this framing ignores the broader systems that make such harm possible. When platforms deploy tools that can manipulate images, generate sexualised content or automate harassment, the line between individual misuse and platform enablement becomes blurred.


Regulatory Challenges

The challenge is further complicated by the absence of a clear and specific legal framework in Kenya addressing AI-driven forms of TFGBV. While existing laws address privacy, data protection, cybercrime, and sexual offences, they do so indirectly and were not drafted with synthetic media or generative AI in mind. There is no statutory definition of AI-generated or synthetic abuse, no explicit recognition of non-consensual AI-generated imagery as a distinct harm, and no clarity on how consent operates where no original image exists. As a result, it is unclear which offences apply, how harm should be classified, and whether liability rests with the individual perpetrator alone or extends to platforms and AI system operators.

Regulatory agencies face similar constraints. The Office of the Data Protection Commissioner’s mandate is limited to data processing and does not clearly extend to synthetic content that may not involve the direct collection of a victim’s data. Law enforcement, meanwhile, lacks clear investigative powers, technical standards, know-how, and evidentiary guidance for identifying AI-generated material, attributing its authorship, or compelling platform developers to comply. Further, existing laws fail to impose affirmative duties on platforms to detect, prevent, or remove AI-generated abusive content, allowing companies to frame harm as third-party misuse rather than a foreseeable risk of their systems. These gaps result in fragmented enforcement, where accountability fails not due to the absence of harm but due to legal ambiguity over jurisdiction, liability, and enforceable obligations in AI-driven abuse.

Addressing AI-driven TFGBV requires more than identifying technology as the problem; it calls for concrete legal and institutional interventions that allocate responsibility across the technological chain and embed enforceable standards for platforms, developers, and perpetrators. Practical measures could include drafting clear statutory definitions and scope for AI-generated abuse, establishing mandatory reporting and takedown obligations for platforms that host or enable such abuse, and creating specialised investigative and enforcement protocols within law enforcement and regulatory agencies to curb AI-mediated harm. Legal professionals can also play a critical role by advising legislators on drafting such provisions, assisting regulators in developing operational guidelines, and helping organisations implement compliance frameworks that reduce liability while protecting users. Through a proactive, structured approach, the law can transform the AI landscape from a largely unregulated accelerator of harm into a space where accountability and survivor-centred remedies are achievable.


Collapse and erosion of consent

At the centre of TFGBV is the total breakdown of consent. Consent underpins not only human interaction but also the lawful processing of personal data. Digital manipulation of images without a person’s permission strips away their control over their own likeness and bodily autonomy. What is violated is not just privacy but also the ability to decide how one is seen and understood in public spaces.

AI-generated sexualised images deepen this violation further by eliminating the need for an original intimate image: a mere innocent online presence becomes sufficient for abuse and exploitation. A face is enough to generate exploitative content, completely severing the link between consent and participation. This represents a troubling shift in the digital landscape: the notion that visibility, a near necessity in the modern world, can be taken as implied consent to violation.

When technology allows images to be manipulated and replicated without consequence, consent becomes symbolic rather than enforceable. The harm is not hypothetical: it damages reputations and reinforces a long-standing, deeply gendered power imbalance while disguising it as technological inevitability.


Why this moment demands attention

It is against this backdrop of accelerated harm and eroded consent that responsibility must be clearly re-anchored. Accountability cannot rest solely on individuals. Neither can platforms and regulators remain passive and hide behind third-party misuse. Responsibility must be distributed across the technological chain, supported by clear statutory definitions, enforceable duties, and specialised investigative mechanisms. Without this adaptation, digital spaces risk becoming environments where abuse is normalised, consent is symbolic rather than enforceable, and survivors continue to solely bear the burden of harm. Addressing these challenges now is essential not only to protect rights but to ensure that legal and policy frameworks remain relevant in an AI-driven digital landscape.


