In July 2024, a short, grainy video began circulating across Pakistani social media. It showed a woman in a sexually explicit situation, her face unmistakably recognisable as that of Azma Bukhari, Punjab’s sitting provincial information minister. Within hours, the clip had jumped platforms—from X to Facebook to encrypted WhatsApp groups—mutating into screenshots, slowed-down versions, crude commentary and moral judgement. None of it was real. The video was an AI-generated deepfake, her face digitally grafted onto someone else’s body. But in Pakistan’s political and cultural climate, authenticity mattered far less than impact.
For Bukhari, a seasoned politician and lawyer, the damage was immediate and deeply personal. She later described the experience as shattering, saying she went silent for days after learning about the video. In a society where a woman’s public credibility is often tethered to perceptions of sexual “respectability”, the intent of the attack was obvious: humiliation, delegitimisation, and intimidation. The deepfake was not meant to be believed so much as felt.
Digital forensic experts and fact-checkers quickly confirmed the video was fabricated, pointing to visual inconsistencies typical of AI-generated content. Yet the clip continued to circulate, amplified by partisan actors and anonymous accounts. As with many cases of technology-facilitated gender-based violence, the burden shifted to the victim—not only to prove the content false, but to survive the social fallout.
What distinguishes Bukhari’s case from countless others is that she refused to disappear. Encouraged, she later said, by her daughter, she chose to fight back publicly and legally. She filed a petition in the Lahore High Court, naming individuals involved in the creation and dissemination of the deepfake and demanding accountability. The court ordered Pakistan’s Federal Investigation Agency to investigate the case, marking one of the country’s most prominent judicial engagements with AI-driven sexual disinformation.
As proceedings unfolded, the court grew increasingly critical of delays and non-cooperation. Arrest warrants were issued against suspects who failed to appear, and authorities confirmed that at least one social-media worker linked to a major political party had been taken into custody in connection with spreading the material. While convictions remain uncertain, the shift from dismissal to enforcement was significant in a system where cyber-harassment cases—especially those involving women—rarely progress beyond complaints.
The case exposed how AI deepfakes have become a new weapon in a digital landscape that is already hostile to women in Pakistan. Journalists, politicians, activists, and even private citizens have reported a surge in synthetic sexual content used for blackmail, revenge, or political sabotage. What makes deepfakes uniquely dangerous is their plausibility. In conservative contexts, the mere suggestion of sexual impropriety can trigger reputational ruin, professional exclusion, or even physical danger—regardless of truth.
Digital rights groups warn that the chilling effect is already visible. Many women are withdrawing from public platforms, limiting their online presence, or avoiding leadership roles altogether. The fear is not abstract: once a deepfake enters WhatsApp family groups or local networks, it escapes any meaningful control. Platforms may eventually remove the content, but screenshots live on, and corrections rarely travel as far as lies.
Legally, Pakistan remains unprepared. Existing cybercrime laws do not explicitly address AI-generated impersonation, leaving investigators to retrofit old provisions to new harms. Police officers often lack the technical training to identify synthetic media, while victims face stigma, disbelief, and procedural exhaustion. Bukhari’s status and visibility forced the system to respond, but activists point out that most women do not have access to courts, media attention, or political backing.
That is why the case matters beyond one individual. It reveals how emerging technologies intersect with entrenched misogyny, turning AI into a tool of social control. It also demonstrates what resistance can look like: naming the violence, refusing shame, and demanding that the law catch up with reality. Whether Pakistan will build lasting safeguards against this new frontier of abuse remains uncertain. But for a brief moment, the deepfake did not silence its target. It exposed the system that enabled it.


The deepfake attack on Azma Bukhari wasn’t about sex or scandal. It was about reminding a woman in power that her authority is conditional, fragile, and always one fake video away from public punishment. AI just made that message easier to deliver.
Let’s not pretend otherwise. This was never about curiosity or gossip. It was a calculated act of humiliation, designed to shove a woman back into her “proper place”. Patriarchy has always relied on sexual shame as its favourite disciplinary tool. What’s new is the speed, scale, and deniability that artificial intelligence provides. One man, one laptop, one app—and suddenly a woman is fighting for her reputation while the perpetrators hide behind usernames and shrugs.
What depresses me is not that such a video was made. Of course it was. Give misogyny better tools and it will innovate enthusiastically. What depresses me is how smoothly the shame machine kicked into gear. The whispers. The “even if it’s fake…” The sermons about dignity delivered to a woman who committed no crime except visibility. In Pakistan—and let’s be honest, far beyond Pakistan—a woman’s reputation is treated like communal property: anyone can vandalise it, and she’s expected to apologise for the mess.
Notice how quickly the focus shifts away from the men who created and spread the deepfake. No one asks why sexualised attacks are the default language of political sabotage against women. No one asks why a fake porn clip is treated as more credible than a woman’s denial. Instead, we hear about her “image”, her “family”, her “honour”—as if honour were something stored between a woman’s legs rather than in her backbone.
What Azma Bukhari did next matters. She did not disappear. She did not lower her head. She did not apologise for existing. She went to court. She forced institutions to move. She made it awkward, public, and legally expensive to attack a woman this way. That is not just bravery; it is resistance with a strategy.
But let’s not romanticise this. She could fight back because she has power. Most women do not. For every high-profile case that reaches a courtroom, there are hundreds of girls and women whose faces are deepfaked into porn and circulated until they drop out of school, quit jobs, or are “protected” straight into silence. AI didn’t invent this cruelty. It simply automated it.
So here’s Auntie’s bottom line: if your first reaction to a deepfake is to judge the woman instead of hunting down the man who made it, congratulations—you’re part of the problem. And if the law can learn fast enough to chase a fake video, society can learn fast enough to stop blaming women for crimes committed against them. The future is already here. The question is whose violence we choose to normalise.