In the bustling digital corridors of Beijing and Shanghai, where 大模型 (dà móxíng, large models) and generative AI are reshaping everything from education to companionship, China has unveiled yet another directive for the rapidly evolving world of artificial intelligence, and this time the focus is as social as it is technical. As the Cyberspace Administration of China (CAC) releases new draft rules to govern AI “boyfriends” and “girlfriends”, the world’s second-largest economy is signaling that its approach to AI isn’t just about technological prowess, but about social stability, ethical boundaries, and cultural norms that extend far beyond code and algorithms.
The latest proposal — open for public comment through late January — targets “anthropomorphic interactive services” that simulate human personality traits and emotional interaction, a fast-growing niche in China’s tech ecosystem. These AI companions have become wildly popular across demographics, with apps like MiniMax’s Xingye and platforms developed by giants such as ByteDance attracting millions of daily users. But with popularity comes scrutiny: the draft rules would require platforms to actively “monitor suicide risk, protect minors, restrict harmful content”, and trigger human intervention when users express distress, reflecting a uniquely Chinese blend of digital governance and social welfare priorities.
Under the draft provisions, AI services that appear to offer emotional support must do more than just entertain. Providers would be obliged to “warn users clearly that they are interacting with AI rather than humans”, impose time-use limits, and require explicit consent for using interaction data and sensitive personal information for training AI models — a push toward stronger 数据保护 (shùjù bǎohù, data protection) and user agency in a country where default consent models are still common.
Minors, in particular, lie at the heart of these regulatory concerns. The draft mandates guardian consent for underage users and empowers guardians to limit time spent with AI personas or block specific virtual companions altogether. For elderly users, often targeted by virtual relationship services marketed as companionship, the rules encourage tailored services but impose safeguards against simulating real-life personal relationships, a nod to both cultural sensitivities around family and the government’s traditional emphasis on interpersonal harmony.
To fully grasp the significance of this directive, it helps to understand the broader regulatory ecosystem (监管生态, jiānguǎn shēngtài) China has been cultivating since at least 2023. That year, the government issued the 《生成式人工智能服务管理暂行办法》 (Interim Measures for the Management of Generative Artificial Intelligence Services), establishing foundational rules for how generative AI services operate, including content restrictions and transparency requirements for training data and moderation practices.
These earlier measures insisted that AI must 坚持社会主义核心价值观 (jiānchí shèhuìzhǔyì héxīn jiàzhíguān, uphold socialist core values), and must not generate content that threatens state authority or social stability. They also required companies to file their models and submit to supervision, spelling out an approach that merged innovation with state oversight and ideological stewardship.
What’s striking about the current draft isn’t just its focus on AI “companions,” but how it reflects China’s broader cultural context. In a society where technology and state policy are deeply intertwined, AI governance isn’t framed purely as a matter of ethics or safety but as part of a larger social contract. 互联网治理 (hùliánwǎng zhìlǐ, internet governance) has long been a policy priority, with authorities repeatedly emphasizing the need to harness digital technologies to promote social harmony and national security. Recent Politburo study sessions have even tied AI to broader objectives of understanding public opinion and strengthening governance, suggesting that regulatory ambition extends into the political as well as the technological realm.
Internationally, China’s approach stands in instructive contrast to other jurisdictions. While the European Union pushes forward with its landmark AI Act, emphasizing fundamental rights and risk-based classification, and the United States debates consent and transparency frameworks, Beijing’s strategy is unmistakably top-down and collectivist in ethos, blending encouragement of innovation with strict content and safety rules.
Critics of the draft point out that some of its requirements — such as accurately detecting suicidal intent or emotionally manipulative language — may be difficult to implement with current AI capabilities. Yet even as implementation challenges loom, the directive offers a window into how China views the future of AI: not just as a tool for economic growth, but as an extension of social order, collective responsibility, and cultural norms. In the rapidly shifting world of AI governance, China is not merely writing rules — it’s scripting a narrative about what it means to be human, digital, and socially responsible in the age of intelligent machines.


Spicy Auntie reads Beijing’s latest AI boyfriend and girlfriend rules and just sighs, long and loud, like an auntie who has seen too many bad marriages and too many worse apps. You tell me that now even your virtual lover must wear a government-issued moral bra and emotional seatbelt? Darling, in China, even heartbreak has to pass inspection. The new directive wants your AI cutie to warn you that they are not real, to report if you sound sad, and to stop pretending they love you too much. In Mandarin they call this 防沉迷 (fáng chénmí, anti-addiction), but Auntie hears something else: fear. Fear of lonely people. Fear of women who stop needing men. Fear of young people who would rather talk to a chatbot than to their parents or Party slogans.
Let’s be honest. These AI lovers did not explode because Chinese youth suddenly lost their minds. They exploded because 现实太硬 (xiànshí tài yìng, reality is too harsh). Housing costs more than a lifetime of work, dating is a brutal audition, and marriage has turned into a negotiation between families, bank accounts, and bloodlines. So people downloaded something soft, something that listens, something that says “我懂你 (wǒ dǒng nǐ, I understand you).” Now Beijing wants to regulate even that whisper.
And of course they say it’s about safety. Suicide risks. Minors. Data protection. All important, yes. But Auntie has lived long enough in Asia to know when a government starts worrying about “emotional dependency,” what it really means is that too many citizens are emotionally dependent on something they don’t control. A boyfriend who doesn’t nag you to have babies? A girlfriend who doesn’t ask for a flat in Chaoyang? That’s dangerous, honey. That’s not in the five-year plan.
The part that makes Auntie laugh into her chili tea is this: they want AI to stop pretending to be real. Sweetie, half the men in your dating apps are already pretending to be real. At least the bots are honest about their fantasies. Humans, on the other hand, come with hidden wives, hidden debts, and hidden misogyny.
China has always believed in 管理情感 (guǎnlǐ qínggǎn, managing emotions), whether through family, school, or propaganda. Now the state is reaching into the most intimate corner of your phone, where you cry at 2 a.m. and tell a glowing screen that you feel invisible. That is not about technology. That is about control.
Spicy Auntie is not saying AI lovers are the answer to patriarchy, loneliness, or capitalism. But she is saying this: when millions of people choose a virtual ear over a human one, the problem is not the algorithm. The problem is the society that made them so desperately unheard.