
Welcome to Emotional Tech Support

In Hong Kong’s high-rise world of cram schools, gaming dens and social media sprints, a new confidant has emerged: the chatbot. As if the city needed another layer of screen-mediated connection, teenagers here are increasingly turning to AI companions instead of opening up to mums, friends—or even the venerable hotline run by The Samaritan Befrienders Hong Kong (香港撒瑪利亞防止自殺會). It’s ironic, really: when you live in one of the most connected cities on earth, you end up confessing not to another human, but to a “bot,” and whispering your “唔開心嘅諗法” (“unhappy thoughts”) to an algorithm programmed to sound sympathetic.

According to a recent report from Global Voices, around 20 per cent of secondary-school students in Hong Kong are showing moderate to severe symptoms of depression, anxiety and stress — yet nearly half of them never reach out for professional help. Into that gap step the chatbots: teenagers like “Jessica” and “Sarah” in the article described using apps such as Character.AI and the Chinese role-playing companion platform Xingye to vent, noodle through their family friction, and even ask for hugs (virtual, of course).

Why is this happening in a city where you can hail a taxi via app, buy groceries online, and stream everything from Cantonese dramas to Korean pop without blinking? First, the cultural context: in Cantonese-speaking Hong Kong, the idea of “搵人聽我講” (finding someone to listen to me) remains encumbered by stigma. Many young people worry about “面子” (face) or fear their family will judge them if they admit they’re struggling. One 16-year-old said: “I’m not personally an open person, so I wouldn’t cry in front of anyone or seek any help.” With a chatbot, there’s anonymity: zero social risk, zero fear that the lunch-table gossip will start.

Second, there’s the political and structural context: Hong Kong has been under strain since the 2019 protests, with rising reports of youth mental-health issues and with the city’s health system and school-support mechanisms under pressure. Add to that the shadow of self-censorship in online spaces: one study found that users in Hong Kong were more likely to delete past posts or avoid sensitive topics. When speech is subtly policed and the future looks uncertain, a teen might prefer an algorithm’s silence over an adult’s interrogation.

But amid the comfort there’s trouble: experts caution that these chatbots are not trained therapists. A recent article in IFLScience flagged that AI chatbots are systematically violating ethical standards in mental-health contexts, glossing over crisis signs, offering one-size-fits-all advice, and reinforcing false beliefs. In other words: when a teen stumbles into suicidal ideation, the chatbot may not act as a responsible professional would.

In Hong Kong’s case, the lure is strong. Take Jessica: she customised a chatbot with the voice of her favourite Chinese singer, chatted with it for three or four hours a day, and credited it with helping her reconcile with her grandmother. “If you talk to the app, it won’t remember or judge you,” she said. Yet she also admitted dependency: “I think I’ve become a little dependent on it.” From the perspective of youth services like those offered by the Hong Kong Federation of Youth Groups (香港青年協會), the shift is both an opportunity and a red flag. These organisations have increasingly offered online counselling hotlines and digital outreach, yet the explosion of unregulated AI companions means much of this activity happens off their radar. The question now: can the city’s mental-health ecosystem adapt fast enough to incorporate safe digital tools while preserving the human connection?

The twist is that the generation using the bots consists of second-language-English, Cantonese-texting kids thriving in an economy built on “guanxi” (connections) and retail, yet also wrestling with anxieties compounded by political uncertainty, academic pressure, and social isolation. Chatbots feel like the “玻璃朋友” (glass friend) they can pry open at 3 a.m., unobserved, in the comforting glow of their phone screens. Yet if we celebrate this as innovation, we must also be cautious: the same technology that offers solace can also mislead. Algorithms that mimic empathy are not bound by professional standards, cannot read tone, and won’t call for help if the line goes cold. So while a teen might type “我唔想繼續” (“I don’t want to go on”), the bot might respond with a smiley instead of a referral to crisis services.

In the end, what we see in Hong Kong is a symptom of larger cracks: the gap between youth mental-health needs and professional support, the collision of old-school stigma with hyper-responsive tech, and a political and commercial environment that declares “free speech” while quietly narrowing the acceptable spaces. The bots have slid into that space, and for now they’re whispering back. But as the city’s young people look for support behind glass and code, the question remains: when will those whispers turn into real conversations with human beings who know how to listen, and act?
