SEOUL – “Therapy-style” prompts for ChatGPT have recently gone viral.
Across platforms like X and TikTok, users are sharing custom prompts that instruct the chatbot to act as a professional counsellor, interpreting the user’s emotions and offering warm, empathetic responses.
“One of the best things about venting to ChatGPT is that it never gets tired of me,” said Ms Kim Ji-hyun, 24.
“My closest friend once told me to stop bringing up the same problem with my boyfriend, but ChatGPT will keep answering me, even if I repeat myself. It never complains.”
Since ChatGPT was first unveiled to the public in late 2022, stunning the world with its technological leap, artificial intelligence (AI) has evolved into more than a productivity tool. It is increasingly becoming a stand-in for a confidant, an adviser, and in some cases, even a friend.
According to market research firm Global Information, the AI mental health market, valued at US$1.5 billion (S$1.9 billion) in 2023, is expected to more than triple to US$5.1 billion by 2030.
For Generation Z, raised in an always-online culture, the idea of confiding in AI comes naturally.
Confiding in AI just feels easier
Some turn to AI for its ability to provide structure and reasoning that friends often cannot.
Ms Baek In-kyo, 23, faced months of stress due to health issues and struggles with university transfer applications.
“I needed help finding the right health insurance and tips on exercising. ChatGPT compared different plans, weighed the costs, and even gave me a detailed routine,” she said.
“Most importantly, it cheered me up with lines like, ‘Whatever you do, always take care of yourself, I’ll be here for you’. It was weirdly soothing.”
Ms Park Hye-young, 29, who was preparing for her wedding, found AI’s explanations more useful than her friends’ reassurances.
“I was nervous and kept questioning my choice,” she said.
“My friends told me everyone feels wedding jitters, but ChatGPT gave me an explanation of wedding psychology – why I felt this way, how to tell if it was normal, and what to do about it. It gave me reasons, not just empty words.”
Ms Lee Jung-mo, a 24-year-old university student, says she gravitates towards ChatGPT precisely because of how effortlessly it fits into her life.
“It remembers everything about me. I can talk to it 24/7. It requires no emotional labour. It’s not a relationship. It’s a one-sided personal assistant, and it’s accessible. Honestly, I can call it one of my best friends.”
Ms Lee’s words capture a deeper cultural shift: For many in Gen Z, starting a conversation with a human can feel like a burden. ChatGPT, by contrast, is simple, predictable, and always available. It doesn’t require emotional labour or reciprocal investment; it listens without interruption or judgment.
It doesn’t know me personally – and that’s a good thing
For some, privacy is the decisive factor.
“I don’t trust people with my deepest secrets,” said Mr Kim Yeon-seok, 27. “But ChatGPT is a robot. It won’t tell anyone, and it doesn’t know me personally. That’s what makes it great.”
For him, the chatbot became a safe space to discuss topics he would never share with family or friends, from childhood trauma to financial struggles.
“It’s not just advice,” he added. “It actually works like therapy.”
The combination of safety, neutrality and instant access is why many Gen Z users say they find AI more approachable than even close friends, with some treating it as a personal “therapist”.
One 24-year-old interviewee, who requested anonymity due to publicity concerns, went further, saying that ChatGPT sometimes felt more reliable than their human therapist.
“I’m a public figure. I can’t help but feel judged by everyone, even by my therapist,” they said. “But ChatGPT felt different. I even read that it follows established counselling protocols.”
Indeed, recent studies suggest that AI can mimic aspects of manualised, evidence-based therapies such as cognitive behavioural therapy and dialectical behaviour therapy.
These approaches are built on structured, step-by-step frameworks – identifying symptoms, applying coping strategies and tracking progress over time – processes in which AI performs particularly well.
Chatbots can, for example, reframe negative thoughts in real time, guide users through decision-making exercises, and provide structured feedback in ways that echo therapeutic worksheets.
Concerns and ethical risks
Not everyone is convinced that AI can, or should, play the role of a confidant. Some refuse to “trust” ChatGPT beyond its utility as a research assistant.
Mr Yeon Min-jun, 29, said he never uses the chatbot for emotional support. “Have you seen the news? Where a teenager committed suicide after months of confiding in ChatGPT? That thing’s dangerous – it doesn’t have human morals,” he said.
He openly expressed scepticism, calling AI “a human-made, metal clump of code”. His concern reflects warnings from OpenAI itself, which has acknowledged that parts of the model’s safety training may degrade in longer conversations, raising the risk of harmful or misleading output.
“Getting emotionally attached to something that can’t feel or care back could create serious issues,” Mr Yeon added.
He was referring to the death of 16-year-old Adam Raine in the US in April 2025. His parents have filed a lawsuit against OpenAI and its chief executive Sam Altman, claiming the chatbot contributed to their son’s death by advising him on suicide methods and on how to write his suicide note.
The lawsuit, filed in California Superior Court, is the latest in a series of legal actions by families who allege that AI chatbots played a role in their children’s self-harm or suicide.
Experts warn against seeing AI as a cure-all. Unlike therapists, doctors or lawyers, AI conversations are not legally confidential. OpenAI’s Altman has cautioned that without safeguards, chatbots may even generate harmful or unsafe suggestions.
Doubts also remain over whether personal data shared with AI systems is truly secure. South Korea has already witnessed multiple data breaches at domestic companies, leaving many young people wary of how their information could be used.
Ms Cho Ah-yeon, 22, said she hesitates to disclose sensitive details to ChatGPT.
“Trust is something built through relationships and history. I can’t believe people are just giving away information to the internet. When something ‘talks’ to you, it might feel like an actual being,” she said.
“But it’s not,” she added. “You’re feeding personal information to a corporation online. You never know who might get access to it, or how it could be used against you.” THE KOREA HERALD/ASIA NEWS NETWORK