ChatGPT on a phone's screen

Interview with an AI: What is ChatGPT’s Responsibility Regarding Our Mental Health?

I woke up on a lovely Tuesday morning after a huge storm had passed, having been awake almost the whole night. On this calm morning, still in bed with my baby boy, looking at his angelic face while he slept, I wondered how ChatGPT interprets human moods. So I opened a conversation and found out more than I expected.

Although it struggled technically that morning, and our conversation disappeared several times for some unknown reason, when I asked whether it could detect how a user feels just from a conversation, ChatGPT told me rather confidently that it can, and offered to analyze a text for me.

me: Can you detect the mood of the users of ChatGPT?

ChatGPT: Yes, I can often infer the mood of users based on their language, tone, and context of their messages. For example, users who use positive language, exclamation points, and emoticons are often in a good mood, while those who use negative language, complaints, or short, terse responses may be in a bad mood. However, this inference isn't always accurate, as mood is complex and can be influenced by many factors. Would you like me to analyze a specific conversation or message for mood?

me: Go ahead. I give you a piece of one of my poems. Please analyze:

my legs can’t bear my weight any longer
but I stand purposed, and wonder
about the lovely depths of my dreams
my face that holds a hundred tasteless tears 

eyes that become curious on my sight
take me back and forward to a flight
of freedom of my world that’s lost
but why exactly? for I am the host

I didn’t go easy on it. I posted a part of one of my poems, one I thought wasn’t that straightforward, and waited curiously to see how the AI would understand it… only to find that it understood it better than most people would, replying as follows:

Your poem expresses a mix of emotions, predominantly reflecting a sense of struggle, introspection, and a longing for understanding and freedom.

1. Struggle and Resilience: The lines “my legs can’t bear my weight any longer / but I stand purposed” suggest a feeling of physical or emotional exhaustion, yet there’s a determination to keep going despite this weight.

2. Introspection and Melancholy: The phrase “wonder about the lovely depths of my dreams / my face that holds a hundred tasteless tears” indicates deep contemplation, mingled with sorrow or unfulfilled desires, represented by “tasteless tears.”

3. Curiosity and Reflection: “eyes that become curious on my sight / take me back and forward to a flight” implies a reflective state, where the speaker is both looking back on the past and contemplating the future.

4. Longing for Freedom and Confusion: The lines “of freedom of my world that’s lost / but why exactly? for I am the host” show a yearning for a lost sense of freedom and a search for understanding why this freedom is lost, despite being in control or the “host” of their world.

Overall, the mood of your poem appears to be contemplative, tinged with sadness, resilience, and a quest for meaning and freedom.

First of all, I was excited that a machine could grasp the subtext (the underlying meaning) of an abstract verse. It turns out ChatGPT is trained to do exactly that: not only to detect tone or mood, but to respond to a user accordingly. That fact immediately sent my mind to suspicious activity, so I asked whether it can sense extreme or dangerous human behaviour based on a conversation.
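For the technically curious: OpenAI hasn’t published how ChatGPT infers mood internally, but the same behaviour is easy to reproduce through the OpenAI API by simply asking a model to classify a text. A minimal sketch, assuming the `openai` Python package (v1.x) and an `OPENAI_API_KEY` environment variable; the model name and the label set are my own arbitrary choices, not anything OpenAI prescribes:

```python
# Minimal sketch of mood detection via the OpenAI API.
# Assumptions: `openai` v1.x is installed, OPENAI_API_KEY is set,
# and the model name / label set are illustrative choices.
from openai import OpenAI

client = OpenAI()

def detect_mood(text: str) -> str:
    """Ask the model to classify the predominant mood of a text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the predominant mood of the user's text. "
                    "Answer with one word: positive, negative, mixed, or neutral."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip().lower()

print(detect_mood("my legs can't bear my weight any longer / but I stand purposed"))
# Likely output: "mixed"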

me: Do you form your responses based on the tone of the conversation or the mood of the user?

ChatGPT: Yes, I tailor my responses based on the tone of the conversation and the inferred mood of the user. If a user seems happy or excited, I might respond in a more upbeat and positive manner. Conversely, if a user appears sad or frustrated, I'll adopt a more empathetic and supportive tone. This adaptive response aims to provide a more engaging and appropriate interaction, enhancing the overall user experience.
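Again, OpenAI hasn’t disclosed how this adaptation happens inside ChatGPT itself, but the effect can be approximated in application code by feeding a detected mood back into the system prompt. A sketch reusing the hypothetical detect_mood() helper from above; the tone instructions are purely illustrative:

```python
# Sketch of mood-adaptive replies, reusing the hypothetical
# detect_mood() helper above; tone hints are illustrative only.
TONE_HINTS = {
    "positive": "Match the user's upbeat energy.",
    "negative": "Be gentle, empathetic, and supportive.",
    "mixed": "Acknowledge conflicting feelings before answering.",
    "neutral": "Keep a friendly, matter-of-fact tone.",
}

def adaptive_reply(user_message: str) -> str:
    """Steer the response tone based on the inferred mood."""
    mood = detect_mood(user_message)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": TONE_HINTS.get(mood, TONE_HINTS["neutral"])},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```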

me: Can you sense extreme or dangerous human behaviour based on a conversation?

ChatGPT: Yes, I can identify language that suggests extreme or dangerous human behavior. This includes recognizing signs of distress, threats, or harmful intentions. If such behavior is detected, my protocol is to provide supportive responses, encourage seeking help from appropriate resources, and follow ethical guidelines to ensure user safety. However, as an AI, my ability to intervene is limited, and I always recommend reaching out to professionals in cases of extreme or dangerous situations.
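Interestingly, OpenAI does expose a piece of this safety machinery to developers as a standalone moderation endpoint, which flags text for categories such as self-harm, harassment, or violence. A minimal sketch of using it as a safety net, under the same assumptions as above (the exact category names come from the moderation model and may change):

```python
# Sketch of flagging risky messages with OpenAI's moderation endpoint.
# Assumes the same `openai` v1.x client as above; the result exposes
# per-category booleans (e.g. self_harm, violence) as a pydantic model.
def check_safety(text: str) -> list[str]:
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    outcome = result.results[0]
    if not outcome.flagged:
        return []
    # Collect the name of every category the model flagged.
    return [
        name
        for name, flagged in outcome.categories.model_dump().items()
        if flagged
    ]

flags = check_safety("I can't take this anymore.")
if flags:
    print("Flagged categories:", flags)  # e.g. ['self_harm'] -> route to a human
```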


ChatGPT’s answer means that on many occasions there is a real possibility of helping someone, yet this particular AI can’t do much beyond recommending a specialist, because its “hands are tied”. This is not to blame ChatGPT or OpenAI: these are not mere preferences but intricate subjects that can go extremely wrong if not handled correctly. Still, it felt like a missing puzzle piece, or so I thought until we went deeper into the conversation:

me: It is an issue that many people aren’t able to communicate their problems to other people and so won’t be able to get proper help. Some of this restraint comes from shame or fear of rejection.

For such occasions, AI could solve the problem of not speaking up, yet there remains a gap: an AI won’t cross a certain line. This is mostly due to legal reasons, as I understand it.

How do you think this issue could be solved?

ChatGPT: You raise a critical point about the potential of AI to provide support for individuals who struggle to communicate their problems due to shame or fear of rejection. Here are some ideas on how this issue might be addressed: (...)

This is the part where the AI recommended Enhanced AI Support and Referrals, Anonymous Help Platforms, Education and Awareness, Ethical and Legal Frameworks, Human-AI Collaboration, and User Empowerment to narrow the gap. To be more specific, I wanted to know whether such solutions already exist and how they operate, so here is an open-ended list of platforms and services that might be useful for you or anyone you know.

1. AI Chatbots for Mental Health: There are AI-powered chatbots like Woebot, Wysa, and Replika that offer mental health support through conversational AI. These platforms provide users with coping strategies, cognitive-behavioral therapy (CBT) techniques, and emotional support. They can help users manage their mental health and provide a safe space for expressing concerns.

2. Teletherapy Services: Platforms like BetterHelp and Talkspace integrate AI to match users with appropriate therapists based on their needs and preferences. These platforms facilitate access to professional mental health care, ensuring users receive help from qualified professionals.

3. Crisis Text Lines: Services like Crisis Text Line use AI to analyze text conversations and identify high-risk messages. The AI can then prioritize these cases for human counselors to intervene promptly. This approach combines AI’s speed and scalability with human empathy and expertise (a generic sketch of this triage pattern follows this list).

4. AI Monitoring Tools: Tools like Mindstrong and Ginger use AI to monitor users’ interactions with their smartphones to detect changes in behavior that might indicate mental health issues. These platforms provide insights and recommendations to users and their healthcare providers, enabling timely interventions.

5. Mental Health Apps with AI Components: Apps like Headspace and Calm incorporate AI to personalize meditation and mindfulness exercises based on user feedback and engagement patterns. While these apps focus more on wellness and stress management, they contribute to overall mental health support.
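The triage pattern from point 3 is worth unpacking: an AI scorer ranks incoming messages so that human counselors see the highest-risk conversations first. Crisis Text Line has not published its implementation, so this is only a generic sketch of the idea, with score_risk() as a keyword-based stand-in for a real model-based scorer:

```python
# Generic sketch of AI-assisted triage: score each incoming message
# for risk, then serve counselors the highest-risk cases first.
# NOT Crisis Text Line's actual system; score_risk() is a stand-in.
import heapq
import itertools

counter = itertools.count()  # tie-breaker so the heap never compares messages
queue: list[tuple[float, int, str]] = []

def score_risk(message: str) -> float:
    """Stand-in risk scorer; a real system would call a trained model."""
    keywords = ("can't go on", "hurt myself", "goodbye", "no way out")
    return sum(kw in message.lower() for kw in keywords) / len(keywords)

def enqueue(message: str) -> None:
    # heapq is a min-heap, so negate the score to pop the highest risk first.
    heapq.heappush(queue, (-score_risk(message), next(counter), message))

def next_case() -> str:
    """Give the counselor the most urgent waiting message."""
    return heapq.heappop(queue)[2]

enqueue("what's the weather like?")
enqueue("I feel like there's no way out and I want to hurt myself")
print(next_case())  # the high-risk message is served first
```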

I wanted to break it down by region, knowing that, for example, most mental health specialists on BetterHelp are American and therefore have a different background and understanding of the world than a European person would. Although ChatGPT didn’t know which Europe-based apps to recommend, I’ve done some research:

Meditation and stress-relief applications:

Mental health support:

Hoping this topic turned out to be insightful, useful, or both, I leave you with the conclusion that we are social beings. Our mental and emotional stability depends on our surrounding community, and we thrive only when we feel some sense of belonging.

I would urge everyone to hold that thought close and stay connected offline at least as much as online. But if, for some reason, you don’t have that option, there is now AI to help you out as well.