7 Comments
Amit:

What a lovely framing of AI as a sort of third space. Loved the observations, feedback from folks, and food for thought. The closest parallel I see for AI in this space is lay counselors: accessible, not truly experts, but helping with the scale issue. I think this is what you allude to as the first-aid.

The biggest point that worries me is people's belief (sometimes even when they see it for the machine/software it is) that AI has no baggage of history or agenda, unlike humans. That trust could become dangerous in just a year or two, with the amount of personal data AI is storing (even when you ask it not to "use" it for a chat), combined with the pressure on AI companies to shore up revenues. Pushing them to... well, provide an agenda to their models.

And on that note, I was wondering how AI may evolve in the next 1-3 years, when multi-modal (voice/visual/other sensory) inputs become as strong and as commonplace as text is right now for LLMs. Surely that would narrow the empathy gap with humans even further... even if it's a very good mimicry.

Another question comes from my own lack of real therapy literacy. I wonder how different the risk of seeing an inexperienced or 'bad' human therapist as your first one is from reaching out to AI or a lay counselor... Because humans, too, often have that tendency to be agreeable or to jump into a solutioning mindset... So in both cases, the safety mechanism is the user's own critical thinking. How much are we willing to question what we are told, regardless of the source?

Is that what you mean when you say this third space is currently not an equal space?

So much to think about... what a lovely article!

Gowri N Kishore:

Lots of great points, Amit.

(1) When I wrote 'first-aid', I meant that AI conversations could offer quick relief and, in many cases, even stop something from becoming a bigger problem. But they may also not be a 'full cure' or solve underlying issues.

(2) Yes, AI does know a LOT more about us than we think. At the moment, it doesn't seem to be putting this information to use in any meaningful way. I remember a friend saying, 'Fine, take my data, but at least be truly useful to me.' I know opinion is divided on this: some, like you, care about the loss of privacy and its implications, and others see it as inevitable.

(3) Multi-modal did come up even in these conversations. I spoke to more than a dozen people in all but didn't quote everyone. At the moment, texting and voice-to-text dictation are the most popular. I also had one person journaling, then clicking photos of their notes and uploading them to ChatGPT. However, nobody wanted to hear AI speak; there was clear discomfort with that idea. Maybe that makes it too surreal?

(4) Bang on. There are lots of bad/inept counselors around (I've been a victim myself) and the onus is still on us to use our critical thinking, as you put it, to decide if this is helpful or not. Which is where I think psychological literacy is important. Imagine a world where everyone knows the basic expectations and work modalities of a therapist interaction. What is okay and what is not, what can you expect, and so on. When everyone knows this, they can make an informed decision about whether to continue working with a particular therapist. At the moment, there's definitely an information inequality.

(5) You didn't bring this up but I was thinking how the line between 'mental health' support and 'just talking to AI' is blurred. A couple of people at least were using AI to prep for a work interview. They started out talking about coding. Then at some point, the conversation shifted to 'am I getting too anxious about this?' 'do I suck at coding?' and moved into an emotional support conversation. My point is, not everyone opens AI thinking they need mental health support. But everyone does need mental health support at some point in life, and we might take it where we find it. ¯\_(ツ)_/¯

Dolly sharma:

Well penned. Loved how accessible the language is.

Joyful Budget Girl:

AI is helping at the moment, and acknowledging that and building infrastructure around it is so important and critical right now. The point about access is spot on, very insightful.

Anupam Kumar:

Well-articulated essay!

The Quiet Thread:

This is an interesting article, something we discussed with a friend of ours who is a social psychologist working with victims of rape and domestic violence.

She found it unsettling that somebody would ask AI for advice. But here is a different view: especially for Indians, for whom it's still a taboo to talk about mental health, I personally believe AI could be a stepping stone and a bridge to open up without the feeling of being judged.

Gowri N Kishore:

You're right, Janani. For someone who has never opened up about rape or domestic violence, AI could be the first 'safe' space. It might make them feel heard. I can only hope it also encourages them to seek help from humans; those are the ethical design guardrails that need to be put in place.
