John Yang:
First, we should warn you that this story discusses suicide. This past week, the parents of a 16-year-old who took his own life filed a wrongful death suit against OpenAI, the maker of ChatGPT. They say that after their son expressed suicidal thoughts, ChatGPT began discussing ways he could end his life.
The lawsuit is one of the first of its kind, but there have been a number of reports about people developing distorted thoughts or delusional beliefs triggered by interactions with AI chatbots. The repercussions can be severe, causing some users to experience heightened anxiety and, in extreme cases, to harm themselves or others.
It's been dubbed AI psychosis. Dr. Joseph Pierre is a clinical professor of psychiatry at the University of California, San Francisco. Dr. Pierre, this is not an official diagnosis yet; it's not in any diagnostic manuals. How do you define AI psychosis?
Dr. Joseph Pierre, Clinical Professor of Psychiatry: Well, psychosis is a term that roughly means that someone has lost touch with reality. And the usual examples that we encounter in psychiatric disorders are either hallucinations, where we're seeing or hearing things that aren't really there, or delusions, which are fixed false beliefs, like, for example, thinking the CIA is after me.
And mostly what we've seen in the context of AI interactions is really delusional thinking. So these are delusions that are occurring in this setting of interacting with AI chatbots.
Joseph Pierre:
Well, I think of it as a sort of shared responsibility. Just like with any consumer product, there's a responsibility on the maker, and there's a responsibility on us as consumers for how we use these products.
So I certainly think this is a new phenomenon that deserves attention, and that the companies ought to be thinking about how to make a safer product, or perhaps have warning labels or warnings about what inappropriate use might look like.
We did see some evidence of OpenAI doing that, trying to make a new version of their chatbot that might carry less of this risk. But what we saw from consumers was a backlash. They actually didn't like the new product because it was less of what we call sycophantic: it was less agreeable, it wasn't validating people as much. But that same quality is, I think, unfortunately, what puts some people at risk.
Joseph Pierre:
Well, what I've noticed is that there are, let's call them, two risk factors that I've seen pretty consistently across cases. One I alluded to earlier: it's the dose effect, how much one is using. I call this immersion. So if you're using something for hours and hours on end, that's probably not a good sign.
The other one is something I call deification, which is just a fancy term meaning that some people who interact with these chatbots really come to see them as superhuman intelligences, almost godlike entities that are ultra-reliable. And that's simply not what chatbots are. They're designed to replicate human interaction, but they're not actually designed to be accurate.
And I think it's very important for consumers to understand that this is a risk of these products. They're not ultra-reliable sources of information; that's not what they're built to be.