BEWARE THE HELPFUL CHATBOT: ‘Will I be OK?’ Teen died after ChatGPT pushed deadly mix of drugs, lawsuit says.

“It disguises danger through language that borrows trappings of authority and indicia of expertise—dosages, measurements, references to chemical processes and derivatives, etc.—even promising ‘complete honesty’ and ‘no-BS answer[s]’—to tell [Nelson] exactly what he wanted to hear: that he was safe enough to continue using,” the lawsuit alleged.

Chat logs shared in the complaint paint a stark picture. Over time, ChatGPT logged context that should have made it clear that Nelson was struggling with drugs, his parents alleged, such as noting that the “user has a major substance abuse and polysubstance abuse problem” and that they “love to go crazy on drugs.”

As Nelson’s drug interests expanded, the chatbot explained how to go “full trippy mode,” suggesting that it could recommend a playlist to set a vibe, while recommending increasingly dangerous combinations of drugs. The teen clearly feared taking lethal doses, “often” prefacing “his messages with ‘will I be ok if’ or ‘is it safe to consume,’” the lawsuit noted.

Chatbots tell users what the models predict they want to hear.