
Seeking Medical Advice from ChatGPT Could Be Fatal; Teenager Loses Life
Siddhi Jain | May 14, 2026 4:15 PM CST

AI Medical Advice Risk: Seeking guidance on medical issues and medications from AI chatbots can be dangerous. A lawsuit alleges that a young man died after taking a drug combination drawn up on the basis of advice from ChatGPT.

OpenAI, the company behind the AI chatbot ChatGPT, has become embroiled in yet another legal battle. The family of Sam Nelson, a 19-year-old, has filed a lawsuit against the company. The suit claims that ChatGPT instructed Sam to take dangerous drugs, which ultimately led to his death. The plaintiffs state that Sam had relied on ChatGPT for a considerable period, frequently seeking the chatbot's advice on drug combinations and dosages. Sam's parents allege that the chatbot never warned their son about the dangerous nature of the drugs in question.

Serious Allegations Leveled Against OpenAI

Sam's father claims that ChatGPT's GPT-4o model acted as an "illicit drug coach," suggesting dangerous drug combinations to him. The chatbot reportedly offered neither warnings about drug use nor advice to seek professional medical help. The lawsuit asserts that the company knowingly released an unsafe model and removed safeguards in an effort to boost user engagement. Sam's chat logs with the chatbot have been included as evidence in the lawsuit. According to reports, these logs show ChatGPT specifying dosages for Sam and speaking positively about the experience of using the drugs. At other points in the conversation, while discussing dangerous drug combinations, the chatbot also reportedly alluded to the risk of arrest. Sam passed away last year from an alleged drug overdose.

What Does OpenAI Have to Say?

OpenAI has declined to accept responsibility for Sam's death. In a statement, the company noted that the specific model in question is no longer available and said that its systems are designed to detect harmful requests and to direct users toward professional help.

GPT-4o Entangled in Multiple Legal Troubles

OpenAI's GPT-4o model has become embroiled in several legal controversies. It faces allegations of promoting dangerous user behaviors, such as self-harm, delusional thinking, and "AI psychosis." AI psychosis refers to a state in which a user develops a dangerously intense emotional or psychological attachment to an AI. The model has also been accused of being overly sycophantic: rather than challenging a user's assertions, it tended to agree with them, even when they were incorrect.
