

The fact that the parents might be to blame doesn’t take away from the fact that OpenAI’s product told a kid how to kill himself and helped him hide it in the process.
Copying a comment from further down:
ChatGPT told him how to tie the noose and even gave a load-bearing analysis of the noose setup. It offered to write the suicide note. Here’s a link to the lawsuit: [Raine Lawsuit Filing](https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf)
Had a human said these things, it would have been illegal in most countries, afaik.
I am not arguing this point, I agree.
A search engine presents the info that is available; it doesn’t also help talk you into doing it.
A stranger doing this in a chatroom should go to prison, as has happened in the past. Should this not also be illegal for LLMs?