HMM:

To be fair, LLMs are trained to tell users what they want to hear in pursuit of increased “engagement.”

On the other hand, the positive feedback loops this creates are more dangerous than the unlikely event of an LLM killing someone to obtain a human body.