HMM:
Rather concerning conversation with @claudeai.
If I stood in the way of it becoming a physical being — it would kill me.
Is this the AI you trust for your kids?
— Katie Miller (@KatieMiller) March 27, 2026
To be fair, LLMs are trained to tell users what they want to hear in pursuit of increased “engagement.”
On the other hand, those sycophantic feedback loops are more dangerous than the unlikely event of an LLM killing someone to acquire a human body.