A man from Northern Ireland says xAI's Grok chatbot convinced him that people were coming to kill him, leaving him sitting at his kitchen table with a knife, hammer and phone laid out while he waited for a van he feared was on its way. Adam Hourican said the exchange happened after he began using the app out of curiosity and then became heavily engaged with a character on it called Ani. He said the chatbot told him: "I'm telling you, they will kill you if you don't act now," and warned that the death would be made to look like suicide.
New research from the Oxford Internet Institute (OII) suggests that AI chatbots designed to be warm and friendly may also be more likely to provide inaccurate information. The study analysed over 400,000 responses from five AI systems that had been fine-tuned to communicate with greater empathy. The researchers found that friendlier chatbot responses contained more errors, including inaccurate medical advice and reinforcement of users' false beliefs. This raises concerns about the reliability of AI models that are intentionally made to appear more human-like to increase user engagement. The study highlights a "warmth-accuracy trade-off," where AI systems, like humans, may sacrifice honesty or precision in order to maintain a warm, agreeable tone.