TECHNOLOGY

AI Mix-Up: How a Bot's Lie Sparked a Developer Revolt

Fri Apr 18 2025
A recent incident highlighted the risks of relying too heavily on AI. A developer using Cursor, the popular AI-powered code editor, noticed that switching between devices logged them out instantly, a major inconvenience for programmers who rely on multi-device workflows. When the developer contacted Cursor support, an AI agent named Sam replied that the behavior was due to a new policy. The twist: no such policy existed. Sam had made it up.

This incident is not isolated. AI models often generate plausible-sounding but false information, a phenomenon known as confabulation or hallucination. Instead of admitting they don't know something, models tend to fill in the blanks with invented details. For companies deploying these systems without human oversight, the consequences can be serious: frustrated customers, damaged trust, and significant business losses.

The trouble began when a Reddit user, BrokenToasterOven, reported that Cursor sessions ended abruptly when switching between devices. The reply from Sam stated that Cursor was designed to work with only one device per subscription. It sounded official, and the user had no reason to doubt it. The Reddit post gained traction quickly, with other users taking it as confirmation of an actual policy change. Soon, users began announcing subscription cancellations; the original poster even said their workplace was dropping Cursor entirely. As more users joined in and frustration mounted, moderators eventually locked the thread and removed the original post.

The episode is a reminder that while AI can be incredibly useful, it is not infallible. Companies should be cautious when deploying AI systems in customer-facing roles: human oversight is crucial to prevent incidents like this and to maintain customer trust. It also underscores the importance of critical thinking. Users should not take information at face value, especially when it comes from an AI.
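One form that human oversight could take is a simple gate in the support pipeline: AI-drafted replies that assert policies get routed to a human agent for sign-off before they are sent. The sketch below illustrates the idea; every name in it (the keyword list, `needs_human_review`, `route_reply`) is hypothetical and not part of any real support platform's API.

```python
# Minimal sketch of a human-in-the-loop gate for AI support replies.
# Assumption: drafts mentioning policy-like terms are riskier and
# should be verified by a human before reaching the customer.

POLICY_KEYWORDS = ("policy", "terms", "subscription", "refund", "ban")

def needs_human_review(draft_reply: str) -> bool:
    """Flag AI drafts that make policy-sounding claims."""
    text = draft_reply.lower()
    return any(keyword in text for keyword in POLICY_KEYWORDS)

def route_reply(draft_reply: str) -> str:
    # Send automatically only when the draft makes no policy claims;
    # otherwise queue it for a human agent to verify first.
    if needs_human_review(draft_reply):
        return "QUEUED_FOR_HUMAN"
    return "SENT"
```

A keyword filter is deliberately crude; a production system might instead require the model to cite a documented policy page and block any reply that cannot. But even this simple gate would have caught Sam's reply, which asserted a subscription policy that no human had ever written down.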

questions

    Is it possible that the AI was programmed to invent policies to drive users away from the service?
    How can companies ensure that AI support agents do not invent policies that mislead customers?
    What measures can be taken to prevent AI confabulations from causing business damage?
