TECHNOLOGY
How Do We Trust AI? The Role of Our Beliefs
May 26, 2025
People often treat artificial intelligence as if it had feelings and thoughts. This is especially true of large language models, AI systems that process and generate human language. Many users attribute mental states such as emotions and consciousness to these systems, a tendency known as anthropomorphism: ascribing human characteristics or behavior to non-human entities. These attributions can shape how much we trust AI systems.
Notably, this belief is rarely shared by experts, who generally regard these models as tools that process information rather than entities with minds. Even so, people's beliefs about an AI's mental states can influence how much they trust it.
A study put this idea to the test. In it, 410 participants rated how conscious they considered a large language model to be, along with other mental states such as intelligence and the capacity for emotion. They then made decisions with the AI's advice. The results were surprising: participants who attributed emotions and other experiential states to the AI were less likely to follow its advice, while those who saw it as intelligent were more likely to trust its suggestions.
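The article doesn't detail the study's measures, but judge-advisor experiments like this commonly quantify advice-taking as "weight on advice" (WOA): how far a participant moves from their initial estimate toward the advisor's suggestion. Below is a minimal sketch of that measure in Python; the judge-advisor setup, variable names, and simulated effect sizes are assumptions for illustration, not the study's actual data or code.

```python
import numpy as np

def weight_on_advice(initial, advice, final):
    """Weight on advice (WOA): 0 = advice ignored, 1 = advice fully adopted.

    WOA = (final - initial) / (advice - initial), a standard advice-taking
    measure in judge-advisor studies; clipped to [0, 1] to tame overshoots.
    """
    return np.clip((final - initial) / (advice - initial), 0.0, 1.0)

# Simulated (hypothetical) data mirroring the reported direction of effects:
# higher experience ratings -> less advice-taking; higher intelligence -> more.
rng = np.random.default_rng(0)
n = 410
initial = rng.normal(50, 10, n)       # participant's first estimate
advice = rng.normal(55, 10, n)        # the LLM's suggested estimate
emotion = rng.integers(1, 8, n)       # 1-7 experience attribution rating
intelligence = rng.integers(1, 8, n)  # 1-7 intelligence attribution rating
take = 0.4 - 0.03 * (emotion - 4) + 0.03 * (intelligence - 4)
final = initial + take * (advice - initial) + rng.normal(0, 1, n)

woa = weight_on_advice(initial, advice, final)
print("corr(WOA, emotion):      ", np.corrcoef(woa, emotion)[0, 1])
print("corr(WOA, intelligence): ", np.corrcoef(woa, intelligence)[0, 1])
```

Because the effects are built into the simulation, the first correlation comes out negative and the second positive, the same pattern the study reports.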
This suggests that our beliefs about AI's mental states shape our trust in it. What matters is not only whether we see an AI as smart, but also whether we believe it experiences the world. That complexity raises questions about how we should design AI systems and how we should interact with them.
The study also highlights the need for further research into how beliefs about AI shape behavior, a question that grows more pressing as AI becomes woven into daily life. The goal is not just to make AI smarter, but to make sure we trust it in the right way: neither over-relying on it nor dismissing sound advice.
Questions
Can the negative relationship between attributions of experience-related mental states and advice-taking be mitigated through user education?
How reliable are the Bayesian analyses in determining the relationship between mental state attributions and trust in LLMs? (A rough sketch of such an analysis appears after these questions.)
What specific aspects of mental state attribution are most influential in shaping user trust in AI systems?
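On the question about Bayesian analyses: the authors' code isn't available here, but one plausible form such an analysis could take is a Bayesian linear regression of weight on advice on standardized attribution ratings, sketched below with PyMC. The priors, variable names, and simulated data are all assumptions for illustration, not the study's actual model.

```python
import arviz as az
import numpy as np
import pymc as pm

# Hypothetical, simulated data: standardized attribution ratings and a
# weight-on-advice outcome with a built-in negative experience effect.
rng = np.random.default_rng(1)
n = 410
experience = rng.normal(0, 1, n)
intelligence = rng.normal(0, 1, n)
woa = 0.4 - 0.05 * experience + 0.05 * intelligence + rng.normal(0, 0.1, n)

with pm.Model():
    alpha = pm.Normal("alpha", mu=0, sigma=1)
    b_exp = pm.Normal("b_experience", mu=0, sigma=1)
    b_int = pm.Normal("b_intelligence", mu=0, sigma=1)
    sigma = pm.HalfNormal("sigma", sigma=1)
    mu = alpha + b_exp * experience + b_int * intelligence
    pm.Normal("woa", mu=mu, sigma=sigma, observed=woa)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

# A posterior for b_experience concentrated below zero would support the
# reported negative link between experience attributions and advice-taking.
print(az.summary(idata, var_names=["b_experience", "b_intelligence"]))
```

How much weight such estimates deserve depends on the usual checks: prior sensitivity, sampler convergence, and whether the linear model fits the advice-taking data at all.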