TECHNOLOGY

Autonomous Cars: How Labels Shape Our Trust

Fri Jan 10 2025
Ever wondered how a simple label can change our perspective on artificial intelligence (AI) in cars? A recent online study put this question to the test. Researchers showed 478 participants guidelines that described AI as either "trustworthy" or "reliable." Participants then read three short vignettes and completed a questionnaire about their views on AI in cars. Interestingly, the "trustworthy" label made little difference to how people judged the specific scenarios. It did, however, lead them to rate the AI as easier to use and more human-like, particularly in the sense that it seemed to have good intentions.

In other words, while the label didn't sway judgments of individual situations, it left people feeling more comfortable and confident about the AI overall. The study is a reminder that small changes in how we talk about AI can have outsized effects on how we think about it. It's a bit like giving AI a personality that shapes our trust and acceptance.

questions

    If AI is labeled as 'reliable', will users expect it to have a working coffee maker in the dashboard?
    Is the use of the word 'trustworthy' a ploy by AI developers to distract from hidden surveillance capabilities?
    Will users trust a self-driving car more if it says 'I promise, no sudden left turns to the pizza place again!'?