As big business continues to invest heavily in AI, touting it as the “future of innovation” or whatever other remotely related coping-term they decide to use, a recent study reveals a not-so-surprising twist: labeling a product as “AI-powered” actually deters customers.
Published in the Journal of Hospitality Marketing & Management, the study highlights a critical issue—trust. Despite the rapid advancements in AI, consumers remain wary, and trust is at the heart of their hesitation.
A recurring topic here at AiConsequences is just how dangerous and hallucination-prone AI can be, so why should we trust it to run the basic functions of our society?
The Trust Deficit & Why AI Labels Cause Concern
The study found that consumers are less likely to buy a product when it’s described as using AI compared to when it’s simply labeled as “high-tech.” This effect was consistent across various age groups and product types, from household appliances to health services.
The reason, of course, is a lack of trust.
Whereas seeing an “AI”-labeled product might once have sparked some form of excitement, the AI boom of recent years has shown just how dangerous the technology can be. AI is now being used maliciously by everyday users and big tech giants alike. Combine that with generative AI’s famous tendency to make up facts and hallucinate based on its own loop of bullcrap, and trust has simply vanished.
Now, when we hear “AI,” we think of something complex, inscrutable, and risky. Despite its promises, AI is simply untrustworthy, and more and more people understand that it often makes mistakes or acts unpredictably.
This fear is, naturally, most pronounced with “high-risk” products like self-driving cars or AI-driven medical tools, where the stakes are life and death. However, even in “low-risk” categories like household appliances, consumers remain skeptical, as they should.
The technology is simply moving too fast for us to catch up, and with late-stage capitalism pumping out products at a rate that cannot possibly be safe, the public has every right to be cautious.
The Double-Edged Sword of Cognitive and Emotional Trust
The study identifies two types of trust that influence consumer behavior: cognitive trust and emotional trust.
Cognitive trust is based on the belief that AI, as a machine, should be infallible. People expect AI to outperform humans, especially in tasks that require precision and reliability. But when AI makes mistakes—like Google’s AI-powered search results tool did by providing inaccurate information—this trust is shattered quickly and dramatically. The high expectations people place on AI become a double-edged sword; when those expectations aren’t met, the fallout is severe.
Emotional trust, on the other hand, is more subjective and tied to fear of the unknown. Most consumers lack a deep understanding of how AI works, which leads to anxiety and mistrust. AI’s portrayal in pop culture only exacerbates these fears. Movies often depict AI as a threat to humanity, further entrenching the notion that AI is something to be wary of.
And then there’s, of course, the type of fear that is inherent in all humans but not mentioned in the study: the fear of losing our current reality as we know it.
Yes, I realize that sounds dramatic, but hear me out:
Artificial Intelligence, as much as it has benefited our big corporate overlords, is starting to have severe negative impacts on the average Joe. If you’ve been paying attention to the western world recently, there is no doubt you’ve noticed the layoffs happening at seemingly every major company. Countless jobs are being lost to an AI that we don’t even trust to do the job properly, and it’s only going to get worse.
Additionally, bad actors constantly use AI maliciously to plagiarize, create non-consensual deepfakes, and gut the customer service industry.
So many negatives make it hard for even the most staunch AI-supporting fanboy to cheer for products labeled as “AI-powered.”
And Then There’s Transparency
A major factor in the trust equation is transparency.
Average consumers are increasingly concerned about how AI handles their personal data. The study suggests that this fear stems from a lack of clear communication from companies about how AI works and what it does with user data.
Without this transparency, even brands that consumers previously trusted are seeing their hard-earned reputations destroyed.
And trust me, that is a good thing.
So what’s the silver lining?
As AI becomes more integrated into our daily lives, companies have to start realizing that Artificial Intelligence isn’t the end-all-be-all. They need to prioritize building trust with their customers and understand that not everything needs to be run by a soulless string of code.
But what good comes out of this for us normal people? Well, if “AI” screams “mistrust” to people — “human-made” screams the opposite.
In today’s world, as twisted as it is, simply seeing some form of confirmation or proof that a product or piece of content is human-made instantly boosts its value. Not just at face value, but as a psychological comfort in the back of our heads. Even Google tends to rank human-made content higher than AI-generated content, because it’s simply worth more.
For example, if this article were written by AI, would you trust its contents as much as you do knowing it is human-made?
If you also want to show that your content is 100% human-made, to show off your hard work, establish trust with consumers, and rank higher on Google – download any of our completely FREE NO-AI Icons.
Together, we can say NO to AI – and create a world powered by humans, not machines.