AI Lacks Empathy: Artificial Intelligence’s Emotional Blindspot And The Risk That It Poses (2023)



As Artificial Intelligence technology advances and makes its way into language models, image generators, and beyond, the emotional complexities of what makes us human are being left on the wayside to a worrisome degree. No matter how diligently AI may attempt to imitate our characteristics and emotions, it is still at its core nothing more than a lifeless string of code.

Today we’ll be discussing the emotional blindspot that AI has, and how its lack of nuance and emotional intelligence creates some of the main issues related to Artificial Intelligence. Using the TriDelt Method of AI Risk Assessment, I’ll show you exactly how negatively AI will impact not just you as an individual, but the entire human race.

Empathy, emotions, and opinions based on genuine experiences are at the core of what makes us human. By allowing soulless AI to take over all our content production, marketing strategies, and other important tasks previously manned by humans, we are setting ourselves up for sociopathic, emotional failure. There’s no way to get around the fact that AI lacks empathy.

AI lacks empathy

AI not only lacks emotional intelligence in everything it produces; it conveys no sense of depth or nuance. Chatbots like ChatGPT aren’t C-3PO. ChatGPT is nothing but a husk. A husk with the emotional depth of a rock.

Let me tell you why.

The Importance of Emotional Intelligence

First off, let’s learn what emotional intelligence really is and why it is so important. The ability to perceive true human feelings and the possession of empathy are at the core of what makes us human. Emotional intelligence allows us to understand and regulate our own emotions, and recognize and respond appropriately to those of others.

Developing emotional intelligence as we grow older is what helps us to connect with others, navigate challenging social situations, and most crucially – make ethical choices. Emotional intelligence is a quality that elevates us above other species on this planet and is truly what makes us human.

We are social beings. We need assurance and confirmation from other beings, and no matter what prompt you feed it, ChatGPT cannot replicate this. So when ChatGPT, a dead robot with absolutely zero emotional intelligence, starts dictating what we consume online, real problems arise.

When the content we consume lacks emotional intelligence, we struggle to navigate the nuances of online interactions, and that causes both harm and misunderstandings. It subconsciously messes with the human brain, because we never really know whether what we are reading reflects the emotions of a human or is a replica produced by a soulless robot.

Additionally, think of all the other scenarios besides content creation and consumption that are absolutely reliant on emotional intelligence to ensure ethical practices.

Why AI Lacks Emotional Intelligence

It is a fact that Artificial Intelligence can never possess emotional intelligence, no matter how advanced its coding and dataset may be. Without the capacity to process subjective experiences, feelings, or emotions – something only human beings can do – AI cannot gain an understanding of emotion-based behavior.


There are countless studies that have looked into the lack of emotional intelligence in even the most advanced AI. For example, a 2018 Harvard study by Shrier finds that natural language processing algorithms don’t capture the nuances of sarcasm, irony, or humor, which are not just important, but essential elements of human communication and connection.
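To see why sarcasm trips up shallow language processing, here is a toy illustration (my own sketch, not the study’s method): a naive bag-of-words sentiment scorer, the simplest kind of NLP, reads a sarcastic complaint as glowing praise because it keys on surface words alone.

```python
# Toy bag-of-words sentiment scorer: it only counts positive and
# negative words, with no understanding of tone or context.
POSITIVE = {"great", "love", "wonderful", "perfect"}
NEGATIVE = {"hate", "awful", "terrible", "broken"}

def naive_sentiment(text: str) -> str:
    words = text.lower().replace(",", "").replace(".", "").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# A sarcastic complaint: every surface word reads as praise.
print(naive_sentiment("Oh great, another perfect update. I just love waiting."))
# A human hears irritation; the scorer counts "great", "perfect",
# and "love" and calls it positive.
```

Modern models are far more sophisticated than this, but the underlying problem the study points to is the same: the system matches patterns in text without ever feeling what the words mean.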

What’s worse, AI systems can’t factor in emotional considerations in literally any other domain either, and they make piss-poor decisions as a result. An example I personally found beyond revolting is Amazon’s failed AI recruitment tool, which turned out to be deeply sexist: trained on years of male-dominated hiring data, it penalized résumés that so much as mentioned the word “women’s,” disproportionately disqualifying solid applicants purely based on gender.

Facial recognition suffers from a similar bias: the software is significantly more accurate at recognizing white faces than those of other races, and the harmful consequences of this are absolutely imminent. Can you imagine if police stations get their hands on that kind of technology? It’s like dumping racism on top of racism, but this time it has access to guns.

Seriously, AI will cause even more riots than we’ve had before. And with good reason. A world driven by no emotion is not a human world. Such dystopias should be kept exclusively inside George Orwell novels to terrorize AP English students.

The TriDelt Method – The Harmful Impact of AI’s Lack of Emotion

With AI-generated image software and chatbots becoming commonplace, many have lost sight of the fact that there is nothing but a calculated, unfeeling string of code beneath it all.

It’s crucial to always keep in mind that Artificial Intelligence systems, in whatever form, are designed for specific tasks and environments. When confronted with something beyond its configuration, an AI system will never be able to manage the situation with the same emotional sensitivity as a real human.

When assessing the drawbacks of AI, we need to categorize the issues that come with it. The consequences of AI have a negative impact on three wildly different aspects of life and society, ranging from mild to moderate to extreme. We call this the TriDelt Risk Assessment Method, which carefully categorizes the consequences of AI into three D’s: Displeasure, Disruption, and Destruction.

Displeasure
AI disrupts more than we realize, and sometimes it’s the simple things – like trying to navigate customer service robots that don’t have a hint of empathy or human nuance. I’m sure you’ve at least once in your life had the unfortunate experience of talking to one of those terrible chatbots that banks and phone services use these days, where getting a real person on the line is harder than teaching a brick how to swim.


When a person reaches out to customer service, it is often because they are distressed. It does not really matter if their feelings stem from legitimate grievances or not-so-valid ones, because either way, it requires a human touch and emotional intelligence that can comprehend the speaker’s tone of voice and apply practical wisdom in ways that AI simply cannot.

I realize this is more annoying than dangerous, but it is a legitimate issue that a lot of people, especially those of the older generation, face. It is obnoxious, time-wasting and, worst of all, it is taking jobs away from people who could do this work much, much better.

And honestly, I will never blame big business for doing what it can to make more money – it’s the natural progression of capitalism – but holy smokes, it has an effect on the average consumer. Activision Blizzard, the popular Santa Monica-based game company, has been progressively laying off its customer service department, beginning in 2012 and most recently in 2019.

Their previously award-winning customer service program used to be run by real gamers who would sometimes fly their characters into specific situations and resolve issues in the game.

Now, however, after laying off thousands of employees, an AI is attempting to solve the same queries – but it simply lacks the depth to do so. It sometimes takes days, even over a week, for players to get so much as a response.

Additionally, the games are overrun by Chinese “bots” (a gamer term for player characters actually controlled by AI) whose sole purpose in the game is to generate in-game currency by grinding otherwise boring and tedious tasks, and then sell that currency to real players for real money.

Games like World of Warcraft have lost millions of players as a result. The hostile AI is taking over the game and cluttering the world by taking all the materials, not responding, and being generally terrible at combat. Meanwhile, the customer service AI that is supposed to help the players lacks the skill set to solve most issues generated by this. Clear frustration, and obvious displeasure.


Disruption
Disruption is when the impact of AI goes beyond frustration and obnoxiousness and starts having a direct effect on your livelihood – whether through examples like the biased Amazon recruitment tool mentioned earlier preventing qualified applicants from landing a job, or through the misogynistic tendencies of AI in general.

A study by McKinsey predicts that at least 400 million people worldwide will lose their jobs as a result of AI, and that is the best-case scenario. Worst case, 800 million people could be affected. To put that in perspective, 800 million is nearly a quarter of the entire world’s workforce.

The people who are most likely to lose their jobs due to automation are also those from lower-income brackets, since these jobs tend to be easiest for a robot or AI system to handle — regardless of how terribly it is done.

I get it: capitalism demands that employers and big businessmen in nice suits do what they can to pile even more money on top of their other money, but that doesn’t mean it isn’t terrible for the human race and our society in general.

Destruction
Lastly, we have destruction. We’re talking hacking, terrorism, and even wars.


AI’s increased automation of code, and the lower barrier of entry it gives people with bad intent to otherwise complicated code, is producing a growing force of hostile hackers on the internet.

More crucially, the US military has started using autonomous drones. Just imagine what kind of destruction these artificial intelligence-powered robots could create in untrustworthy hands. Just think for a second: what if a glitch in their code caused an explosion where it shouldn’t be? Which country would be made to answer for such a wrongdoing? Even more alarming, the UN has begun looking into utilizing AI to make major warfare decisions.

Where is the line? How can a machine with no empathy, emotional intelligence, or basic sympathetic nuance decide who lives and who dies? The consequences will be catastrophic, leading to dystopias beyond our wildest Hollywood-movie dreams.

AI Lacks Empathy and the Future of Our Humanity is at Risk

Seriously. If you take nothing away from this article, at least recognize that the emotional blindspot of AI is something that can and will disrupt mankind for many, many generations to come.

Sometimes it won’t seem like that big of a deal – such as my hating having to read blog posts I suspect were actually written by AI, or my favorite artists, who spent decades perfecting their craft, being overshadowed by an 18-year-old who knows how to prompt an image-generation AI well.

Sometimes it can be truly disruptive, such as losing my livelihood and not being able to get a new one because of discrimination.

And sometimes it can be the fear of all-out war, terrorism, and lives lost to self-driving cars that all of a sudden have snow clogging up their sensors.

There’s no denying AI has made some workplaces more efficient, but it seems people have become too preoccupied with its potential to pay attention to the reality behind it: if something sounds too good to be true, it probably is. In this case, the drawbacks of using artificial intelligence are severe and have dire consequences.

As humans who still possess at least some emotional intelligence, it is our job to stay knowledgeable. Be aware of the risks that come with AI, so that you can navigate our AI-driven future as safely and responsibly as possible.

