It Is Worse Than You Think: Why AI Is Bad


The problem with AI has gone beyond simple ethical considerations. AI technology, especially over the last year, has completely taken over not just the internet but our very minds. It has quickly managed to chip away at human emotion, empathy, and experience.

The answer to why AI is bad isn’t simply “machines are going to kill us”; it is so much more subtle and terrifying than that.


AI is everywhere on the internet, even if you don’t think it is. It spreads through lines of code like a parasite, eradicating any sense of genuine human emotion from every nook and cranny of the web.

That picture? That blog post? Can we even differentiate between humans and AI anymore?

We get it, AI is easy and convenient. It’s the path of least resistance:

Why build genuine content and art (you know, the very thing that separates us from animals) from a creative mind and waste hours when a dead robot can conjure up the same thing in less than a minute?

Why use the brilliant minds of both educated and self-taught humans, if a string of code on the internet can just type out whatever you need while you scroll TikTok?

I will tell you why.

SOCIAL MANIPULATION: OLD, NEW AND EMERGING THREATS

The AI movement is moving too fast for anyone to quite comprehend what is happening, and with it comes a wave of threats that we simply are not ready for yet. A 2018 report by researchers at Oxford and Cambridge lists a number of concerns regarding AI in both web and social settings.

In 2018, when the report came out, AI language models such as ChatGPT weren’t really a widespread thing yet, but the report still covers a lot of the same issues through the many other AI-related problems it examines.


Social manipulation has been one of the main concerns about AI over the past few years. It is what happens when AI-based algorithms deliberately alter your online experience so that you are fed curated, biased information that perpetuates and feeds into already existing stereotypes.

TikTok is the main villain when it comes to this sort of technology. Through hours of scrolling, it will figure out what triggers you and what doesn’t – and send you in a direction that is curated through a mixture of AI algorithms and your own subconscious.

Although this often provides you with the most personalized and entertaining content, it taps the same veins that spread fake news, perpetuate stereotypes, and build massive cult followings around online personas (how’s Andrew Tate doing these days, anyway?).
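To make that feedback loop concrete, here is a minimal sketch. This is not TikTok’s actual algorithm, and every topic name and number in it is made up for illustration; it only shows how an engagement-weighted ranker narrows a feed toward whatever already keeps you watching.

```python
# A toy engagement-weighted recommender -- NOT TikTok's real system.
# It only illustrates the loop: watch more of something, get served
# more of it, which makes you watch even more of it.
import random
from collections import defaultdict

topics = ["cats", "politics", "fitness", "conspiracy"]
affinity = defaultdict(lambda: 1.0)  # the model's guess at what "triggers" you

def pick_next_video():
    # Mostly exploit the learned profile, occasionally explore at random.
    if random.random() < 0.1:
        return random.choice(topics)
    return max(topics, key=lambda t: affinity[t])

def record_engagement(topic, watched_fully):
    # Every reaction sharpens the profile; the loop feeds on itself.
    affinity[topic] *= 1.5 if watched_fully else 0.8

# Simulate a user who happens to linger on one divisive topic.
for _ in range(50):
    topic = pick_next_video()
    record_engagement(topic, watched_fully=(topic == "conspiracy"))

print(sorted(affinity.items(), key=lambda kv: -kv[1]))
# After a few dozen videos the feed is dominated by the one topic that
# kept the user watching, regardless of whether it is good for them.
```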

Not to mention the introduction and advancement of deepfake technology. Look it up; it can be nearly impossible to tell what is real and what isn’t.

AI tools such as OpenAI’s ChatGPT, Jasper, and DALL-E build on the same core of social manipulation. When you put a prompt into any such model, it regurgitates opinions already laid down by other people across the internet. It uses algorithms to feed you the data it has gathered, without any critical-thinking or empathy check. It is a lifeless robot, not a human.

THE CONTINUOUS PERPETUATION OF RACIAL AND GENDER STEREOTYPES

AI, thankfully, isn’t at a point where it can form its own opinions and thoughts out of thin air. This does, however, mean that it has to base everything it produces on an existing set of data. A Stanford study found that because AI learns language through word embeddings, it is virtually impossible to prevent it from perpetuating stereotypes and biases.
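For a rough idea of what that looks like in practice, here is a minimal sketch using made-up toy vectors (not the Stanford study’s actual data): words that co-occur in biased text end up geometrically close to each other, and everything built on top of the embeddings inherits that closeness.

```python
# A minimal sketch of how bias shows up in word embeddings.
# The vectors below are fabricated toy values, NOT real embeddings --
# they only illustrate the mechanism.
import numpy as np

embeddings = {
    "doctor": np.array([0.9, 0.1, 0.3]),
    "nurse":  np.array([0.2, 0.8, 0.3]),
    "man":    np.array([0.8, 0.2, 0.1]),
    "woman":  np.array([0.1, 0.9, 0.1]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# With vectors learned from biased text, "doctor" sits closer to "man"
# and "nurse" closer to "woman" -- an association the model will happily
# reproduce in anything it generates or ranks.
print(cosine(embeddings["doctor"], embeddings["man"]))    # high
print(cosine(embeddings["doctor"], embeddings["woman"]))  # much lower
print(cosine(embeddings["nurse"], embeddings["woman"]))   # high
```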

We are currently in a world where race, gender, and sexuality are major topics of conversation on all ends of the sociopolitical spectrum. Can you imagine how these opinions are affected when all the content you read online is influenced by a BIASED AI?


Humans are biased, and that is okay, but if we allow our opinions, biases, and stereotypes to be influenced, perpetuated, and reinforced by something that is not even human, we lose track of our moral compass and critical thinking skills. We cannot let this happen, because it is morality and critical thinking that make us human.

Time and time again I find myself reading a blog post and thinking to myself: “did a human really write this?” I always hope that the answer is yes, but I don’t know. This constant feeling of doubt is draining the joy out of consuming content online for me.

It is what led us to create the NO-AI logo and statement for any content creator to add to their site, so that all consumers know that what they are reading is 100% human-made. There is no money being made here; I just want to make sure I am reading something human.

ANYWAY, HOW DOES THIS BIAS REALLY HURT MARGINALIZED GROUPS?

Unless you’ve lived under a rock for the past 50 years, you know that bias sits at the core of every person. There have been riots over this for a reason.

Now imagine that you’re a hiring manager, and you decide to take the path of least resistance and use AI to help hire for a new position at your company. Less work sounds awesome, and I won’t blame anyone for it, but this is where a major issue truly comes to light.

AI perpetuates stereotypes based on all the preconceived notions and generalizations about certain groups of people that have been built up over millennia. Fed this history of systemic discrimination, the AI is going to assume that women are not as qualified for technical positions as men. It will straight up disqualify female candidates, because that is the data it has been trained on, and it doesn’t have the emotional intelligence to realize the obvious fault in doing so.

So now our buddy who took the path of least resistance has allowed his AI to disqualify wonderful candidates – simply because the dataset said that historically, women are not qualified for technical jobs. ChatGPT doesn’t understand that this is no longer true because it only feeds off of data, and to it, the morality of gender equality doesn’t matter. It’s a robot. Nothing matters to it.
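Here is a minimal sketch of how that plays out, using a tiny fabricated hiring history (not data from any real company or study): a model trained on biased past decisions simply reproduces them.

```python
# A toy resume screener trained on biased historical hiring decisions.
# The data is invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Each row is [years_of_experience, is_female]; the label is hired (1) or not (0).
# In this made-up history, equally experienced women were rarely hired.
X_history = [
    [5, 0], [6, 0], [4, 0], [7, 0],   # male candidates, mostly hired
    [5, 1], [6, 1], [4, 1], [7, 1],   # female candidates, mostly rejected
]
y_history = [1, 1, 0, 1,
             0, 0, 0, 1]

model = LogisticRegression().fit(X_history, y_history)

# Two new candidates with identical experience, differing only in gender:
print(model.predict_proba([[6, 0]])[0][1])  # probability of "hire" for the man
print(model.predict_proba([[6, 1]])[0][1])  # noticeably lower for the woman

# The model never "decided" women are less qualified; it just learned the
# pattern baked into the historical data and will keep applying it.
```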

AI doesn’t have any emotional intelligence or empathy. It doesn’t recognize right from wrong; it just jumps to conclusions based on its datasets, and it doesn’t matter to it that those datasets are faulty and biased.

AI bias isn’t limited to these examples; it spans a whole array of marginalized groups and the preconceptions associated with them. Princeton computer scientist Olga Russakovsky confirms that AI is not only biased when it comes to gender and race; its bias goes “way beyond that.”

SHOCKING NUMBERS: AI STRIPPING PEOPLE OF THEIR LIVELIHOOD

Human workers being replaced by lifeless machines is unfortunately nothing new. Manual manufacturing jobs were replaced by machines a very long time ago, and as AI has become more and more intelligent, the automation of jobs has done nothing but expand. As AI advances, machines gain new abilities and can complete increasingly complex tasks.

No longer is the machine replacement of humans limited to moving items on conveyor belts or taking your order at McDonald’s. AI can create code, discuss pharmaceutical solutions, write papers and articles, and create art. It can be an accountant, a banker, a healthcare professional, a musician, a translator, a project manager; the list goes on and on.

The first conveyor belt used for Ford manufacturing factories (1913)

It can even make warfare decisions, deciding who dies and who doesn’t. This is terrifying.

Unfortunately, this is just the inevitable curve of AI: the smarter it becomes, the more humans it can replace and the more decisions it makes for us. And the more humans it replaces, the more job displacement happens. This is starting to become a real problem, one that has been continuously countered by simply saying “but it replaces jobs nobody wants anyways” and “it creates more jobs!”

No. Listen. It’s taking away jobs from those who are not privileged enough to go to a fancy University and get a fancy degree to work in AI.

Times are changing, though. AI and machine automation might have taken jobs away from blue-collar workers for a century, but now they are taking jobs away from the skyscraper-dwelling white-collar university graduate as well.

Maybe this is why people are suddenly starting to care a little bit. Nobody was batting an eye when those with a lower income lost their jobs, but now that AI is being integrated into the computer science and tech world, stripping away the livelihood of six-figure salary workers, eyes are suddenly starting to widen.

I don’t discriminate; I think it sucks equally for anyone to lose their job to a machine, no matter how much or how little you earn. Because this is scary. Take a look at this: a study by McKinsey predicts that by 2030, over 400 million people worldwide could lose their jobs to automation and AI such as ChatGPT. They also state that in the worst-case scenario, this number could be close to 800 million.

That is 30% of the entire world’s workforce.

AI CANNOT COMPLETELY TAKE OVER – WE HAVE TO TAKE ACTION

The numbers are real, and they are out there. This issue isn’t a tinfoil hat conspiracy, but a real sociological issue that is going to impact the livelihood, opinions, and core moral compass of humans for generations to come.

It seems that every time more data and research are published on the topic, we learn more about the true danger of AI and the associated language models that are now within reach of every person on the internet.

AI is biased, it takes away people’s livelihoods, and it lacks empathy. This article barely scratches the surface of the harmful effects and issues that AI brings with it, but we aim to expose the truth. We will ensure that people have access to data collected and formatted by real humans. And trust me:

NO AI WAS USED IN THE CREATION OF THIS ARTICLE.
