The Danger of ChatGPT & The Mass Production of Misinformation in AI


More and more content across the internet is now being mass-produced by AI language models such as ChatGPT. However, few people seem to talk about the dangers and drawbacks of relying on information from a machine that not only lacks emotional intelligence and human empathy but also doesn’t have access to any information from after September 2021. The truth of the matter is that there is too much misinformation in AI for it to be reliable.

There are many reasons why relying on AI-generated information is harmful to us, which I have covered in a variety of articles on our Blog. But there is one issue that has not been covered, and it matters especially in an era of fake news and misinformation: what happens when you ask for information from someone who just woke up from a coma that started years ago?

In this article, I will cover why ChatGPT is unable to access data from after 2021, how that fuels misinformation in AI, why this is a problem, and what we can do to ensure that we read true information from reliable sources when consuming content online.


Why Doesn’t ChatGPT Have Data Post-2021?

ChatGPT is limited to information from September 2021 or earlier. The AI language model was trained on a massive dataset of information scraped from the internet up to that point, but for whatever reason (which OpenAI has not disclosed), the training data has not been updated since.

This means that any new developments, news, events, or factual updates that have occurred since then are simply unavailable to the language model.

It has always been a frustration of mine that OpenAI does not disclose as much information as I believe it should, which you can read more about in my article: “No OpenAI Transparency: Admits To Withholding Details On Model Architecture, Training Methods”.

This has also been an issue for a lot of other people, which is why there is a huge movement to pause AI development for six months to allow the world to catch up with the insane pace of progress we’re seeing. Even huge personalities such as Elon Musk, one of the co-founders of OpenAI, have signed on to the movement.

The Danger of Misinformation in AI

When ChatGPT was released, thousands of bloggers and content creators picked up the technology and used it for absolutely everything related to their jobs. As a result, the internet became flooded with articles containing false information.

Yes, ChatGPT might have perfect grammar, a beautiful and diverse vocabulary, and the ability to spit out facts about seemingly everything, but a lot of the facts it comes up with have been wrong.

Even though a lot of people are aware of this, content creators have been so eager to save time and money, and to walk the path of least resistance, that they have allowed themselves to flood the internet with this false information.

Remember that ChatGPT just scrapes the internet for “facts” but doesn’t sit down and reflect on whether the information it’s scraping comes from reliable sources. AI language models don’t know if experts are real experts or if it’s some random dude in a basement mass-producing random content on a blog with no fact-checking.


Whereas Google has a large focus on authority, expertise, and trustworthiness, OpenAI does not seem to put in the same effort. When you google something and it shows up in the top three results, you know it has been passed through a wide array of filters: Google’s fine-tuned algorithms crawl the page to assess its authority, expertise, and trustworthiness.

I don’t even blame OpenAI for all the fake news that ChatGPT produces, but I do take great issue with all the content creators who allow themselves to post potentially harmful misinformation simply because it was the fastest and easiest way to do it.

It’s impossible to ensure that the dataset that ChatGPT is built on actually contains 100% true and fact-checked information, so it is our job as content creators and consumers to make sure we don’t rely on the information it produces.

ChatGPT cannot contextualize information. For example, if you ask it when the first iMac came out, it is going to say May 1998, which is untrue. The iMac was announced in May 1998 but didn’t come out until August. ChatGPT doesn’t understand the difference; it just saw in its scraped training data that there was a connection between May 1998 and the iMac and called it a day.

Now imagine reading a tech blog that used ChatGPT to produce its content. You just consumed incorrect information.

How Bad Is It? A Video Example.

Here is a video from 2022 that covers how using an AI language model for writing can be incredibly dangerous. In the video, the creator tries to write a blog post about Beethoven’s early life using just the language model, and it turns out that the model made up dozens of facts.

Like, straight-up lies that make no sense, such as “Beethoven’s dad was a good father” (in reality, he beat him and locked him in the basement if he played a wrong note on the piano). The information it provided that was not a lie was plagiarized from other articles across the web.

The Misinformation Epidemic Is Real

There is plenty of research showing that ChatGPT and similar language models are not able to reliably provide true and accurate information. To me this is obvious, and the answer could just be “then don’t use AI for fact-checking”, but the real problem arises when we look at how modern-day content creators produce articles, art, and blog posts.

AI has revolutionized how content creators go about writing their content, so what happens when everything produced has had AI’s sticky fingers all over it? Even if a creator doesn’t use AI for all the information, the video above clearly shows that language models are more than happy to add random, bogus facts to spice up an article.

This means that if you’re reading a blog that has been produced by AI, there is a high probability you’re consuming misinformation and fake news. And trust me, more and more huge content creators are using AI. For example, Buzzfeed recently laid off 12% of its employees and turned to AI to produce content.

Buzzfeed isn’t an exception. More and more writers are being laid off from content companies both small and large and replaced by artificial intelligence. This is going to severely hurt the way we take in new information.

I truly believe that all forms of media that use AI should, by law, be required to put a disclaimer on their content that clearly states AI was used in its creation.

For example, Buzzfeed should clearly state, in some form of statement or logo on its website, that its main content creators are now dead strings of code. At least be transparent about the lack of humanity, if nothing else.

Obviously, and unfortunately, it doesn’t seem like that is going to happen any time soon. So, as an alternative, I have created a way for content creators to easily state that their content is 100% human-made by putting any of our free No-AI badges on their page.

The badge then links to this statement, and the creator’s content is quality-checked by us to ensure that no AI was part of the process.

THIS ARTICLE WAS WRITTEN WITHOUT THE ASSISTANCE OF ARTIFICIAL INTELLIGENCE.

Please share this article
