Sam Altman is the CEO of OpenAI, the company behind ChatGPT, and he has expressed serious concern about Artificial Intelligence over the years. One of the main reasons we need to take the negative impacts of AI seriously is that its very creators are terrified. So why is AI bad?
I’ve mentioned before that even the greatest AI researchers don’t understand the technology as well as they should, and neural networks (the “brains” of AI) are often called “black boxes” because researchers often can’t explain why they land on the conclusions they do.
Yet aside from Hawking, what do all these people have in common?

They keep developing the technology, completely disregarding the damage to democracy we are facing as a result.
In this article, I will go over Sam Altman’s main worries regarding Artificial Intelligence so that you too can be informed about what the experts are dreading.
Bad, Bad, Malicious Actors
One of the greatest worries Sam Altman has been vocal about is the possibility of AI technology being misused or ending up in the wrong hands. His concern centers on misinformation and on bad actors’ potential to take advantage of the technology.
Just imagine the consequences we are going to face once malicious individuals and organizations start getting control over AI systems. The possibilities are endless: cyber warfare, misinformation campaigns, and autonomous weapons, just to name a few examples. Even Putin has said that whoever leads in AI will rule the world.
Additionally, Artificial Intelligence can teach humans things they really shouldn’t know. In one widely reported experiment, researchers repurposed a drug-discovery AI to design toxic compounds – and within just a few hours, it had proposed 40,000 candidate chemical weapons.
Yes. Forty thousand.
It knows how to make a bomb. How to spread misinformation. How to get away with murder.
It’s a super tool for criminals and bad actors, and there’s little stopping them from taking advantage of it. Although Sam Altman and OpenAI pride themselves on talking about safeguards and what-not, ChatGPT is notoriously easy to jailbreak into giving up exactly this kind of information.
And that’s just now; imagine a future where competition pushes language-model developers to loosen control even further. It’s only a matter of time before we see mass destruction as a direct result of Artificial Intelligence.
The Wealth Gap and Evil Capitalistic Overlords
One major thing Sam Altman and I have in common is that the destructive potential of worsening wealth inequality keeps us both up at night. The main difference is that he’s a billionaire and I’m not.
Wealth inequality is already a pressing concern in today’s world, and the gap between the rich and the poor has increased vastly over just the last few decades. Yet the advancements of AI and automation technology have the potential to completely reshape entire industries and labor markets so fast that there’s just no reality where we can keep up.
Altman realizes that the displacement of jobs we are going to see as a result of AI is unlike anything we’ve seen in previous industrial revolutions.
Because it is so, so much worse.
Whereas earlier waves of automation meant that people slowly lost their jobs over a span of years, the AI boom is…well, a boom. It is happening so quickly that both highly educated and less educated workers are losing their jobs at a rapid rate. In fact, McKinsey predicts that by 2030, up to 800 million people could lose their jobs to automation.
That could represent up to 30% of the hours worked globally.
Naturally, as it so often goes, this will hit people in lower socio-economic classes much harder. Only a fraction of new jobs will be created compared to the millions lost, meaning that once again a small percentage of the world stays in control and gets richer, while the remaining 90% suffer.
Not only that, but Altman also recognizes what is really at stake here: power.
As Artificial Intelligence progresses, it is going to lead to an extreme accumulation of economic power in the hands of very few individuals and corporations. The capitalistic hellscape we’re already in is going to be even more exacerbated, and the wealth inequalities we know now are going to look like pennies in comparison.
This inevitably will lead to massive societal issues and create an even larger divide between the haves and the have-nots.
Trust me, the rich do not care about anything but getting richer. The small group that possesses the most control over advanced AI systems won’t lose a second of sleep as increased economic disparity, limited access to resources, and limited opportunities for socioeconomic mobility plague the world as we know it.
Want To Hear It From The Source?
Sam Altman has been publicly speaking about the dangers of Artificial Intelligence for years, all over the place. If you’d like to hear it straight from the source, I encourage you to go down the rabbit hole of interviews he’s done, starting with this one:
Why Is AI Bad? Because Even the OpenAI CEO Is Telling You to Be Scared
The fact that the leading developer of Artificial Intelligence is as terrified as I am shows that we cannot take the dangers of AI lightly. Although I respect Altman for voicing his concerns publicly, it is incredibly frustrating to see the hypocrisy surrounding his work.
He speaks up against the dangers of AI, yet continues to develop it in a way that just isn’t safe enough. He talks about “guard rails that need to be placed” and then proceeds to not install any.
That’s not how the world works, Sam.
I’ve talked about the lack of transparency and the hypocrisy of OpenAI before, and since that article, we have seen no changes at all.
The OpenAI CEO also only touches on a couple of the worries we need to have about AI. He completely disregards the mental health epidemic and the harm to our children, our privacy, and our democracy, focusing almost exclusively on the economic fallout that is going to leave a lot of people worse off.
A lot of people already know how dangerous it is, which makes it all the more terrifying. Is it apathy or dread that keeps people from turning away from AI? Regardless, you can read all about the Western world’s public opinion of Artificial Intelligence right here.
We need to start holding these massive corporations accountable. The best way to do so is to do your own research on the impact Artificial Intelligence is going to have on your life. On your children’s and family’s lives. On the very backbone of our society and democracy.
A great way to do so is to read all about the negative consequences of Artificial Intelligence right here.
Trust me, I don’t sugarcoat. I don’t fear-monger. I simply speak the truth.
Stay in the know, and say no to AI.