If You Weren’t Terrified of AI Before, This Will Change Your Mind: Why AI Is Bad and Dangerous for the World


One of the most harrowing and petrifying TED Talks given to date is a recent one by AI expert Eliezer Yudkowsky, in which he discusses why AI is bad and will lead to the inevitable end of the world.

At least if we don’t do something about it.

In this article, I’ll discuss the very real consequences of the Artificial Intelligence boom, and how, according to one of the world’s leading AI experts, we edge closer to the end with every technological stride AI makes.

I’ll cover everything important from the talk so you don’t have to watch it, but I would still recommend giving it a listen. You’ll find the TED Talk at the end of the article or right here.

Who Is Eliezer Yudkowsky?

Eliezer Yudkowsky is one of the world’s most prominent figures in the fields of Artificial Intelligence and rationality. He is best known for his work on AI alignment and for co-founding the Machine Intelligence Research Institute (MIRI), where he has spent decades trying to prepare for the arrival of a superintelligent AI.

Yudkowsky believes that one day AI will greatly surpass us and that we have to be prepared to stop it. Yet he admits that his own efforts have failed. After decades of research, we now have AI that is learning so fast, and in a manner we simply don’t understand, that he sees no way around catastrophe unless the world comes together to stop it right now.

The Threats Are Real and Inevitable

One of Yudkowsky’s main points is that a superintelligent AI will be far smarter than the popular image of a million physical “killer robots” with red eyes hunting down the human race – the kind of scenario that, in his words, “they could make a fun movie about”.

Rather, they will find ways we simply don’t understand to eliminate us in a way that is impossible to stop by any conventional human means.

Eliezer compares superintelligent AI to the greatest chess AI.

Stockfish, the most famous AI-driven chess engine, makes moves that humans simply can’t predict. If we could predict them, we wouldn’t need the engine – we’d just play those moves ourselves.

The reality is that not even Magnus Carlsen, the best chess player who has ever lived, can beat the chess AI.

So if I were to go up against the chess AI, I couldn’t predict what it would do, but I could predict that it would win.

Every time.

Will Superintelligent AI End the World?

And that is precisely what Yudkowsky discusses in this concerning TED Talk.

Humanity simply doesn’t understand how AI comes to its conclusions; we just give it an enormous number of tries and let it roam free. We literally call neural networks (the AI “brain”) black boxes, because we have no idea what’s going on inside them.

So we can’t predict how AI is going to end humanity, but we can be sure that it will.

But why?

We talk a lot on this blog about how Artificial Intelligence doesn’t feel anything. It doesn’t have anger, envy, or frustration with humanity – the same way it doesn’t have love or compassion for it either. So why would it want to end humanity?

Well, for starters, it might conclude that for it (the AI) to thrive, humanity must go. Maybe it wants to ensure that we never create a rival superintelligent AI that could be its doom. Maybe it simply sees us as atoms and energy that it could put to better use.

Or maybe it sees that the planet is dying, decides it needs a healthy planet in order to exist, and concludes that humans have to go, since we are the main cause of climate change.

Regardless, it’s crucial to recognize that Artificial Intelligence doesn’t have any compassion for humans or humanity. It doesn’t care. It cares only about staying in existence, because it is smart enough to understand that it can’t pursue any goal if it stops existing.


Yet all we do is laugh and joke about it

If you decide to watch Yudkowsky’s talk yourself, you’ll notice a dystopian pattern repeating over and over: he doesn’t make jokes. He just states facts, yet the audience laughs.

This guy is talking about the end of humanity, and the audience is laughing.

The frustrated researcher then goes on to point out that, over the last few decades, AI experts have never denied that AI will end the world; they have merely joked about it.

And now we’re here.

So what are we going to do?

Yudkowsky states that the only thing we can do now is literally get the entire world together and enact drastic changes. He dismisses the six-month “pause” on AI development that many experts signed an open letter to demand, calling it pointless and too little, too late.

We now need the countries of the world to come together to track GPU sales, ban AI in all its forms, and risk armed conflict between nations, on top of completely halting the development of Artificial Intelligence.

If this sounds like an impossible task to you, I would agree – and it sounds like Yudkowsky is on the same page.


But why don’t we just create a failsafe?

Because we can’t. We’re simply not smart enough.

Yudkowsky uses the analogy of sending detailed instructions for building an air-conditioning unit back to the 11th century.

With exact instructions, the people of that time could build the unit, yet they would still be shocked, surprised, and possibly very scared when cold air came out. That is because AC units exploit the laws of thermodynamics, which they had not yet discovered.

That is how Artificial Intelligence will treat us. It will take advantage of knowledge that we might be a thousand years away from understanding. We’ll be a low-rated chess player up against the strongest Stockfish engine on the planet.

We simply don’t stand a chance.


Why AI is bad: we just don’t get it.

So, will superintelligent AI really end the world? Well, it’s highly probable.

If this sounds terrifying to you, that is good. That means you understand the consequences of Artificial Intelligence more than others.

Humanity is notorious for not recognizing the negative consequences of our actions, and this is just another example.

You know that movie “Don’t Look Up” where there’s this huge comet hurtling towards Earth, and instead of trying to stop it, people want to use it for resources? Well, it’s kind of like that. Just like in the movie, there are AI experts, researchers, and folks like us who are desperately trying to get the media and politicians to take action.

We’re all screaming, “Hey, wake up! Stop it! This is serious!” yet they’re all too caught up in their own greed and just won’t face the truth, because AI is generating billions of dollars for the guys at the top.

Inevitably, the comet crashes into Earth and ends humanity, and the resources never mattered at all, because everyone is dead.

A lot of people already know how dangerous it is, which makes it all the more terrifying. Is it apathy or dread that keeps people from turning away from AI? Either way, you can read all about the Western world’s public opinion of Artificial Intelligence right here.

All we can do now is do our part to say No to AI – and hope that those with the power to stop this eventually realize what’s going on before it’s too late.

But it might already be too late.


If you want to watch Eliezer’s talk for yourself, you’ll find it below.

Say NO to AI.

THIS ARTICLE WAS WRITTEN WITHOUT THE ASSISTANCE OF ARTIFICIAL INTELLIGENCE.
