The End Of Humanity: Artificial General Intelligence

Shot of the earth engulfed in flames against a black background

Table of Contents

The greatest fear for most experts is the rapid advancement of Artificial General Intelligence, the idea of an AI that can essentially think completely for itself and compete with the human brain on close to any topic.

In this article, I will explain exactly what Artificial General Intelligence is, why experts fear it, and why their fears are legitimate. The dangers and threats of Artificial General Intelligence are many, and we need to stay knowledgeable to secure the future of humanity.

artificial general intelligence

What exactly is Artificial General Intelligence?

Artificial General Intelligence, or AGI, is the idea of an AI that has the ability to learn, reason, and interact with its environment in a way that is indistinguishable from how humans do. AGI can think abstractly, engage in creative problem-solving, and plan for future events. It’s artificial life.

The reason AGI is such a scary subject is that it can gain new knowledge through self-learning algorithms, which will allow it to become more intelligent over time. Compared to current AI systems which are limited to specialized tasks, AGI will be designed to handle a wide range of tasks that would normally require human intelligence.

This means it could be used for medical diagnoses, financial analysis, robotics navigation, and many other complex tasks, which sounds awesome — until you realize that it can, in the same vein, operate a lot of things we don’t want it to touch.

Regardless of how intelligent AI becomes, it will still always be incapable of moral reasoning and ethical dilemmas. If it is granted too much autonomy and legal power and decides that humanity just isn’t worth it… well, we’re screwed.

Not only is this a possibility. It is inevitable.

Now I have to make it clear that as of writing, we have not yet actually achieved Artificial General Intelligence. Some believe it’s a couple of years away, some people say that it’s 50 years away. However, at the current rate that AI has developed in just a few months, I worry that it might be just around the corner.

Regardless of its emerging timeline, the public needs to start gaining more knowledge on the technology so that we can start speaking up. We need to ensure that AGI has failsafes and parameters in place to ensure ethical development and integration. If Artificial General Intelligence is achieved tomorrow morning, I worry there might not be a lot of humans left very soon, and I will tell you exactly why.

How Will Artificial General Intelligence Impact The Future of Humanity?

If I was to list out every single concern that experts have regarding Artificial General Intelligence, I would have to turn this article into an entire book. However, the main concerns are mostly rooted in three main areas: super-intelligence, the lack of control of this intelligence, and the unintended consequences that would happen as a result.

The main concern regarding AI, which you might have seen in countless works of fiction, is this idea of super-intelligence. Essentially, when AI systems become smarter than humans. It’s a step above Artificial General Intelligence, and the reason this is such a massive issue is that AI does not have values that align with ours.

Super-intelligence would have unmatched cognitive abilities that humans wouldn’t even be able to comprehend. It will process information faster, make complex decisions, and outperform humanity in scientific research, planning, and engineering. So what happens when this all-powerful genius no longer matches our values?

What happens when it realizes that to stay alive, the earth has to be alive, and that the earth is currently being killed by humans because of wars, climate change, and lack of conservation? If super-intelligence’s goals aren’t well-defined and set in stone exactly like humans, it might decide that for its own preservation, it has to get rid of humanity.

Artificial intelligence digital brain. Shaped with blue neural connection glowing lines.

This concept is called instrumental convergence, which is when a super-intelligent system may converge on goals like self-preservation or resource acquisition. It doesn’t even need to be as dramatic as saving the planet from global warming, it could be something super simple. Let’s create an example:

If a massive corporation selling baseballs all of a sudden cannot keep up with the means of production and are no longer able to produce more baseballs than they’re selling, they might turn to AI to help them. The corporation goes: “Dear AGI, please help us maximize the production of baseballs so that we can earn lots and lots of money. Something something capitalism.

Baseballs

If this goal isn’t more clearly defined, the AI might decide that the main obstacle keeping the corporation from optimizing the production of baseballs is, you guessed it, humans. To maximize the number of baseballs on the planet, it could then follow through on its instrumental convergence and lead to committing a variety of harmful actions since it does not have a moral compass to understand human values.

Artificial Intelligence is an optimization tool, but it doesn’t necessarily care how it optimizes it.

In this example, everything would be all fine and dandy if a human is still in control and goes: “Well hang on, eradicating humanity for the sake of baseball production is not the morally just thing to do” — however, what happens in a scenario in which we have no control?

AGI autonomy is pretty much inevitable, and as it becomes more and more independent we are looking at a future in which we have completely lost control. Regardless of value alignments, it is going to be near impossible to ensure that AGI systems understand and prioritize human values and objectives. It is simply too difficult to understand because our values and too complex, subjective, and varied depending on which culture you’re in.

Even if it does understand our values, there are few ways to prevent it from deviating from the goals or “wireheading” — which means constantly finding shortcuts that would then conflict with our values and lead to threats and harmful consequences.

In a perfect world, effective control mechanisms would be developed to maintain oversight and intervention over these AGI systems. Essentially, putting in a failsafe that would give humans the power to interrupt or modify the system’s actions.

But if the AI is in control, completely independent from humans, what is there to stop it from just ignoring any failsafe? To deactivate us instead?

Nothing.

That is why the idea of control is so terrifying.

Humans have ruled the earth from the get-go because we’ve had intelligence and resources that vastly surpassed any of the animal kingdoms, now imagine if there’s a significantly smarter entity among us that we can no longer control.

Realistic Earth Planet against the the star sky

The Future of Humanity is At Stake Here, People!

This future seems scary and hopeless, and that might just be because it is. This is why experts are so terrified of the increasingly rapid pace at which AI technology is developing. Learn more about their fears and the letters they have posed to halt the technology in my article: “Why Experts Are Terrified of AI: A Grim Future“.

Artificial General Intelligence is going to revolutionize the world, our lives, and how we view ethical considerations. Right now, our job as the human race is to prevent this technology from happening so that we might see a future not controlled by a soulless machine.

We cannot allow AI to be our God, and we cannot allow developers to simply go on as if this technology doesn’t have the potential to harm everyone in its way. Professor Yuval Noah Harari recently hosted an AI keynote speech where he discusses all of these concerns, as well as one major complication:

Restriction.

Humanity has posed insane amounts of restrictions on the development of nuclear weapons, to ensure that we do not kill ourselves. However, Professor Harari brings up one point that is hard to debunk: nuclear weapons do not have the power to create more nuclear weapons. It cannot think on its own, and in the end, is the sole responsibility of the humans who own and control them.

What happens when we let go of control of a technology that as of now, has no restrictions?

I recommend that you check out Harari’s keynote below to further understand the unreal and dangerous complications of Artificial Intelligence, super-intelligence, and AGI.

Stand up for humanity.

Say No to AI.

THIS ARTICLE WAS WRITTEN WITHOUT THE ASSISTANCE OF ARTIFICIAL INTELLIGENCE

Please share this article

Leave a Reply

Your email address will not be published. Required fields are marked *

Share this article
Facebook
LinkedIn
Reddit
X
Email