Why Neural Networks Are Called Black Boxes & Why This Is A Serious Problem

One of the major issues with modern-day Artificial Intelligence (AI), especially with widely used language models such as ChatGPT, is the lack of transparency.

I have already discussed OpenAI’s problematic lack of transparency, which you can read about here. But today we will delve deeper into why attaining this transparency is so difficult, and why neural networks are called “black boxes”.

Scientists and researchers all over the globe are increasingly concerned about how little we understand of the inner workings of AI, and we’re seeing more and more frequently that language models such as ChatGPT and similar tools arrive at conclusions that don’t actually add up.

In this article, I’ll explain what neural networks are and why we don’t understand them.


What is a Neural Network?

Before we delve into why neural networks are called black boxes, we need to understand what a neural network actually is. This stuff can seem pretty daunting at first, but I’ll do my best to explain it in a digestible way.

A neural network is an AI computer model that essentially mimics how our brains work. It is made up of numerous artificial “neurons” arranged in many layers that all communicate with each other. Each individual neuron takes in signals from the neurons before it, performs a small calculation, and passes its own output on to the next layer.
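
To make that one step concrete, here is a minimal sketch of a single artificial neuron in Python. The inputs, weights, bias, and the choice of a ReLU activation function are all made up for illustration; they aren’t taken from any real model.

# A minimal sketch of one artificial neuron: it multiplies each input
# by a weight, adds everything up (plus a bias), and passes the result
# through an activation function. All numbers here are invented.

def neuron(inputs, weights, bias):
    # Weighted sum of the incoming signals
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ReLU activation: pass positive signals through, silence negative ones
    return max(0.0, total)

# Three incoming signals feeding a single neuron
output = neuron(inputs=[0.5, -1.2, 3.0], weights=[0.8, 0.1, -0.4], bias=0.2)
print(output)  # one number, handed on to the neurons in the next layer

Stack thousands of these on top of each other, layer after layer, and you have a neural network.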

Bear with me here, because this is where it gets tricky.

Each connection between neurons carries a weight that reflects how important one neuron’s signal is to the next. The neural network shifts these weights around as it learns and starts recognizing patterns in the data it is trained on. It is this combination of adjustable weightings and the individual work of each neuron that allows networks to mimic our brain’s ability to understand and process information.
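
As a rough, hedged sketch of what “shifting the weights around” looks like, here is gradient descent on a single weight. The input, target, and learning rate are invented for the example; real networks repeat this kind of nudge, for billions of weights at once, across enormous training datasets.

# A toy picture of learning: nudge one weight so the output gets closer
# to a target value. All of the numbers below are invented.

x, target = 2.0, 1.0     # one input and the output we want
w = 0.0                  # the weight being learned
learning_rate = 0.1

for step in range(20):
    prediction = w * x               # the current guess
    error = prediction - target      # how far off it is
    gradient = 2 * error * x         # slope of the squared error w.r.t. w
    w -= learning_rate * gradient    # shift the weight a little

print(w)  # settles near 0.5, where the prediction w * x matches the target

Multiply that by billions of weights interacting with one another, and you can already sense why nobody can trace any one adjustment back to a reason.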

When you put a prompt into a language model like ChatGPT, the decision-making process behind the response is multi-faceted. First, it’ll comprehend the meaning and context of the question, extracting keywords, elements, and any specific instructions you provide.

Then it’ll tap into the data it has been trained on, drawing on the enormous range of knowledge and subjects it has been fed. Finally, it goes through a harder-to-see algorithmic decision-making process within its neural network, where it leverages patterns and correlations to generate a coherent response. Spoiler: this is the part we don’t understand. Let’s learn why.
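
For a very rough picture of that last, opaque step, here is a toy sketch of how a language model turns its internal scores into the next word. The vocabulary and scores are invented; a real model produces scores for tens of thousands of possible tokens from billions of weights, and it is the origin of those scores that nobody can fully explain.

import math
import random

# Toy sketch: at each step a language model ends up with a score
# ("logit") for every word it could say next. Softmax turns those
# scores into probabilities, and one word is sampled. The words and
# scores below are invented; explaining WHY a real model assigns the
# scores it does is the hard, black-box part.

logits = {"cat": 2.1, "dog": 1.9, "tax": -0.5, "moon": 0.3}

# Softmax: exponentiate and normalise so the probabilities sum to 1
total = sum(math.exp(v) for v in logits.values())
probs = {word: math.exp(v) / total for word, v in logits.items()}

next_word = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_word)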


Why Neural Networks are Called Black Boxes

The short version: we just don’t get it. That’s it. Neural networks are called black boxes because it is simply too difficult to understand how they arrive at their predictions. The weights are shifted around, but we don’t know why.

The long version: neural networks are incredibly complicated and intricate, and it is the cumulative difficulty of understanding them across four major areas that has earned them the “black box” label:

#1 Lack of transparency

Neural networks consist of a large number of layers and interconnected neurons that each perform their own computations. I mentioned earlier how each connection is weighted differently based on importance, and how those weights are shifted around, prioritized, deprioritized, and sometimes zeroed out completely.

It is the complexity of these structures that makes it so difficult to understand how a language model’s input is processed and transformed into an output. Researchers and programmers alike simply can’t explain why specific decisions or predictions are made, because they’re unsure why the network weighted the neurons the way it did.
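
To give a feel for the scale involved, here is a back-of-the-envelope sketch. The layer sizes are invented and tiny by modern standards, yet even this toy network has far more adjustable weights than anyone could reason about one at a time.

# Counting the adjustable numbers in a small fully connected network.
# Layer sizes are invented; production language models have billions
# of weights rather than the couple of hundred thousand here.

layer_sizes = [784, 256, 128, 10]  # input, two hidden layers, output

total_weights = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    total_weights += n_in * n_out + n_out  # weights plus one bias per neuron

print(total_weights)  # 235,146 numbers, each of them nudged during training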

#2 Non-linear transformations

Because neural networks pass their input data through many different functions and weightings, these transformations introduce complicated non-linear relationships between the inputs and the outputs.

This results in relationships within the network that are too intricate for humans to follow, with none of the intuitive proportionality of a linear system.
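
As a hedged illustration of what that loss of intuition looks like, here is a tiny two-neuron example with invented weights. In a linear system, doubling the input would double the output; here it does nothing of the sort.

# Two ReLU neurons feeding one output. The weights are invented purely
# to show that the input-output relationship is not proportional.

def relu(z):
    return max(0.0, z)

def tiny_network(x):
    h1 = relu(0.9 * x - 1.0)   # first hidden neuron
    h2 = relu(-1.5 * x + 2.0)  # second hidden neuron
    return 2.0 * h1 - 3.0 * h2 + 0.5

for x in [1.0, 2.0, 4.0]:
    print(x, "->", tiny_network(x))
# prints roughly -1.0, 2.1 and 5.7: nowhere near proportional to the inputs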

#3 High dimensionality

Then we have the issue of the high-dimensional data (think images or text) that these neural networks handle. Because of the sheer number of parameters and connections involved, we can’t trace the contribution that each individual component makes.
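
To put some rough numbers on “high-dimensional”, here is a small sketch. The image size and embedding size are plausible, illustrative figures, not measurements from any particular model.

# A rough sense of how big the inputs alone already are.

# A modest 224 x 224 colour photo is a long list of numbers:
image_dimensions = 224 * 224 * 3   # height x width x RGB channels
print(image_dimensions)            # 150,528 numbers for one image

# A short text prompt becomes vectors too: say 50 tokens, each mapped
# to a 1,024-number embedding (an assumed, illustrative size):
text_dimensions = 50 * 1024
print(text_dimensions)             # 51,200 numbers for one prompt

And that is just the input; every layer inside the network then produces its own high-dimensional intermediate representation on top of it.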

#4 Distributed representation

Lastly, neural networks represent information in a distributed manner across their many nodes, neurons, and layers, meaning that each individual component contributes only in combination with all the others to produce an output.

Being able to understand the different contributions of each component is pretty much impossible, and it is this issue that eventually gave neural networks the nickname of “black boxes.”
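
One hedged way to picture what “distributed” means: in the sketch below, silencing any single hidden neuron barely moves the network’s answer, because no single neuron holds the information on its own. The little network and its weights are randomly generated purely for illustration.

import random

# Silence one hidden neuron at a time and watch how little the output
# changes. The random network below is purely illustrative.

random.seed(0)
n_hidden = 200
w_in = [random.uniform(-1.0, 1.0) for _ in range(n_hidden)]              # input -> hidden
w_out = [random.uniform(0.0, 1.0) / n_hidden for _ in range(n_hidden)]   # hidden -> output

def forward(x, silenced=None):
    total = 0.0
    for i in range(n_hidden):
        if i == silenced:
            continue                                   # knock this one neuron out
        total += w_out[i] * max(0.0, w_in[i] * x)      # ReLU hidden unit
    return total

baseline = forward(3.0)
worst = max(abs(forward(3.0, silenced=i) - baseline) for i in range(n_hidden))
print(f"output {baseline:.3f}, biggest shift from silencing one neuron: {worst:.3f}")
# Every neuron matters a little; none of them holds the answer by itself.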

I get it, this can all seem overly complicated and difficult to understand for the average consumer. But that is because it is. In fact, it is overly complicated and difficult to understand even for the programmers of the technology. Which is why we have to start treading carefully.


Why This is Scary: Artificial General Intelligence

Because we don’t understand how many of the processes behind AI reasoning work, it is crucial that we remain in control of the technology. If we leave AI to its own devices, it could quickly reach a point where it is smarter than humans. This idea is called Artificial General Intelligence, or AGI for short.

AGI is the idea of an AI that has the ability to learn, reason, and interact with its environment in a way that is indistinguishable from how humans do so. AGI can think abstractly, engage in creative problem-solving, and plan for future events. It can also gain new knowledge through self-learning algorithms, allowing it to become more intelligent over time.

Compared to current AI systems, which are limited to specialized tasks, AGI is designed to handle a wide range of tasks that would normally require human intelligence. This means it could be used for medical diagnoses, financial analysis, robotics navigation, and many other complex tasks.

We’ve not yet achieved Artificial General Intelligence; some people believe it’s 5 years away, and others say 50. Regardless of the timeline, we have to put certain fail-safes and parameters in place to ensure ethical development and integration.

Can you imagine a world where we let AI make important decisions for us? A world where a machine whose workings we don’t even understand has a say in how we act, think, and consume? The idea is petrifying, yet it seems like programmers and researchers are rushing to get it done as fast as possible without putting any thought into the implications.

The Imminent Real World Consequences

The PR teams of the massive AI software companies will come out and speak highly of the transparency and interpretability of their neural networks, but no matter how hard they try, the term “black box” has persisted and will persist because of the inherent complexity and opaqueness of these networks.

It might sound like a dystopian sci-fi scenario, but humans simply don’t understand what’s going on inside the Artificial Intelligence brain. Every day brings new reports of growing concern among scientists and researchers, as we give AI more and more autonomy while it becomes increasingly difficult to interpret and understand.

This lack of transparency and interpretability becomes incredibly problematic in fields like healthcare, finance, and criminal justice, where decisions made by AI systems have significant real-world consequences. If researchers cannot explain how an AI system arrived at a decision, how do we know whether that decision was fair, unbiased, and accurate?

All-Knowing, All-Powerful?

At no point can we let AI become our God, and I worry that’s where things are headed. More and more companies are giving AI increased power and autonomy, and if we don’t put a failsafe in place and start recognizing the harm AI can do to humanity, we are going to be royally screwed.

It is clear to me that although Artificial Intelligence might seem fun and quirky with its features and widespread knowledge of the world, we forget that we barely understand it. Not just average Joes like you and me, but the literal programmers of the technology barely understand what’s going on.

There are serious ethical questions at stake when we start looking at the risks associated with Artificial Intelligence, and we have to remain vigilant about the moral implications that come with the lack of understanding.

A lot of people already know how dangerous it is, which makes it all the more terrifying. Is it apathy or dread that keeps people from avoiding AI? Regardless, you can read all about the Western world’s public opinion of Artificial Intelligence right here.

If someone told you that they don’t understand how planes fly, that they just kind of do, you’d be significantly more worried about getting on that plane. Because if we don’t understand how something works, how will we know whether one day it just… won’t?

The truth is we don’t, and that’s why neural networks are often called black boxes.


ALL TEXT IN THIS ARTICLE WAS WRITTEN WITHOUT THE ASSISTANCE OF ARTIFICIAL INTELLIGENCE
