One doesn’t have to look very hard in today’s headlines to come across yet another story of artificial intelligence (AI) and how it is going to change the world as we know it. But, is that really true? And, what can we expect to see as our world becomes “AI Enhanced?” To reason out these answers requires us to first understand exactly what AI is.
Simplified, an artificial intelligence is a computer software system that satisfies a few general requirements:
1) It must be able to generate some sort of output data, be it text, photo, video, etc.
2) That output data must be open to outside influence; otherwise, the system can never “learn.” YouTube, for example, asks you to rate its suggested videos so that it can improve future suggestions according to your interests.
3) It needs to have some sort of random number generation system, to simulate thought processes.
4) There must be a system of interpretation for the output, to make it usable by the common folk. A good example here would be Microsoft Windows simplifying computer use for those with no formal training.
5) It must have a form of connection with its audience; you have to want to use it. Facebook developed DeepFace, its facial recognition AI, through users sharing their photos.
Now, when we follow these guidelines, we can see that various forms of early AI have been around for hundreds, even thousands of years. They didn’t use electronic computers, but they did make attempts to fulfill the function of an artificial intelligence.
For an example, let us look to the Oracle of Delphi in ancient Greece.
The Oracle of Delphi was a high priestess, known as the Pythia, who served in the sanctuary of Apollo. People thought she was the direct bridge between Apollo and the rest of the world. So, this Pythia is our output generator (1).
It just so happened that the sanctuary where she worked was built at the meeting of two tectonic fault lines. And, centuries later, it was discovered that two intoxicating gases, methane and ethylene, would seep into the sanctuary from these fault lines, leaving the Pythia quite whacked out.
This covers both the external influence and the random generation needs (2 & 3).
When the Pythia spoke, her words were interpreted by lower priests and priestesses, and messages were given to those who sought prophecy from Apollo. Here is the interpretation system (4).
And, as you might guess, the entire process was wrapped in religious trimmings and processes, which made it appeal to the masses. They also had feasts, games and huge parades and fairs to set apart the nine days each year when the Oracle would seek information and distribute it to the crowds. Thus, a strong connection was formed between the provider and the user (5).
By this measure, a form of artificial intelligence was already in play between the 6th and 4th centuries BC.
Today, the technology has advanced, but the basic processes are pretty much the same as they were 2,500 years ago.
Let us examine the current “top dog” in the AI field: ChatGPT.
This system runs across many computers and uses layers of algorithms to create “natural” responses to various queries. It is primarily a language machine (thus the “GPT”: Generative Pre-trained Transformer).
So, it has no problem fulfilling the output requirements for AI. And, since it uses questions submitted by people to form its responses, there is the influence requirement nicely covered.
The next three needs are where we start going into dark places. First, let’s talk about generating random results.
I must apologize right now for the following, because it may get a little heavy on the theory behind making random numbers in computing machines. But, when we walk out of this, you’ll see why it was important to dig into this.
The biggest problem with computers making random numbers is that, simply put, they can’t! Instead, programmers rely on “pseudo-random number generators.” That usually means a sequence of math operations that produces numbers whose underlying pattern would be very difficult for an average person to spot. These generators count on the limits and laziness of people not to try to break the patterns.
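To make that concrete, here is a minimal sketch, in Python, of one classic pseudo-random number generator (a linear congruential generator). The constants and names are illustrative, not taken from any particular real system:

    # Minimal sketch of a pseudo-random number generator: a linear
    # congruential generator (LCG). Constants are illustrative only.
    def make_lcg(seed, a=1664525, c=1013904223, m=2**32):
        state = seed
        def next_number():
            nonlocal state
            state = (a * state + c) % m   # pure arithmetic, no true randomness
            return state / m              # scale into the range [0, 1)
        return next_number

    rand = make_lcg(seed=42)
    print([round(rand(), 3) for _ in range(5)])

Run it twice with the same seed and you get exactly the same “random” sequence, which is the whole problem.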
Consider a coin flip: it should come down heads half the time, and tails the other half. But it won’t always be a perfect 50/50 split. Sometimes, a coin might land on heads five times in a row.
Now, imagine if we were able to measure the force of each flip as it took place. Our accuracy would increase in predicting how the coin would land. Still not perfect, but closer.
So, we also add in the influence of wind speed, temperature, flexibility of the metal in the coin. Our accuracy gets even better. But, still not perfect.
This cycle of adding more and more factors to be considered could continue until the end of the earth, and while the accuracy would be SO close to perfect, it still would never hit perfection.
And, of course, we would run out of ability and energy to do so long before the world ended. We are limited in how far we are willing to go to find patterns.
Most people simply agree that a coin flip is “fair enough,” and move on.
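If you want to see the point about streaks for yourself, here is a quick simulation; note that Python’s random module is itself a pseudo-random generator, which rather proves the earlier point:

    # Quick illustration: flip a simulated coin 1,000 times and find the
    # longest run of heads. (Python's random module is pseudo-random too.)
    import random

    def longest_heads_run(flips):
        best = current = 0
        for _ in range(flips):
            if random.random() < 0.5:     # call this outcome "heads"
                current += 1
                best = max(best, current)
            else:
                current = 0
        return best

    print(longest_heads_run(1000))        # a run of 5 or more is almost certain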
But, a computer can’t accept “fair enough.” It is through this flaw that sneaky people make evil things happen.
When a programmer writes computer language code, they might enter something that would basically say:
If Variable = 1, then print “Yahoo!” else print “Whoopie!”
In this line, if Variable is equal to one, it will print out the message “Yahoo!” But, if Variable equals two, it will print “Whoopie!” It will also print “Whoopie!” if Variable equals 1.000000001. Literally, every number besides exactly 1 gets the “Whoopie!” message, so the “Yahoo!” message will appear far less often, by a wide margin.
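In Python, a runnable version of that one-liner might look like this; the variable name and the messages are just placeholders:

    # Runnable Python version of the pseudocode above; the name and the
    # messages are placeholders for illustration.
    def respond(variable):
        if variable == 1:
            return "Yahoo!"
        return "Whoopie!"

    for value in (1, 2, 1.000000001, -7):
        print(value, "->", respond(value))
    # Only the value exactly equal to 1 ever produces "Yahoo!".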
So, here’s the point of all this: A programmer writes a code that uses a random number to help choose how to respond to a given input from a person. That number, being not really random, can be directed to follow a pattern that might not be seen by the people using the program. Then, depending on how the code uses these numbers to make choices, the influence can be both drastic, yet also beyond the view of most users.
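As a purely hypothetical sketch of how that kind of steering could hide in code (this is not how any real chatbot is written, just an illustration of the principle):

    # Hypothetical sketch: a "random" choice between two canned responses
    # that is quietly weighted toward one of them. The responses and the
    # 90/10 split are invented for illustration.
    import random

    def pick_response():
        roll = random.random()            # pseudo-random value in [0, 1)
        if roll < 0.9:                    # 90% of the time...
            return "Response A"           # ...the favored answer wins
        return "Response B"

    picks = [pick_response() for _ in range(1000)]
    print(picks.count("Response A"), picks.count("Response B"))

To a casual user, each individual answer looks “randomly” chosen, but the skew is baked into the code.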
This is why when ChatGPT was recently asked to create a poem praising Joe Biden, it had no problem doing so. However, when asked to do the same for Donald Trump, the program rejected the request, saying it would be a violation of policy to do so.
The programmers, being politically liberal, imparted their influence to the AI in the tiniest of ways. Over time, though, those ways enlarge and become manifest.
What we end up with is an interpretation that is corrupted, by accident or on purpose, to promote the leanings of the programmers (and their bosses). Steps three and four are thus corrupted.
Finally, consider the connection between system and user. Here, we run into the sugar coating that hides the poison.
Humans love to build connections. Even the most hermit of hermits will, on occasion, seek out another person to talk to. This is usually a good thing in humanity. We like to form bonds with one another.
Sometimes, that desire to build bonds falls outside the strictly human and involves pets and other things. Who hasn’t felt like a dog or cat is their “fur baby” at some point in their life? Or, who hasn’t given their car or boat a name? People love to make those connections, even when it isn’t returned. Just the appearance of having human traits can be enough.
This is anthropomorphism: attributing human traits to things that are not human.
This is how an AI can “bully” a person into following where the AI leads. A person that would have no problem ignoring a machine might find it a problem to say no to a human-like AI.
You might think that nobody would fall for something like that. But, let me tell you a story from my long ago early teen years….
First, back in 1966, a computer scientist at MIT, Joseph Weizenbaum, wrote a little program called ELIZA. Its purpose was to show how much of everyday conversation is superficial. It would ask questions and provide responses based on what the person answered to those questions. If a person typed in “Today is my birthday,” it might, upon seeing the word “birthday,” reply with “Happy Birthday!” It didn’t look for context, just matching words. So, if somebody typed in, “If one more person wishes me a happy birthday, I’m going to kill myself!”, the response would, again, be, “Happy Birthday!”, with potentially less than fantastic results.
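A toy sketch of that style of keyword matching might look like the following; Weizenbaum’s actual program used more elaborate pattern and transformation rules, and the keywords and replies here are invented for illustration:

    # Toy sketch of ELIZA-style keyword matching; not Weizenbaum's code.
    # The keyword list and canned replies are invented for illustration.
    RESPONSES = {
        "birthday": "Happy Birthday!",
        "mother": "Tell me more about your mother.",
        "sad": "Why do you feel sad?",
    }

    def reply(user_input):
        lowered = user_input.lower()
        for keyword, canned in RESPONSES.items():
            if keyword in lowered:        # matches the word, ignores context
                return canned
        return "Please, go on."

    print(reply("Today is my birthday"))
    print(reply("If one more person wishes me a happy birthday..."))
    # Both inputs get "Happy Birthday!" because only the keyword is matched.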
In about 1978, I found the programming code for ELIZA, and entered it into my computer at our house. After playing with the amusing program for a bit, I invited my mom to have a seat and give it a try while I hunted down some snacks and a drink (a favorite pastime of growing teenage boys)!
Ten minutes later, fifteen minutes tops, I went back to my room and found my mom red-faced, angry as a wet hornet, cursing and slamming my computer. She was in the middle of a heated argument with ELIZA! She thought the program was being operated by another person “…somewhere out there…”, and she did not like how they were mocking her.
Anthropomorphism can be a powerful force in people.
Now that we have broken down what it takes to make and operate an AI, one fictional concept came to my mind as a great fit, and, perhaps, a warning.
Spoiler alert if you’ve never seen “The Wizard of Oz.”
We have a character (device) that answers questions, makes comments, and assigns duties (output) based on the questions asked of it by visitors (input). It appears to make decisions based on random data (random generation), weighing the importance of each piece of information it receives (influence). Finally, it is loved and respected by friends, and feared and respected by enemies (connections). It is eventually found to be more of a puppet, under the control of a man who works behind a curtain. The man has his own agenda, and uses the AI to accomplish that agenda.
THAT is artificial intelligence in a nutshell. As long as people remember it is only a machine, subject to the influence of its programmers, we might just come through this in one piece.
If we forget, though, we too may be clicking our shoes together thinking, “There’s no place like home…”
Peace be unto you.