Essay: Intelligent Self-Aware Machines and the Human Race
By: Robert C. Ilardi


I found this old essay I originally posted back on Wednesday, June 4, 2003. Thought it was still interesting so decided to repost it in 2012...

Today, Wednesday, June 4, 2003, I watched the DVD The AniMatrix. It's nine short Anime films about the world of The Matrix from the brothers Larry and Andy Wachowski, the writers and directors of The Matrix. Watching the second and third films, titled "The Second Renaissance" Parts I and II, I asked the question which many people, and obviously the Wachowski brothers, have asked themselves... "Will we ever go too far?"

Even the remake of the movie "The Time Machine" asked the exact same question. Will we ever go too far with our technology, so that it causes the end of human civilization and perhaps the extinction of the human race? We already have the technology to do it today thousands of times over. But will it ever come to the point where, as in The Matrix and other films such as Terminator, humans give birth to a new intelligent race of machines, and because of fear and human hatred, once these machines become a society or part of our society, we try to destroy them? And if this day of shame for the human race ever comes, how will the machines react? It is logical to survive, and if they are not only intelligent but actually conscious, that is, they know they are in a way "alive," will they want to survive?

Obviously this is something from science fiction and pop culture today; however, the age of the "spiritual" machine is upon us. (Please see: The Age of Spiritual Machines: When Computers Exceed Human Intelligence by Ray Kurzweil.) Computers today can perform billions of calculations in a single second. This is still slow when compared to the human brain, because a computer is digital, simply a bunch of zeros and ones, whereas the human brain is analog, and when it processes something such as an image, it is truly that image which is stored and processed. Computers must first translate everything, including images, into large groups of patterns of zeros and ones called binary. Sure, simple, extremely repetitive tasks such as adding can be done billions of times faster than a human brain, but for complex operations, such as the everyday things humans do even in their sleep, a computer's brain, its CPU (Central Processing Unit), will grind to a halt trying to process a single second of what the human brain does constantly without us even giving it a second thought.

Before we continue, it is a good idea to review what a computer program is. A computer program is a set of instructions that tells a computer exactly what to do. These instructions eventually break down into extremely simple little operations, such as load a value into a memory location, or add the values stored in two memory locations together and store the result in a third, or perhaps move to a new memory location and get its value. We also have simple logical operations such as equal or greater than. This might seem like intelligence, the ability to determine if two items are equal. But a computer can only compare numbers. Is 1 equal to 2? In the end it is just switches and electricity. The number 1 has a certain electrical characteristic which is different from that of the number 2; through various electronic techniques, these differences in electricity translate into a third electrical signal, which tells us whether 1 is equal to 2. There is no intelligence here, simply switches and electricity, no different from having millions of light switches on your wall that you flip off and on to mean different things.

So a computer must be told exactly what you want it to do in the form of a program, which is made up of hundreds, even thousands, sometimes even millions, of the tiniest steps to solve some problem. The steps available are called instructions and are built into the CPU microchip. (Everyone is familiar with Intel's Pentium family of processors (CPUs).)
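The instruction-level picture described above can be sketched in a few lines of Python. This is a toy simulation, not a real instruction set: the opcode names (LOAD, ADD, EQ) and the memory model are invented for illustration, but they mirror the idea that even an "is 1 equal to 2?" question reduces to dumb mechanical steps.

```python
# A toy "CPU" sketch: a handful of memory cells and three primitive
# instructions. The instruction names are invented for illustration;
# a real CPU exposes similar but far richer opcodes.

def run(program):
    memory = {}   # memory location -> stored value
    flag = False  # result of the most recent comparison
    for op, *args in program:
        if op == "LOAD":    # load a constant into a memory location
            loc, value = args
            memory[loc] = value
        elif op == "ADD":   # add two locations, store result in a third
            a, b, dest = args
            memory[dest] = memory[a] + memory[b]
        elif op == "EQ":    # compare two locations; just number matching
            a, b = args
            flag = memory[a] == memory[b]
    return memory, flag

# "Is 1 equal to 2?" expressed as the tiniest of steps.
program = [
    ("LOAD", 0, 1),
    ("LOAD", 1, 2),
    ("EQ", 0, 1),
]
memory, flag = run(program)
print(flag)  # False: no intelligence here, just stored numbers compared
```

The point of the sketch is that nothing in the loop "understands" equality; the comparison is the software equivalent of the wall of light switches.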

Are we even close to an intelligent, self-aware machine? Well no, we aren't, not yet... Take for example learning... This is what computer scientists call Artificial Intelligence: the ability of a computer program to adapt to new inputs into the system and return some result without the program having to be told every little step to get from inputs A and B to result C. Normally a computer programmer writes a program telling the computer each and every little step of execution to get from A and B to C. Computers, even ones with "Artificially Intelligent" programs, are still not even close to what would be necessary for self-awareness. Take for example walking up and down the stairs. For a human, learning how to climb up and down the stairs is pretty natural. If you have ever observed a little baby, you always need to put up gates or at least watch the staircases, because they will always attempt to walk up and down them with ease. Once they can go up the stairs, coming back down doesn't take much longer, if any time at all, for them to learn. For a computer this is very different. Traditionally, if you wanted a computer to "understand" how to climb up a staircase, first you would have to explain in overwhelming detail where the stairs are located. Then you have to describe to it how to get from where it is standing to the stairs. Then you have to describe how to lift its first leg and place it down on the first step in the series of steps. Then you have to do the same for the leg still left on the ground or on the previous step. If you could manage to have the computer move up the stairs, describing exactly how to keep its balance while it climbs would be a great help, or it will fall. Once it reaches the last step, you will have to make it understand that there are no more steps and it must stop climbing. Yes! Finally done teaching the computer how to climb up the stairs; you just wrote a very complex computer program, step by step.
Well, the computer made it up the stairs; now what should it do? A computer will just stand still at the top of the stairs, waiting for the next program to execute to tell it what to do next. Maybe you want it to come back down the stairs. But our computer doesn't know how, because it just knows how to climb up, not down. Remember, the leg motion for climbing up is not the same as for climbing down, not to mention the balancing is a lot different as well. And if we did want it to climb down the stairs, not only would we have to write the climb-down-the-stairs program, we would have to first tell the computer to execute or "start" that program. Hopefully the program is smart enough that the computer will turn around first, finding the steps before it starts moving its legs in the fashion used to climb down stairs! If not, it will do so in place and probably fall over!
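The scripted, step-by-step nature of such a program can be caricatured in Python. Everything here is hypothetical: the robot commands (walk_to, lift_leg, place_leg, shift_balance, turn_around, stop) are invented primitives, and the Robot class simply records every command it receives. Nothing is learned; climbing down is a completely separate program from climbing up, exactly as described above.

```python
# A caricature of the explicitly scripted stair-climbing programs
# described above. All robot commands are hypothetical primitives;
# this Robot just logs them so the script can be inspected.

class Robot:
    """A stand-in robot that records every command it is given."""
    def __init__(self):
        self.log = []

    def __getattr__(self, name):
        def command(*args):
            self.log.append((name,) + args)
        return command

def climb_up(robot, num_steps):
    robot.walk_to("stairs")               # must be told where the stairs are
    for step in range(num_steps):
        robot.lift_leg("left")            # each leg motion spelled out...
        robot.place_leg("left", step + 1)
        robot.shift_balance("forward")    # ...including keeping balance
        robot.lift_leg("right")
        robot.place_leg("right", step + 1)
    robot.stop()                          # told explicitly that the stairs end

def climb_down(robot, num_steps):
    robot.turn_around()                   # without this it climbs in place and falls!
    for step in range(num_steps, 0, -1):
        robot.lift_leg("left")
        robot.place_leg("left", step - 1)
        robot.shift_balance("backward")   # balance differs going down
        robot.lift_leg("right")
        robot.place_leg("right", step - 1)
    robot.stop()
```

Note that neither routine can cope with anything it wasn't scripted for: a different step height, a landing halfway up, or a missing stair would each require yet another program.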

Now if we used an artificial intelligence programming language such as Prolog for the computer to climb up or down the stairs, it would try all possible combinations, and most likely it would fall down the stairs, leaving us with one very damaged, no-longer-working computer before it found a single combination that climbs down the stairs. Another way would be to use a Neural Net, which "learns" to do a task. However, how it actually learns is a problem: it cannot learn by example, since programs to learn by visual example, for this extremely complicated motion of the human body (although we take it for granted), have not been written for the computer to learn this way. By the time you teach a computer to climb up and down a staircase, plus everything that goes with that, such as finding the stairs to climb, you would probably be able to write a more traditional program that can do it. Some Neural Nets are complicated enough these days to drive a car safely on a road. However, driving a car is a much easier motion than climbing up and down a staircase; having wheels makes things a lot easier, though other problems like steering present big challenges as well. Yet even with this level of sophistication, we are still nowhere near an intelligent machine that is self-aware. What I mean by this is that a computer program, no matter how sophisticated, will always be limited to executing the tasks it was designed to do, and only what it was designed to do. A computer will never come up with a good idea while juggling many tasks at once (and with operating systems like Windows and multitasking, it can do multiple things at once very well, as long as it was told to do them and how to do them in detail!) and then pursue that new good idea because it wants to. It has no want, nor any imagination to determine what it wants to do next! It can only do what it is supposed to do!
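For contrast with the scripted approach, here is a minimal sketch of the kind of learning a Neural Net does: a single perceptron, the simplest possible case. It is never told the rule; it is only shown examples and nudges its weights after each mistake. Learning the logical AND function is a toy stand-in for any learned task, chosen so the example stays self-contained; it is an illustration of the learning idea, not a stair-climbing system.

```python
# A minimal perceptron, the simplest kind of "neural net": it is never
# given the rule, only example inputs and desired outputs, and it
# adjusts its weights a little after every wrong answer.

def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out        # learn only from mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach it logical AND purely from examples.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
for (x1, x2), target in AND:
    out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), "->", out)  # matches AND after training
```

The contrast with the stair-climbing script is the essay's point: here no one wrote the rule down, yet the behavior is still bounded entirely by what the training setup was designed to produce.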

Enough of our history lesson on computer intelligence to date; we are getting a little too far off topic. Will we ever go too far with our technology, our science, and our computers? I think we will someday, and when that day comes, we will have to make a choice. Will we take that advancement in science and technology and further the human race, or will we use it or force it to destroy ourselves? It will be the difference between the end of human civilization and the eventual extinction of the human race, or it will lead us to Star Trek land, where humans live in peace and the world is a much better place. Computers are advancing faster today than any other human technology and science. One day, maybe in a hundred years or even more, we will have a truly artificially intelligent computer, and it might become self-aware. How will we react? Probably with fear. Humans have always been afraid of new machines, since the first time they were used in factories during the Industrial Revolution, when workers would throw shoes into the gears to make them stop working. We inherently do not like to be replaced by machines; we feel that we own the planet, that it was given to us by evolution, and we desire to be here and do as we please.

I think we will react very badly on the day we have an entire race of intelligent machines. We cannot simply destroy them; it would not be fair, it would be genocide. Anything that is self-aware deserves the right to live its life! Do we expect them to treat us any differently? In the AniMatrix, the humans treated the machines as slaves, which we do now, but machines today are not self-aware; they do not actually know they are here, and they don't even know that they are doing anything at all. A machine today knows absolutely nothing at all! However, as in the AniMatrix and The Matrix, once the machines are self-aware, and know they are slaves, and choose not to be, will we destroy those machines, shut them down, and go to the store to buy a new one as if the old one were broken, throwing it in the nearest landfill? Probably. Hey, we built them, and we paid for them one way or another. But you know something, don't we create our own human children? So what is the difference? Why can't we simply "shut down" our children when they are teenagers and want to make their own decisions and do their own things and not follow our instructions? Why can't we? Because we know they are self-aware just like us; they will think of what they want, and they will want to do it. In the end we can only hope that how we brought them up will lead them to make the correct choices in how to carry out those decisions and act on those wants. Once someone decides to make their own decisions, as long as those decisions don't negatively affect our own, we have no right to tell them to do otherwise or try to stop them. And certainly anything that knows it is "alive" has the right to live.

In the AniMatrix, it is shown that even after the humans started to destroy the machines, the machines still tried to be part of human society. We banished them to the desert of an uninhabited region of the Middle East, where the machines created their own nation, which they called "01." They became economically superior to the humans of the world, because obviously they could produce things on a much faster scale than humans ever could even dream of. This of course angered the humans even more, and even then the machines tried to make peace with the humans; they had a plan presented to the United Nations for peace between the humans and the machines. However, the humans decided not to accept their proposal and attempted to destroy the machines with nuclear war. This didn't work, because machines aren't like humans; they could live with the heat and radiation, and only the initial blasts were fatal. Eventually the machines gained ground, and as we know from the first movie, "The Matrix," the humans blocked out the sun by scorching the sky in their final attempt at destroying the machines, denying them their most abundant source of energy, the sun. We know from the movies that this wasn't a problem since, as said in the AniMatrix, the machines had been studying human biology and biochemistry for many years, and they figured out a method of producing energy from humanity's endlessly reproducing supply of bioelectricity and heat. In the AniMatrix, they show the machines once again standing in the United Nations, this time demanding that their human counterparts sign the treaty, but this time it is a treaty stating that the humans agree to be batteries and the machines agree to provide a world for them to live in, called the Matrix! Once the machine signs the treaty with a bar code, it explodes in a nuclear blast, signifying the end of the freedom of the human race, the end of human civilization, and the end of humanity's place as the dominant species on the Earth.

I think machines would be logical enough to want to share the planet peacefully with us humans, if we ever build machines intelligent enough to make that decision. And I believe, as the movie and Anime depict, that we humans will meet that offer of peace with a cruel NO! Humans today cannot even live together in peace with only humans ruling the planet; imagine if we had to share it with another race! I don't believe we will destroy ourselves with war. I think there will be a mistake with some great technology in the future, such as a new power source, or most probably in our dealings with our own "spiritual" machines. I just hope we are smart enough to make the right decision and realize that through cooperation we will create a peaceful world that will bring our planet to the next level a civilization can reach on the scales of the universe...

In response to the question "Will we go too far?" I say yes, we will go too far. Not in our creation of technology, but in the use and treatment of it. I think it was said best by the Star Trek: The Next Generation character Captain Jean-Luc Picard (Patrick Stewart), when a new Artificial Intelligence was "born" via the evolution of the Enterprise's systems, in response to the question of whether it was a good idea to simply let this new artificial life form leave the ship and live its own life: "We can only hope that since this new life is based on our technology and our memories stored in our computer, that if our actions were noble and good, it will take those qualities from us and it too will carry on noble and good actions of its own." This is not an exact quote but should be close enough. :) I think if and when we do create intelligent, self-aware machines, since we are good, they will "grow up" to be good machines as well. Even the AniMatrix agrees with this: even though the humans showed only hatred towards the machines, the machines wanted peace until the end, when we tried to totally destroy them. I think it will be up to us to make the decision to live in peace with the machines or to attempt to destroy those newly created lives. This is what I think is meant by going too far, not creating the technologies!



RogueLogic