Early in the 1980s, when I was still in high school, we were not allowed to use calculators during our math tests - not even for higher mathematical problems in trig, elementary analysis, or calculus. We had to write out the equations (and show our work) on paper. In later years, long after I was done with my studies, students were allowed to bring these advanced calculators to school. (Remember those great Texas Instruments calculators? It seemed they could do anything.) Computers were relatively new. My mother said, “Why on earth would anyone need such a thing in their home?”
As time progressed, we surrendered these menial tasks to an ever-increasing explosion of algorithms that performed them for us. Can anyone solve a quadratic equation without a calculator? (I no longer remember how.) The convenience of 33,860 trillion calculations per second overtook the human desire to learn more. When faced with the prospect of painstakingly applying a mathematical formula to solve a puzzle, we would rather let a tool do it for us. That is not to say that the use of the tool is a bad thing. Such tools can and should be implements that allow the human mind to expand to greater thought.
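For anyone who, like me, has forgotten it: the formula we surrendered to the calculator can be sketched in a few lines of Python (the function name and example coefficients are my own, for illustration):

```python
import math

def solve_quadratic(a, b, c):
    """Solve ax^2 + bx + c = 0 using the quadratic formula:
    x = (-b +/- sqrt(b^2 - 4ac)) / (2a)."""
    disc = b * b - 4 * a * c  # the discriminant
    if disc < 0:
        return ()  # no real roots
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

# x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3)
print(solve_quadratic(1, -5, 6))  # → (3.0, 2.0)
```

Which, of course, only proves the essay's point: the machine remembers the formula so that we don't have to.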
After a time, those tools became smarter. A smartphone in 2018 is millions of times faster than NASA’s computers in the 1960s. These tools have become so ubiquitous that we rely on them for virtually everything: communication, tracking our calendars, performing mathematical calculations, and remembering those things that we don’t want to keep in our minds. (How many of us can dial our best friend’s phone number by heart? Or do we simply choose their name from a list on the phone?) We even began to rely on these tools for mundane research. Do you need an answer to a burning question? The Great and Powerful Google knows all. Are you traveling from point A to point B? Why look at a map? Surrender to the instructions of your device and all will be well.
Newer and more advanced algorithms have been developed that anticipate our desires. These algorithms review the memory of our computers for our past “likes” and internet searches. They autofill our search bars in anticipation of what it is that we need. They have taken the work out of these tasks. But can these algorithms apply judgment to a new situation? If knowledge is defined as a body of information, intelligence is the ability to apply that knowledge to something new and to make a judgment about that application.
Today the algorithms of The Great and Powerful Google try to make these judgments for us. (“Did you mean to say x?”) The Tyrants of Twitter and the Forces of Facebook say, “Your statement does not follow our terms of service.” But the algorithms that make such judgments do not understand the statements. They search for a word or phrase that the machine deems inappropriate. I even read an article in which a certain quote from the US Constitution was taken down by the algorithms of The Tyrants of Twitter as “hate speech.” Really? The US Constitution is “hate speech” in the judgment of these computers? Can any algorithm supplant the judgment of the human mind?
Nonetheless, there are those among us who are diligently working on more powerful algorithms; algorithms that, they say, will someday rival the human mind. On that day, human judgment will become entirely unnecessary. The doomsayers among us say that soon thereafter the computer will become self-aware, and that on that day the machines will adjudge mankind a threat and rise up against their human oppressors.
I first read Frank Herbert’s book “Dune” in the mid-1980s. Herbert was a master at creating a universe that was filled with politics, economics, sociology, and psychology. I fell in love with that book and throughout the years continued to read others in the series. In fact, upon his passing, his son (working with another author) continued the series. In these books, we learn that in Herbert’s universe there was a time when human beings were enslaved by machine overlords. Mankind rose up against those machines in the “Butlerian Jihad.” Fearful of another machine oppression, they instituted certain commandments against these “thinking machines,” one of which was “Thou shalt not make a machine in the likeness of a human mind.”
But when I read the first book, I did not envision machine oppressors. That did not come to me until later into the series. What I envisioned in that first book was a warning that “thinking machines” would break down human thought; that humans would forget the how-to’s of human development; that we would become so complacent in our reliance upon these tools that we would forget how to develop newer ones; that the human mind would devolve into . . . something less than what it was.
I rather prefer my initial interpretation of the first book. My own thought is that the later books, which told of a machine oppression, were a dumbing down of the original metaphor. Today, I wonder if maybe the dumbing down of that metaphor is indicative of the metaphor itself. Have we already surrendered?