As the human mind pushes toward its potential, we have continued to create greater, more intricate tools. With this progression, we have come closer and closer to creating what we once only imagined, or saw in science fiction movies. With each new innovation in technology, we have opened new doors for the next generation, and have gained dominion over a new part of ourselves. It can also be said that, with each innovation moving us closer to the inevitable, we have succumbed to new fears about the future of technology. Although it is from a rather corny movie, this brings to mind the quote, "With great power comes great responsibility."
We now come to the topic of Intelligent Machines—machines that can learn and interact with their environment, rather than relying solely on programming templates created for individual situations. From the Roomba vacuum cleaner, to Honda's ASIMO, and now Google's self-driving automobile, we are obviously moving closer and closer to real intelligent machines.
Can we build Intelligent Machines?
I think the answer lies in the fact that we are discovering, more and more, that we can build almost anything. It is no longer a question of if we can create Intelligent Machines, but a matter of how.
As Jeff Hawkins mentions in "On Intelligence," human beings have built innovations whose full capacities we still don't understand. We are constantly building new tools to deal with the things we are currently aware of. We build new ways to hammer a nail, because that is what we feel we need today. We create microwaves to speed up the process of cooking. Nobody can turn on a television, or walk into a mall, without seeing "As Seen on TV" products demonstrating how our current tasks could be made easier or quicker by some new, innovative device. It is hard to think of what we will need in the future, as we are not equipped with the ability to foresee it.
According to Jeff Hawkins, it may be possible to generalize what we feel the future holds; however, we are still going to be hindered by our current capacities. There was no way for men like Leonardo da Vinci to realize that flight could be successfully sustained with the discovery of electricity or fuel, any more than Sir Isaac Newton could have realized that we would be able to defy gravity with the creation of flying machines.
All this is still just to explain that we must work on the "how." Necessity is truly the mother of invention, while the pursuit of perfection is the father of iteration. Jeff Hawkins spoke of the possibilities that we have yet to tap into when it comes to storing programming and memory. He pointed out how silicon chips have allowed computers to be much smaller than their forefathers, while providing more space to expand their capabilities. We have yet to tap the full potential of silicon chips; by pushing their capabilities, it may be possible to house the number of "synapse-like" connections required to produce human-like thought. Maybe the silicon chip is not the answer, but a step closer to the answer.
All in all, we are capable of building Intelligent Machines, once we determine how to house the programming required to function like a human brain in something close to the size of a human brain.
I, myself, subscribe to the balance taught by Taoism: all things are neither good nor bad, as long as they are in balance. In essence, it comes down to application and moderation. I feel that one of the most driving forces within the human instinct is the desire to have what nature did not provide us. In a way it is our most defining quality, and can truly be argued to be the quality that has allowed our species to evolve and thrive as we have.
On the other hand, it is also our most destructive quality. Humanity has always had a constant need to have dominion over everything. We place it in our religion. We practice it in our domestication of other creatures. We even practice it on our own brethren. It is this nature that, of course, makes us much like many animals; however, what is different is our advancement of tools.
Lions fight over their territory and pride; however, the methods in which they have done this have remained the same. To those with the best combination of physical attributes and wisdom go the spoils.
Unlike lions, humans create tools to replace their own limitations, so that they can overcome adversaries that would otherwise be on an equal playing field.
Should we build Intelligent Machines? Yes and no.
Yes, we should build intelligent machines to perform functions far above our capacities, such as exploring uninhabitable environments, or sensing things that are outside the capacity of our senses. Basically, intelligent machines would be able to do what would otherwise be impossible for a human to perform. They would expand our perceptions and build our knowledge about a world that is just out of reach.
On the other hand, my fear would be that we would use Intelligent Machines to perform things that are already within our capacity. Intelligent Machines have the potential to replace humans—not to the extreme of "The Matrix," but to the detriment of our continued wisdom.
Take for example, the ice tray. This basic innovation can be considered one of the simplest tools to work—even master.
- You pour water into the tray.
- You place it into the freezer, until it is frozen and ready to be used.
- You remove the tray from the freezer.
- You then pop the ice cubes from the tray, for use.
To anyone of our generation, or earlier, this is a "no-brainer" activity; however, studies have found that newer generations are perplexed by such tools, and actually give up when presented with the task of figuring them out (see "Are we raising a generation of nincompoops?").
With the creation of Intelligent Machines, we run the risk of becoming a society not too different from the humans in the movie "WALL-E," who relied so much on technology that they were fat and clueless about the world around them. Humans (at least those in developed countries) have become complacent with technology doing what they once had to do themselves. Gone is the ability to search an encyclopedia or dictionary, as there is an app for that. We no longer have to search for good produce, as leading grocers stock their shelves with only the best picks and toss anything blemished. Only the rare backwoodsman is capable of killing an animal and skinning it for food; however, a good majority of them rely on laser sights and probably could not hit the broad side of a barn without one.
If used improperly, intelligent machines will only make us more helpless. When was the last time you passed by a family vehicle to see the family singing "The Wheels on the Bus"? Now, a TV and headphones interact with the children, in place of a parent's attention. Now, imagine a robot nanny that interacts with the child, instead of parents filling that role. I think I would call her "Rosie".
Think of all the teens out there who can't fix a flat; and then, imagine what would happen if the auto-pilot on their vehicle went down on the highway.
If we are to build Intelligent Machines, it should only be for the purpose of expanding our abilities, rather than replacing them. The problem is that once we have achieved the ability to create them, who are we to stop anyone from using them for such purposes?
I also worry about the possibilities that intelligent machines would open in the avenue of war and violence. China has the most massive collection of foot soldiers in the world, yet the U.S., with less than half the manpower, has missiles that can be piloted right into a person's home.
Whether intelligent machines will resemble human-like forms, or if they will simply be the brains of any piece of machinery, they will eventually be considered the new nuke. It would drive humanity one step closer to being even more brutal and more desensitized to violence.
If a machine equipped with Intelligence were to go onto a battlefield, it would only need to be monitored as it killed. It would be programmed to adapt to many situations. Without feelings, it would fear nothing and feel no pain. It would be able to keep going until the "mind" itself was stopped. It would be one more step in distancing us from the traumas of war, while still causing them.
Don't get me wrong, I would rather have a machine at war, than my nephew—but I would be even happier with no war. While that may be farfetched—we are on the brink of creating intelligent machines; so I can dream, too.
Will Intelligent Machines be able to sense things beyond our capabilities? I hope so. The whole purpose of developing Intelligent Machines should be to compensate for our genetic, natural shortcomings. That is what all innovation is designed to do.
A long time ago, I was reading an issue of "Boys' Life" magazine, a publication to which I subscribed back when I was a Boy Scout. In it, I read about a passenger plane that would travel so fast that a person could get from New York to Japan in a matter of minutes, rather than hours. This would be made possible by "surfing" wind currents found at a certain speed and altitude. To safely maneuver this aircraft, the article mentioned, the plane would need a super-computer to pilot it in place of a human, because a computer would be able to make split-second decisions where a human pilot couldn't.
If this plane were ever to be realized, it would change the business world as we know it. Imagine waking up in the morning in Arizona to go to work in Japan. Your commute would be the same as if you had boarded a bus at Irvington and 6th Ave and headed to the Tucson Mall.
Ideas like this can only be realized with the development of Intelligent Machines that can sense things far past our abilities. If these machines were to not surpass our own abilities, they would have no other use but to replace humans in tasks.
Now, will these machines ever rise up and take over? I can't really say, as I have never met an Intelligent Machine. As a tech support agent, you would be amazed by the conversations I have with customers. I have been posed the question of whether I think machines will, one day, take over the world.
Every time I get the question, I respond with, "No, because they would get started with their coup, only to realize that a Microsoft update requires them to reboot."
The fact is that we place too much emphasis on the wrong function. We assume that because something can think, it is not only capable of evil, but destined for it. If that were truly the case, every living thing on this Earth would be evil.
We are only concerned with these possibilities, because we know of what things man is capable. We assume that to have the capacity for intelligence like ours, one would have to be capable of evil.
While one can assume that you can program a machine to think, it is questionable whether one could create a machine to feel (happiness, anger, sadness…contempt)—or would even want a machine to have this. I think that if machines were designed to feel, we would have to deal with the question posed by "Short Circuit," "I, Robot," and every other movie like them: Would machines that can feel have rights like humans?
Excellent post with regards to building intelligent machines. The way you brought Taoism into the ethical discussion made a lot of sense. Yes, there is a balance. I have to agree that we are always searching for anything to make tasks easier for us. I believe this started with the invention of the remote control; however, it probably came much earlier than that. But is the development of intelligent machines to complete tasks that are now daunting to us giving us the opportunity to focus on new tasks that we have never explored?