Robot Consciousness

Over the last few decades, we have made some impressive advances in the fields of robotics and computer science. One good example of how quickly things can change is Moore’s Law.

Back in 1965, Gordon Moore observed that the number of transistors that could fit on an integrated circuit doubled roughly every year. That is what we call an exponential growth pattern. The observation was later revised to a doubling every two years or so, but it can’t be ignored that transistors have now shrunk to the nanoscale.
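To see why that kind of doubling is exponential rather than logarithmic, here is a minimal Python sketch. The starting count and the two-year doubling period are illustrative assumptions for the sketch, not real chip data.

```python
# Illustrative only: how a fixed doubling period produces exponential growth.
# The starting count and the two-year period are assumptions for this sketch,
# not actual transistor figures.

def transistor_count(start_count: int, years: int, doubling_period: float = 2.0) -> float:
    """Projected count after `years`, doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

if __name__ == "__main__":
    for year in range(0, 21, 4):
        print(f"Year {year:2d}: ~{transistor_count(2_300, year):,.0f} transistors")
```

Each step multiplies the previous total, which is exactly why the curve runs away so quickly.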

In robotics, engineers are building machines with multiple points of articulation, and many are fitted with sensors that gather data about the environment they are in. It is this sensor data that lets a robot navigate its way around obstacles.
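For a sense of how simple that sense-and-react logic can be, here is a minimal Python sketch of a sense-decide-act loop. The read_distance_sensor function and the 0.5 m threshold are hypothetical stand-ins, not any real robot’s API.

```python
# Minimal sketch of a sense-decide-act loop for obstacle avoidance.
# `read_distance_sensor` is a hypothetical placeholder for whatever hardware
# API a real robot would expose; the 0.5 m threshold is arbitrary.
import random

def read_distance_sensor() -> float:
    """Stand-in for a range sensor: distance to the nearest obstacle in metres."""
    return random.uniform(0.1, 3.0)

def decide(distance_m: float, threshold_m: float = 0.5) -> str:
    """Turn away if an obstacle is closer than the threshold, otherwise keep going."""
    return "turn_left" if distance_m < threshold_m else "move_forward"

if __name__ == "__main__":
    for _ in range(5):
        d = read_distance_sensor()
        print(f"distance={d:.2f} m -> action={decide(d)}")
```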

No matter what industry we talk about, there is no doubt that robots are having a huge impact.

Although robots and computers are more advanced than ever before, they still cannot be seen as anything more than tools. They are useful, especially for tasks that would put human life at risk, that are too difficult for humans, or that are simply too time-consuming. But neither a robot nor a computer is aware that it exists, and each can perform only the tasks it is programmed for.

But what would happen if a robot could think for itself?

We see it often enough in movies and books but can it actually happen?

Can Robots Gain Consciousness?

That is not an easy question to answer, because we are still very much in the dark about human consciousness itself. Yes, scientists can now create algorithms that simulate human thinking, but only on a superficial level. Long story short, giving machines consciousness is, for now, well beyond our grasp.

Part of the issue is in actually defining consciousness.

According to Eric Schwitzgebel, a professor of philosophy at the University of California, Riverside, the concept is easiest to explain with examples of what consciousness is and isn’t. He says that we can label vivid sensation as part of consciousness. And yes, you could argue that, using sensors, robots can register some of the things we label as sensations, or at the very least detect them. But he also points out that there are other elements of consciousness, such as visual imagery, inner speech (we all have that little voice in our heads), dreams and emotions, that robots simply cannot experience.

However, there is no small amount of disagreement among philosophers about what consciousness can and cannot be defined as. At best, most of them agree that consciousness resides in the brain, but none of us fully understands the mechanisms behind it. And without that understanding, it could prove impossible to give machines consciousness.

Yes, we can create robots that mimic thought, and some of them can even detect emotion. Programming can give a robot the ability to recognize patterns and respond to them. But far from being aware of itself, such a robot is merely following a series of programmed rules.
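To make that point concrete, here is a toy Python sketch of the kind of rule-based “emotion detection” described above. The keyword lists are invented for illustration; the program matches the patterns it was given and has no idea what any of the words mean.

```python
# A toy keyword matcher that "detects emotion" purely by pattern matching.
# The keyword lists are invented for illustration; the point is that the
# program follows fixed rules and has no awareness of meaning.

EMOTION_KEYWORDS = {
    "happy": {"great", "love", "wonderful"},
    "sad": {"terrible", "miss", "lonely"},
    "angry": {"hate", "furious", "annoyed"},
}

def detect_emotion(sentence: str) -> str:
    """Return the first emotion whose keywords appear in the sentence."""
    words = set(sentence.lower().split())
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:  # any keyword present?
            return emotion
    return "neutral"

if __name__ == "__main__":
    print(detect_emotion("I love this wonderful day"))     # happy
    print(detect_emotion("I am furious about the delay"))  # angry
```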

It is also possible that computer scientists and neuroscientists could one day develop an artificial model of a brain that produces consciousness. The problem is not a trivial one, though: we don’t yet fully understand how the brain works, so we cannot build an adequate model of the very thing we are trying to recreate.

In spite of the challenges, scientists and engineers the world over are working on creating artificial consciousness. Whether it will ever come to pass remains to be seen, but let’s assume for a moment that it does. What would happen then?

Robots Are People

Creating artificial consciousness and giving it to robots would raise serious ethical questions. If a robot were self-aware, could it react negatively to the situations it is placed in? Could it object to being used as nothing more than a tool? Would it have feelings of its own?

There is much debate on this subject. And because no artificially conscious machine has yet been created, we cannot say what such a machine would or wouldn’t do, or how it might react.

But if we do give robots self-reflective abilities, we may have to seriously reconsider how we think of them. At what point does a robot have enough consciousness and intelligence that we must consider granting it the same legal rights humans have? Or will robots remain tools, albeit conscious ones, seeing themselves as slaves of a kind?

We’ve all seen the movies where robots take over the world, movies like “The Terminator” or “The Matrix”. The scenarios in those movies rely on one key concept: recursive self-improvement. But what is this?

It refers to a machine’s ability to examine itself, find ways its own design could be improved, and then tweak itself, or even build a new and improved version. Every new generation would be that much smarter and better designed than the previous one. According to the futurist Ray Kurzweil, machines will eventually become so adept at improving themselves that technology will begin evolving far faster than we can keep up with.
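To illustrate the feedback loop behind that idea, here is a toy Python sketch in which each generation’s capability determines how much the next generation improves. The capability score and the 10% improvement rule are invented for the sketch, not a model of any real system.

```python
# Toy illustration of recursive self-improvement: each generation's
# "capability" score determines how much the next generation improves.
# The score and the 10% coupling factor are invented for this sketch.

def run_generations(capability: float = 1.0, generations: int = 10) -> None:
    for gen in range(1, generations + 1):
        # The more capable the current system, the bigger the next improvement.
        improvement = 0.1 * capability
        capability += improvement
        print(f"generation {gen:2d}: capability = {capability:.3f}")

if __name__ == "__main__":
    run_generations()
```

Because each improvement feeds on the last, the gains compound, which is exactly the runaway dynamic those movie scenarios depend on.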

In a world like this, what would happen to humans? Some scenarios have us merging with the machines, while others have the robots deciding that humans are no longer useful or necessary. At best, we would be ignored; at worst, we would be wiped out.

Obviously, this is the stuff of movies and books. Even if we did manage to create artificial consciousness, it is highly unlikely that robots would act like us or have the same emotions and thoughts that humans do. It may well be that human consciousness simply cannot be recreated.

But just in case it can be, and just in case scientists succeed, you might want to start treating your computer a little more nicely!