Saturday, July 21, 2007

Robot ethics

Cronos - the anthropomimetic robot created by Rob Knight at The Robot Studio


Robots, power drills, ethics and phantom limbs - these are a few of my favourite things.
This odd amalgamation of seemingly disparate concepts and objects is held together by something even more peculiar: consciousness - machine consciousness, to be specific.
Machine consciousness is a relatively new field in robotics which is dedicated to the construction of machines that are conscious like us.

Even though most of us are self-proclaimed experts on ourselves, consciousness is still one of those big unanswered questions that we know very little about. So it might seem a bit strange to try to build something when we do not even know how it works. However, this is exactly what Professor Owen Holland from the University of Essex has been working on for the past three years. Having been called 'gung-ho' for his approach to understanding consciousness, Holland's research consists of building a real-life robot that uses power-drill motors and bungee cords to drive its 'muscles' and plastic for its bone structure.

Other attempts at understanding consciousness have involved designing software models based on popular theories of consciousness or by copying what we know about the various neuron connections in the brain. But so far no-one has tried to build an embodied system quite like Holland's.
The majority of current research in neuroscience, philosophy and now robotics emphasizes the importance of embodiment. Experiments in neurology suggest that the brain uses an internal model of the body in order to simulate various scenarios before we actually encounter them. The authors of major books on consciousness, such as Ramachandran and Blakeslee's 'Phantoms in the Brain: Probing the Mysteries of the Human Mind' and Metzinger's 'Being No One', argue that "the phenomenal self is a virtual agent". This implies something slightly unnerving and quite mind-boggling: that what we experience as reality is actually a mere simulation.
Evidence for this theory can be found in neurological curiosities like phantom limbs, where people who have had an arm or leg amputated still experience sensations in the missing limb. It is as if the body's model has not been updated. Another example is the fact that schizophrenics are able to tickle themselves; the hypothesis is that this is due to their inability to predict, or simulate, the consequences of their own movements.

The problem with machine consciousness is that, in Holland's own words, “We are ignorant about what we are doing, we wouldn't even know if it was suffering terribly.” But he also says, “I'm not worried yet, in 15-20 years time, maybe.” Murray Shanahan, Professor of Cognitive Robotics at Imperial College London, does not believe that “a scientific understanding of consciousness will ever be achieved without such [computational] models”, but finds himself confronted with the future prospect of creating an artificial entity that is capable of suffering. The concept of a robot suffering might seem alien, and not something most people would concern themselves with, considering the amount of human suffering that goes unnoticed in the world today. Nevertheless, governments worldwide have initiated robot-ethics programmes, such as the EU-funded 'Roboethics Roadmap' and the UK's EPSRC-funded 'Walking With Robots' initiative, which try to encourage debate about the ethics of the future. To some this might seem like a waste of time and money, but it could be one of the few times when the ethics are ahead of the science. Other recent technological advances, like GM, stem cell research and nanotechnology, have had difficulty gaining public acceptance precisely because the ethics had not been properly considered.

Asia has long been at the forefront of robotics research. Governments in Japan and South Korea have suggested elaborate guidelines to ensure the safety of both humans and robots. These guidelines indicate a need for accepted standards before letting robots loose in our homes. Dr. Blay Whitby of the University of Sussex, whose research includes the social and ethical implications of artificial intelligence, is cautious: “I'm not against the technology - it could make people's lives a lot better - I just want some ethical input.”

The military has also shown interest in the possibilities of conscious machines. It is therefore even more pressing that the ethical debate involves not only researchers in the field but the broader public as well. We must ask what the implications of machine consciousness are for humanity, as well as machinery, as we continue exploring the perplexing universe of the mind.


4 comments:

Paul said...

Hello, nice post. I think an interesting additional question (though not perhaps an inherently ethical one) is what consciousness actually 'adds' to functionality, both human and robot. So for example (hypothetically speaking) if one could 'design and build' a conscious robot, could one then also design and build a non-conscious robot with equivalent functionality? My wording is perhaps a bit off, but I hope you see what my point is...
Paul

Magdalena said...

Hi Paul,

I do see what you mean; I believe you are referring to the philosophical problem of zombies, except you've aimed the problem at robots instead of humans. Much has been written about this issue, especially by David Chalmers and Daniel Dennett. Personally, however, I don't think it matters that much...

P@ said...

The difference in approach, of course, is that Paul is asking whether you can make something which is not conscious to do the same as something which is, whereas the Zombie argument is more to bring out the idea of whether something which is identical will necessarily have consciousness or not.

For many functions, I am sure you can build something to perform the same function without it being conscious - but not if your aim is studying and understanding consciousness. Not everyone who builds robots does so because they are 'engineers' (although they are engineers, of course); some do it because they have enquiring scientific minds.

Magdalena said...

I believe that, given that we don't know what consciousness is, trying to build something that acts 'as if' it's conscious is 'good enough', and will indeed give pointers to what it means to be conscious as well as what consciousness is.

Sometimes it's useful to build simple models that are approximations of what it is you're trying to understand, instead of trying to get to grips with the whole. Even if, in reality, it's not possible to truly understand without the bird's-eye view...

Then again, perhaps a little knowledge is a dangerous thing...