I have written another interesting (obviously!) blog entry for New Scientist. It's all about robots (again...), but this time they're emotional.
Have a read about Feline feelings.
Saturday, July 21, 2007
Cronos - the anthropomimetic robot created by Rob Knight at The Robot Studio
Robots, power drills, ethics and phantom limbs - these are a few of my favourite things.
This odd amalgamation of seemingly disparate concepts and objects is held together by something even more peculiar: consciousness - machine consciousness, to be specific.
Machine consciousness is a relatively new field in robotics dedicated to building machines that are conscious in the way we are.
Even though most of us are self-proclaimed experts on ourselves, consciousness is still one of those big unanswered questions that we know very little about. So it might seem a bit strange to try to build something when we do not even know how it works. However, this is exactly what Professor Owen Holland from the University of Essex has been working on for the past three years. Having been called 'gung-ho' for his approach to understanding consciousness, Holland's research consists of building a real-life robot that uses power-drill motors and bungee cords to drive its 'muscles', and plastic for its bone structure.
Other attempts at understanding consciousness have involved designing software models based on popular theories of consciousness, or copying what we know about the various neuron connections in the brain. But so far no one has tried to build an embodied system quite like Holland's.
The majority of current research in neuroscience, philosophy and now robotics emphasizes the importance of embodiment. Experiments in neurology suggest that the brain uses an internal model of the body in order to simulate various scenarios before we actually encounter them. The authors of major books on consciousness, like Ramachandran and Blakeslee's 'Phantoms in the Brain: Probing the Mysteries of the Human Mind' or Metzinger's 'Being No One', argue that "the phenomenal self is a virtual agent". This implies something slightly unnerving and quite mind-boggling: that what we experience as reality is actually a mere simulation.
Evidence for this theory can be found in neurological curiosities like phantom limbs, where people who have had an arm or leg amputated still experience sensations in the missing limb. It is as if the body's internal model has not been updated. Another example is the fact that schizophrenics are able to tickle themselves; the hypothesis is that this is due to their inability to predict, or simulate, the sensations their own actions will produce.
The problem with machine consciousness is that, in Holland's own words, “We are ignorant about what we are doing, we wouldn't even know if it was suffering terribly.” But he also says, “I'm not worried yet, in 15-20 years time, maybe.” Murray Shanahan, Professor of Cognitive Robotics at Imperial College London, does not believe that “a scientific understanding of consciousness will ever be achieved without such [computational] models”, but finds himself confronted with the future prospect of creating an artificial entity that is capable of suffering.

The concept of a robot suffering might seem alien, and not something that most people would concern themselves with, considering the amount of human suffering that goes unnoticed in the world today. Nevertheless, governments worldwide have initiated robot ethics programmes, such as the EU-funded 'Roboethics Roadmap' and the UK's EPSRC-funded 'Walking With Robots' initiative, which tries to encourage debate about the ethics of the future. To some, this might seem like a waste of time and money, but this could be one of the few times when the ethics are ahead of the science. Other recent technological advances, like GM, stem cell research and nanotechnology, have had difficulties becoming publicly accepted precisely because the ethics had not been properly considered.
Asia has long been at the forefront of robotics research. Governments in Japan and South Korea have suggested elaborate guidelines to ensure the safety of both humans and robots. These guidelines indicate a need for accepted standards before letting robots loose in our homes. Dr. Blay Whitby, whose research at the University of Sussex includes the social and ethical implications of artificial intelligence, is cautious: “I'm not against the technology - it could make people's lives a lot better - I just want some ethical input.”
The military has also shown interest in the possibilities of conscious machines. It is therefore all the more pressing that the ethical debate involves not only researchers in the field but the broader public as well. We must ask what the implications of machine consciousness are for humanity, as well as for the machines themselves, as we continue exploring the perplexing universe of the mind.