Man, machine, and in between


TÜBINGEN: We are so surrounded by gadgetry nowadays that it is sometimes hard to tell where devices end and people begin. From computers and scanners to mobile devices, an increasing number of humans spend much of their conscious lives interacting with the world through electronics, the only barrier between brain and machine being the senses — sight, sound, and touch — through which humans and devices interface. But remove those senses from the equation, and electronic devices can become our eyes, ears, and even arms and legs, taking in the world around us and interacting with it through software and hardware.

This is no mere prediction. Brain-machine interfaces are already clinically well established — for example, in restoring hearing through cochlear implants. And patients with end-stage Parkinson’s disease can be treated with deep brain stimulation (DBS). Current experiments on neural prosthetics point to the enormous future potential of similar interventions, whether retinal or brain-stem implants for the blind or brain-recording devices for controlling prostheses.

Non-invasive brain-machine interfaces based on electroencephalogram recordings have restored the communication skills of paralysed patients. Animal research and some human studies suggest that full, real-time control of artificial limbs could further offer the paralysed an opportunity to grasp, or even to stand and walk on brain-controlled artificial legs, albeit likely through invasive means, with electrodes implanted directly in the brain.

Future advances in neuroscience, together with the miniaturisation of microelectronic devices, will enable more widespread application of brain-machine interfaces. This could be seen to challenge our notions of personhood and moral agency. And one question will certainly loom: if these technologies can restore function to those in need, is it right to use them to enhance the abilities of healthy individuals?

But the ethical problems that these technologies pose are conceptually similar to those presented by existing therapies, such as antidepressants. Although the technologies and situations that brain-machine interface devices present might seem new and unfamiliar, they pose few genuinely new ethical challenges.

In brain-controlled prosthetic devices, a computer embedded in the device decodes signals from the brain and uses them to predict what the user intends to do. Inevitably, some predictions will fail, which could lead to dangerous, or at least embarrassing, situations. Who is responsible for involuntary acts? Is it the fault of the computer or of the user? Will a user need some kind of licence and obligatory insurance to operate a prosthesis?

Fortunately, there are precedents for dealing with liability when biology and technology fail. Increasing knowledge of human genetics, for example, led to attempts to deny criminal responsibility on the basis of the mistaken belief that genes predetermine actions. These attempts failed, and neuroscientific arguments seem similarly unlikely to overturn our views on human free will and responsibility.

Moreover, humans often control dangerous and unpredictable tools, such as cars and guns. Brain-machine interfaces represent a highly sophisticated case of tool use, but they are still just that. Legal responsibility should not be much harder to disentangle.

But what if machines change the brain? Evidence from early brain-stimulation experiments a half-century ago suggests that sending a current into the brain can shift personality and alter behaviour. And, while many Parkinson’s patients report significant benefits from DBS, the treatment has been associated with a higher incidence of serious adverse effects, such as nervous-system and psychiatric disorders, and a higher suicide rate. Case studies have revealed hypomania and personality changes of which patients were unaware, and which disrupted family relationships before the stimulation parameters were readjusted. Such examples illustrate the potentially dramatic side-effects of DBS, but subtler effects are also possible. Even without stimulation, mere recording devices such as brain-controlled motor prostheses may alter a patient’s personality, because patients must be trained to generate the appropriate neural signals to direct the prosthetic limb. Doing so might subtly affect mood or memory, or impair speech control.

Nevertheless, this does not raise a new ethical problem. Side-effects are common in most medical interventions, including treatment with psychoactive drugs. In 2004, for example, the United States Food and Drug Administration required drug manufacturers to print warnings on certain antidepressants about an increased short-term risk of suicidal thinking and behaviour in adolescents taking them, and called for closer monitoring of young people starting medication.

Similar safeguards will be needed for neuroprostheses, including in research. The classic approach of biomedical ethics is to weigh the benefits for the patient against the risk of the intervention, and to respect the patient’s autonomous decisions. None of the new technologies warrants changing that approach.

Nevertheless, the availability of such technologies has already begun to cause friction. For example, many in the deaf community have rejected cochlear implants, because they do not regard deafness as a disability that needs to be corrected, but as a part of their life and cultural identity. To them, cochlear implants are an enhancement beyond normal functioning.

Distinguishing between enhancement and treatment requires defining normality and disease, which is notoriously difficult. For example, Christopher Boorse, a philosopher at the University of Delaware, defines disease as a statistical deviation from “species-typical functioning.” From this perspective, cochlear implants seem ethically unproblematic. Nevertheless, Anita Silvers, a philosopher at San Francisco State University and a disability scholar and activist, has described such treatments as a “tyranny of the normal,” aimed at adjusting the deaf to a world designed by the hearing, and ultimately implying the inferiority of deafness.

We should take such concerns seriously, but they should not prevent further research on brain-machine interfaces. Brain technologies should be presented as one option, but not the only solution, for, say, paralysis or deafness. In this and other medical applications, we are well prepared to deal with ethical questions in parallel to and in cooperation with neuroscientific research.

Jens Clausen is a research assistant at the Institute for Ethics and History of Medicine in Tübingen, Germany.

Copyright: Project Syndicate, 2009.