Aping Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity, by Raymond Tallis, Acumen Publishing, RRP£25, 400 pages
Naked Genes: Reinventing the Human in the Molecular Age, by Helga Nowotny and Giuseppe Testa, MIT Press, RRP£18.95, 192 pages
The Most Human Human: A Defence of Humanity in the Age of the Computer, by Brian Christian, Viking, RRP£18.99, 320 pages
What is human nature? A biologist might see it like this: humans are animals and, like all animals, consist mostly of a digestive tract into which they relentlessly stuff other organisms – whether animal or vegetable, pot-roasted or raw – in order to fuel their attempts to reproduce yet more such insatiable, self-replicating omnivores. The fundamentals of human nature, therefore, are the pursuit of food and sex.
But that, the biologist would add, is only half the story. What makes human nature distinctive is the particular attribute that Homo sapiens uses to hunt down prey and attract potential mates. Tigers have strength, cheetahs have speed – that, if you like, is tiger nature and cheetah nature. Humans have something less obviously useful: freakishly large brains. This has made them terrifyingly inventive in acquiring other organisms to consume – and, indeed, in preparing them (what other animal serves up its prey cordon bleu?) – if also more roundabout in their reproductive strategies (composing sonnets, for example, or breakdancing).
Human nature – the predilection for politics and war, industry and art – is, therefore, just the particularly brainy way that humans have evolved to solve the problems of eating and reproducing. Thus biologists believe that once they understand the human brain and the evolutionary history behind it, they will know all they need to about this ubiquitous brand of ape.
Viewing ourselves in this way, stripped back to the biological bones, is a form of “reductionism”, as it reduces the intricacies of human consciousness and society to the workings of genes and brain cells. It would once have seemed incredible, obviously wrong, not to say blasphemous. To reduce religious wonder, poetic sensibility and the richness of social life to mere animal instincts seems a travesty. Yet exactly this is the dominant account of what it is to be human in the early 21st century.
Thus newspapers today are filled with stories of genes for this and neurons for that. Recent examples range from “The Love-Cheat Gene: One in Four Born to be Unfaithful” to “Scientists Reveal Brain Cells Devoted to Jennifer Aniston”. Partly, the reductionist worldview is gaining in prevalence because many of its claims are true: evolutionary theory is now firmly established, our genome is being deciphered and there are indisputable correlations between consciousness and brain activity. But a problem arises when scientists, policymakers or the media adopt this biological perspective in the search for simple solutions to complex problems, blaming the credit crunch, for example, on short-termism inherited from our primate ancestors. Some thinkers are, therefore, rebelling against the reductionist consensus.
Of course, those with a strongly religious perspective often reject it outright. But even secular thinkers are increasingly resisting its claim to be the whole truth. Although some go too far in their attacks – arguing wrongly, for example, that we have next to nothing to learn about ourselves from our evolutionary history – such critics are, nonetheless, right to point out that in accepting the reductionist view, we risk doing ourselves a dangerous disservice.
One of the most vocal is Raymond Tallis, philosopher, former professor of geriatric medicine and prolific writer. His latest book, Aping Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity, is an all-out assault on the exaggerated claims made on behalf of the biological sciences.
“Neuromania” is Tallis’s term for reducing all aspects of mind and behaviour to the firings of microscopic brain cells, whereas “Darwinitis” refers to the other strand of biological reductionism: explaining all aspects of our behaviour in terms of our evolutionary history and/or the genes that encode it. His attack on them both is two-fold: first, he criticises many of the specific experiments and hypotheses as hopelessly crude and, second, he argues that the reductionist project is anyway philosophically flawed.
His criticisms of reductionism in practice are frequently justified: one researcher, for example, claimed to have identified the brain centre for romantic love by showing subjects first a picture of “someone with whom they were in love”, then a picture of a mere friend and recording the difference in neural activity. This does seem naive – what it means to be besotted is not summed up by which neurons fire when we look at a photo. Indeed, when I look at a picture of my beloved I am more likely to wonder what she wants for dinner than to call to mind the fullness of our love.
But such experiments are lapped up by the media and are influential throughout the humanities, where evolutionary explanations for everything from free speech to fine art are increasingly fashionable. Tallis, who is at pains to point out that he agrees with Darwinism – he is not a closet creationist – is on strong ground when he argues that the crude application of biological reductionism will shed little light on how to reform the health service or how to read James Joyce’s Ulysses.
But his position becomes much shakier when it moves to the broader philosophy. He argues at length, for example, that the mind cannot even in principle be reduced to the workings of our brains alone. This is a respectable position, though one held by a minority of those paid to think about these questions. Most philosophers and scientists believe the opposite: that mind is just the product of certain brain activity, even if we do not currently know quite how. Tallis, therefore, does both the reader and these thinkers an injustice by describing his opponents’ position as “obviously” wrong and accusing them of “elementary” mistakes. His own inability to provide a convincing alternative account of the mind shows that this is a subject on which reasonable people can differ without stooping to insults.
But although much of the theoretical content of Aping Mankind is unconvincing – to which might be added that the book is twice as long as it needs to be and unpleasantly boorish in tone – it is, nonetheless, an important work. Tallis is right to point out that a fundamental shift in our self-perception is under way and frequently going too far. It is, however, possible to question these developments in a more measured way, as shown by Naked Genes: Reinventing the Human in the Molecular Age by Helga Nowotny, a leading sociologist of science, and the biologist Giuseppe Testa. Their book is a subtle and sophisticated analysis of how the life sciences are shifting our view of ourselves and the challenges this is posing.
The title stems from a simple but telling observation: that it is the role of the sciences to “make things visible that could not previously be seen”. Until very recently, we had no idea how heredity worked. Now, our genes are laid bare before us. When technology first makes some process visible, scientists attempt to isolate what they see, extracting it from its context so as to understand its nature better. The result, argue Nowotny and Testa, is that they tend initially to ascribe too much importance to these processes and underestimate other factors – in other words, reductionism.
This seems a perfect explanation of Tallis’s “Darwinitis” and “Neuromania”. Just as our genes are being laid bare, new technology is enabling us for the first time to peer inside the living brain. But in an attempt to understand what they are seeing, scientists give disproportionate weight to these fuzzy images. In time, however, a counter-movement will argue for seeing the newly revealed entities – genes or brain cells – in a broader context. Indeed, Tallis’s polemic in Aping Mankind can be seen as that counter-movement in action.
Nowotny and Testa explore various case studies where the revelations of biology are challenging our self-image: for example, in the debate around doping in sport. They argue that the distinction between the natural (the genes we are born with, good food and hard training) and the artificial (drugs, genetic engineering and prostheses) is a fiction, and unsustainable. Indeed, the idea of a “level playing field” for competitors is itself a fiction: some people are born with genes that make them better athletes. What is “level” about that? Genetic engineering, a technology frequently perceived as “unnatural”, could in theory level out just such an inequality.
But Nowotny and Testa do not offer answers to these questions – they simply explore them, exposing underlying tensions and ironies. Their main conclusion is that we need institutions that are flexible enough to cope both with further insights into our nature and also diverse and evolving public attitudes (citing the UK’s Human Fertilisation and Embryology Authority as an example). Such institutions should support citizens in making autonomous choices, they argue, in which case they are optimistic that new developments in science and technology can “empower the creative individual”.
It is a view shared by Brian Christian in his excellent first book The Most Human Human: A Defence of Humanity in the Age of the Computer. The reductionist viewpoint suggests we are merely biological machines; if this is so, then any and all of our capacities should be achievable by other kinds of machine, such as computers. This is the opinion of many in the science and technology community, and a good number are setting out to prove it in the race to build the first truly intelligent machine.
The conventional test of whether a computer can think like a human is known as the Turing Test after the English computing pioneer Alan Turing, who created it in the 1950s. It is simply this: an assessor converses separately, usually via a remote terminal, with a human and with a machine. If the assessor can’t tell which is which, the machine has passed the test – something that has never yet happened. One annual setting of the Turing Test is called the Loebner Prize, after its sponsor, Hugh Loebner, and it provides the brilliant conceit for Christian’s book.
The set-up is this: Christian takes part in the Loebner Prize as one of the humans who will go up against the machines. If an assessor is fooled into believing that one of the computers is the human, this is tantamount to saying that the computer is more human-like than Christian himself. This is not a challenge that the author takes lying down: indeed, it is the launch pad for a fascinating explanation of what it means to be human and how Christian, in the face of stiff competition from the world’s best artificial intelligence, can prove that he is the genuine article.
Along the way, he explores ideas of authenticity, humour, spontaneity and originality. In one particularly insightful section, he notes that we can only be replaced by machines if we have first allowed ourselves to become like them. Once, for example, we have abandoned local contacts in favour of distant and homogeneous call centres, staffed by workers given no room for responsibility or creativity, then it is only a matter of time before those workers, who are trained to act like robots, will be replaced by them.
All three books, different as they are, point to the same conclusion: that we need not allow ourselves to be reduced by these powerful new disciplines of genetics, neuroscience and computing. Instead, we can learn from them and assimilate them into a broader understanding of ourselves. We can, in fact, use them to become better at being human.
Stephen Cave is a writer and philosopher based in Berlin. His book, ‘Immortality’, is published next year by Random
Jonathan Carey did not die for lack of money.
New York State and the federal government provided $1.4 million annually per person to care for Jonathan and the other residents of the Oswald D. Heck Developmental Center, a warren of low-rise concrete and brick buildings near Albany.
Yet on a February afternoon in 2007, Jonathan, a skinny, autistic 13-year-old, was asphyxiated, slowly crushed to death in the back seat of a van by a state employee who had worked nearly 200 hours without a day off over 15 days. The employee, a ninth-grade dropout with a criminal conviction for selling marijuana, had been on duty during at least one previous episode of alleged abuse involving Jonathan.
“I could be a good king or a bad king,” he told the dying boy beneath him, according to court documents.
In the front seat of the van, the driver, another state worker at O. D. Heck, watched through the rear-view mirror but said little. He had been fired from four different private providers of services to the developmentally disabled before the state hired him to care for the same vulnerable population.
O. D. Heck is one of nine large institutions in New York that house the developmentally disabled, those with cerebral palsy, autism, Down syndrome and other conditions.
These institutions spend two and a half times as much money, per resident, as the thousands of smaller group homes that care for far more of the 135,000 developmentally disabled New Yorkers receiving services.
But the institutions are hardly a model: Those who run them have tolerated physical and psychological abuse, knowingly hired unqualified workers, ignored complaints by whistle-blowers and failed to credibly investigate cases of abuse and neglect, according to a review by The New York Times of thousands of state records and court documents, along with interviews of current and former employees.
Since 2005, seven of the institutions have failed inspections by the State Health Department, which oversees the safety and living conditions of the residents. One was shut down altogether this year.
While Jonathan Carey was at O. D. Heck, Health Department inspectors accused its management of routinely failing to investigate fractures and lacerations suffered by residents.
Similar problems can be found across the state. The Broome Developmental Center in Binghamton has been cited for repeatedly failing to protect residents from staff members. One employee there was merely reassigned after encouraging adolescent residents to fight one another.
Patterns of abuse appear embedded in the culture of the Sunmount Developmental Center in the Adirondacks. Last year, one supervisor was accused of four different episodes of physical and psychological abuse of residents within a span of two and a half months; another employee bragged on Facebook about “beating retards.”
The most damning accounts about the operations come from employees — thwarted whistle-blowers from around the state — and the beleaguered family members of residents.
Dozens of people with direct experience in the system echoed a central complaint about the Office for People With Developmental Disabilities: that the agency fails to take complaints seriously or curtail abuse of its residents.
“I’ve never seen any outfit run the way this place is,” said Jim Lynch, a direct-care worker in Brooklyn. “You report stuff, and then you get retaliated against. They want everything kept quiet. People that are outspoken attract the heat. I don’t know who to talk to when I see a problem. Nothing ever gets done.”
Paul Borer, a dietitian who works for the agency in the Hudson Valley, said he saw another employee punch a resident twice in the face in 2008, but little ever came of the many complaints he made about the episode, to his supervisors, to the commissioner of the agency at the time, Diana Jones Ritter, and to the office of Gov. David A. Paterson.
“You can see a person get hit, then you can go through three years of writing back and forth and nothing happens, so why even report it?” Mr. Borer said.
Mary Maioriello, who worked at O. D. Heck, reported seeing several cases of abuse, including the repeated beating of a resident with a stick that staff members called “the magic wand.”
Upset that her concerns were not sent to law enforcement, she confronted two of the agency’s top officials and secretly recorded the encounter, in which they sought to play down what she saw. After state officials learned of the existence of the tape, which Ms. Maioriello gave to The Times, the two officials were reassigned.
“The people at this place, the only way I can describe it is as a cult,” Ms. Maioriello said of O. D. Heck. “It should be shut down.”
Earlier this year, Gov. Andrew M. Cuomo forced the resignation of the commissioner of the Office for People With Developmental Disabilities after learning of the Times investigation, and said his administration would undertake a broad review of the state’s care of the developmentally disabled.
Indications are, however, that the agency is still struggling. Its new commissioner, Courtney Burke, is a well-regarded policy analyst but lacks management experience. She has taken over an agency with 23,000 employees; previously, she managed no more than seven. Mr. Cuomo has asked two veteran commissioners to review the agency’s practices, and Ms. Burke has taken some decisive steps, firing two top officials, and trying to establish more independent investigations.
Still, the pattern of secrecy at the agency has been hard to break; even after Ms. Burke’s ascension, it has battled in court to prevent the disclosure of patient records to Albany Law School, even though the school has a contract to monitor care of the disabled.
The institutions have survived a 40-year deinstitutionalization effort in part because officials have argued that they need a place to house the most frail or physically unruly residents. But there is also big money at stake. New York has been adept at securing large amounts of cash from Washington, earmarked for the institutions.
The federal and state governments now allocate more than $1.8 million annually for each of the roughly 1,300 residents remaining in the nine institutions, a number that has steadily risen from $1.4 million in 2007, when Jonathan Carey died.
That adds up to more than $2.5 billion a year, with about 60 percent coming from Washington.
But the money does not actually all go to the care of the residents in the institutions.
The state agency recently conceded that only about $600 million is being spent on the residents’ care — a still-generous allocation of nearly $430,000 per person — while the rest is redirected throughout the agency for use at group homes and care in other areas. The state’s redistribution of the Medicaid money earmarked for the institutions is currently the subject of a federal audit. The Cuomo administration has said it is moving to further de-emphasize institutional care and will close some of the nine facilities.
Jonathan Carey arrived at O. D. Heck on Oct. 7, 2005.
Two months later, unbeknown to Jonathan’s parents, Michael and Lisa Carey, the federal government barred the facility from accepting new residents financed by Medicaid for a year because of its chronic problems.
One inspection by the State Health Department found at least 18 serious injuries of residents in a five-month period, in a facility holding only 57 people. Eight of the injuries, including five fractures, were of unknown origin.
The Health Department concluded that investigating the high number of injuries was not a priority for O. D. Heck’s management.
“There was no evidence that the facility examined the nature of all reportable injuries systemically in an effort to prevent such injuries in the future,” inspectors wrote. O. D. Heck managers were supposed to complete initial investigations within five days of a serious injury, but often left the inquiries open for weeks or months, the department found.
Some workers were hardly fit for duty. One had a history of showing up intoxicated, according to depositions in a civil case brought by the Carey family against the state, but he was kept on the job until he was once so drunk at work that he was sent to a hospital. He was later made a groundskeeper.
Direct-care workers were often high school dropouts, some with criminal convictions. One lower-level supervisor had a petty larceny conviction. Edwin Tirado, the employee eventually convicted of manslaughter in Jonathan’s death, had been convicted of selling marijuana and, as a youthful offender, for firing a shotgun in his attic.
Nadeem Mall, a trainee at O. D. Heck who pleaded guilty to criminally negligent homicide in Jonathan’s death, was fired from four different private providers of services to the developmentally disabled, lasting less than a year at each of them, before he was hired by the state.
One employer had accused Mr. Mall of sleeping and watching television on the job. Another found him sleeping while a resident’s thumb was bleeding profusely. He was let go from a third job after being accused of calling 1-900 sex lines using a company cellphone, and from a fourth job after he inexplicably had a hairdresser cut off all the hair of a disabled woman in his care. Mr. Mall’s lawyer declined to comment.
With that background, he was hired by the state, listing his sister and his wife as references on his application. A state official recently said in a deposition that the Office for People With Developmental Disabilities knew Mr. Mall had lied on his application form, claiming his driver’s license had never been suspended when it actually had been shortly before his hiring.
“O. D. Heck failed at every single possible level,” said Ilann Maazel, Mr. Carey’s lawyer. “It was a disaster waiting to happen.”
There was little tangible oversight of employees and no restraint on overtime, which employees coveted to supplement the low salaries, which started at less than $30,000 a year. Mr. Tirado was once allowed to work 84 straight days, and the former head of O. D. Heck acknowledged in a deposition that too much overtime had contributed to Jonathan’s death.
All of this was hidden from the families of O. D. Heck’s residents.
“If we had any clue that O. D. Heck was in this shape, do you think that we would have ever put Jonathan in there?” Jonathan’s father said.
Mr. Carey is a tall man with piercing blue eyes, who ran a used-car dealership before his son’s death. During a recent interview, Mr. Carey was surrounded by pictures of a grinning Jonathan, a contrast to his father’s crushing sadness.
Before Jonathan died, Michael, an evangelical Christian, would make regular missionary trips to Africa. He has largely given up his dealership, and now devotes his life mostly to advocacy for the developmentally disabled.
For the Careys, the journey to O. D. Heck was a last resort. Jonathan was born in 1993, the older of their two sons. When he was 19 months old, the Careys were told that he was mentally retarded, and when he was older that he was autistic — functionally a 2-year-old, his vocabulary limited to “daddy” and the phrase “Where you goin’?”
The Careys, who live near Albany, raised Jonathan until he was 9, but became worried that they could not teach their son basic living skills, like toilet training. They enrolled him at the Anderson Center for Autism, a privately run school in the Hudson Valley overseen by the state.
At first, the school seemed a good fit, until Jonathan, who was always thin, began losing weight. During one visit, an employee told the Careys to take home a duffel bag they had never used. They discovered a logbook inside the bag detailing startling changes to Jonathan’s treatment plan. Among other things, the school was withholding food from Jonathan to punish him for taking off his shirt at inappropriate times.
“They literally planned to withhold my son’s meals,” Mr. Carey said. “And when that was not working, then they began to seclude him in his bedroom for an extended period of time. He missed eight full days of school.”
Soon afterward, the Careys removed their son from Anderson, and cared for him at home for the next year. But now there were tantrums for no apparent reason. A doctor later told the Careys their son was suffering from post-traumatic stress disorder.
He became harder to contain. He was tall enough to jump their fence, had no sense of keeping himself safe and became increasingly hard to handle. About a year after he came home, Jonathan had what his father called “a full emotional meltdown,” and the Careys took him to a local hospital, where he was essentially knocked out with a drug cocktail and tied to his hospital bed.
Death in a Van
Running out of options, the Careys were directed to O. D. Heck, and they hoped that an institution run by the state would be more promising than the Anderson school.
But on Oct. 29, 2005, just a couple of weeks after Jonathan was enrolled, the Careys arrived to find their son’s nose so swollen that they took him to the hospital. None of the staff members claimed to know what had happened, and they speculated that it had occurred during a dental procedure. Another time, Jonathan was taken to the hospital with a black eye and a broken nose. That time, the staff suggested that Jonathan might have fallen out of bed.
On a third occasion, Jonathan was taken to the hospital with severe bruising on both sides of his face.
“They basically told us that Jonathan had fallen out of a rocking chair and hit his head on a table, and I said, ‘Absolutely not,’ ” Lisa Carey said.
In a recent deposition, a lower-level supervisor at O. D. Heck, Tedra Hamilton, recalled the third episode, saying Jonathan “had bruises everywhere.”
“It looked bad to me,” she added. “It scared me.”
Edwin Tirado had been one of two employees on duty right before the bruises were discovered; Mr. Tirado invoked his Fifth Amendment rights and declined to speak during a recent deposition when asked about prior abuse cases involving Jonathan.
The situation came to a head on Feb. 15, 2007. Mr. Tirado and Mr. Mall took Jonathan and another resident on an outing. Mr. Tirado had worked 197 hours over a 15-day period and was so exhausted that he let Mr. Mall drive, fearing he would fall asleep.
Mr. Mall first drove to his bank, leaving Mr. Tirado in the van with Jonathan and the other resident. While they were waiting, Jonathan got up from his seat. Mr. Tirado went to the back of the van and began to restrain Jonathan, trying to subdue him. Mr. Mall and the other resident, identified in court documents by his initials, E. C., later said that Mr. Tirado sat on Jonathan, who was face down, his legs flailing.
When Mr. Mall returned to the van several minutes later, Mr. Tirado declined an offer of help.
Mr. Tirado restrained Jonathan for about 15 minutes, continuing as the group drove to a gas station. Mr. Mall said he heard Mr. Tirado tell the boy, “I could be a good king or a bad king.” Mr. Tirado denied making the remark, but another employee had heard him make a similar comment before, according to court documents.
E. C., watching with apparent concern from the front of the van, told Mr. Tirado, “Get off of him,” and “Let him breathe,” according to Mr. Mall.
When they got to the gas station, Mr. Mall went inside to buy some drinks, including a Snapple iced tea for Mr. Tirado. Mr. Mall has testified that when he returned to the van, Mr. Tirado told him that Jonathan had stopped breathing and the two panicked.
Mr. Tirado has changed his story over time. In a re-enactment videotaped by the police soon after the death, he said that at the gas station, he realized Jonathan had stopped breathing.
“I just froze,” he said, adding that he was afraid of “losing my job and going to jail.” Mr. Tirado has since recanted, saying he had believed that Jonathan had gone to sleep.
Regardless, the two men drove around for more than an hour with a suddenly silent boy in the back without checking on him or calling 911. They went to a video game store, where Mr. Tirado bought a special bag for his PlayStation, then to Mr. Tirado’s house, where they smoked and chatted with a neighbor, and eventually back to O. D. Heck.
An autopsy found the cause of death to be compressive asphyxia — basically, so much pressure was put on Jonathan’s chest that he could not get enough oxygen into his lungs.
Mr. Carey and his wife were together when they got the call.
“I just lost it,” Mr. Carey said. “My wife was yelling and screaming ‘What happened? What happened?’ I just couldn’t even, I don’t even think I could communicate well. And she finally said, ‘Which one?’ She realized something had happened to one of the boys. And I said ‘Jonathan.’ And we literally both fell under the weight of the grief, collapsed to the sidewalk, just uncontrollably weeping. It’s hard to explain the pain and the trauma that one experiences getting that kind of news. You’re in a cloud. It’s like you don’t even know what’s going on around you.”
‘Here’s to Beating Retards’
Employment standards are low at the Office for People With Developmental Disabilities. Not only were people with criminal convictions hired, but since 2006, some 125 workers who were fired from jobs there were rehired — a practice that agency officials said they would move to halt after The Times questioned them about it.
A recent case at the Monroe Developmental Center in Rochester, which failed a December inspection by the Health Department, highlighted the lax practices. Inspectors found that an allegation of physical abuse was substantiated against an employee who yelled at a resident, lunged toward him and “pushed him into the wall.”
Inspectors discovered that the same employee had previously been fired in 2007, after being involved in a case of misconduct and for threatening a supervisor. The employee also had been convicted of criminal mischief, a misdemeanor, not related to her job. In her personnel file, there were “do not rehire” recommendations from “numerous supervisors and administrators at the facility when she was terminated in 2007,” inspectors found. And yet she was hired again.
Ms. Burke, the agency’s commissioner, said in a statement that despite the past practice, she “will do everything in my power to not allow the rehiring of employees who have been previously terminated.”
The Sunmount Developmental Center in the Adirondacks, a repository for residents deemed more challenging, also failed an inspection last year. The supervisor accused of four episodes of abuse of residents continued to have contact with them even as the investigations took place, inspectors found.
Inspectors also found that a resident had claimed that a caretaker had called him a “retard” and threatened to have another resident “beat him up.” When the resident was indeed assaulted by the second resident, inspectors found that Sunmount officials did not investigate whether the employee had instigated the fight. In another episode, an employee dumped ketchup, salt and pepper on the head of a resident during dinner. The agency’s response was to transfer the employee to another unit.
Around the same time, one Sunmount resident, Eddie Adkins, was set upon by several staff members after he grew upset that he was not allowed to go to the bathroom, according to an internal report provided to The Times by Mr. Adkins’s family, who were able to get the report because of a disclosure law passed in the wake of Jonathan Carey’s death.
A deaf resident told state investigators that he saw four state employees punching Mr. Adkins while he was sitting on a couch — “I did not like that,” he told investigators, adding that he was so disturbed that he turned his hearing aid off during the melee.
Mr. Adkins’s case underscores the difficulty of this work: While many residents are defenseless — children like Jonathan Carey, or those with cerebral palsy or other debilitating diseases — Mr. Adkins stands 6 feet 5, and his weight has topped 300 pounds. He is autistic and bipolar, has a history of biting himself and his caregivers, and has been jailed for doing so.
But his caretakers can also be violent. The internal report found that Mr. Adkins’s “left eye was swollen and there was bruising under the left eye.”
“There was a large vertical abrasion to Mr. Adkins’s outer left eye,” the report continued, “and a small abrasion on the left side, inner corner of Mr. Adkins’s nose near his eye. There was a small linear abrasion on the outer corner of Mr. Adkins’s right eye.”
An agency spokeswoman said she could not comment on specific cases, citing confidentiality rights of residents.
After the attack, five staff members were placed on administrative leave. One of them wrote in a Facebook posting: “im on administration vacation as well,” adding, “cheers brother here’s to beating retards.”
State officials have said they took a number of steps to clean up O. D. Heck after Jonathan’s death.
Those included increasing the number of clinical staff members and direct-care workers and putting more emphasis on teaching residents skills that will help them move to small group homes, the agency said.
But Mary Maioriello, an employee at O. D. Heck until she resigned this year, said a culture of abuse continued. Ms. Maioriello was hired as a trainee last year, and witnessed several disturbing episodes. In one case, two employees played a game they called “Fetch,” throwing French fries on the floor and laughing as one resident dived to get them, while another jumped out of his recliner and a third ate them off the floor.
Ms. Maioriello was a 24-year-old trainee at the time. She was horrified, but also intimidated.
“When I first started working there, I was told, ‘Keep your eyes open and your mouth shut and you’ll do just fine here,’ ” she recalled in an interview. “It was kind of like a code that you just didn’t turn anything in. A word that they used a lot was a ‘snitch.’ That’s what it felt like to me, like I was in some kind of gang or cult.”
Ms. Maioriello told her mother what she had seen; her mother told a friend who knew someone at the agency. When Ms. Maioriello was brought in for questioning, she went further, telling her supervisors about several other episodes she had witnessed.
The most serious involved a blue wooden stick stashed in a cabinet drawer in a common room. One supervisor, Ms. Maioriello wrote, called the stick the “magic wand,” and it was used to repeatedly beat a resident whom Ms. Maioriello described as nonverbal and weighing less than 90 pounds.
Ms. Maioriello told management that she had seen three employees, including the supervisor, hitting a resident with the stick at different times. The same resident was confined by employees to a gym mat, and if he stepped off it, he was hit with the stick, snapped with a towel or had his hands stepped on. Employees also appeared to enjoy taunting residents; two workers told one resident they were going to knock over his ceramic frog collection — they called the game “Kick the Frog.”
For Ms. Maioriello, it was a painful experience. She said she had chosen to work with the disabled because her 3-year-old son has developmental problems. She aspires to be a nurse and has been a vigorous advocate for her son. But at O. D. Heck, she felt stuck between her need for a job and her determination to speak up about the behavior.
“I just thought, oh my God, what is wrong with these people?” Ms. Maioriello said of the other employees, adding: “I spoke to my mother, I spoke to some friends, I was telling them, you know, these terrible things. Should I quit? I’m a single mother, I need a paycheck. I don’t know what to do. I’m scared of retaliation. And then once I finally turned it in, I feel like it fell on deaf ears.”
A second state employee who worked at O. D. Heck corroborated much of Ms. Maioriello’s account, but asked not to be identified for fear of being fired.
“There’s abuse going on all the time,” the employee said. “They don’t report anything. They hide everything and cover for each other.”
This employee also saw the resident who was restricted to a gym mat. “I saw them shove socks in his mouth, they shake keys in front of him, they treated him like an animal,” the employee said.
Little resulted from Ms. Maioriello’s reports to management. Her co-workers at first blamed someone else for reporting them to management, and the word “snitch” was spray-painted on that worker’s car. Ms. Maioriello went on leave and resigned in March, threatening to go to the news media before she left.
The Official Response
It was then that Kate Bishop, who supervises O. D. Heck and group homes in a nine-county region stretching from Albany into the Adirondacks, met with Ms. Maioriello.
In an emotional hourlong encounter that she secretly recorded, Ms. Maioriello challenged Ms. Bishop and Andrew Morgese, the agency’s head of internal affairs, who was also present, reminding them that she had reported that a resident was being regularly beaten with a stick. She asked why the matter had not been reported to law enforcement. “Were the police notified?” Ms. Maioriello asked, according to the tape, which was provided to The Times. “Because it was an assault. That is the law, that the police are to be notified when an individual is assaulted. Were they notified?”
“Well,” Ms. Bishop said, “in the original report that you made, it didn’t appear to rise to the level of …”
“Hitting someone with a stick?” Ms. Maioriello asked.
“In the initial manner described …” Ms. Bishop responded.
“Really?” Ms. Maioriello said. “So what’s the severity that you have to make an assault?”
Later in the conversation, Ms. Maioriello again asked Ms. Bishop, “Is it an assault to hit him with a stick?”
Ms. Bishop replied, “Not seeing it, I couldn’t answer that question.”
She put the same question to Mr. Morgese.
“Shift after shift after shift, he was hit with this stick by several employees,” she said. “Is that an assault?”
Mr. Morgese replied, “I don’t think I can answer that question.”
At one point during the exchange, Mr. Morgese suggested that it was the responsibility of Ms. Maioriello, a trainee, to report the cases to law enforcement, even though management had been made aware of them.
“I’m not trying to turn this around,” he said, “but if you’re indicating that you believe you witnessed the assault of an individual who was being repeatedly hit with a stick, any one of our employees has not only an opportunity to report, they have a duty to report, they have a duty to intervene on behalf of that individual. If they can’t intervene safely on behalf of that individual, if he’s being assaulted, they have a duty to notify law enforcement.”
Shortly after the meeting, Ms. Maioriello reported the matter to the Niskayuna Police Department. While an officer who met with her said he was not sure how to respond to such episodes inside a state facility, she has since been contacted by the department seeking more information.
The Times asked the Office for People With Developmental Disabilities why Ms. Bishop and Mr. Morgese could not say what an assault was and why Ms. Maioriello’s supervisors had not forwarded her allegations to law enforcement.
The state disputed the framing of the question.
“Your characterization of these exchanges is not consistent with our understanding of the facts regarding those conversations,” an agency spokeswoman said, adding, “Without question, it is the agency policy that if a staff person hit an individual with a stick, law enforcement should be notified.”
The state was subsequently informed by The Times that a tape existed of the encounter, and shortly thereafter both Ms. Bishop and Mr. Morgese were removed from their positions. Ms. Bishop was reassigned to the central office, and Mr. Morgese was demoted and sent to a regional office.
Mr. Morgese, through the agency, declined to comment. In a brief statement, Ms. Bishop said she was inspired to get into the field by a developmentally disabled sister.
“I believe that I provided the highest-quality leadership,” she said, “always guided by respect and dignity for the people we are honored to serve.”
On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I’ve Got a Secret. He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panelists — they included a comedian and a former Miss America — had to guess what it was.
On the show (see the clip on YouTube), the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil got $200.
Kurzweil then demonstrated the computer, which he built himself — a desk-size affair with loudly clacking relays, hooked up to a typewriter. The panelists were pretty blasé about it; they were more impressed by Kurzweil’s age than by anything he’d actually done. They were ready to move on to Mrs. Chester Loney of Rough and Ready, Calif., whose secret was that she’d been President Lyndon Johnson’s first-grade teacher.
But Kurzweil would spend much of the rest of his career working out what his demonstration meant. Creating a work of art is one of those activities we reserve for humans and humans only. It’s an act of self-expression; you’re not supposed to be able to do it if you don’t have a self. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence.
That was Kurzweil’s real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we’re approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity — our bodies, our minds, our civilization — will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.
Computers are getting faster. Everybody knows that. Also, computers are getting faster faster — that is, the rate at which they’re getting faster is increasing.
So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.
If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there’s no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn’t even take breaks to play Farmville.
Probably. It’s impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you’d be as smart as they would be. But there are a lot of theories about it. Maybe we’ll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we’ll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity.
The difficult thing to keep sight of when you’re talking about the Singularity is that even though it sounds like science fiction, it isn’t, no more than a weather forecast is science fiction. It’s not a fringe idea; it’s a serious hypothesis about the future of life on Earth. There’s an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it’s an idea that rewards sober, careful evaluation.
People are spending a lot of money trying to understand it. The three-year-old Singularity University, which offers inter-disciplinary courses of study for graduate students and executives, is hosted by NASA. Google was a founding sponsor; its CEO and co-founder Larry Page spoke there last year. People are attracted to the Singularity for the shock value, like an intellectual freak show, but they stay because there’s more to it than they expected. And of course, in the event that it turns out to be real, it will be the most important thing to happen to human beings since the invention of language.
The Singularity isn’t a wholly new idea, just newish. In 1965 the British mathematician I.J. Good described something he called an “intelligence explosion”:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
The word singularity is borrowed from astrophysics: it refers to a point in space-time — for example, inside a black hole — at which the rules of ordinary physics do not apply. In the 1980s the science-fiction novelist Vernor Vinge attached it to Good’s intelligence-explosion scenario. At a NASA symposium in 1993, Vinge announced that “within 30 years, we will have the technological means to create super-human intelligence. Shortly after, the human era will be ended.”
By that time Kurzweil was thinking about the Singularity too. He’d been busy since his appearance on I’ve Got a Secret. He’d made several fortunes as an engineer and inventor; he founded and then sold his first software company while he was still at MIT. He went on to build the first print-to-speech reading machine for the blind — Stevie Wonder was customer No. 1 — and made innovations in a range of technical fields, including music synthesizers and speech recognition. He holds 39 patents and 19 honorary doctorates. In 1999 President Bill Clinton awarded him the National Medal of Technology.
But Kurzweil was also pursuing a parallel career as a futurist: he has been publishing his thoughts about the future of human and machine-kind for 20 years, most recently in The Singularity Is Near, which was a best seller when it came out in 2005. A documentary by the same name, starring Kurzweil, Tony Robbins and Alan Dershowitz, among others, was released in January. (Kurzweil is actually the subject of two current documentaries. The other one, less authorized but more informative, is called The Transcendent Man.) Bill Gates has called him “the best person I know at predicting the future of artificial intelligence.”
In real life, the transcendent man is an unimposing figure who could pass for Woody Allen’s even nerdier younger brother. Kurzweil grew up in Queens, N.Y., and you can still hear a trace of it in his voice. Now 62, he speaks with the soft, almost hypnotic calm of someone who gives 60 public lectures a year. As the Singularity’s most visible champion, he has heard all the questions and faced down the incredulity many, many times before. He’s good-natured about it. His manner is almost apologetic: I wish I could bring you less exciting news of the future, but I’ve looked at the numbers, and this is what they say, so what else can I tell you?
Kurzweil’s interest in humanity’s cyborganic destiny began about 1980 largely as a practical matter. He needed ways to measure and track the pace of technological progress. Even great inventions can fail if they arrive before their time, and he wanted to make sure that when he released his, the timing was right. “Even at that time, technology was moving quickly enough that the world was going to be different by the time you finished a project,” he says. “So it’s like skeet shooting — you can’t shoot at the target.” He knew about Moore’s law, of course, which states that the number of transistors you can put on a microchip doubles about every two years. It’s a surprisingly reliable rule of thumb. Kurzweil tried plotting a slightly different curve: the change over time in the amount of computing power, measured in MIPS (millions of instructions per second), that you can buy for $1,000.
As it turned out, Kurzweil’s numbers looked a lot like Moore’s. They doubled every couple of years. Drawn as graphs, they both made exponential curves, with their value increasing by multiples of two instead of by regular increments in a straight line. The curves held eerily steady, even when Kurzweil extended his curve backward through the decades of pretransistor computing technologies like relays and vacuum tubes, all the way back to 1900.
Kurzweil then ran the numbers on a whole bunch of other key technological indexes — the falling cost of manufacturing transistors, the rising clock speed of microprocessors, the plummeting price of dynamic RAM. He looked even further afield at trends in biotech and beyond — the falling cost of sequencing DNA and of wireless data service and the rising numbers of Internet hosts and nanotechnology patents. He kept finding the same thing: exponentially accelerating progress. “It’s really amazing how smooth these trajectories are,” he says. “Through thick and thin, war and peace, boom times and recessions.” Kurzweil calls it the law of accelerating returns: technological progress happens exponentially, not linearly.
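The arithmetic behind the law of accelerating returns is easy to demonstrate. A minimal sketch, with a hypothetical baseline value (the two-year doubling time is the article's rule of thumb; the starting figure and time spans are purely illustrative, not Kurzweil's actual data):

```python
# Compound doubling every two years versus fixed yearly increments.
# The baseline value of 1.0 is an arbitrary, illustrative starting point.

def exponential_growth(start, years, doubling_time=2.0):
    """Value after `years` if it doubles every `doubling_time` years."""
    return start * 2 ** (years / doubling_time)

def linear_growth(start, years, increment_per_year=1.0):
    """Value after `years` growing by a fixed increment each year."""
    return start + increment_per_year * years

start = 1.0  # e.g. MIPS per $1,000 at some baseline year
for years in (10, 20, 40):
    exp = exponential_growth(start, years)
    lin = linear_growth(start, years)
    print(f"after {years:2d} years: exponential {exp:10.0f}x, linear {lin:3.0f}x")
```

Forty years of two-year doublings yields roughly a millionfold increase, against 41x for linear growth — the widening gap between exponential trends and the "linear prediction" that, Kurzweil argues, our intuition defaults to.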
Then he extended the curves into the future, and the growth they predicted was so phenomenal, it created cognitive resistance in his mind. Exponential curves start slowly, then rocket skyward toward infinity. According to Kurzweil, we’re not evolved to think in terms of exponential growth. “It’s not intuitive. Our built-in predictors are linear. When we’re trying to avoid an animal, we pick the linear prediction of where it’s going to be in 20 seconds and what to do about it. That is actually hardwired in our brains.”
Here’s what the exponential curves told him. We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence. Kurzweil puts the date of the Singularity — never say he’s not conservative — at 2045. In that year, he estimates, given the vast increases in computing power and the vast reductions in the cost of same, the quantity of artificial intelligence created will be about a billion times the sum of all the human intelligence that exists today.
The Singularity isn’t just an idea; it attracts people, and those people feel a bond with one another. Together they form a movement, a subculture; Kurzweil calls it a community. Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as Singularitarians.
Not all of them are Kurzweilians, not by a long chalk. There’s room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won’t happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you’re walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen’s distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.
In addition to the Singularity University, which Kurzweil co-founded, there’s also a Singularity Institute for Artificial Intelligence, based in San Francisco. It counts among its advisers Peter Thiel, a former CEO of PayPal and an early investor in Facebook. The institute holds an annual conference called the Singularity Summit. (Kurzweil co-founded that too.) Because of the highly interdisciplinary nature of Singularity theory, it attracts a diverse crowd. Artificial intelligence is the main event, but the sessions also cover the galloping progress of, among other fields, genetics and nanotechnology.
At the 2010 summit, which took place in August in San Francisco, there were not just computer scientists but also psychologists, neuroscientists, nanotechnologists, molecular biologists, a specialist in wearable computers, a professor of emergency medicine, an expert on cognition in gray parrots and the professional magician and debunker James “the Amazing” Randi. The atmosphere was a curious blend of Davos and UFO convention. Proponents of seasteading — the practice, so far mostly theoretical, of establishing politically autonomous floating communities in international waters — handed out pamphlets. An android chatted with visitors in one corner.
After artificial intelligence, the most talked-about topic at the 2010 summit was life extension. Biological boundaries that most people think of as permanent and inevitable Singularitarians see as merely intractable but solvable problems. Death is one of them. Old age is an illness like any other, and what do you do with illnesses? You cure them. Like a lot of Singularitarian ideas, it sounds funny at first, but the closer you get to it, the less funny it seems. It’s not just wishful thinking; there’s actual science going on here.
For example, it’s well known that one cause of the physical degeneration associated with aging involves telomeres, which are segments of DNA found at the ends of chromosomes. Every time a cell divides, its telomeres get shorter, and once a cell runs out of telomeres, it can’t reproduce anymore and dies. But there’s an enzyme called telomerase that reverses this process; it’s one of the reasons cancer cells live so long. So why not treat regular non-cancerous cells with telomerase? In November, researchers at Harvard Medical School announced in Nature that they had done just that. They administered telomerase to a group of mice suffering from age-related degeneration. The damage went away. The mice didn’t just get better; they got younger.
Aubrey de Grey is one of the world’s best-known life-extension researchers and a Singularity Summit veteran. A British biologist with a doctorate from Cambridge and a famously formidable beard, de Grey runs a foundation called SENS, or Strategies for Engineered Negligible Senescence. He views aging as a process of accumulating damage, which he has divided into seven categories, each of which he hopes to one day address using regenerative medicine. “People have begun to realize that the view of aging being something immutable — rather like the heat death of the universe — is simply ridiculous,” he says. “It’s just childish. The human body is a machine that has a bunch of functions, and it accumulates various types of damage as a side effect of the normal function of the machine. Therefore in principle that damage can be repaired periodically. This is why we have vintage cars. It’s really just a matter of paying attention. The whole of medicine consists of messing about with what looks pretty inevitable until you figure out how to make it not inevitable.”
Kurzweil takes life extension seriously too. His father, with whom he was very close, died of heart disease at 58. Kurzweil inherited his father’s genetic predisposition; he also developed Type 2 diabetes when he was 35. Working with Terry Grossman, a doctor who specializes in longevity medicine, Kurzweil has published two books on his own approach to life extension, which involves taking up to 200 pills and supplements a day. He says his diabetes is essentially cured, and although he’s 62 years old from a chronological perspective, he estimates that his biological age is about 20 years younger.
But his goal differs slightly from de Grey’s. For Kurzweil, it’s not so much about staying healthy as long as possible; it’s about staying alive until the Singularity. It’s an attempted handoff. Once hyper-intelligent artificial intelligences arise, armed with advanced nanotechnology, they’ll really be able to wrestle with the vastly complex, systemic problems associated with aging in humans. Alternatively, by then we’ll be able to transfer our minds to sturdier vessels such as computers and robots. He and many other Singularitarians take seriously the proposition that many people who are alive today will wind up being functionally immortal.
It’s an idea that’s radical and ancient at the same time. In “Sailing to Byzantium,” W.B. Yeats describes mankind’s fleshly predicament as a soul fastened to a dying animal. Why not unfasten it and fasten it to an immortal robot instead? But Kurzweil finds that life extension produces even more resistance in his audiences than his exponential growth curves. “There are people who can accept computers being more intelligent than people,” he says. “But the idea of significant changes to human longevity — that seems to be particularly controversial. People invested a lot of personal effort into certain philosophies dealing with the issue of life and death. I mean, that’s the major reason we have religion.”
Of course, a lot of people think the Singularity is nonsense — a fantasy, wishful thinking, a Silicon Valley version of the Evangelical story of the Rapture, spun by a man who earns his living making outrageous claims and backing them up with pseudoscience. Most of the serious critics focus on the question of whether a computer can truly become intelligent.
The entire field of artificial intelligence, or AI, is devoted to this question. But AI doesn’t currently produce the kind of intelligence we associate with humans or even with talking computers in movies — HAL or C3PO or Data. Actual AIs tend to be able to master only one highly specific domain, like interpreting search queries or playing chess. They operate within an extremely specific frame of reference. They don’t make conversation at parties. They’re intelligent, but only if you define intelligence in a vanishingly narrow way. The kind of intelligence Kurzweil is talking about, which is called strong AI or artificial general intelligence, doesn’t exist yet.
Why not? Obviously we’re still waiting on all that exponentially growing computing power to get here. But it’s also possible that there are things going on in our brains that can’t be duplicated electronically no matter how many MIPS you throw at them. The neurochemical architecture that generates the ephemeral chaos we know as human consciousness may just be too complex and analog to replicate in digital silicon. The biologist Dennis Bray was one of the few voices of dissent at last summer’s Singularity Summit. “Although biological components act in ways that are comparable to those in electronic circuits,” he argued, in a talk titled “What Cells Can Do That Robots Can’t,” “they are set apart by the huge number of different states they can adopt. Multiple biochemical processes create chemical modifications of protein molecules, further diversified by association with distinct structures at defined locations of a cell. The resulting combinatorial explosion of states endows living systems with an almost infinite capacity to store information regarding past and present conditions and a unique capacity to prepare for future events.” That makes the ones and zeros that computers trade in look pretty crude.
Underlying the practical challenges are a host of philosophical ones. Suppose we did create a computer that talked and acted in a way that was indistinguishable from a human being — in other words, a computer that could pass the Turing test. (Very loosely speaking, such a computer would be able to pass as human in a blind test.) Would that mean that the computer was sentient, the way a human being is? Or would it just be an extremely sophisticated but essentially mechanical automaton without the mysterious spark of consciousness — a machine with no ghost in it? And how would we know?
Even if you grant that the Singularity is plausible, you’re still staring at a thicket of unanswerable questions. If I can scan my consciousness into a computer, am I still me? What are the geopolitics and the socioeconomics of the Singularity? Who decides who gets to be immortal? Who draws the line between sentient and nonsentient? And as we approach immortality, omniscience and omnipotence, will our lives still have meaning? By beating death, will we have lost our essential humanity?
Kurzweil admits that there’s a fundamental level of risk associated with the Singularity that’s impossible to refine away, simply because we don’t know what a highly advanced artificial intelligence, finding itself a newly created inhabitant of the planet Earth, would choose to do. It might not feel like competing with us for resources. One of the goals of the Singularity Institute is to make sure not just that artificial intelligence develops but also that the AI is friendly. You don’t have to be a super-intelligent cyborg to understand that introducing a superior life-form into your own biosphere is a basic Darwinian error.
If the Singularity is coming, these questions are going to get answers whether we like it or not, and Kurzweil thinks that trying to put off the Singularity by banning technologies is not only impossible but also unethical and probably dangerous. “It would require a totalitarian system to implement such a ban,” he says. “It wouldn’t work. It would just drive these technologies underground, where the responsible scientists who we’re counting on to create the defenses would not have easy access to the tools.”
Kurzweil is an almost inhumanly patient and thorough debater. He relishes it. He’s tireless in hunting down his critics so that he can respond to them, point by point, carefully and in detail.
Take the question of whether computers can replicate the biochemical complexity of an organic brain. Kurzweil yields no ground there whatsoever. He does not see any fundamental difference between flesh and silicon that would prevent the latter from thinking. He defies biologists to come up with a neurological mechanism that could not be modeled or at least matched in power and flexibility by software running on a computer. He refuses to fall on his knees before the mystery of the human brain. “Generally speaking,” he says, “the core of a disagreement I’ll have with a critic is, they’ll say, Oh, Kurzweil is underestimating the complexity of reverse-engineering of the human brain or the complexity of biology. But I don’t believe I’m underestimating the challenge. I think they’re underestimating the power of exponential growth.”
This position doesn’t make Kurzweil an outlier, at least among Singularitarians. Plenty of people make more-extreme predictions. Since 2005 the neuroscientist Henry Markram has been running an ambitious initiative at the Brain Mind Institute of the École Polytechnique Fédérale de Lausanne in Switzerland. It’s called the Blue Brain project, and it’s an attempt to create a neuron-by-neuron simulation of a mammalian brain, using IBM’s Blue Gene supercomputer. So far, Markram’s team has managed to simulate one neocortical column from a rat’s brain, which contains about 10,000 neurons. Markram has said that he hopes to have a complete virtual human brain up and running in 10 years. (Even Kurzweil sniffs at this. If it worked, he points out, you’d then have to educate the brain, and who knows how long that would take?)
By definition, the future beyond the Singularity is not knowable by our linear, chemical, animal brains, but Kurzweil is teeming with theories about it. He positively flogs himself to think bigger and bigger; you can see him kicking against the confines of his aging organic hardware. “When people look at the implications of ongoing exponential growth, it gets harder and harder to accept,” he says. “So you get people who really accept, yes, things are progressing exponentially, but they fall off the horse at some point because the implications are too fantastic. I’ve tried to push myself to really look.”
In Kurzweil’s future, biotechnology and nanotechnology give us the power to manipulate our bodies and the world around us at will, at the molecular level. Progress hyperaccelerates, and every hour brings a century’s worth of scientific breakthroughs. We ditch Darwin and take charge of our own evolution. The human genome becomes just so much code to be bug-tested and optimized and, if necessary, rewritten. Indefinite life extension becomes a reality; people die only if they choose to. Death loses its sting once and for all. Kurzweil hopes to bring his dead father back to life.
We can scan our consciousnesses into computers and enter a virtual existence or swap our bodies for immortal robots and light out for the edges of space as intergalactic godlings. Within a matter of centuries, human intelligence will have re-engineered and saturated all the matter in the universe. This is, Kurzweil believes, our destiny as a species.
Or it isn’t. When the big questions get answered, a lot of the action will happen where no one can see it, deep inside the black silicon brains of the computers, which will either bloom bit by bit into conscious minds or just continue in ever more brilliant and powerful iterations of nonsentience.
But as for the minor questions, they’re already being decided all around us and in plain sight. The more you read about the Singularity, the more you start to see it peeking out at you, coyly, from unexpected directions. Five years ago we didn’t have 600 million humans carrying out their social lives over a single electronic network. Now we have Facebook. Five years ago you didn’t see people double-checking what they were saying and where they were going, even as they were saying it and going there, using handheld network-enabled digital prosthetics. Now we have iPhones. Is it an unimaginable step to take the iPhones out of our hands and put them into our skulls?
Already 30,000 patients with Parkinson’s disease have neural implants. Google is experimenting with computers that can drive cars. There are more than 2,000 robots fighting in Afghanistan alongside the human troops. This month a game show will once again figure in the history of artificial intelligence, but this time the computer will be the guest: an IBM supercomputer nicknamed Watson will compete on Jeopardy! Watson runs on 90 servers and takes up an entire room, and in a practice match in January it finished ahead of two former champions, Ken Jennings and Brad Rutter. It got every question it answered right, but much more important, it didn’t need help understanding the questions (or, strictly speaking, the answers), which were phrased in plain English. Watson isn’t strong AI, but if strong AI happens, it will arrive gradually, bit by bit, and this will have been one of the bits.
A hundred years from now, Kurzweil and de Grey and the others could be the 22nd century’s answer to the Founding Fathers — except unlike the Founding Fathers, they’ll still be alive to get credit — or their ideas could look as hilariously retro and dated as Disney’s Tomorrowland. Nothing gets old as fast as the future.
But even if they’re dead wrong about the future, they’re right about the present. They’re taking the long view and looking at the big picture. You may reject every specific article of the Singularitarian charter, but you should admire Kurzweil for taking the future seriously. Singularitarianism is grounded in the idea that change is real and that humanity is in charge of its own fate and that history might not be as simple as one damn thing after another. Kurzweil likes to point out that your average cell phone is about a millionth the size of, a millionth the price of and a thousand times more powerful than the computer he had at MIT 40 years ago. Flip that forward 40 years and what does the world look like? If you really want to figure that out, you have to think very, very far outside the box. Or maybe you have to think further inside it than anyone ever has before.
Dave Asprey, who says that he has ‘rewired’ his brain through body hacking
Michael Galpert rolls over in bed in his New York apartment, the alarm clock still chiming. The 28-year-old internet entrepreneur slips off the headband that’s been recording his brainwaves all night and studies the bar graph of his deep sleep, light sleep and REM. He strides to the bathroom and steps on his digital scale, the one that shoots his weight and body mass to an online data file. Before he eats his scrambled egg whites with spinach, he takes a picture of his plate with his mobile phone, which then logs the calories. He sets his mileage tracker before he hops on his bike and rides to the office, where a different set of data spreadsheets awaits.
“Running a start-up, I’m always looking at numbers, always tracking how business is going,” he says. Page views, clicks and downloads, he tallies it all. “That’s under-the-hood information that you can only garner from analysing different data points. So I started doing that with myself.”
His weight, exercise habits, caloric intake, sleep patterns – they’re all quantified and graphed like a quarterly revenue statement. And just as a business trims costs when profits dip, Galpert makes decisions about his day based on his personal analytics: too many calories coming from carbs? Say no to rice and bread at lunchtime. Not enough REM sleep? Reschedule that important business meeting for tomorrow.
The founder of his own online company, Galpert is one of a growing number of “self-quantifiers”. Moving in the technology circles of New York and Silicon Valley, engineers and entrepreneurs have begun applying a tenet of the computer business to their personal health: “One cannot change or control that which one cannot measure.”
Much as an engineer will analyse data and tweak specifications in order to optimise a software program, people are collecting and correlating data on the “inputs and outputs” of their bodies to optimise physical and mental performance.
“We like to hack hardware and software, why not hack our bodies?” says Tim Chang, a self-quantifier and Silicon Valley investor who is backing the development of several self-tracking gadgets.
Indeed, why not give yourself an “upgrade”, says Dave Asprey, a “bio-hacker” who takes self-quantification to the extreme of self-experimentation. He claims to have shaved 20 years off his biochemistry and increased his IQ by as much as 40 points through “smart pills”, diet and biology-enhancing gadgets.
“I’ve rewired my brain,” he says.
Attendees at this year’s Quantified Self Conference discuss a sleep-tracking device
Asprey shares his results with the CEOs and venture capitalists he consults with through his executive coaching business, Bullet Proof Executive, but he’s found an even more welcoming audience at the first-ever international Quantified Self Conference.
Over the last weekend of May, upstairs at the Computer History Museum in Mountain View, California, in the heart of Silicon Valley, 400 “Quantified-Selfers” from around the globe have gathered to show off their Excel sheets, databases and gadgets.
Participants are mostly middle to upper class, mostly white. Europe is well represented. Suits and skirts appear at a minimum. There are plenty of nerdy young men, nerdy older men and extremely fit men and women with defined muscles and glowing skin. There is also a robust contingent of young urban hipsters in military boots, hoodies and elaborate tattoos.
A quiet middle-aged man walks around with a pulse monitor clipped to his earlobe, a blood pressure cuff on his arm and a heart rate monitor strapped around his chest, all feeding a stream of data to his walkie-talkie-like computer. Someone from the UK unrolls a 12ft line graph charting the fluctuations in his mood over the previous year. A Canadian graduate student describes the web tools he uses to track his attention span.
Footsteps, sweat, caffeine, memories, stress, even sex and dating habits – it can all be calculated and scored like a baseball batting average. And if there isn’t already an app or a device for tracking it, one will probably appear in the next few years.
Brittany Bohnet, who was converted into a self-quantifier while working at Google, says she expects these gadgets will follow us in all aspects of our lives – even the most private. “Eventually we’ll get to a point where we use the restroom and we’ll get a meter that tells us, ‘You’re deficient in vitamin B,’” she says. “That will be the end goal, where we understand exactly what our bodies need.”
Joe and Lisa Betts-LaCroix, self-trackers: ‘I was giving birth to our son, and instead of holding my hand and hugging me he was sitting in the corner entering the time between my contractions into a spreadsheet’
“We’re moving away from the era of the blockbuster drug and toward personalised medicine,” adds Joe Betts-LaCroix, a self-tracker and bio-engineer. He opens a laptop with graphs of his weight and that of his wife, Lisa, and two kids, measured daily for the last three years. He has data detailing his wife’s menstrual cycle for 10 years.
“I was giving birth to our son, and instead of holding my hand and supporting me and hugging me, he was sitting in the corner entering the time between my contractions into a spreadsheet,” says Lisa Betts-LaCroix.
The concept of self-tracking dates back centuries. Modern body hackers are fond of referencing Benjamin Franklin, who kept a list of 13 virtues and put a check mark next to each when he violated it. The accumulated data motivated him to refine his moral compass. Then there were scientists who tested treatments or vaccines for yellow fever, typhoid and Aids on themselves. Today’s medical innovators have made remarkable advances in devices such as pacemakers that send continuous heart data to a doctor’s computer, or implantable insulin pumps for diabetics that automatically read glucose levels and inject insulin without any human effort.
Today in Silicon Valley, the engineers who have developed devices for tracking their own habits are modifying them into consumer-friendly versions and preparing to launch them on a largely unsuspecting public. Though most people would cringe at the idea of getting a mineral read-out every time they visit the loo, entrepreneurs and venture capitalists see a huge market for consumer-focused health and wellness tools, using the $10.5bn self-help market and $61bn weight loss market as indicators of demand. Self-quantifiers who work at large technology companies such as Intel, Microsoft and Philips are drawing their bosses’ attention to the commercial opportunities. Public health advocates and healthcare executives are starting to imagine the potential the data could hold for disease management and personalised drug development.
“We can see the tipping point,” says Gary Wolf, one of the founders of the modern-day quantified self movement and an organiser of the conference. “The involvement of the businesses is a sign that we’re not completely alone in seeing something important happening.”
Tim Chang, the Silicon Valley investor, says that self-tracking will win minds and wallets the same way the Green movement put Priuses on the road and grapefruit-powered cleaners under the sink.
“Over the next five to 10 years, self-tracking will be critical to wellness,” Chang says. “It will be consumer-led, not prescribed by your doctor or mandated by your insurance company.” For now, though, it’s in the “geeky early adopter stage”.
Chang and many of the attendees of the Quantified Self conference liken themselves to the Homebrew Computer Club of the 1970s and ’80s, the Silicon Valley gathering of technical hobbyists – including Apple founders Steve Jobs and Steve Wozniak – who swore personal computers would one day grace every home. Quantified-selfers who are inventing personal tracking gadgets in their basements “will have the same scope of impact”, Chang says.
Software engineer Alex Gilman explains the Fujitsu Sprout body monitor
The self-tracking equivalent of an early-model, 30lb, four-part desktop computer is Fujitsu Laboratories’ Sprout, as worn by software engineer Alex Gilman at the Quantified Self Conference: a maze of sensors and wires sends data from his ear, chest and arm to the pocket-sized computer clipped to his belt – the Sprout. The Sprout synchronises the physical data from the body sensors and from the apps on his iPod Touch where he records his moods and drowsiness levels. What is now a mess of raw data can be processed and translated into a neat graph that will eventually be used to measure stress and fatigue, manage weight loss, even predict illness.
The potential of the Sprout is intriguing, but mass appeal will only come when such devices are consolidated into small, wireless, all-in-one products that make data collection completely passive, says Chang. Most will require little to no human effort and some will even be “game-ified”, he says, made as fun and addictive as Angry Birds.
Through his firm Norwest Venture Partners, Chang is placing his bets on Basis, a wristwatch-type device that records heart rate, physical activity, calorie burn and sleep patterns. Data readouts show spikes in heart rate data so users can see when they’re stressed and overlay that data with their work calendar to see which people or meetings might be the cause. When Chang tried a prototype, he noticed peaks in heart rate during his morning commute and decided to shift his route to a longer, but less busy, highway. It’s the interesting, useful, easy-to-digest information like this, he says, that will push these devices into the hands of ordinary users.
Tim Ferriss, author of ‘The 4-Hour Body’: Ferriss claims he can teach people how to lose weight without exercise, maintain peak mental performance on two hours’ sleep and have a 15-minute orgasm
When the benefits of the information outweigh the costs, in money or time, people will buy the devices, says Tim Ferriss, author of The 4-Hour Body, an account of hundreds of body hacks he tried on himself which has won a following among employees at Google, Facebook and many Silicon Valley start-ups. Through his exploits Ferriss claims that he can teach people how to lose weight without exercise, maintain peak mental performance on two hours’ sleep and have a 15-minute orgasm. Ferriss has personally invested in at least eight devices.
“I think, as soon as the next 12 or 24 months, that people will have to opt out of self-tracking, as opposed to opt in,” he says, “much like GPS and geo-tagging,” a feature of smartphones that records users’ geographic location automatically for use in various consumer mobile applications.
The implications for privacy are dramatic. Advocates and politicians were in an uproar when they realised the kind of access that Apple and Google have to geographic data derived from phones. Imagining three years worth of heart rate data or depression symptoms travelling through mobile devices – potentially being offered for sale to drug or insurance companies, exploited by advertisers or hacked by cyber criminals – puts watchdog groups on alert.
“What consumers need to realise is there’s a huge, huge demand for information about their activities, and the protections for the information about their activities are far, far, far less than what they think,” says Lee Tien, a privacy attorney at the Electronic Frontier Foundation. “A lot of these cloud services fall outside the federal and state privacy regimes.”
Mistakes will be made, Ferriss concedes, but he thinks “more good will come from it than bad”. He points to websites such as CureTogether.com and PatientsLikeMe.com, which harness individually collected data on conditions such as asthma, kidney disease, chronic pain and depression. People can then experiment with traditional and alternative therapies to find what works for them. That information is already informing new research and drug development.
Some doctors and public health advocates see great potential for personal tracking in managing chronic illnesses, especially among the rapidly ageing baby boomer generation. Mobile applications can track levels of blood sugar in diabetics or blood pressure in people with hypertension, and send alerts if a problem is developing. Movement-tracking sensors the size of watch batteries – like the one in the Basis wristwatch – can be placed on pill bottles to monitor if a medication has been taken and, if a dose is missed, generate a reminder text or e-mail.
“We believe it’s a differentiator that will help employers save costs,” says Nick Martin, vice-president of innovation with the UnitedHealth Group, a Minneapolis-based health insurance company that is considering covering the use of health-tracking technologies. This means that people’s personal health details could be shared not just with their doctor but with their insurer as well. But Martin says concerns about this will fade fast.
“We’ve seen this with credit cards and payments over mobile phones,” he says, where consumers gradually adapted to sharing financial information over the ether. “We’ve gotten over that hurdle, and I think we will here.”
Still, seniors and boomers are much less inclined to spill the details of their personal lives than the Facebook-ed generations after them. And to believe that even twenty- and thirtysomethings don’t have limits to what they want others to know, or what they even want to know about themselves, seems wishful. Despite promises of confidentiality, people fear they will be charged higher insurance premiums, denied coverage or even denied a job based on their healthcare data.
. . .
Alicia Morga, app developer: The data collected in Morga’s gottaFeeling app enables users to ‘predict an exercise slump or a spending spree and help avoid the behaviour’
Alicia Morga is an accidental self-tracker. The 39-year-old entrepreneur says she identifies more with the Oprah Winfrey school of self-improvement than the Silicon Valley data geeks. She’s tried a heart rate monitor and the pedometer on her iPod to track her running workouts, but she only recently learnt the term “quantified self” when she started developing an iPhone app for tracking emotions. Her desire to track her own moods arose in a business context after she founded her own Hispanic marketing company, Consorte Media.
“I needed a way to manage the emotional roller coaster that entrepreneurship is,” she says. “I was angry. But there’s a double standard in business that women are not allowed to be angry, especially if they’re the boss.”
She kept her anger pent up, but then noticed it would “leak out” in the office, in a snide remark or a contemptuous look. She wanted to be a better communicator and a better leader so she signed up for an executive course at Stanford University called Interpersonal Dynamics. The course sought to develop emotional self-awareness, determine when it was appropriate to hide a feeling or express it and practise ways to communicate those emotions in a constructive manner. “Emotional fitness requires exercise,” Morga says, “just like running.”
Shortly after selling her marketing company, Morga started designing gottaFeeling, an app that pings her one to six times a day, depending on the settings, and asks how she’s feeling. A menu gives her options such as happy, sad, confused, angry. If she clicks angry, it asks her to refine her answer with irritated, frustrated or pissed off. She then records where she is and who she’s with. At the end of a week she can look at a pie chart that breaks down the percentage of time she spent in each mood, and see overlaid data of where, when, and in whose company she felt that way.
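Stripped of the interface, a weekly roll-up like gottaFeeling’s pie chart is essentially a frequency count over logged check-ins. Here is a minimal sketch in Python, with an invented schema and invented sample entries (the app’s actual data format is not described here):

```python
from collections import Counter

def mood_breakdown(entries):
    """Percentage of logged check-ins spent in each mood.

    `entries` is a list of (mood, place, companion) check-ins, a
    hypothetical schema standing in for whatever the app records.
    """
    moods = Counter(mood for mood, _place, _companion in entries)
    total = len(entries)
    return {mood: 100 * n / total for mood, n in moods.items()}

# A week of invented check-ins for illustration.
week = [
    ("happy", "office", "co-workers"),
    ("frustrated", "office", "co-workers"),
    ("happy", "home", "family"),
    ("happy", "gym", "alone"),
]
print(mood_breakdown(week))  # happy: 75%, frustrated: 25%
```

Overlaying the “where, when, and in whose company” data is the same count keyed on the other fields instead of the mood.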
Just naming an emotion helps you manage it, Morga says. But the ultimate goal is for users to correlate emotions with eating habits, shopping behaviours or work tasks. If patterns emerge, the data could help users “predict an exercise slump or a spending spree and help avoid the behaviour”.
In the spirit of self-experimentation, Morga tried this on herself. She began studying her financial records. She never uses cash, so all her purchases were in one credit card statement, which she exported to Excel.
“Turns out, I consume a ton of cupcakes,” she says. Crunching the data to see when and where she bought the cupcakes, she discovered that 40 per cent of her purchases were made on Tuesdays at the same kiosk she passed in downtown San Francisco – on her way to see her personal trainer. She studied her mood data from the same time to see if there was an emotional factor but found no correlation and concluded the purchases were attributable to convenience. As a result she asked her trainer to meet her in a different neighbourhood and successfully cut her cupcake consumption by 40 per cent.
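Morga’s cupcake audit is the same kind of computation: group the purchases by weekday and merchant, then work out each group’s share of the total. A minimal sketch in Python, assuming a hypothetical list of dated purchases rather than a real statement export (dates and merchant names are invented):

```python
from collections import Counter
from datetime import date

def purchase_shares(purchases):
    """Share of purchases by (weekday, merchant), as percentages."""
    counts = Counter((d.strftime("%A"), merchant) for d, merchant in purchases)
    total = len(purchases)
    return {key: 100 * n / total for key, n in counts.items()}

# Invented cupcake purchases, as might be filtered out of a credit
# card statement exported to a spreadsheet.
cupcakes = [
    (date(2011, 5, 3), "Kiosk SF"),
    (date(2011, 5, 10), "Kiosk SF"),
    (date(2011, 5, 14), "Bakery A"),
    (date(2011, 5, 17), "Kiosk SF"),
    (date(2011, 5, 21), "Bakery B"),
]

for (weekday, merchant), share in sorted(
        purchase_shares(cupcakes).items(), key=lambda kv: -kv[1]):
    print(f"{weekday} at {merchant}: {share:.0f}%")
```

In this toy data, three of the five purchases fall on Tuesdays at the same kiosk, the kind of concentration Morga found in her own records.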
She’s still crunching the data to figure out how to trim the other 60 per cent.
Brainstorming session at the Quantified Self Conference
Back at the Quantified Self Conference in Silicon Valley, attendees break into smaller groups to explore the finer points of hacking sleep, cognition and ageing. A concentration of hipsters heads to the session on attention-span tracking. About 50 participants sit in a circle, one-third with laptops propped open on their thighs. Moderating is Matthew Trentacoste, a 29-year-old PhD student in computer science at the University of British Columbia and an organiser of the Vancouver Quantified Self group, one of two dozen groups around the globe that meet informally throughout the year. His long, curly hair is piled at the back of his head and tied with a knitted scarf.
“I’ve been diagnosed with ADHD,” he says, referring to the increasingly common designation of Attention Deficit Hyperactivity Disorder. “As someone who’s easily distracted, I’m interested in figuring out strategies to reduce these distractions.”
Trentacoste describes a tool he’s developing to help him track how he spends his time online, down to the millisecond. It measures how long he spends on e-mail versus web browsing, how much time he spends in each web window and how often he switches his focus. The goal, among those who use or are building similar tools, is to reduce distractions, increase productivity and achieve “flow”, the optimum state of creativity and focus.
A discussion ensues on techniques for achieving flow, and a generational divide appears.
The younger people in the room talk about experimenting with Adderall, a common drug prescribed to people with ADHD that helps focus the mind. Older participants enquire whether meditating before bed has an effect on concentration the next day. The contrasts in method between the age groups are stark, as are the motivations for body hacking in general, says Dave Asprey.
“The people interested in this are under 30 and over 45,” he says, gesturing around the cafeteria at the conference. The people under 30 are the next Tim Ferrisses, the over-achieving entrepreneurs who are out to conquer Silicon Valley.
“The people over 45 are just tired of being fat and tired, and they see the kids under 30 and they know they’re going to lose their jobs to them,” he says. “They know they like to work ’em hard and burn ’em out young in Silicon Valley.”
Michael Galpert, internet entrepreneur: ‘Running a start-up, I’m always looking at numbers, always tracking how business is going. So I started doing that with myself’
Attempting to counter that trend, Michael Galpert, the New York internet entrepreneur, is using body hacking technology to promote healthier lifestyles in his office. He’s set up a workplace weight loss and fitness contest where employees use a mobile app to upload their daily weight and exercise routines into a shared online database. The idea is that seeing that your co-worker lost 2lb more than you last month, or did 20 more push-ups yesterday, will motivate you to keep up and keep going.
It’s not just a physical contest, Galpert says. The competitiveness and motivation on the treadmill will encourage people to push themselves at their desks as well.
“When you keep trying for one more push-up, it gets easier,” he says. “It’s the same at work. You can say ‘the project I’m working on is done,’ or you can say you’ll spend a little more time to make it better.”
Or, some ex-self-quantifiers would say, you could push the drive for perfection to breaking point.
“People thought I was narcissistic. What they didn’t see was the self-punishment, the fear, the hatred behind the tracking,” writes Alexandra Carmichael, one of the founders of CureTogether.com, in a poem about why she stopped tracking herself. “I had stopped trusting myself. Letting the numbers drown out my intuition, my instincts.”
Whether or not distilling human performance down to ones and zeroes will truly make us better, healthier human beings remains to be seen. Nobody has yet measured the full impact of so fully measuring their lives.
April Dembosky is the FT’s San Francisco correspondent