Facial recognition technology is a staple of sci-fi thrillers like “Minority Report.”
But of bars in Chicago?
SceneTap, a new app for smart phones, uses cameras with facial detection software to scout bar scenes. Without identifying specific bar patrons, it posts information like the average age of a crowd and the ratio of men to women, helping bar-hoppers decide where to go. More than 50 bars in Chicago participate.
As SceneTap suggests, techniques like facial detection, which perceives human faces but does not identify specific individuals, and facial recognition, which does identify individuals, are poised to become the next big thing for personalized marketing and smart phones. That is great news for companies that want to tailor services to customers, and not so great news for people who cherish their privacy. The spread of such technology — essentially, the democratization of surveillance — may herald the end of anonymity.
And this technology is spreading. Immersive Labs, a company in Manhattan, has developed software for digital billboards using cameras to gauge the age range, sex and attention level of a passer-by. The smart signs, scheduled to roll out this month in Los Angeles, San Francisco and New York, deliver ads based on consumers’ demographics. In other words, the system is smart enough to display, say, a Gillette ad to a male passer-by rather than an ad for Tampax.
Those endeavors pale next to the photo-tagging suggestion tool introduced by Facebook this year. When a person uploads photos to the site, the “Tag Suggestions” feature uses facial recognition to identify that user’s friends in those photos and automatically suggests name tags for them. It’s a neat trick that frees people from the cumbersome task of repeatedly typing the same friends’ names into their photo albums.
“Millions of people are using it to add hundreds of millions of tags,” says Simon Axten, a Facebook spokesman. Other well-known programs like Picasa, the photo editing software from Google, and third-party apps like PhotoTagger, from face.com, work similarly.
But facial recognition is proliferating so quickly that some regulators in the United States and Europe are playing catch-up. On the one hand, they say, the technology has great business potential. On the other, because facial recognition works by analyzing and storing people’s unique facial measurements, it also entails serious privacy risks.
Using off-the-shelf facial recognition software, researchers at Carnegie Mellon University were recently able to identify about a third of college students who had volunteered to be photographed for a study — just by comparing photos of those anonymous students to images publicly available on Facebook. By using other public information, the researchers also identified the interests and predicted partial Social Security numbers of some students.
“It’s a future where anonymity can no longer be taken for granted — even when we are in a public space surrounded by strangers,” says Alessandro Acquisti, an associate professor of information technology and public policy at Carnegie Mellon who directed the studies. If his team could so easily “infer sensitive personal information,” he says, marketers could someday use more invasive techniques to identify random people on the street along with, say, their credit scores.
Today, facial detection software, which can perceive human faces but not identify specific people, seems benign.
Some video chat sites are using software from face.com, an Israeli company, to make sure that participants are displaying their faces, not other body parts, says Gil Hirsch, the chief executive of face.com. The software also has retail uses, like virtually trying out eyeglasses at eyebuydirect.com, and entertainment applications, like moustachify.me, a site that adds a handlebar mustache to a face in a photo.
But privacy advocates worry about more intrusive situations.
Now, for example, advertising billboards that use facial detection might detect a young adult male and show him an ad for, say, Axe deodorant. Companies that make such software, like Immersive Labs, say their systems store no images or data about passers-by, nor do they analyze their emotions.
But what if the next generation of mall billboards could analyze skin quality and then publicly display an ad for acne cream, or detect sadness and serve up an ad for antidepressants?
“You might think it’s cool, or you might think it’s creepy, depending on the context,” says Maneesha Mithal, the associate director of the division of privacy and identity protection for the Bureau of Consumer Protection at the Federal Trade Commission. Whatever consumers think, she says, they should be able to choose whether to be subject to such marketing practices. (The F.T.C. is planning a workshop next month on facial recognition.)
On Facebook, people who find the photo-tagging suggestion program creepy may turn off the system that proposes their names to friends who are uploading photos. If people opt out, Facebook deletes their facial comparison data, according to the site. Users may also preapprove or reject being listed by name in a friend’s photo before it is posted on their profiles.
Those options may suffice for many.
But in Germany, where German and European privacy regulations require private companies to obtain a person’s explicit permission before storing information about that individual, merely being able to opt out does not go far enough, says Johannes Caspar, the commissioner of the Hamburg Data Protection Authority. (Although the United States has federal data protection laws pertaining to specific industries like credit and video rental, no general law requires that companies obtain explicit consent before storing personal data about an individual.)
Mr. Caspar says many users do not understand that Facebook’s tag suggestion feature involves storing people’s biometric data to re-identify them in later photos. Last summer, he asked Facebook to give current users in Germany the power to delete their biometric data and to give new users in Germany the power to refuse to have their biometric data collected in the first place. In the long term, he says, such popular uses of facial recognition could moot people’s right to remain anonymous.
Mr. Caspar said last week that he was disappointed with the negotiations with Facebook and that his office was now preparing to take legal action over the company’s biometric database.
Facebook told a German broadcaster that its tag suggestion feature complied with European data protection laws.
“There are many risks,” Mr. Caspar says. “People should be able to choose if they want to accept these risks, or not accept them.” He offered a suggestion for Americans: “Users in the United States have good reason to raise their voices to get the same right.”
Doctors are paid higher fees in the United States than in several other countries, and this is a major factor in the nation’s higher overall cost of health care, says a new study by two Columbia University professors, one of whom is now a top health official in the Obama administration.
“American primary care and orthopedic physicians are paid more for each service than are their counterparts in Australia, Canada, France, Germany and the United Kingdom,” said the study, by Sherry A. Glied, an assistant secretary of health and human services, and Miriam J. Laugesen, an assistant professor of health policy at Columbia.
The study, being published Thursday in the journal Health Affairs, found that the incomes of primary care doctors and orthopedic surgeons were substantially higher in the United States than in other countries. Moreover, it said, the difference results mainly from higher fees, not from higher costs of the doctors’ medical practice, a larger number or volume of services or higher medical school tuition.
Such higher fees are driving the higher spending on doctors’ services, the study concluded.
Ms. Glied, an economist, was a Columbia professor before President Obama named her assistant health secretary for planning and evaluation in June 2010. She said the paper, based on academic research, did not reflect the official views of the administration or the White House.
But the journal said the findings suggested that, as policymakers struggle to find ways to restrain health spending, they might consider doctors’ fees. Doctors have generally been excluded from recent cost-cutting proposals because under existing law, Medicare, the federal insurance program for older people, will reduce their fees by 29.5 percent on Jan. 1. In addition, many states have frozen or reduced fees paid to doctors treating poor people under Medicaid.
The study examined fees paid by public programs and private insurers for basic office visits and for hip replacement surgery, and found that Americans were “very low users of office visits and relatively high users of hip replacement surgery.”
“Fees paid by public payers to orthopedic surgeons for hip replacements in the United States are considerably higher than comparable fees for hip replacements in other countries,” the authors found, “and fees paid by private insurers in the United States for this service are double the fees paid in the private sector elsewhere.”
For primary care office visits, the gap between fees paid by Medicare and by public programs in other countries was smaller. But the study found that private insurers paid more for such services here than in other countries.
“U.S. primary care physicians earn about one-third more than do their counterparts elsewhere,” mainly “because a much larger share of their incomes is derived from private insurance,” the study said.
Ms. Laugesen and Ms. Glied said that among primary care doctors, those in the United States had the highest annual pretax earnings after expenses — an average of $186,582 in 2008 — while those in Australia and France had the lowest earnings, $92,844 and $95,585, respectively.
“Among orthopedic surgeons, those who had the highest annual pretax incomes, net of expenses, were in the United States,” with an average of $442,450, the study said. In Britain, which ranked second, the comparable figure was $324,138. Annual pretax earnings of orthopedic surgeons in the other countries were less than $210,000.
Medical students often cite higher pay as a reason for choosing to become specialists, and the researchers said the income gap between primary care doctors and orthopedic surgeons was larger here than elsewhere.
“In the United States, primary care doctors earned only about 42 percent as much as orthopedic surgeons earned,” the study said. “In Canada, France and Germany, in contrast, primary care doctors earned at least 60 percent as much as orthopedic surgeons earned.”
“High physician fees in the United States may reflect the cost of attracting skilled candidates to medicine in a society with a relatively more skewed income distribution,” the study said.
William E. Seidelman, MD
Science and Inhumanity:
The Kaiser-Wilhelm/Max Planck Society
First published in If Not Now, an e-journal, Volume 2, Winter 2000.
Revised February 18, 2001.
* The Kaiser-Wilhelm-Institute of Psychiatry, Munich
* The Kaiser-Wilhelm-Institute of Brain Research, Berlin-Buch
* The Kaiser-Wilhelm-Institute of Anthropology, Human Genetics and Eugenics, Berlin-Dahlem
* Present day Max Planck Society
One hundred years ago this past December, a German scientist by the name of Max Karl Ernst Ludwig Planck gave a lecture in Berlin to the German Physical Society. Planck’s lecture would change the world forever. Entitled “On the Theory of the Energy Distribution Law of the Normal Spectrum,” the lecture marked the birth of quantum physics. Quantum physics established the basis for the later development of nuclear physics, the laser and the computer. It is the foundation of the modern technological world, extending from nuclear energy to the transistor radio.
Max Planck was to become the most influential scientist in Germany. For 26 years (1912-1938) he was permanent secretary of the mathematics and physics sections of the Prussian Academy of Sciences. It was Planck who brought Albert Einstein to Berlin in 1914. Planck, who was awarded the 1918 Nobel Prize in Physics for his 1900 discovery, became the president of the Kaiser Wilhelm Society (1930-1937), the prestigious and influential organization established in 1911 by the German government and industry for the promotion of research.
After World War II the Kaiser-Wilhelm Society asked the aged Planck to again undertake the presidency of what had become a badly damaged organization. Planck also agreed to allow the society to take on his name; thus today’s Max Planck Society, which assumed its new name in 1948, one year after Planck’s death at age 89.
Planck’s achievements, distinctions and influence did not protect him from tragedy. His first wife died in 1909. His eldest son was killed in World War I. Each of his twin daughters died in childbirth. Erwin, the only surviving child of his first marriage, was executed by the Gestapo for purported links with Hitler’s would-be assassins.
Planck was admired not only as a great scientist but also as a moral and courageous individual. He criticized Hitler’s racial policies to Hitler’s face. Rather than flee Germany, Planck remained in an attempt to salvage what was left of the German physical sciences. And in the ninth decade of his life he agreed to again become president of Germany’s preeminent research organization (1) (2) (3) (4).
While Planck the person is regarded with honor, the same cannot be said of the organization that continues to bear his name.
Established in 1911, the Kaiser-Wilhelm organization spawned some of the most prestigious and influential scientific and academic institutes in the world. Kaiser-Wilhelm institutes encompassed such scientific and academic disciplines as physics, chemistry, biology, cell biology, psychiatry, neuropathology, genetics, anthropology, metallurgy, and law. Many of the Nobel laureates of the past century were associated with Kaiser-Wilhelm institutes. Between its founding in 1911 and 1948, the Kaiser-Wilhelm organization supported 35 institutes in Germany and other countries (5). The international esteem of the Kaiser-Wilhelm institutes is reflected in the support they received from the Rockefeller Foundation. The Rockefeller Foundation made major contributions to the construction of the Kaiser-Wilhelm institutes of brain research (Berlin-Buch) and psychiatry (Munich). It also provided financial support to other Kaiser-Wilhelm institutes adversely affected by World War I and the ensuing depression (6). In addition to capital funding for construction, the Rockefeller Foundation supported research at the Munich psychiatric institute and twin research at the Kaiser-Wilhelm Institute of Anthropology, Human Genetics and Eugenics at Berlin-Dahlem (7).
Ironically, the three Kaiser-Wilhelm institutes that were beneficiaries of Rockefeller largesse were eventually to play important roles in the development, implementation and exploitation of the racial programs of the Third Reich, including murderous experiments and the exploitation of the dead. Kaiser-Wilhelm scientists joined with the Nazi state in pursuit of the goal of improving the people’s health (Volksgesundheit), the major emphasis being on eugenic and racial purification. The resulting collaboration between science and the Nazi state not only legitimized the policies and programs of the Hitler regime; it also resulted in the exploitation, mutilation and murder of untold thousands of innocent victims by physicians and scientists associated with some of the world’s leading universities and research institutes. The participation of scientists associated with the Kaiser-Wilhelm Society enhanced the credibility of the Nazi state’s program of scientific terror and murder (8).
The Kaiser-Wilhelm-Institute of Psychiatry, Munich
The Kaiser-Wilhelm Institute of Psychiatry in Munich had been established in 1917 as the German Institute for Psychiatric Research by the eminent psychiatrist Prof. Emil Kraepelin. A major benefactor of the Munich institute was the American-born Jewish philanthropist, and at one time a patient of Kraepelin’s, James Loeb (9). The Munich psychiatric institute became the first and foremost psychiatric research institute in the world. In 1924 the institute joined with the Kaiser-Wilhelm research organization. The building of the new institute, which opened in 1928, was the first major construction project of the Kaiser-Wilhelm Society (KWG) to be financed by a grant from the medical division of the Rockefeller Foundation (6).
Kraepelin, who had been professor of psychiatry at the University of Heidelberg, had assembled a stellar group of clinicians and researchers including the psychiatrist/neurologist Alois Alzheimer and the neurohistologist, Franz Nissl (10). They were subsequently joined by the Swiss-born psychiatrist/geneticist Ernst Rüdin. Alzheimer, Nissl and Rüdin joined Kraepelin when he moved from Heidelberg to Munich in 1903. In 1909 Rüdin succeeded Alzheimer as senior physician at the Munich psychiatric hospital. In Munich, Rüdin led the Genealogical/Demographic research department of the Kraepelin Institute. The focus of Rüdin’s research was on the inheritance of psychiatric disorders. His 1916 paper on that subject is considered a classic that continues to be cited in the literature on the genetics of schizophrenia (11). In 1928 Rüdin became director of a “greatly expanded” genealogical department of what had become the Kaiser-Wilhelm Institute of Psychiatry. In 1931 he ascended to the leadership of the world’s preeminent psychiatric research institute (12). Rüdin built on Kraepelin’s relationship with the Rockefeller Foundation and James Loeb. His research was well endowed with funding from Rockefeller and the Loeb estate. Loeb, who died in 1933, had been a generous supporter of the institute from its inception. As his final gift, Loeb bequeathed $1,000,000 to the Munich institute (13).
After Hitler’s rise to power, Rüdin became an active supporter of the eugenic and racial hygiene policies of Hitler’s regime. He was an intellectual leader of the Nazi program of enforced eugenic sterilization entrenched under the 1933 Sterilization Law. He was honored twice by Hitler for his contribution to German eugenics (14).
The 1933 Sterilization Law established diagnostic categories for enforced sterilization. Two of the categories were psychiatric conditions first characterized by Kraepelin and investigated by Rüdin: schizophrenia and manic-depressive disorder (15). An estimated 400,000 German citizens qualified for sterilization under the law, a target that was ultimately met (16).
The Kaiser-Wilhelm Institute of Psychiatry under Rüdin became a major academic eugenic center during the Hitler period. It was to the psychiatric institute that doctors and the courts turned for an “expert” opinion on eugenic matters. It can be assumed that Rüdin, as an architect of the sterilization law, did not support appeals against sterilizations ordered by the courts. Rüdin was such an avid proponent of eugenic sterilization that his colleagues nicknamed him “Reichsführer for Sterilization” (17).
In 1935 the Rockefeller Foundation withheld funding for genealogical and demographic research at the Munich institute. In 1940 the executors of the Loeb estate also ceased payments to the institute. Desperate for support, Rüdin turned to the SS terror organization for salvation. In 1939 the world’s first and foremost psychiatric research institute came under the influence, if not the control, of the SS and its research organization, the notorious Ahnenerbe (7).
The Kaiser-Wilhelm-Institute of Brain Research, Berlin-Buch
The 1933 Sterilization Law, to which Rüdin contributed and in which the KW Psychiatry Institute participated, established the basis for the Nazi programs of selection and eugenic and racial purification. These programs included the killing of handicapped children and the T-4 Aktion for the murder of adults in psychiatric institutions (18) (19).
The T-4 killing program was exploited by Professor Julius Hallervorden of the KWI for Brain Research as an opportunity to augment the neuropathological collection of brain material in one of the world’s foremost institutes for brain research, if not the foremost. The founding director of the institute, Professor Oskar Vogt, had been dismissed by the Nazis and replaced by Professor Hugo Spatz (20). Hallervorden was subsequently appointed by his friend Spatz, with whom he had already achieved fame through the identification of a congenital neurological condition known as “Hallervorden-Spatz disease” (21).
Hallervorden’s main collection point for specimens was the euthanasia killing center in the town of Brandenburg. At Brandenburg the victims were assembled in a large room disguised as a shower, where they were asphyxiated with gas. Hallervorden is known to have been present for some of the killings and to have removed the victims’ brains shortly after their murder. Many of the Brandenburg victims came from the nearby large psychiatric hospital known as Görden. Hallervorden had a neuropathology facility on the grounds of the Görden hospital where specimens were prepared and sent to the Kaiser-Wilhelm institute in Buch (22) (23) (24) (25). Beginning in 1944, and for a number of years after the war, the Kaiser-Wilhelm brain research institute and its neuropathological collection were moved from Berlin (via Giessen) to a permanent location near the University of Frankfurt and a newly constructed facility now known as the Max Planck Institute for Brain Research (26). The misbegotten specimens acquired by Hallervorden remained in the collection of the Max Planck Institute until 1990.
The provenance of the Frankfurt brain specimens was not revealed until the late 1980s. This revelation occurred around the same time as similar disclosures involving the Institute of Anatomy of the University of Tübingen. At first Max Planck officials denied the existence of the Hallervorden collection (27). The entire collection of brain specimens held by the Frankfurt institute was buried in two adjoining gravesites at the Forest Cemetery in Munich. Munich was chosen as the burial site because it is the home of the Max Planck Society (28). It was also revealed at that time that the Max Planck Institute of Psychiatry had in its collection brain specimens from children murdered in the child “euthanasia” program. These specimens came from children murdered at Eglfing-Haar, a psychiatric institution near Munich (29).
The Kaiser-Wilhelm-Institute of Anthropology, Human Genetics and Eugenics, Berlin-Dahlem
The third Kaiser-Wilhelm institute implicated in the crimes of the Third Reich is the Institute of Anthropology, Human Genetics and Eugenics in the Berlin suburb (and Kaiser-Wilhelm Society campus) of Dahlem. The official opening of the institute in September of 1927 occurred in conjunction with the first international scientific conference to be held in Germany after the end of World War I: the 8th International Congress on Heredity. The founding director was the noted anatomist/anthropologist Prof. Eugen Fischer. Fischer’s interest was in the anthropology of natives of African territories colonized by Germany. With the arrival of the Hitler regime, Fischer declared “the institute is completely and wholly prepared to assume the tasks of the current government (30).” And it did. Indeed, according to the Max Planck Society’s own accounting, the activities undertaken by institute scientists on behalf of the Nazi state “… reflected the conviction of most of the researchers, who believed that they might thereby come closer to achieving the racial hygiene goals they shared (30).”
A major proponent of the racist goals of the Hitler state was an institute scientist from Tübingen, Freiherr Dr. Otmar von Verschuer. Verschuer was a noted expert on the genetics of twins. His early twin research at the Dahlem institute was funded by Rockefeller (7). In 1936 Verschuer was called to Frankfurt to head the newly established Institute of Genetics and Racial Hygiene at the University of Frankfurt. The largest institute of its kind, the Frankfurt institute was responsible for the compulsory medical curriculum on eugenics and racial hygiene. Institute staff provided expert opinions to the courts on decisions under the 1933 sterilization law and the Nuremberg racial laws (race having been made a medical diagnosis). Verschuer’s first assistant in Frankfurt was a medical student, Josef Mengele. Mengele had recently received his PhD from the same university as had Max Planck, the University of Munich.
Verschuer exploited his position as a noted geneticist to expound his racist, anti-Semitic views, which included “a new total solution (Gesamtlösung) of the Jewish problem.” In a 1942 editorial published in a journal he edited (Der Erbarzt), Verschuer called for a “final solution” (endgültige) to the Jewish question. In the second edition of his textbook on race hygiene (1942), he repeated his provocative statement: “The political demand of our time is the new total solution (Gesamtlösung) of the Jewish problem (31).”
In 1942 Verschuer succeeded Fischer as director of the Dahlem institute. As institute director, Verschuer took advantage of the “final solution” to exploit helpless human beings held captive in Auschwitz in pursuit of his research objectives, including the study of twins. He benefited from the appointment of his former Frankfurt assistant, Dr. Dr. Josef Mengele (PhD, MD), to Auschwitz, where Mengele served as an agent of the institute and as Verschuer’s research assistant. Jews and Gypsies (Roma) in Auschwitz were studied, murdered and dissected, and their body parts were sent by Mengele to Verschuer in Berlin. Specimens included the eyes of hapless victims who happened to have eyes of different colors (a condition known as heterochromia of the iris) (32).
Studies carried out on Auschwitz victims included an examination of their blood for certain elements that were believed to be formed in response to infection. Verschuer’s institute lacked the capacity to do the required blood testing. The blood of the Auschwitz victims was sent to the neighboring Kaiser-Wilhelm Institute for Biochemistry, headed by Prof. Adolf Butenandt. The tests themselves were performed at the biochemistry institute by Dr. Günther Hillmann (31). The institute director, Butenandt, was a pioneering biochemist who discovered the male and female sex hormones in humans. His work, for which he received the Nobel Prize in Chemistry in 1939, led the way to many modern therapies, including the birth control pill (33).
Verschuer fled Berlin toward the end of the war. There are no surviving specimens or documentation from the Dahlem institute. It is believed that Verschuer had them destroyed. Soon after the end of the war, accusations were made against Verschuer concerning the Auschwitz research. A committee of Kaiser-Wilhelm scientists was formed to review the case against him. The committee report, which was not made public, was critical of Verschuer’s wartime activities and made it impossible for him to be re-appointed to a university position (31).
Butenandt, in addition to directly or indirectly assisting Verschuer with his studies on the blood of Auschwitz victims, helped restore Verschuer’s postwar standing in the scientific community. Butenandt was a member of a committee that re-examined the Verschuer case. Despite the evidence, the new committee concluded that Verschuer was not a Nazi, was not a race fanatic, was tolerant with his collaborators and did not know what went on in Auschwitz. The committee stated that “von Verschuer has all qualities which destine him to be a researcher and teacher of the academic youth (31).”
Having been exonerated by his colleagues, Verschuer went on to assume the position of professor of genetics at the University of Münster and the director of that university’s genetics institute. In that position Verschuer became the most prominent geneticist in (West) Germany. He died in 1969 (32).
Present day Max Planck Society
Given that it was the preeminent scientific and research organization in Germany, it is not surprising that Kaiser-Wilhelm scientists and institutes were involved in the eugenic and racial programs of the Third Reich. What is perplexing is the difficulty the present-day Max Planck Society has had in confronting its own history. Despite the evidence linking scientists and researchers of the Kaiser-Wilhelm/Max Planck (KW/MP) Society with the crimes of the Third Reich, it has taken the KW/MP Society over half a century to begin an examination of its own history. In spite of the documented involvement of KW/MP scientists and researchers in various nefarious scientific activities during Hitler’s regime, the organization has yet to formally apologize for the suffering and death inflicted by its scientists and researchers on untold numbers of innocent human beings. Hiding under the veneer of academic and scientific objectivity, officials suggest that an apology would be premature and should await the completion of the much-belated investigation that would document exactly what the Max Planck Society should apologize for (34) (35) (36). It is expected that this investigation will be completed in 2004.
Evidence of the complicity of Kaiser-Wilhelm/Max-Planck scientists has been in existence for over a decade. That evidence documents activities that include the scientific legitimization and advancement of eugenics and racial hygiene by the Kaiser-Wilhelm Institute of Psychiatry (Rüdin) and the Kaiser-Wilhelm Institute of Anthropology (Eugen Fischer, Fritz Lenz, Otmar von Verschuer); the exploitation of the euthanasia killings to acquire brain specimens of the murdered victims by the Kaiser-Wilhelm Institute of Brain Research (Hallervorden); and genetic research on Jews and Roma (Gypsies) in Auschwitz by the Kaiser-Wilhelm Institute of Anthropology (Verschuer/Mengele).
The Max-Planck Society has publicly acknowledged its moral responsibility (in the absence of any preceding investigation) for the exploitation of the “euthanasia” murders to acquire the brains of the victims (37). The connection between Verschuer and Josef Mengele has been well known since it was first reported by Professor Benno Müller-Hill in 1984 (38). In the words of a German social scientist published over a decade ago, “In fact, through Verschuer the institute (Kaiser Wilhelm Institute of Anthropology) was to become directly connected with the murderous ‘experiments on humans’ at Auschwitz. Even though this connection was never substantiated in a court of law, evidence accumulated over the years leaves little doubt (39).”
Despite this evidence, the Max Planck Society appears to be demonstrating signs and symptoms of disordered memory and conscience. This disorder of memory is exemplified by the Society’s own 1998 descriptive history of the Berlin institutes of the Kaiser Wilhelm/Max Planck Society. Despite the previously mentioned honest accounting of the anthropology institute during the Hitler period, the monograph disputes the wording of a commemorative plaque on the building that formerly housed the anthropology institute. According to the 1998 document, “The text of the plaque, which was revised many times, however, falsely suggests that the concentration camp doctor Josef Mengele was a member of this Kaiser Wilhelm Institute. He did, however, send blood and organ samples for testing. The statement that the staff of the institute made an ‘active contribution to selection and murder’ by virtue of issuing professional opinion, seems exaggerated, however (30).”
The formal examination of the history of the Max Planck Society was undertaken one year after the publication of the aforementioned misleading statement. In 1999, the Max Planck Society established an “autonomous” Presidential Commission (“History of the Kaiser Wilhelm Society in the National Socialist Era”) (5) (36). With two co-chairs and nine members (one from the United States and one from the United Kingdom), and a staff of resident and visiting researchers, the Commission has embarked on a major research program encompassing:
* The organization, policy and administration of the Kaiser-Wilhelm Society.
* Racial hygiene, genetic, medical and psychiatric research in Kaiser-Wilhelm Institutes.
* Military Research: war-related and applied science in Kaiser-Wilhelm Institutes under the supervision of the Four-Year-Plan and the war economy.
* Ostforschung (research on the East) and Lebensraumforschung (research on living space) at Kaiser Wilhelm Institutes in the context of expansionist and occupation politics.
One commission researcher, Prof. Volker Roelcke of the University of Lübeck, has documented that Ernst Rüdin provided intellectual and financial support for murderous experiments on children performed by the Heidelberg psychiatrist, Julius Deussen (35). Another commission researcher, Robert Proctor, has been given access to the archives on Professor Adolf Butenandt, which show that “Butenandt was aware of and supported a research project involving blood samples from Auschwitz in an unsuccessful effort to find disease-fighting proteins specific to race (36).”
There is no question that the Presidential Commission will add greatly to the body of knowledge of the history of science and medicine during the Third Reich. However, the issuance of an explicit apology on the basis of what is already known should in no way prejudice the research into what is not yet known. The half-century of delay in documenting its own history is itself deserving of a public apology. The moral imperative is emphasized by the fact that the Max Planck Society today encompasses 81 institutes embracing such areas as human development, criminal law, and foreign and international private, public and social law. The Max Planck Society is now more than an organization for basic or applied scientific research; it is also an organization with an explicit moral mandate to study the human condition. In so doing, it also has a moral responsibility for its own actions, and those of its members, past and present. What the Max Planck Society appears to continue to ignore, or avoid facing, is the fate of the victims, those who perished and those who survived. Given the fact that 59 years will have elapsed between the end of the Third Reich and the end of the mandate of the “Presidential Commission,” few of the survivors will still be living. If a formal apology should be forthcoming, few, if any, will be alive or capable of receiving it. By the time the Max Planck Society has recovered its own institutional memory, there will be few survivors who remember.
The esteemed scientist after whom Germany’s foremost scientific organization is named would probably have been deeply distressed had he been aware of the behavior of scientists associated with the organization that now bears his name. Given his own example of moral rectitude, he most likely would have been dismayed by the behavior of the leadership of that organization, which for over half a century after the end of the war demonstrated moral amnesia and paralysis in addressing the questions of its own history and moral responsibility.
The word “quantum,” when first used by Max Planck, referred to an infinitesimally small measure of change at the level of the atom. The collective impact of those infinitesimally small changes has changed the world forever. “Quantum” as an adjective refers to a significant quantity. The future impact of the heretofore tepid efforts of the Max Planck Society will depend on that organization’s ongoing commitment to an honest historical examination of the role of German science and medicine during the Hitler period and after. The Max Planck Society has a moral obligation to set an example for German academia and industry by urging those universities and industrial concerns, such as the chemical and pharmaceutical companies, that wittingly and unwittingly joined in the programs of scientific terror to reveal the truth they have yet to disclose. That truth requires a proper, objective and full examination of the archives and other historical records and collections of the universities and companies that share the same moral responsibility as the Kaiser-Wilhelm/Max Planck Society for the design, implementation and exploitation of some of the worst crimes in the history of humankind.
On the basis of current evidence, the Max Planck Society should make a full and public apology to the surviving victims and the families of the deceased for their suffering at the hands of scientists of the Kaiser-Wilhelm/Max Planck Society. The Max Planck Society should undertake, together with institutions providing care to Holocaust survivors, a study of the impact of the Holocaust and the trauma of experimentation and scientific abuse on the surviving victims. In contradistinction to the research undertaken in pursuit of state-defined Volksgesundheit, this research would be based on the goal of improving the quality of life of the surviving victims, an effort to which the Max Planck Society should become party. As Germany’s leading scientific and research organization, and one of the world’s foremost, the Max Planck Society should explicitly address the issue of the moral responsibility of science based on its own tragic history and example.
Max Planck, having come from a family of theologians, would probably have been acquainted with the source of the title of this journal (If Not Now, an e-journal), which is taken from The Ethics of the Fathers by the great sage Hillel. The complete aphorism of Hillel is an admonition that applies to the society that continues to bear Max Planck’s name:
If I am not for myself who will be for me? But if I am only for myself, what am I? If not now when?
1) Overbye D. Celebrating a Century of Confusion and Triumph. New York Times. Dec. 12, 2000, p. D2 & D4.
2) Zeilinger A. The Quantum Leap. Frankfurter Allgemeine Zeitung. English-language internet edition. December 15, 2000.
5) Wirsing B. Opening the archives (letter) Haaretz Magazine. August 4, 2000, 2.
6) Macrakis K. The Rockefeller Foundation and German Physics under National Socialism. Minerva 27: 1989, 33-57.
7) Weindling P. Health, Race and German Politics Between National Unification and Nazism 1870-1945, Cambridge: Cambridge University Press, 1989.
8) Seidelman W. The Legacy of Academic Medicine and Human Exploitation in the Third Reich. Perspectives in Biology and Medicine. 43(3):2000, 325-334.
9) Chernow R. The Warburgs: The Twentieth Century Odyssey of a Remarkable Jewish Family. New York: Random House, 1993.
10) Shorter E. A History of Psychiatry: From the Era of the Asylum to the Age of Prozac. New York: Wiley, 1997.
11) Rüdin E. Zur Vererbung und Neuentstehung der Dementia praecox, Vol. 1 of Studien über Vererbung und Entstehung geistiger Störungen. Berlin: Springer, 1916.
12) Weber W. Ernst Rüdin, 1874-1952: A German Psychiatrist and Geneticist. American Journal of Medical Genetics (Neuropsychiatric Genetics) 67:1996, 323-331.
13) “The German Eugenic Society,” Journal of the American Medical Association
14) Wistrich R. Who’s Who in Nazi Germany. London: Weidenfeld & Nicholson, 1982.
15) The Law for the Prevention of Hereditary Disease Offspring (Gesetz zur Verhütung erbkranken Nachwuchses) enacted 14 July 1933. Berlin: Reichsauschuß für Volksgesundheitsdienst Berlin, (Paris: Archives du Centre de Documentation Juive Contemporaine, Document #B15076).
16) Proctor R. Racial Hygiene, Cambridge, MA: Harvard University Press, 1988.
17) Stern K. The Pillar of Fire. New York: Harcourt Brace, 1951.
18) Burleigh M. Death and Deliverance: ‘Euthanasia’ in Germany 1900-1945. Cambridge; Cambridge, 1994.
19) Friedlander, H. The Origins of Nazi Genocide: From Euthanasia to the Final Solution. Chapel Hill: Univ. of N. Carolina, 1995.
20) Haymaker W. Cécile and Oskar Vogt: On the Occasion of her 75th and his 80th Birthday. Neurology 1:1951, 179-204.
21) van Bogaert L. Hugo Spatz (1888-1969) in The Founders of Neurology, ed. Haymaker W. and Schiller F. Springfield, IL: Charles C. Thomas, 1970.
22) Alexander L. Neuropathology and neurophysiology, including electroencephalography, in wartime Germany. Combined Intelligence Objectives Sub-Committee G-2 Division SHAEF (rear) APO 413. National Archives. Washington D.C. Document No. l-170 cont’d. July 20, 1945.
23) Shevell, M. Racial hygiene, active euthanasia, and Julius Hallervorden. Neurology 42:1992, 2214-2219.
24) Peiffer J. Neuropathology in the Third Reich: Memorial to those Victims of National- Socialist Atrocities in Germany who were Used by Medical Science. Brain Pathology 1:1991, 125-131.
25) Jürgen Peiffer, Assessing Neuropathological Research carried out on Victims of the ‘Euthanasia’ Programme. Medical History Journal (Urban & Fischer) 34:1999, 339-356.
26) Neugebauer W, Stacher G. Nazi Child ‘Euthanasia’ in Vienna and the Scientific Exploration of Its Victims before and after 1945. Dig. Dis. 17:1999, 279-285.
27) Dickman S. Brain Sections to be Buried? Nature 339:498. June 15, 1989.
28) Seidelman W. Erinnerung, Medizin und Moral. Die Bedeutung der Ausbeutung des menschlichen Körpers im Dritten Reich. In Gabriel E. & Neugebauer W. (Hg.) NS-Euthanasie in Wien. Wien: Böhlau Verlag. 2000.
29) Kreutzberg G. Verwicklung, Aufdeckung und Bestattung: Über den Umgang mit einem Erbe. In Kerstig F., Teppe K., Walter B. (eds.) Nach Hadamar: Zum Verhältnis von Psychiatrie und Gesellschaft im 20 Jahrhundert. Paderborn: Ferdinand Schöningh; 1993.
30) Henning E. and Kazemi M. Dahlem-Domain of Science: A walking tour of the Berlin institutes of the Kaiser-Wilhelm/Max Planck Society in the “German Oxford”. Munich: Max Planck Society, 1998.
31) Müller-Hill B. Das Blut von Auschwitz und das Schweigen der Gelehrten. In Kaufmann D. Geschichte der Kaiser-Wilhelm-Gesellschaft im Nationalsozialismus. Bestandsaufnahme und Perspektiven der Forschung. Göttingen:Wallstein Verlag. 2000. 189-227. (English translation provided by B. M-H.).
32) Müller-Hill B. Murderous Science: Elimination by Scientific Selection of Jews, Gypsies, and Others in Germany, 1933-1945. Cold Spring Harbor Laboratory Press, 1998.
34) Hubert Markl, Anmaßung in Demut: Erst forschen, dann handeln. Eine Erwiderung auf Ernst Klee. Die Zeit 7 (February 2000).
35) Abbott A. German science begins to cure its historical amnesia. Nature 403:2000, 474-475.
36) Koenig R. Reopening the Darkest Chapter in German Science. Science 288:2000, 1576-1577.
37) MPG Presseinformation. Den Opfern zum Gedenken – den Lebenden zur Mahnung. 25 May 1990.
38) Müller-Hill B. Tödliche Wissenschaft, Reinbeck: Rowohlt, 1984.
39) Weingart R. German Eugenics between Science and Politics. OSIRIS (2nd series) 5:1989, 260-282.
“People say it would be terrible if we made all girls pretty. I think it would be great.” James Watson
The term designer babies is by and large just emblematic of the idea that genetic technology can do more than merely correct the frail aspects of human existence. It can redress nature’s essential randomness. Purely elective changes are in the offing. The industry argues over the details, but many assure that within our decade, depending upon the family and the circumstances, height, weight and even eye color will become elective. Gender selection has been a fact of birth for years with a success rate of up to 91 percent for those who use it.
It goes much further than designer babies. Mass social engineering is still being advocated by eminent voices in the genetics community. Celebrated geneticist James Watson, co-discoverer of the double helix and president of Cold Spring Harbor Laboratory, told a British film crew in 2003, “If you are really stupid, I would call that a disease. The lower 10 per cent who really have difficulty, even in elementary school, what’s the cause of it? A lot of people would like to say, ‘Well, poverty, things like that.’ It probably isn’t. So I’d like to get rid of that, to help the lower 10 percent.” For the first half of the twentieth century, Cold Spring Harbor focused on the “submerged tenth”; apparently, the passion has not completely dissipated.
Following in the footsteps of Galton, who once amused himself by plotting the geographic distribution of pretty women in England, Watson also told the film crew, “People say it would be terrible if we made all girls pretty. I think it would be great.” Watson gave no indication of what the standard for beauty would be.
War Against the Weak: Eugenics and America’s Campaign to Create a Master Race. By Edwin Black. pp. 442-44.
Aping Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity, by Raymond Tallis, Acumen Publishing, RRP£25, 400 pages
Naked Genes: Reinventing the Human in the Molecular Age, by Helga Nowotny and Giuseppe Testa, MIT Press, RRP£18.95, 192 pages
The Most Human Human: A Defence of Humanity in the Age of the Computer, by Brian Christian, Viking, RRP£18.99, 320 pages
What is human nature? A biologist might see it like this: humans are animals and, like all animals, consist mostly of a digestive tract into which they relentlessly stuff other organisms – whether animal or vegetable, pot-roasted or raw – in order to fuel their attempts to reproduce yet more such insatiable, self-replicating omnivores. The fundamentals of human nature, therefore, are the pursuit of food and sex.
But that, the biologist would add, is only half the story. What makes human nature distinctive is the particular attribute that Homo sapiens uses to hunt down prey and attract potential mates. Tigers have strength, cheetahs have speed – that, if you like, is tiger nature and cheetah nature. Humans have something less obviously useful: freakishly large brains. This has made them terrifyingly inventive in acquiring other organisms to consume – and, indeed, in preparing them (what other animal serves up its prey cordon bleu?) – if also more roundabout in their reproductive strategies (composing sonnets, for example, or breakdancing).
Human nature – the predilection for politics and war, industry and art – is, therefore, just the particularly brainy way that humans have evolved to solve the problems of eating and reproducing. Thus biologists believe that once they understand the human brain and the evolutionary history behind it, they will know all they need to about this ubiquitous brand of ape.
Viewing ourselves in this way, stripped back to the biological bones, is a form of “reductionism”, as it reduces the intricacies of human consciousness and society to the workings of genes and brain cells. It would once have seemed incredible, obviously wrong, not to say blasphemous. To reduce religious wonder, poetic sensibility and the richness of social life to mere animal instincts seems a travesty. Yet exactly this is the dominant account of what it is to be human in the early 21st century.
Thus newspapers today are filled with stories of genes for this and neurons for that. Recent examples range from “The Love-Cheat Gene: One in Four Born to be Unfaithful” to “Scientists Reveal Brain Cells Devoted to Jennifer Aniston”. Partly, the reductionist worldview is gaining in prevalence because many of its claims are true: evolutionary theory is now firmly established, our genome is being deciphered and there are indisputable correlations between consciousness and brain activity. But a problem arises when scientists, policymakers or the media adopt this biological perspective in the search for simple solutions to complex problems, blaming the credit crunch, for example, on short-termism inherited from our primate ancestors. Some thinkers are, therefore, rebelling against the reductionist consensus.
Of course, those with a strongly religious perspective often reject it outright. But even secular thinkers are increasingly resisting its claim to be the whole truth. Although some go too far in their attacks – arguing wrongly, for example, that we have next to nothing to learn about ourselves from our evolutionary history – such critics are, nonetheless, right to point out that in accepting the reductionist view, we risk doing ourselves a dangerous disservice.
One of the most vocal is Raymond Tallis, philosopher, former professor of geriatric medicine and prolific writer. His latest book, Aping Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity, is an all-out assault on the exaggerated claims made on behalf of the biological sciences.
“Neuromania” is Tallis’s term for reducing all aspects of mind and behaviour to the firings of microscopic brain cells, whereas “Darwinitis” refers to the other strand of biological reductionism: explaining all aspects of our behaviour in terms of our evolutionary history and/or the genes that encode it. His attack on them both is two-fold: first, he criticises many of the specific experiments and hypotheses as hopelessly crude and, second, he argues that the reductionist project is anyway philosophically flawed.
His criticisms of reductionism in practice are frequently justified: one researcher, for example, claimed to have identified the brain centre for romantic love by showing subjects first a picture of “someone with whom they were in love”, then a picture of a mere friend and recording the difference in neural activity. This does seem naive – what it means to be besotted is not summed up by which neurons fire when we look at a photo. Indeed, when I look at a picture of my beloved I am more likely to wonder what she wants for dinner than to call to mind the fullness of our love.
But such experiments are lapped up by the media and are influential throughout the humanities, where evolutionary explanations for everything from free speech to fine art are increasingly fashionable. Tallis, who is at pains to point out that he agrees with Darwinism – he is not a closet creationist – is on strong ground when he argues that the crude application of biological reductionism will shed little light on how to reform the health service or how to read James Joyce’s Ulysses.
But his position becomes much shakier when it moves to the broader philosophy. He argues at length, for example, that the mind cannot even in principle be reduced to the workings of our brains alone. This is a respectable position, though one held by a minority of those paid to think about these questions. Most philosophers and scientists believe the opposite: that mind is just the product of certain brain activity, even if we do not currently know quite how. Tallis, therefore, does both the reader and these thinkers an injustice by describing his opponents’ position as “obviously” wrong and accusing them of “elementary” mistakes. His own inability to provide a convincing alternative account of the mind shows that this is a subject on which reasonable people can differ without stooping to insults.
But although much of the theoretical content of Aping Mankind is unconvincing – to which might be added that the book is twice as long as it needs to be and unpleasantly boorish in tone – it is, nonetheless, an important work. Tallis is right to point out that a fundamental shift in our self-perception is under way and frequently going too far. It is, however, possible to question these developments in a more measured way, as shown by Naked Genes: Reinventing the Human in the Molecular Age by Helga Nowotny, a leading sociologist of science, and the biologist Giuseppe Testa. Their book is a subtle and sophisticated analysis of how the life sciences are shifting our view of ourselves and the challenges this is posing.
The title stems from a simple but telling observation: that it is the role of the sciences to “make things visible that could not previously be seen”. Until very recently, we had no idea how heredity worked. Now, our genes are laid bare before us. When technology first makes some process visible, scientists attempt to isolate what they see, extracting it from its context so as to understand its nature better. The result, argue Nowotny and Testa, is that they tend initially to ascribe too much importance to these processes and underestimate other factors – in other words, reductionism.
This seems a perfect explanation of Tallis’s “Darwinitis” and “Neuromania”. Just as our genes are being laid bare, new technology is enabling us for the first time to peer inside the living brain. But in an attempt to understand what they are seeing, scientists give disproportionate weight to these fuzzy images. In time, however, a counter-movement will argue for seeing the newly revealed entities – genes or brain cells – in a broader context. Indeed, Tallis’s polemic in Aping Mankind can be seen as that counter-movement in action.
Nowotny and Testa explore various case studies where the revelations of biology are challenging our self-image: for example, in the debate around doping in sport. They argue that the distinction between the natural (the genes we are born with, good food and hard training) and the artificial (drugs, genetic engineering and prostheses) is a fiction, and unsustainable. Indeed, the idea of a “level playing field” for competitors is itself a fiction: some people are born with genes that make them better athletes. What is “level” about that? Genetic engineering, a technology frequently perceived as “unnatural”, could in theory level out just such an inequality.
But Nowotny and Testa do not offer answers to these questions – they simply explore them, exposing underlying tensions and ironies. Their main conclusion is that we need institutions that are flexible enough to cope both with further insights into our nature and also diverse and evolving public attitudes (citing the UK’s Human Fertilisation and Embryology Authority as an example). Such institutions should support citizens in making autonomous choices, they argue, in which case they are optimistic that new developments in science and technology can “empower the creative individual”.
Which is a view shared by Brian Christian in his excellent first book The Most Human Human: A Defence of Humanity in the Age of the Computer. The reductionist viewpoint suggests we are merely biological machines; if this is so, then all and any of our capacities should be achievable by other kinds of machine, such as computers. This is the opinion of many in the science and technology community, and a good number are setting out to prove it in the race to build the first truly intelligent machine.
The conventional test of whether a computer can think like a human is known as the Turing Test after the English computing pioneer Alan Turing, who created it in the 1950s. It is simply this: an assessor converses separately, usually via a remote terminal, with a human and with a machine. If the assessor can’t tell which is which, the machine has passed the test – something that has never yet happened. One annual setting of the Turing Test is called the Loebner Prize, after its sponsor, Hugh Loebner, and it provides the brilliant conceit for Christian’s book.
The set-up is this: Christian takes part in the Loebner Prize as one of the humans who will go up against the machines. If an assessor is fooled into believing that one of the computers is the human, this is tantamount to saying that the computer is more human-like than Christian himself. This is not a challenge that the author takes lying down: indeed, it is the launch pad for a fascinating explanation of what it means to be human and how Christian, in the face of stiff competition from the world’s best artificial intelligence, can prove that he is the genuine article.
Along the way, he explores ideas of authenticity, humour, spontaneity and originality. In one particularly insightful section, he notes that we can only be replaced by machines if we have first allowed ourselves to become like them. Once, for example, we have abandoned local contacts in favour of distant and homogenous call centres, staffed by workers given no room for responsibility or creativity, then it is only a matter of time before those workers, who are trained to act like robots, will be replaced by them.
All three books, different as they are, point to the same conclusion: that we need not allow ourselves to be reduced by these powerful new disciplines of genetics, neuroscience and computing. Instead, we can learn from them and assimilate them into a broader understanding of ourselves. We can, in fact, use them to become better at being human.
Stephen Cave is a writer and philosopher based in Berlin. His book, ‘Immortality’, is published next year by Random
Jonathan Carey did not die for lack of money.
New York State and the federal government provided $1.4 million annually per person to care for Jonathan and the other residents of the Oswald D. Heck Developmental Center, a warren of low-rise concrete and brick buildings near Albany.
Yet on a February afternoon in 2007, Jonathan, a skinny, autistic 13-year-old, was asphyxiated, slowly crushed to death in the back seat of a van by a state employee who had worked nearly 200 hours without a day off over 15 days. The employee, a ninth-grade dropout with a criminal conviction for selling marijuana, had been on duty during at least one previous episode of alleged abuse involving Jonathan.
“I could be a good king or a bad king,” he told the dying boy beneath him, according to court documents.
In the front seat of the van, the driver, another state worker at O. D. Heck, watched through the rear-view mirror but said little. He had been fired from four different private providers of services to the developmentally disabled before the state hired him to care for the same vulnerable population.
O. D. Heck is one of nine large institutions in New York that house the developmentally disabled, those with cerebral palsy, autism, Down syndrome and other conditions.
These institutions spend two and a half times as much money, per resident, as the thousands of smaller group homes that care for far more of the 135,000 developmentally disabled New Yorkers receiving services.
But the institutions are hardly a model: Those who run them have tolerated physical and psychological abuse, knowingly hired unqualified workers, ignored complaints by whistle-blowers and failed to credibly investigate cases of abuse and neglect, according to a review by The New York Times of thousands of state records and court documents, along with interviews of current and former employees.
Since 2005, seven of the institutions have failed inspections by the State Health Department, which oversees the safety and living conditions of the residents. One was shut down altogether this year.
While Jonathan Carey was at O. D. Heck, Health Department inspectors accused its management of routinely failing to investigate fractures and lacerations suffered by residents.
Similar problems can be found across the state. The Broome Developmental Center in Binghamton has been cited for repeatedly failing to protect residents from staff members. One employee there was merely reassigned after encouraging adolescent residents to fight one another.
Patterns of abuse appear embedded in the culture of the Sunmount Developmental Center in the Adirondacks. Last year, one supervisor was accused of four different episodes of physical and psychological abuse of residents within a span of two and a half months; another employee bragged on Facebook about “beating retards.”
The most damning accounts about the operations come from employees — thwarted whistle-blowers from around the state — and the beleaguered family members of residents.
Dozens of people with direct experience in the system echoed a central complaint about the Office for People With Developmental Disabilities: that the agency fails to take complaints seriously or curtail abuse of its residents.
“I’ve never seen any outfit run the way this place is,” said Jim Lynch, a direct-care worker in Brooklyn. “You report stuff, and then you get retaliated against. They want everything kept quiet. People that are outspoken attract the heat. I don’t know who to talk to when I see a problem. Nothing ever gets done.”
Paul Borer, a dietitian who works for the agency in the Hudson Valley, said he saw another employee punch a resident twice in the face in 2008, but little ever came of the many complaints he made about the episode, to his supervisors, to the commissioner of the agency at the time, Diana Jones Ritter, and to the office of Gov. David A. Paterson.
“You can see a person get hit, then you can go through three years of writing back and forth and nothing happens, so why even report it?” Mr. Borer said.
Mary Maioriello, who worked at O. D. Heck, reported seeing several cases of abuse, including the repeated beating of a resident with a stick that staff members called “the magic wand.”
Upset that her concerns were not sent to law enforcement, she confronted two of the agency’s top officials and secretly recorded the encounter, in which they sought to play down what she saw. After state officials learned of the existence of the tape, which Ms. Maioriello gave to The Times, the two officials were reassigned.
“The people at this place, the only way I can describe it is as a cult,” Ms. Maioriello said of O. D. Heck. “It should be shut down.”
Earlier this year, Gov. Andrew M. Cuomo forced the resignation of the commissioner of the Office for People With Developmental Disabilities after learning of the Times investigation, and said his administration would undertake a broad review of the state’s care of the developmentally disabled.
Indications are, however, that the agency is still struggling. Its new commissioner, Courtney Burke, is a well-regarded policy analyst but lacks management experience. She has taken over an agency with 23,000 employees; previously, she managed no more than seven. Mr. Cuomo has asked two veteran commissioners to review the agency’s practices, and Ms. Burke has taken some decisive steps, firing two top officials, and trying to establish more independent investigations.
Still, the pattern of secrecy at the agency has been hard to break; even after Ms. Burke’s ascension, it has battled in court to prevent the disclosure of patient records to Albany Law School, even though the school has a contract to monitor care of the disabled.
The institutions have survived a 40-year deinstitutionalization effort in part because officials have argued that they need a place to house the most frail or physically unruly residents. But there is also big money at stake. New York has been adept at securing large amounts of cash from Washington, earmarked for the institutions.
The federal and state governments now allocate more than $1.8 million annually for each of the roughly 1,300 residents remaining in the nine institutions, a number that has steadily risen from $1.4 million in 2007, when Jonathan Carey died.
That adds up to more than $2.5 billion a year, with about 60 percent coming from Washington.
But the money does not actually all go to the care of the residents in the institutions.
The state agency recently conceded that only about $600 million is being spent on the residents’ care — a still-generous allocation of nearly $430,000 per person — while the rest is redirected throughout the agency for use at group homes and care in other areas. The state’s redistribution of the Medicaid money earmarked for the institutions is currently the subject of a federal audit. The Cuomo administration has said it is moving to further de-emphasize institutional care and will close some of the nine facilities.
Jonathan Carey arrived at O. D. Heck on Oct. 7, 2005.
Two months later, unbeknown to Jonathan’s parents, Michael and Lisa Carey, the federal government barred the facility from accepting new residents financed by Medicaid for a year because of its chronic problems.
One inspection by the State Health Department found at least 18 serious injuries of residents in a five-month period, in a facility holding only 57 people. Eight of the injuries, including five fractures, were of unknown origin.
The Health Department concluded that investigating the high number of injuries was not a priority for O. D. Heck’s management.
“There was no evidence that the facility examined the nature of all reportable injuries systemically in an effort to prevent such injuries in the future,” inspectors wrote. O. D. Heck managers were supposed to complete initial investigations within five days of a serious injury, but often left the inquiries open for weeks or months, the department found.
Some workers were hardly fit for duty. One had a history of showing up intoxicated, according to depositions in a civil case brought by the Carey family against the state, but he was kept on the job until he was once so drunk at work that he was sent to a hospital. He was later made a groundskeeper.
Direct-care workers were often high school dropouts, some with criminal convictions. One lower-level supervisor had a petty larceny conviction. Edwin Tirado, the employee eventually convicted of manslaughter in Jonathan’s death, had been convicted of selling marijuana and, as a youthful offender, for firing a shotgun in his attic.
Nadeem Mall, a trainee at O. D. Heck who pleaded guilty to criminally negligent homicide in Jonathan’s death, was fired from four different private providers of services to the developmentally disabled, lasting less than a year at each of them, before he was hired by the state.
One employer had accused Mr. Mall of sleeping and watching television on the job. Another found him sleeping while a resident’s thumb was bleeding profusely. He was let go from a third job after being accused of calling 1-900 sex lines using a company cellphone, and from a fourth job after he inexplicably had a hairdresser cut off all the hair of a disabled woman in his care. Mr. Mall’s lawyer declined to comment.
With that background, he was hired by the state, listing his sister and his wife as references on his application. A state official recently said in a deposition that the Office for People With Developmental Disabilities knew Mr. Mall had lied on his application form, claiming his driver’s license had never been suspended when it actually had been shortly before his hiring.
“O. D. Heck failed at every single possible level,” said Ilann Maazel, Mr. Carey’s lawyer. “It was a disaster waiting to happen.”
There was little tangible oversight of employees and no restraint on overtime, which employees coveted to supplement salaries that started at less than $30,000 a year. Mr. Tirado was once allowed to work 84 straight days, and the former head of O. D. Heck acknowledged in a deposition that too much overtime had contributed to Jonathan’s death.
All of this was hidden from the families of O. D. Heck’s residents.
“If we had any clue that O. D. Heck was in this shape, do you think that we would have ever put Jonathan in there?” Jonathan’s father said.
Mr. Carey is a tall man with piercing blue eyes, who ran a used-car dealership before his son’s death. During a recent interview, Mr. Carey was surrounded by pictures of a grinning Jonathan, a contrast to his father’s crushing sadness.
Before Jonathan died, Michael, an evangelical Christian, would make regular missionary trips to Africa. He has largely given up his dealership, and now devotes his life mostly to advocacy for the developmentally disabled.
For the Careys, the journey to O. D. Heck was a last resort. Jonathan was born in 1993, the older of their two sons. When he was 19 months old, the Careys were told that he was mentally retarded, and when he was older that he was autistic — functionally a 2-year-old, his vocabulary limited to “daddy” and the phrase “Where you goin’?”
The Careys, who live near Albany, raised Jonathan until he was 9, but became worried that they could not teach their son basic living skills, like toilet training. They enrolled him at the Anderson Center for Autism, a privately run school in the Hudson Valley overseen by the state.
At first, the school seemed a good fit, until Jonathan, who was always thin, began losing weight. During one visit, an employee told the Careys to take home a duffel bag they had never used. They discovered a logbook inside the bag detailing startling changes to Jonathan’s treatment plan. Among other things, the school was withholding food from Jonathan to punish him for taking off his shirt at inappropriate times.
“They literally planned to withhold my son’s meals,” Mr. Carey said. “And when that was not working, then they began to seclude him in his bedroom for an extended period of time. He missed eight full days of school.”
Soon afterward, the Careys removed their son from Anderson, and cared for him at home for the next year. But now there were tantrums for no apparent reason. A doctor later told the Careys their son was suffering from post-traumatic stress disorder.
He became harder to contain. He was tall enough to jump their fence and had no sense of keeping himself safe. About a year after he came home, Jonathan had what his father called “a full emotional meltdown,” and the Careys took him to a local hospital, where he was essentially knocked out with a drug cocktail and tied to his hospital bed.
Death in a Van
Running out of options, the Careys were directed to O. D. Heck, and they hoped that an institution run by the state would be more promising than the Anderson school.
But on Oct. 29, 2005, just a couple of weeks after Jonathan was enrolled, the Careys arrived to find their son’s nose so swollen that they took him to the hospital. None of the staff members claimed to know what had happened, and they speculated that it had occurred during a dental procedure. Another time, Jonathan was taken to the hospital with a black eye and a broken nose. That time, the staff suggested that Jonathan might have fallen out of bed.
On a third occasion, Jonathan was taken to the hospital with severe bruising on both sides of his face.
“They basically told us that Jonathan had fallen out of a rocking chair and hit his head on a table, and I said, ‘Absolutely not,’ ” Lisa Carey said.
In a recent deposition, a lower-level supervisor at O. D. Heck, Tedra Hamilton, recalled the third episode, saying Jonathan “had bruises everywhere.”
“It looked bad to me,” she added. “It scared me.”
Edwin Tirado had been one of two employees on duty right before the bruises were discovered; Mr. Tirado invoked his Fifth Amendment rights and declined to speak during a recent deposition when asked about prior abuse cases involving Jonathan.
The situation came to a head on Feb. 15, 2007. Mr. Tirado and Mr. Mall took Jonathan and another resident on an outing. Mr. Tirado had worked 197 hours over a 15-day period and was so exhausted that he let Mr. Mall drive, fearing he would fall asleep.
Mr. Mall first drove to his bank, leaving Mr. Tirado in the van with Jonathan and the other resident. While they were waiting, Jonathan got up from his seat. Mr. Tirado went to the back of the van and began to restrain Jonathan, trying to subdue him. Mr. Mall and the other resident, identified in court documents by his initials, E. C., later said that Mr. Tirado sat on Jonathan, who was face down, his legs flailing.
When Mr. Mall returned to the van several minutes later, Mr. Tirado declined an offer for help.
Mr. Tirado restrained Jonathan for about 15 minutes, continuing as the group drove to a gas station. Mr. Mall said he heard Mr. Tirado tell the boy, “I could be a good king or a bad king.” Mr. Tirado denied making the remark, but another employee had heard him make a similar comment before, according to court documents.
E. C., watching with apparent concern from the front of the van, told Mr. Tirado, “Get off of him,” and “Let him breathe,” according to Mr. Mall.
When they got to the gas station, Mr. Mall went inside to buy some drinks, including a Snapple iced tea for Mr. Tirado. Mr. Mall has testified that when he returned to the van, Mr. Tirado told him that Jonathan had stopped breathing and the two panicked.
Mr. Tirado has changed his story over time. In a re-enactment videotaped by the police soon after the death, he said that at the gas station, he realized Jonathan had stopped breathing.
“I just froze,” he said, adding that he was afraid of “losing my job and going to jail.” Mr. Tirado has since recanted, saying he had believed that Jonathan had gone to sleep.
Regardless, the two men drove around for more than an hour with a suddenly silent boy in the back without checking on him or calling 911. They went to a video game store, where Mr. Tirado bought a special bag for his PlayStation, then to Mr. Tirado’s house, where they smoked and chatted with a neighbor, and eventually back to O. D. Heck.
An autopsy found the cause of death to be compressive asphyxia — basically, so much pressure was put on Jonathan’s chest that he could not get enough oxygen into his lungs.
Mr. Carey and his wife were together when they got the call.
“I just lost it,” Mr. Carey said. “My wife was yelling and screaming ‘What happened? What happened?’ I just couldn’t even, I don’t even think I could communicate well. And she finally said, ‘Which one?’ She realized something had happened to one of the boys. And I said ‘Jonathan.’ And we literally both fell under the weight of the grief, collapsed to the sidewalk, just uncontrollably weeping. It’s hard to explain the pain and the trauma that one experiences getting that kind of news. You’re in a cloud. It’s like you don’t even know what’s going on around you.”
‘Here’s to Beating Retards’
Employment standards are low at the Office for People With Developmental Disabilities. Not only were people with criminal convictions hired, but since 2006, some 125 workers who were fired from jobs there were rehired — a practice that agency officials said they would move to halt after The Times questioned them about it.
A recent case at the Monroe Developmental Center in Rochester, which failed a December inspection by the Health Department, highlighted the lax practices. Inspectors found that an allegation of physical abuse was substantiated against an employee who yelled at a resident, lunged toward him and “pushed him into the wall.”
Inspectors discovered that the same employee had previously been fired in 2007, after being involved in a case of misconduct and for threatening a supervisor. The employee also had been convicted of criminal mischief, a misdemeanor, not related to her job. In her personnel file, there were “do not rehire” recommendations from “numerous supervisors and administrators at the facility when she was terminated in 2007,” inspectors found. And yet she was hired again.
Ms. Burke, the agency’s commissioner, said in a statement that despite the past practice, she “will do everything in my power to not allow the rehiring of employees who have been previously terminated.”
The Sunmount Developmental Center in the Adirondacks, a repository for residents deemed more challenging, also failed an inspection last year. A supervisor accused of four episodes of abuse of residents continued to have contact with them even as the investigations took place, inspectors found.
Inspectors also found that a resident had claimed that a caretaker had called him a “retard” and threatened to have another resident “beat him up.” When the resident was indeed assaulted by the second resident, inspectors found that Sunmount officials did not investigate whether the employee had instigated the fight. In another episode, an employee dumped ketchup, salt and pepper on the head of a resident during dinner. The agency’s response was to transfer the employee to another unit.
Around the same time, one Sunmount resident, Eddie Adkins, was set upon by several staff members after he grew upset that he was not allowed to go to the bathroom, according to an internal report provided to The Times by Mr. Adkins’s family, who were able to get the report because of a disclosure law passed in the wake of Jonathan Carey’s death.
A deaf resident told state investigators that he saw four state employees punching Mr. Adkins while he was sitting on a couch — “I did not like that,” he told investigators, adding that he was so disturbed that he turned his hearing aid off during the melee.
Mr. Adkins’s case underscores the difficulty of this work: While many residents are defenseless — children like Jonathan Carey, or those with cerebral palsy or other debilitating diseases — Mr. Adkins stands 6 feet 5, and his weight has topped 300 pounds. He is autistic and bipolar, and has a history of biting himself and his caregivers and has been jailed for doing so.
But his caretakers can also be violent. The internal report found that Mr. Adkins’s “left eye was swollen and there was bruising under the left eye.”
“There was a large vertical abrasion to Mr. Adkins’s outer left eye,” the report continued, “and a small abrasion on the left side, inner corner of Mr. Adkins’s nose near his eye. There was a small linear abrasion on the outer corner of Mr. Adkins’s right eye.”
An agency spokeswoman said she could not comment on specific cases, citing confidentiality rights of residents.
After the attack, five staff members were placed on administrative leave. One of them wrote in a Facebook posting: “im on administration vacation as well,” adding, “cheers brother here’s to beating retards.”
State officials have said they took a number of steps to clean up O. D. Heck after Jonathan’s death.
Those included increasing the number of clinical staff members and direct-care workers and putting more emphasis on teaching residents skills that will help them move to small group homes, the agency said.
But Mary Maioriello, an employee at O. D. Heck until she resigned this year, said a culture of abuse continued. Ms. Maioriello was hired as a trainee last year, and witnessed several disturbing episodes. In one case, two employees played a game they called “Fetch,” throwing French fries on the floor and laughing as one resident dived to get them, while another jumped out of his recliner and a third ate them off the floor.
Ms. Maioriello was a 24-year-old trainee at the time. She was horrified, but also intimidated.
“When I first started working there, I was told, ‘Keep your eyes open and your mouth shut and you’ll do just fine here,’ ” she recalled in an interview. “It was kind of like a code that you just didn’t turn anything in. A word that they used a lot was a ‘snitch.’ That’s what it felt like to me, like I was in some kind of gang or cult.”
Ms. Maioriello told her mother what she had seen; her mother told a friend who knew someone at the agency. When Ms. Maioriello was brought in for questioning, she went further, telling her supervisors about several other episodes she had witnessed.
The most serious involved a blue wooden stick stashed in a cabinet drawer in a common room. One supervisor, Ms. Maioriello wrote, called the stick the “magic wand,” and it was used to repeatedly beat a resident whom Ms. Maioriello described as nonverbal and weighing less than 90 pounds.
Ms. Maioriello told management that she had seen three employees, including the supervisor, hitting a resident with the stick at different times. The same resident was confined by employees to a gym mat, and if he stepped off it, he was hit with the stick, snapped with a towel or had his hands stepped on. Employees also appeared to enjoy taunting residents; two workers told one resident they were going to knock over his ceramic frog collection — they called the game “Kick the Frog.”
For Ms. Maioriello, it was a painful experience. She said she had chosen to work with the disabled because her 3-year-old son has developmental problems. She aspires to being a nurse and has been a vigorous advocate for her son. But at O. D. Heck, she felt stuck between her need for a job and her determination to speak up about the behavior.
“I just thought, oh my God, what is wrong with these people?” Ms. Maioriello said of the other employees, adding: “I spoke to my mother, I spoke to some friends, I was telling them, you know, these terrible things. Should I quit? I’m a single mother, I need a paycheck. I don’t know what to do. I’m scared of retaliation. And then once I finally turned it in, I feel like it fell on deaf ears.”
A second state employee who worked at O. D. Heck corroborated much of Ms. Maioriello’s account, but asked not to be identified for fear of being fired.
“There’s abuse going on all the time,” the employee said. “They don’t report anything. They hide everything and cover for each other.”
This employee also saw the resident who was restricted to a gym mat. “I saw them shove socks in his mouth, they shake keys in front of him, they treated him like an animal,” the employee said.
Little resulted from Ms. Maioriello’s reports to management. Her co-workers at first blamed someone else for reporting them, and the word “snitch” was spray-painted on that person’s car. Ms. Maioriello went on leave and resigned in March, threatening to go to the news media before she left.
The Official Response
It was then that Kate Bishop, who supervises O. D. Heck and group homes in a nine-county region stretching from Albany into the Adirondacks, met with Ms. Maioriello.
In an emotional hourlong encounter that she secretly recorded, Ms. Maioriello challenged Ms. Bishop and Andrew Morgese, the agency’s head of internal affairs, who was also present, reminding them that she had reported that a resident was being regularly beaten with a stick. She asked why the matter had not been reported to law enforcement. “Were the police notified?” Ms. Maioriello asked, according to the tape, which was provided to The Times. “Because it was an assault. That is the law, that the police are to be notified when an individual is assaulted. Were they notified?”
“Well,” Ms. Bishop said, “in the original report that you made, it didn’t appear to rise to the level of …”
“Hitting someone with a stick?” Ms. Maioriello asked.
“In the initial manner described …” Ms. Bishop responded.
“Really?” Ms. Maioriello said. “So what’s the severity that you have to make an assault?”
Later in the conversation, Ms. Maioriello again asked Ms. Bishop, “Is it an assault to hit him with a stick?”
Ms. Bishop replied, “Not seeing it, I couldn’t answer that question.”
She put the same question to Mr. Morgese.
“Shift after shift after shift, he was hit with this stick by several employees,” she said. “Is that an assault?”
Mr. Morgese replied, “I don’t think I can answer that question.”
At one point during the exchange, Mr. Morgese suggested that it was the responsibility of Ms. Maioriello, a trainee, to report the cases to law enforcement, even though management had been made aware of them.
“I’m not trying to turn this around,” he said, “but if you’re indicating that you believe you witnessed the assault of an individual who was being repeatedly hit with a stick, any one of our employees has not only an opportunity to report, they have a duty to report, they have a duty to intervene on behalf of that individual. If they can’t intervene safely on behalf of that individual, if he’s being assaulted, they have a duty to notify law enforcement.”
Shortly after the meeting, Ms. Maioriello reported the matter to the Niskayuna Police Department. An officer who met with her said he was not sure how to respond to such episodes inside a state facility, but the department has since contacted her seeking more information.
The Times asked the Office for People With Developmental Disabilities why Ms. Bishop and Mr. Morgese could not say what an assault was and why Ms. Maioriello’s supervisors had not forwarded her allegations to law enforcement.
The state disputed the framing of the question.
“Your characterization of these exchanges is not consistent with our understanding of the facts regarding those conversations,” an agency spokeswoman said, adding, “Without question, it is the agency policy that if a staff person hit an individual with a stick, law enforcement should be notified.”
The state was subsequently informed by The Times that a tape existed of the encounter, and shortly thereafter both Ms. Bishop and Mr. Morgese were removed from their positions. Ms. Bishop was reassigned to the central office, and Mr. Morgese was demoted and sent to a regional office.
Mr. Morgese, through the agency, declined to comment. In a brief statement, Ms. Bishop said she was inspired to get into the field by a developmentally disabled sister.
“I believe that I provided the highest-quality leadership,” she said, “always guided by respect and dignity for the people we are honored to serve.”
On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called “I’ve Got a Secret.” He was introduced by the host, Steve Allen; then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact, and the panelists — they included a comedian and a former Miss America — had to guess what it was.
On the show (see the clip on YouTube), the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil got $200.
Kurzweil then demonstrated the computer, which he built himself — a desk-size affair with loudly clacking relays, hooked up to a typewriter. The panelists were pretty blasé about it; they were more impressed by Kurzweil’s age than by anything he’d actually done. They were ready to move on to Mrs. Chester Loney of Rough and Ready, Calif., whose secret was that she’d been President Lyndon Johnson’s first-grade teacher.
But Kurzweil would spend much of the rest of his career working out what his demonstration meant. Creating a work of art is one of those activities we reserve for humans and humans only. It’s an act of self-expression; you’re not supposed to be able to do it if you don’t have a self. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence.
That was Kurzweil’s real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we’re approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity — our bodies, our minds, our civilization — will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.
Computers are getting faster. Everybody knows that. Also, computers are getting faster faster — that is, the rate at which they’re getting faster is increasing.
So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.
If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there’s no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn’t even take breaks to play Farmville.
Probably. It’s impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you’d be as smart as they would be. But there are a lot of theories about it. Maybe we’ll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we’ll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity.
The difficult thing to keep sight of when you’re talking about the Singularity is that even though it sounds like science fiction, it isn’t, no more than a weather forecast is science fiction. It’s not a fringe idea; it’s a serious hypothesis about the future of life on Earth. There’s an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it’s an idea that rewards sober, careful evaluation.
People are spending a lot of money trying to understand it. The three-year-old Singularity University, which offers interdisciplinary courses of study for graduate students and executives, is hosted by NASA. Google was a founding sponsor; its CEO and co-founder Larry Page spoke there last year. People are attracted to the Singularity for the shock value, like an intellectual freak show, but they stay because there’s more to it than they expected. And of course, in the event that it turns out to be real, it will be the most important thing to happen to human beings since the invention of language.
The Singularity isn’t a wholly new idea, just newish. In 1965 the British mathematician I.J. Good described something he called an “intelligence explosion”:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
The word singularity is borrowed from astrophysics: it refers to a point in space-time — for example, inside a black hole — at which the rules of ordinary physics do not apply. In the 1980s the science-fiction novelist Vernor Vinge attached it to Good’s intelligence-explosion scenario. At a NASA symposium in 1993, Vinge announced that “within 30 years, we will have the technological means to create super-human intelligence. Shortly after, the human era will be ended.”
By that time Kurzweil was thinking about the Singularity too. He’d been busy since his appearance on “I’ve Got a Secret.” He’d made several fortunes as an engineer and inventor; he founded and then sold his first software company while he was still at MIT. He went on to build the first print-to-speech reading machine for the blind — Stevie Wonder was customer No. 1 — and made innovations in a range of technical fields, including music synthesizers and speech recognition. He holds 39 patents and 19 honorary doctorates. In 1999 President Bill Clinton awarded him the National Medal of Technology.
But Kurzweil was also pursuing a parallel career as a futurist: he has been publishing his thoughts about the future of human and machine-kind for 20 years, most recently in “The Singularity Is Near,” which was a best seller when it came out in 2005. A documentary by the same name, starring Kurzweil, Tony Robbins and Alan Dershowitz, among others, was released in January. (Kurzweil is actually the subject of two current documentaries. The other one, less authorized but more informative, is called “The Transcendent Man.”) Bill Gates has called him “the best person I know at predicting the future of artificial intelligence.”
In real life, the transcendent man is an unimposing figure who could pass for Woody Allen’s even nerdier younger brother. Kurzweil grew up in Queens, N.Y., and you can still hear a trace of it in his voice. Now 62, he speaks with the soft, almost hypnotic calm of someone who gives 60 public lectures a year. As the Singularity’s most visible champion, he has heard all the questions and faced down the incredulity many, many times before. He’s good-natured about it. His manner is almost apologetic: I wish I could bring you less exciting news of the future, but I’ve looked at the numbers, and this is what they say, so what else can I tell you?
Kurzweil’s interest in humanity’s cyborganic destiny began about 1980 largely as a practical matter. He needed ways to measure and track the pace of technological progress. Even great inventions can fail if they arrive before their time, and he wanted to make sure that when he released his, the timing was right. “Even at that time, technology was moving quickly enough that the world was going to be different by the time you finished a project,” he says. “So it’s like skeet shooting — you can’t shoot at the target.” He knew about Moore’s law, of course, which states that the number of transistors you can put on a microchip doubles about every two years. It’s a surprisingly reliable rule of thumb. Kurzweil tried plotting a slightly different curve: the change over time in the amount of computing power, measured in MIPS (millions of instructions per second), that you can buy for $1,000.
As it turned out, Kurzweil’s numbers looked a lot like Moore’s. They doubled every couple of years. Drawn as graphs, they both made exponential curves, with their value increasing by multiples of two instead of by regular increments in a straight line. The curves held eerily steady, even when Kurzweil extended his backward through the decades of pretransistor computing technologies like relays and vacuum tubes, all the way back to 1900.
Kurzweil then ran the numbers on a whole bunch of other key technological indexes — the falling cost of manufacturing transistors, the rising clock speed of microprocessors, the plummeting price of dynamic RAM. He looked even further afield at trends in biotech and beyond — the falling cost of sequencing DNA and of wireless data service and the rising numbers of Internet hosts and nanotechnology patents. He kept finding the same thing: exponentially accelerating progress. “It’s really amazing how smooth these trajectories are,” he says. “Through thick and thin, war and peace, boom times and recessions.” Kurzweil calls it the law of accelerating returns: technological progress happens exponentially, not linearly.
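The law of accelerating returns is easy to state in code. The sketch below is purely illustrative: the 1900 baseline value, the fixed two-year doubling period, and the function name are all assumptions made for the example, not Kurzweil’s measured data.

```python
def price_performance(year, base_year=1900, base_mips_per_kusd=1e-9,
                      doubling_years=2.0):
    """MIPS you can buy for $1,000, assuming a constant doubling period."""
    doublings = (year - base_year) / doubling_years
    return base_mips_per_kusd * 2 ** doublings

# Exponential growth means each fixed interval multiplies the value,
# so the ratio between any two decades is constant even as the
# absolute gains explode.
ratio = price_performance(1990) / price_performance(1980)
print(ratio)  # 32.0 -- five doublings per decade at a two-year doubling time
```

Drawn on a log scale, such a curve is a straight line, which is why plots like Kurzweil’s can hold “eerily steady” across a century of very different hardware.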
Then he extended the curves into the future, and the growth they predicted was so phenomenal, it created cognitive resistance in his mind. Exponential curves start slowly, then rocket skyward toward infinity. According to Kurzweil, we’re not evolved to think in terms of exponential growth. “It’s not intuitive. Our built-in predictors are linear. When we’re trying to avoid an animal, we pick the linear prediction of where it’s going to be in 20 seconds and what to do about it. That is actually hardwired in our brains.”
Here’s what the exponential curves told him. We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence. Kurzweil puts the date of the Singularity — never say he’s not conservative — at 2045. In that year, he estimates, given the vast increases in computing power and the vast reductions in the cost of same, the quantity of artificial intelligence created will be about a billion times the sum of all the human intelligence that exists today.
The Singularity isn’t just an idea; it attracts people, and those people feel a bond with one another. Together they form a movement, a subculture; Kurzweil calls it a community. Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as Singularitarians.
Not all of them are Kurzweilians, not by a long chalk. There’s room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won’t happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you’re walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen’s distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.
In addition to the Singularity University, which Kurzweil co-founded, there’s also a Singularity Institute for Artificial Intelligence, based in San Francisco. It counts among its advisers Peter Thiel, a former CEO of PayPal and an early investor in Facebook. The institute holds an annual conference called the Singularity Summit. (Kurzweil co-founded that too.) Because of the highly interdisciplinary nature of Singularity theory, it attracts a diverse crowd. Artificial intelligence is the main event, but the sessions also cover the galloping progress of, among other fields, genetics and nanotechnology.
At the 2010 summit, which took place in August in San Francisco, there were not just computer scientists but also psychologists, neuroscientists, nanotechnologists, molecular biologists, a specialist in wearable computers, a professor of emergency medicine, an expert on cognition in gray parrots and the professional magician and debunker James “the Amazing” Randi. The atmosphere was a curious blend of Davos and UFO convention. Proponents of seasteading — the practice, so far mostly theoretical, of establishing politically autonomous floating communities in international waters — handed out pamphlets. An android chatted with visitors in one corner.
After artificial intelligence, the most talked-about topic at the 2010 summit was life extension. Biological boundaries that most people think of as permanent and inevitable Singularitarians see as merely intractable but solvable problems. Death is one of them. Old age is an illness like any other, and what do you do with illnesses? You cure them. Like a lot of Singularitarian ideas, it sounds funny at first, but the closer you get to it, the less funny it seems. It’s not just wishful thinking; there’s actual science going on here.
For example, it’s well known that one cause of the physical degeneration associated with aging involves telomeres, which are segments of DNA found at the ends of chromosomes. Every time a cell divides, its telomeres get shorter, and once a cell runs out of telomeres, it can’t reproduce anymore and dies. But there’s an enzyme called telomerase that reverses this process; it’s one of the reasons cancer cells live so long. So why not treat regular non-cancerous cells with telomerase? In November, researchers at Harvard Medical School announced in Nature that they had done just that. They administered telomerase to a group of mice suffering from age-related degeneration. The damage went away. The mice didn’t just get better; they got younger.
Aubrey de Grey is one of the world’s best-known life-extension researchers and a Singularity Summit veteran. A British biologist with a doctorate from Cambridge and a famously formidable beard, de Grey runs a foundation called SENS, or Strategies for Engineered Negligible Senescence. He views aging as a process of accumulating damage, which he has divided into seven categories, each of which he hopes to one day address using regenerative medicine. “People have begun to realize that the view of aging being something immutable — rather like the heat death of the universe — is simply ridiculous,” he says. “It’s just childish. The human body is a machine that has a bunch of functions, and it accumulates various types of damage as a side effect of the normal function of the machine. Therefore in principle that damage can be repaired periodically. This is why we have vintage cars. It’s really just a matter of paying attention. The whole of medicine consists of messing about with what looks pretty inevitable until you figure out how to make it not inevitable.”
Kurzweil takes life extension seriously too. His father, with whom he was very close, died of heart disease at 58. Kurzweil inherited his father’s genetic predisposition; he also developed Type 2 diabetes when he was 35. Working with Terry Grossman, a doctor who specializes in longevity medicine, Kurzweil has published two books on his own approach to life extension, which involves taking up to 200 pills and supplements a day. He says his diabetes is essentially cured, and although he’s 62 years old from a chronological perspective, he estimates that his biological age is about 20 years younger.
But his goal differs slightly from de Grey’s. For Kurzweil, it’s not so much about staying healthy as long as possible; it’s about staying alive until the Singularity. It’s an attempted handoff. Once hyper-intelligent artificial intelligences arise, armed with advanced nanotechnology, they’ll really be able to wrestle with the vastly complex, systemic problems associated with aging in humans. Alternatively, by then we’ll be able to transfer our minds to sturdier vessels such as computers and robots. He and many other Singularitarians take seriously the proposition that many people who are alive today will wind up being functionally immortal.
It’s an idea that’s radical and ancient at the same time. In “Sailing to Byzantium,” W.B. Yeats describes mankind’s fleshly predicament as a soul fastened to a dying animal. Why not unfasten it and fasten it to an immortal robot instead? But Kurzweil finds that life extension produces even more resistance in his audiences than his exponential growth curves. “There are people who can accept computers being more intelligent than people,” he says. “But the idea of significant changes to human longevity — that seems to be particularly controversial. People invested a lot of personal effort into certain philosophies dealing with the issue of life and death. I mean, that’s the major reason we have religion.”
Of course, a lot of people think the Singularity is nonsense — a fantasy, wishful thinking, a Silicon Valley version of the Evangelical story of the Rapture, spun by a man who earns his living making outrageous claims and backing them up with pseudoscience. Most of the serious critics focus on the question of whether a computer can truly become intelligent.
The entire field of artificial intelligence, or AI, is devoted to this question. But AI doesn’t currently produce the kind of intelligence we associate with humans or even with talking computers in movies — HAL or C3PO or Data. Actual AIs tend to be able to master only one highly specific domain, like interpreting search queries or playing chess. They operate within an extremely specific frame of reference. They don’t make conversation at parties. They’re intelligent, but only if you define intelligence in a vanishingly narrow way. The kind of intelligence Kurzweil is talking about, which is called strong AI or artificial general intelligence, doesn’t exist yet.
Why not? Obviously we’re still waiting on all that exponentially growing computing power to get here. But it’s also possible that there are things going on in our brains that can’t be duplicated electronically no matter how many MIPS you throw at them. The neurochemical architecture that generates the ephemeral chaos we know as human consciousness may just be too complex and analog to replicate in digital silicon. The biologist Dennis Bray was one of the few voices of dissent at last summer’s Singularity Summit. “Although biological components act in ways that are comparable to those in electronic circuits,” he argued, in a talk titled “What Cells Can Do That Robots Can’t,” “they are set apart by the huge number of different states they can adopt. Multiple biochemical processes create chemical modifications of protein molecules, further diversified by association with distinct structures at defined locations of a cell. The resulting combinatorial explosion of states endows living systems with an almost infinite capacity to store information regarding past and present conditions and a unique capacity to prepare for future events.” That makes the ones and zeros that computers trade in look pretty crude.
Underlying the practical challenges are a host of philosophical ones. Suppose we did create a computer that talked and acted in a way that was indistinguishable from a human being — in other words, a computer that could pass the Turing test. (Very loosely speaking, such a computer would be able to pass as human in a blind test.) Would that mean that the computer was sentient, the way a human being is? Or would it just be an extremely sophisticated but essentially mechanical automaton without the mysterious spark of consciousness — a machine with no ghost in it? And how would we know?
Even if you grant that the Singularity is plausible, you’re still staring at a thicket of unanswerable questions. If I can scan my consciousness into a computer, am I still me? What are the geopolitics and the socioeconomics of the Singularity? Who decides who gets to be immortal? Who draws the line between sentient and nonsentient? And as we approach immortality, omniscience and omnipotence, will our lives still have meaning? By beating death, will we have lost our essential humanity?
Kurzweil admits that there’s a fundamental level of risk associated with the Singularity that’s impossible to refine away, simply because we don’t know what a highly advanced artificial intelligence, finding itself a newly created inhabitant of the planet Earth, would choose to do. It might not feel like competing with us for resources. One of the goals of the Singularity Institute is to make sure not just that artificial intelligence develops but also that the AI is friendly. You don’t have to be a super-intelligent cyborg to understand that introducing a superior life-form into your own biosphere is a basic Darwinian error.
If the Singularity is coming, these questions are going to get answers whether we like it or not, and Kurzweil thinks that trying to put off the Singularity by banning technologies is not only impossible but also unethical and probably dangerous. “It would require a totalitarian system to implement such a ban,” he says. “It wouldn’t work. It would just drive these technologies underground, where the responsible scientists who we’re counting on to create the defenses would not have easy access to the tools.”
Kurzweil is an almost inhumanly patient and thorough debater. He relishes it. He’s tireless in hunting down his critics so that he can respond to them, point by point, carefully and in detail.
Take the question of whether computers can replicate the biochemical complexity of an organic brain. Kurzweil yields no ground there whatsoever. He does not see any fundamental difference between flesh and silicon that would prevent the latter from thinking. He defies biologists to come up with a neurological mechanism that could not be modeled or at least matched in power and flexibility by software running on a computer. He refuses to fall on his knees before the mystery of the human brain. “Generally speaking,” he says, “the core of a disagreement I’ll have with a critic is, they’ll say, Oh, Kurzweil is underestimating the complexity of reverse-engineering of the human brain or the complexity of biology. But I don’t believe I’m underestimating the challenge. I think they’re underestimating the power of exponential growth.”
This position doesn’t make Kurzweil an outlier, at least among Singularitarians. Plenty of people make more-extreme predictions. Since 2005 the neuroscientist Henry Markram has been running an ambitious initiative at the Brain Mind Institute of the Ecole Polytechnique in Lausanne, Switzerland. It’s called the Blue Brain project, and it’s an attempt to create a neuron-by-neuron simulation of a mammalian brain, using IBM’s Blue Gene super-computer. So far, Markram’s team has managed to simulate one neocortical column from a rat’s brain, which contains about 10,000 neurons. Markram has said that he hopes to have a complete virtual human brain up and running in 10 years. (Even Kurzweil sniffs at this. If it worked, he points out, you’d then have to educate the brain, and who knows how long that would take?)
By definition, the future beyond the Singularity is not knowable by our linear, chemical, animal brains, but Kurzweil is teeming with theories about it. He positively flogs himself to think bigger and bigger; you can see him kicking against the confines of his aging organic hardware. “When people look at the implications of ongoing exponential growth, it gets harder and harder to accept,” he says. “So you get people who really accept, yes, things are progressing exponentially, but they fall off the horse at some point because the implications are too fantastic. I’ve tried to push myself to really look.”
In Kurzweil’s future, biotechnology and nanotechnology give us the power to manipulate our bodies and the world around us at will, at the molecular level. Progress hyperaccelerates, and every hour brings a century’s worth of scientific breakthroughs. We ditch Darwin and take charge of our own evolution. The human genome becomes just so much code to be bug-tested and optimized and, if necessary, rewritten. Indefinite life extension becomes a reality; people die only if they choose to. Death loses its sting once and for all. Kurzweil hopes to bring his dead father back to life.
We can scan our consciousnesses into computers and enter a virtual existence or swap our bodies for immortal robots and light out for the edges of space as intergalactic godlings. Within a matter of centuries, human intelligence will have re-engineered and saturated all the matter in the universe. This is, Kurzweil believes, our destiny as a species.
Or it isn’t. When the big questions get answered, a lot of the action will happen where no one can see it, deep inside the black silicon brains of the computers, which will either bloom bit by bit into conscious minds or just continue in ever more brilliant and powerful iterations of nonsentience.
But as for the minor questions, they’re already being decided all around us and in plain sight. The more you read about the Singularity, the more you start to see it peeking out at you, coyly, from unexpected directions. Five years ago we didn’t have 600 million humans carrying out their social lives over a single electronic network. Now we have Facebook. Five years ago you didn’t see people double-checking what they were saying and where they were going, even as they were saying it and going there, using handheld network-enabled digital prosthetics. Now we have iPhones. Is it an unimaginable step to take the iPhones out of our hands and put them into our skulls?
Already 30,000 patients with Parkinson’s disease have neural implants. Google is experimenting with computers that can drive cars. There are more than 2,000 robots fighting in Afghanistan alongside the human troops. This month a game show will once again figure in the history of artificial intelligence, but this time the computer will be the guest: an IBM super-computer nicknamed Watson will compete on Jeopardy! Watson runs on 90 servers and takes up an entire room, and in a practice match in January it finished ahead of two former champions, Ken Jennings and Brad Rutter. It got every question it answered right, but much more important, it didn’t need help understanding the questions (or, strictly speaking, the answers), which were phrased in plain English. Watson isn’t strong AI, but if strong AI happens, it will arrive gradually, bit by bit, and this will have been one of the bits.
A hundred years from now, Kurzweil and de Grey and the others could be the 22nd century’s answer to the Founding Fathers — except unlike the Founding Fathers, they’ll still be alive to get credit — or their ideas could look as hilariously retro and dated as Disney’s Tomorrowland. Nothing gets old as fast as the future.
But even if they’re dead wrong about the future, they’re right about the present. They’re taking the long view and looking at the big picture. You may reject every specific article of the Singularitarian charter, but you should admire Kurzweil for taking the future seriously. Singularitarianism is grounded in the idea that change is real and that humanity is in charge of its own fate and that history might not be as simple as one damn thing after another. Kurzweil likes to point out that your average cell phone is about a millionth the size of, a millionth the price of and a thousand times more powerful than the computer he had at MIT 40 years ago. Flip that forward 40 years and what does the world look like? If you really want to figure that out, you have to think very, very far outside the box. Or maybe you have to think further inside it than anyone ever has before.
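Kurzweil’s cell-phone comparison is itself a small exercise in exponential arithmetic. The three factors below come straight from his claim; compounding them forward another 40 years is an illustrative reading, not his precise forecast:

```python
# Kurzweil's comparison: today's phone vs. his MIT computer of 40 years ago.
size_factor = 1e-6    # a millionth the size
price_factor = 1e-6   # a millionth the price
power_factor = 1e3    # a thousand times more powerful

# Computing power per dollar improved by power divided by price:
price_performance_gain = power_factor / price_factor
print(f"{price_performance_gain:.0e}")  # prints 1e+09 -- a billionfold in 40 years

# "Flip that forward 40 years": the same multiplier applied once more.
print(f"{price_performance_gain ** 2:.0e}")
```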
Dave Asprey, who says that he has ‘rewired’ his brain through body hacking
Michael Galpert rolls over in bed in his New York apartment, the alarm clock still chiming. The 28-year-old internet entrepreneur slips off the headband that’s been recording his brainwaves all night and studies the bar graph of his deep sleep, light sleep and REM. He strides to the bathroom and steps on his digital scale, the one that shoots his weight and body mass to an online data file. Before he eats his scrambled egg whites with spinach, he takes a picture of his plate with his mobile phone, which then logs the calories. He sets his mileage tracker before he hops on his bike and rides to the office, where a different set of data spreadsheets awaits.
“Running a start-up, I’m always looking at numbers, always tracking how business is going,” he says. Page views, clicks and downloads, he tallies it all. “That’s under-the-hood information that you can only garner from analysing different data points. So I started doing that with myself.”
His weight, exercise habits, caloric intake, sleep patterns – they’re all quantified and graphed like a quarterly revenue statement. And just as a business trims costs when profits dip, Galpert makes decisions about his day based on his personal analytics: too many calories coming from carbs? Say no to rice and bread at lunchtime. Not enough REM sleep? Reschedule that important business meeting for tomorrow.
The founder of his own online company, Galpert is one of a growing number of “self-quantifiers”. Moving in the technology circles of New York and Silicon Valley, engineers and entrepreneurs have begun applying a tenet of the computer business to their personal health: “One cannot change or control that which one cannot measure.”
Much as an engineer will analyse data and tweak specifications in order to optimise a software program, people are collecting and correlating data on the “inputs and outputs” of their bodies to optimise physical and mental performance.
“We like to hack hardware and software, why not hack our bodies?” says Tim Chang, a self-quantifier and Silicon Valley investor who is backing the development of several self-tracking gadgets.
Indeed, why not give yourself an “upgrade”, says Dave Asprey, a “bio-hacker” who takes self-quantification to the extreme of self-experimentation. He claims to have shaved 20 years off his biochemistry and increased his IQ by as much as 40 points through “smart pills”, diet and biology-enhancing gadgets.
“I’ve rewired my brain,” he says.
Attendees at this year’s Quantified Self Conference discuss a sleep-tracking device
Asprey shares his results with the CEOs and venture capitalists he consults with through his executive coaching business, Bullet Proof Executive, but he’s found an even more welcoming audience at the first-ever international Quantified Self Conference.
Over the last weekend of May, in the upstairs of the Computer History Museum in Mountain View, California, in the heart of Silicon Valley, 400 “Quantified-Selfers” from around the globe have gathered to show off their Excel sheets, databases and gadgets.
Participants are mostly middle to upper class, mostly white. Europe is well represented. Suits and skirts are at a minimum. There are plenty of nerdy young men, nerdy older men and extremely fit men and women with defined muscles and glowing skin. There is also a robust contingent of young urban hipsters in military boots, hoodies and elaborate tattoos.
A quiet middle-aged man walks around with a pulse monitor clipped to his earlobe, a blood pressure cuff on his arm and a heart rate monitor strapped around his chest, all feeding a stream of data to his walkie-talkie-like computer. Someone from the UK unrolls a 12ft line graph charting the fluctuations in his mood over the previous year. A Canadian graduate student describes the web tools he uses to track his attention span.
Footsteps, sweat, caffeine, memories, stress, even sex and dating habits – it can all be calculated and scored like a baseball batting average. And if there isn’t already an app or a device for tracking it, one will probably appear in the next few years.
Brittany Bohnet, who was converted into a self-quantifier while working at Google, says she expects these gadgets will follow us in all aspects of our lives – even the most private. “Eventually we’ll get to a point where we use the restroom and we’ll get a meter that tells us, ‘You’re deficient in vitamin B,’” she says. “That will be the end goal, where we understand exactly what our bodies need.”
Joe and Lisa Betts-LaCroix, self-trackers. ‘I was giving birth to our son, and instead of holding my hand and hugging me he was sitting in the corner entering the time between my contractions into a spreadsheet’
“We’re moving away from the era of the blockbuster drug and toward personalised medicine,” adds Joe Betts-LaCroix, a self-tracker and bio-engineer. He opens a laptop with graphs of his weight and that of his wife, Lisa, and two kids, measured daily for the last three years. He has data detailing his wife’s menstrual cycle for 10 years.
“I was giving birth to our son, and instead of holding my hand and supporting me and hugging me, he was sitting in the corner entering the time between my contractions into a spreadsheet,” says Lisa Betts-LaCroix.
The concept of self-tracking dates back centuries. Modern body hackers are fond of referencing Benjamin Franklin, who kept a list of 13 virtues and put a check mark next to each when he violated it. The accumulated data motivated him to refine his moral compass. Then there were scientists who tested treatments or vaccines for yellow fever, typhoid and Aids on themselves. Today’s medical innovators have made incredible advancements in devices such as pacemakers that send continuous heart data to a doctor’s computer, or implantable insulin pumps for diabetics that automatically read glucose levels and inject insulin without any human effort.
Today in Silicon Valley, the engineers who have developed devices for tracking their own habits are modifying them into consumer-friendly versions and preparing to launch them on a largely unsuspecting public. Though most people would cringe at the idea of getting a mineral read-out every time they visit the loo, entrepreneurs and venture capitalists see a huge market for consumer-focused health and wellness tools, using the $10.5bn self-help market and $61bn weight loss market as indicators of demand. Self-quantifiers who work at large technology companies such as Intel, Microsoft and Philips are drawing their bosses’ attention to the commercial opportunities. Public health advocates and healthcare executives are starting to imagine the potential the data could hold for disease management and personalised drug development.
“We can see the tipping point,” says Gary Wolf, one of the founders of the modern-day quantified self movement and an organiser of the conference. “The involvement of the businesses is a sign that we’re not completely alone in seeing something important happening.”
Tim Chang, the Silicon Valley investor, says that self-tracking will win minds and wallets the same way the Green movement put Priuses on the road and grapefruit-powered cleaners under the sink.
“Over the next five to 10 years, self-tracking will be critical to wellness,” Chang says. “It will be consumer-led, not prescribed by your doctor or mandated by your insurance company.” For now, though, it’s in the “geeky early adopter stage”.
Chang and many of the attendees of the Quantified Self conference liken themselves to the Homebrew Computer Club of the 1970s and ’80s, the Silicon Valley gathering of technical hobbyists – including Apple founders Steve Jobs and Steve Wozniak – who swore personal computers would one day grace every home. Quantified-selfers who are inventing personal tracking gadgets in their basements “will have the same scope of impact”, Chang says.
Software engineer Alex Gilman explains the Fujitsu Sprout body monitor
The self-tracking equivalent of an early model, 30lb, four-part desktop computer is Fujitsu Laboratories’ Sprout, as worn by software engineer Alex Gilman at the Quantified Self Conference: a maze of sensors and wires sends data from his ear, chest and arm to the pocket-sized computer clipped to his belt – the Sprout. The Sprout synchronises the physical data from the body sensors and from the apps on his iPod Touch where he records his moods and drowsiness levels. What is now a mess of raw, useless data can be calculated and translated into a neat graph that will eventually be used to measure stress and fatigue, manage weight loss, even predict illness.
The potential of the Sprout is intriguing, but mass appeal will only come when such devices are consolidated into small, wireless, all-in-one products that make data collection completely passive, says Chang. Most will require little to no human effort and some will even be “game-ified”, he says, made as fun and addictive as Angry Birds.
Through his firm Norwest Venture Partners, Chang is placing his bets on Basis, a wristwatch-type device that records heart rate, physical activity, calorie burn and sleep patterns. Data readouts show spikes in heart rate data so users can see when they’re stressed and overlay that data with their work calendar to see which people or meetings might be the cause. When Chang tried a prototype, he noticed peaks in heart rate during his morning commute and decided to shift his route to a longer, but less busy, highway. It’s the interesting, useful, easy-to-digest information like this, he says, that will push these devices into the hands of ordinary users.
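The overlay Chang describes is, at bottom, a join between two time series: heart-rate readings and calendar entries. A toy version (all values invented for illustration):

```python
# Match heart-rate spikes against calendar entries to guess the cause
# of stress, as in the Basis overlay described above. Data is invented.

heart_rate = {8: 92, 9: 115, 10: 88, 14: 70, 17: 112}      # hour -> bpm
calendar = {9: "morning commute", 14: "desk work", 17: "budget meeting"}

SPIKE_BPM = 100  # illustrative threshold for a "stress spike"
spikes = [hour for hour, bpm in heart_rate.items() if bpm > SPIKE_BPM]
for hour in spikes:
    print(f"{hour}:00 spike during: {calendar.get(hour, 'unscheduled')}")
```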
Tim Ferriss, author of ‘The 4-Hour Body’. Ferriss claims he can teach people how to lose weight without exercise, maintain peak mental performance on two hours’ sleep and have a 15-minute orgasm
When the benefits of the information outweigh the costs, in money or time, people will buy the devices, says Tim Ferriss, author of The 4-Hour Body, an account of hundreds of body hacks he tried on himself which has won a following among employees at Google, Facebook and many Silicon Valley start-ups. Through his exploits Ferriss claims that he can teach people how to lose weight without exercise, maintain peak mental performance on two hours’ sleep and have a 15-minute orgasm. Ferriss has personally invested in at least eight devices.
“I think, as soon as the next 12 or 24 months, that people will have to opt out of self-tracking, as opposed to opt in,” he says, “much like GPS and geo tagging,” a feature of smartphones that records users’ geographic location automatically for use in various consumer mobile applications.
The implications for privacy are dramatic. Advocates and politicians were in an uproar when they realised the kind of access that Apple and Google have to geographic data derived from phones. Imagining three years’ worth of heart rate data or depression symptoms travelling through mobile devices – potentially being offered for sale to drug or insurance companies, exploited by advertisers or hacked by cyber criminals – puts watchdog groups on alert.
“What consumers need to realise is there’s a huge, huge demand for information about their activities, and the protections for the information about their activities are far, far, far less than what they think,” says Lee Tien, a privacy attorney at the Electronic Frontier Foundation. “A lot of these cloud services fall outside the federal and state privacy regimes.”
Mistakes will be made, Ferriss concedes, but he thinks “more good will come from it than bad”. He points to websites such as CureTogether.com and PatientsLikeMe.com, which harness individually collected data on conditions such as asthma, kidney disease, chronic pain and depression. People can then experiment with traditional and alternative therapies to find what works for them. That information is already informing new research and drug development.
Some doctors and public health advocates see great potential for personal tracking in managing chronic illnesses, especially among the rapidly ageing baby boomer generation. Mobile applications can track levels of blood sugar in diabetics or blood pressure in people with hypertension, and send alerts if a problem is developing. Movement-tracking sensors the size of watch batteries – like the one in the Basis wristwatch – can be placed on pill bottles to monitor if a medication has been taken and, if a dose is missed, generate a reminder text or e-mail.
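The pill-bottle reminder described above is a small piece of logic: if the sensor records no bottle opening within a grace period of the scheduled dose, fire an alert. A sketch under assumed details (the schedule, grace period and message text are illustrative):

```python
# Missed-dose reminder: a motion sensor logs bottle openings; if none
# falls within the grace window after the scheduled dose, remind.
# Schedule, grace period and message text are illustrative assumptions.
from datetime import datetime, timedelta

def check_dose(openings, scheduled, grace=timedelta(hours=1), now=None):
    """Return a reminder message if the dose window passed unopened."""
    now = now or datetime.now()
    if now < scheduled + grace:
        return None  # still inside the grace window; no verdict yet
    taken = any(scheduled <= t <= scheduled + grace for t in openings)
    return None if taken else "Reminder: your 9:00 medication was missed."

sched = datetime(2011, 6, 1, 9, 0)
print(check_dose([], sched, now=datetime(2011, 6, 1, 10, 30)))
```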
“We believe it’s a differentiator that will help employers save costs,” says Nick Martin, vice-president of innovation with the UnitedHealth Group, a Minneapolis-based health insurance company that is considering covering the use of health-tracking technologies. This means that people’s personal health details could be shared not just with their doctor but with their insurer as well. But Martin says concerns about this will fade fast.
“We’ve seen this with credit cards and payments over mobile phones,” he says, where consumers gradually adapted to sharing financial information over the ether. “We’ve gotten over that hurdle, and I think we will here.”
Still, seniors and boomers are much less inclined to spill the details of their personal lives than the Facebook-ed generations after them. And to believe that even twenty- and thirtysomethings don’t have limits to what they want others to know, or what they even want to know about themselves, seems wishful. Despite promises of confidentiality, people fear they will be charged higher insurance premiums, denied coverage or even denied a job based on their healthcare data.
. . .
Alicia Morga, app developer: The data collected in Morga’s gottaFeeling app enables users to ‘predict an exercise slump or a spending spree and help avoid the behaviour’
Alicia Morga is an accidental self-tracker. The 39-year-old entrepreneur says she identifies more with the Oprah Winfrey school of self-improvement than the Silicon Valley data geeks. She’s tried a heart rate monitor and the pedometer on her iPod to track her running workouts, but she only recently learnt the term “quantified self” when she started developing an iPhone app for tracking emotions. Her desire to track her own moods arose in a business context after she founded her own Hispanic marketing company, Consorte Media.
“I needed a way to manage the emotional roller coaster that entrepreneurship is,” she says. “I was angry. But there’s a double standard in business that women are not allowed to be angry, especially if they’re the boss.”
She kept her anger pent up, but then noticed it would “leak out” in the office, in a snide remark or a contemptuous look. She wanted to be a better communicator and a better leader so she signed up for an executive course at Stanford University called Interpersonal Dynamics. The course sought to develop emotional self-awareness, determine when it was appropriate to hide a feeling or express it and practise ways to communicate those emotions in a constructive manner. “Emotional fitness requires exercise,” Morga says, “just like running.”
Shortly after selling her marketing company, Morga started designing gottaFeeling, an app that pings her one to six times a day, depending on the settings, and asks how she’s feeling. A menu gives her options such as happy, sad, confused, angry. If she clicks angry, it asks her to refine her answer with irritated, frustrated or pissed off. She then records where she is and who she’s with. At the end of a week she can look at a pie chart that breaks down the percentage of time she spent in each mood, and see overlaid data of where, when, and in whose company she felt that way.
Just naming an emotion helps you manage it, Morga says. But the ultimate goal is for users to correlate emotions with eating habits, shopping behaviours or work tasks. If patterns emerge, the data could help users “predict an exercise slump or a spending spree and help avoid the behaviour”.
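The kind of log-and-summarise cycle the app describes — timestamped mood entries with place and company, rolled up into a weekly percentage breakdown — can be sketched in a few lines. The field names and data model below are illustrative assumptions, not gottaFeeling’s actual internals:

```python
# Toy mood log in the spirit of the app described above. The entry
# fields (day, mood, detail, place, company) are assumptions for
# illustration, not the app's real schema.
from collections import Counter
from dataclasses import dataclass

@dataclass
class MoodEntry:
    day: str       # e.g. "Tue"
    mood: str      # top-level choice, e.g. "angry"
    detail: str    # refinement, e.g. "frustrated"
    place: str
    company: str

def weekly_breakdown(entries):
    """Return each mood's share of the week's entries, as percentages."""
    counts = Counter(e.mood for e in entries)
    total = sum(counts.values())
    return {mood: round(100 * n / total, 1) for mood, n in counts.items()}

week = [
    MoodEntry("Mon", "happy", "content", "office", "team"),
    MoodEntry("Tue", "angry", "irritated", "office", "investor"),
    MoodEntry("Tue", "angry", "frustrated", "office", "investor"),
    MoodEntry("Wed", "happy", "content", "home", "alone"),
]
print(weekly_breakdown(week))  # {'happy': 50.0, 'angry': 50.0}
```

The same entries, keyed instead by place or company, would give the overlaid views Morga describes — the correlation step is just a change of grouping key.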
In the spirit of self-experimentation, Morga tried this on herself. She began studying her financial records. She never uses cash, so all her purchases were in one credit card statement, which she exported to Excel.
“Turns out, I consume a ton of cupcakes,” she says. Crunching the data to see when and where she bought the cupcakes, she discovered that 40 per cent of her purchases were made on Tuesdays at the same kiosk she passed in downtown San Francisco – on her way to see her personal trainer. She studied her mood data from the same period to see if there was an emotional factor but found no correlation, and concluded the purchases were down to convenience. So she asked her trainer to meet her in a different neighbourhood and cut her cupcake consumption by 40 per cent.
She’s still crunching the data to figure out how to trim the other 60 per cent.
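The analysis Morga ran in Excel — group card purchases by weekday and vendor, then see what share cluster in one place — is simple enough to reconstruct. The rows below are invented for illustration; her actual statement export is not public:

```python
# Rough sketch of the purchase-pattern analysis described above.
# Data and vendor names are made up; only the method is real.
from collections import Counter
from datetime import date

purchases = [  # (date, vendor) rows, as exported from a card statement
    (date(2011, 5, 3), "downtown kiosk"),
    (date(2011, 5, 10), "downtown kiosk"),
    (date(2011, 5, 12), "bakery on 5th"),
    (date(2011, 5, 17), "downtown kiosk"),
    (date(2011, 5, 21), "cafe"),
]

# Key each purchase by (weekday, vendor) and find the biggest cluster.
clusters = Counter((d.strftime("%A"), vendor) for d, vendor in purchases)
(day, vendor), n = clusters.most_common(1)[0]
share = 100 * n / len(purchases)
print(f"{share:.0f}% of purchases: {day}s at the {vendor}")
# 60% of purchases: Tuesdays at the downtown kiosk
```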
Brainstorming session at the Quantified Self Conference
Back at the Quantified Self Conference in Silicon Valley, attendees break into smaller groups to explore the finer points of hacking sleep, cognition and ageing. A concentration of hipsters heads to the session on attention-span tracking. About 50 participants sit in a circle, one-third with laptops propped open on their thighs. Moderating is Matthew Trentacoste, a 29-year-old PhD student in computer science at the University of British Columbia and an organiser of the Vancouver Quantified Self group, one of two dozen groups around the globe that meet informally throughout the year. His long, curly hair is piled at the back of his head and tied with a knitted scarf.
“I’ve been diagnosed with ADHD,” he says, referring to the increasingly common designation of Attention Deficit Hyperactivity Disorder. “As someone who’s easily distracted, I’m interested in figuring out strategies to reduce these distractions.”
Trentacoste describes a tool he’s developing to help him track how he spends his time online, down to the millisecond. It measures how long he spends on e-mail versus web browsing, how much time he spends in each web window and how often he switches his focus. The goal, among those who use or are building similar tools, is to reduce distractions, increase productivity and achieve “flow”, the optimum state of creativity and focus.
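The accounting behind such a tool — turn a chronological log of focus-change events into time-per-window totals and a switch count — can be sketched briefly. The event format is an assumption; Trentacoste’s actual tool is not described in that detail:

```python
# Toy version of window-time accounting: given chronological
# (seconds, window_name) focus events, total time per window and
# count focus switches. Event format is an assumption.
from collections import defaultdict

def summarise(events, session_end):
    """events: chronological (seconds, window) focus changes."""
    time_in = defaultdict(int)
    changes = 0
    # Pair each event with the next one; the last runs to session_end.
    for (t, win), (t_next, _) in zip(events, events[1:] + [(session_end, None)]):
        time_in[win] += t_next - t
        changes += 1
    return dict(time_in), changes - 1  # the first event is not a switch

log = [(0, "email"), (120, "browser"), (300, "email"), (360, "editor")]
totals, switches = summarise(log, session_end=600)
print(totals, switches)  # {'email': 180, 'browser': 180, 'editor': 240} 3
```

A real tracker would record milliseconds from the window manager’s focus events, but the aggregation step is the same.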
A discussion ensues on techniques for achieving flow, and a generational divide appears.
The younger people in the room talk about experimenting with Adderall, a stimulant commonly prescribed for ADHD that helps focus the mind. Older participants enquire whether meditating before bed has an effect on concentration the next day. The contrasts in method between the age groups are stark, as are the motivations for body hacking in general, says Dave Asprey.
“The people interested in this are under 30 and over 45,” he says, gesturing around the cafeteria at the conference. The people under 30 are the next Tim Ferrisses, the over-achieving entrepreneurs who are out to conquer Silicon Valley.
“The people over 45 are just tired of being fat and tired, and they see the kids under 30 and they know they’re going to lose their jobs to them,” he says. “They know they like to work ’em hard and burn ’em out young in Silicon Valley.”
Michael Galpert, internet entrepreneur: ‘Running a start-up, I’m always looking at numbers, always tracking how business is going. So I started doing that with myself’
Attempting to counter that trend, Michael Galpert, the New York internet entrepreneur, is using body hacking technology to promote healthier lifestyles in his office. He’s set up a workplace weight loss and fitness contest where employees use a mobile app to upload their daily weight and exercise routines into a shared online database. The idea is that seeing that your co-worker lost 2lb more than you last month, or did 20 more push-ups yesterday, will motivate you to keep up and keep going.
It’s not just a physical contest, Galpert says. The competitiveness and motivation on the treadmill will encourage people to push themselves at their desks as well.
“When you keep trying for one more push-up, it gets easier,” he says. “It’s the same at work. You can say ‘the project I’m working on is done,’ or you can say you’ll spend a little more time to make it better.”
Or, some ex-self-quantifiers would say, you could push the drive for perfection to breaking point.
“People thought I was narcissistic. What they didn’t see was the self-punishment, the fear, the hatred behind the tracking,” writes Alexandra Carmichael, one of the founders of CureTogether.com, in a poem about why she stopped tracking herself. “I had stopped trusting myself. Letting the numbers drown out my intuition, my instincts.”
Whether or not distilling human performance down to ones and zeroes will truly make us better, healthier human beings remains to be seen. Nobody has yet measured the full impact of so fully measuring their lives.
April Dembosky is the FT’s San Francisco correspondent