an international and interdisciplinary journal of postmodern cultural sound, text and image
Volume 3, August 2006, ISSN 1552-5112
Will Biomedicine Transform Society?
In the field of biomedicine, there seem to be two parallel universes. In the first, in the developed world, many believe that we are on the threshold of an epochal change. The sequencing of the human genome, it was claimed, would enable experts to read the book of life, decode the code of codes, remake Eden, usher in a brave – or terrifying – new world. Our genotypes would be read out, coded on a chip, and used to predict our fate, diagnose our diseases and personalise our medicines. New reproductive technologies would enable a world of designer babies and engineered people. Human stem cells would regenerate damaged human tissue, cure spinal injuries, heart disease, diabetes, Parkinson’s and Alzheimer’s. Smart drugs would enable us to engineer our moods, emotions, desires and intelligence at will. Some of the biomedical techniques cited in such futurology are already familiar, but most are said to be ‘just around the corner’. Each day seems to bring news of research that promises to increase our ability to modify, manipulate and transform our living bodily processes at will in pursuit of our secular desires.
Hopes here are also political and economic. UK Prime Minister Tony Blair, at the European Bioscience Conference in November 2004, said: “Biotechnology is the next wave of the knowledge economy and I want Britain to become its European hub”. Biotechnology, especially biomedical biotechnology, is seen as a key driver for the knowledge economy. Here, some hope, we will see a virtuous alliance of state, science and commerce in the pursuit of health and wealth. But there is no hope without fear. The UK House of Commons Trade and Industry Committee Report on Biotechnology, 2003, writes:
With biotechnology such a focus of public policy in Germany, France, Canada, Singapore, Puerto Rico, Israel, and Ireland, amongst many others, fears have arisen that the UK may not be doing enough to nurture an industry seen to have such potential and may be in danger of jeopardising the advantages of its early start in the field…
The same report tells us that pharmaceutical biotechnology is the dominant branch of biotech, and that in 2002 the UK biotechnology industry had a market capitalisation of £6.3 billion, accounting for 42% of the total market capitalisation of European biotechnology. Ernst and Young reports that the US biotech sector is a US$33.6 billion industry, with a total of 1,466 companies, 318 of which are public.(Ernst & Young, 2003) They also report that “In Australia… total revenues among publicly traded companies increased 38 per cent from $666 million in 2001 to $920 million in 2002. The number of… people employed in the industry jumped 24% from 5,201 to 6,464… The Japanese government anticipates the nation’s biotech work force will surge to 1 million by 2010, an enormous increase over the estimated 70,000 today. Government officials plan to double their investment in biotechnology in the next five years.” (Ernst & Young, 2003) In our own era of shareholder value and promissory capitalism, such expectations about the generation of what Catherine Waldby has termed ‘biovalue’ – the value to be extracted from living processes themselves – play a key political and economic role. (Waldby, 2000)
Yet, in another universe, things look rather different. The World Health Organisation repeatedly reports that the world’s biggest killer, the greatest cause of ill-health and suffering across the globe, is coded Z59.5 in the International Classification of Diseases. The condition Z59.5 is extreme poverty. By the end of the last century almost 9 out of 10 children in the world had been vaccinated against the five major killer diseases of childhood, and global rates of infant mortality had declined over two decades by over 25%. Yet at the start of the twenty-first century, some 12.2 million children under 5 in less developed countries – equal to the combined total population of Norway and Sweden – still died every year, mostly from causes that could be prevented for a few US cents per child.(World Health Organization, 2002) The gaps between rich and poor are widening, exacerbated by AIDS in Africa: a person in Malawi has a life expectancy of 39 years, while in the most developed countries life expectancy is twice this, at 78 years.
Yet a tiny proportion of the resources of our new biomedical era are directed to the major health problems of the majority of the world’s population. Médecins Sans Frontières reported in 2004:
Ten years ago, the world spent US$30 billion on health research, of which under 10 per cent was spent on 90 per cent of the world’s health problems – a disparity known as the ‘10/90 gap’. Today global spending on health research has more than tripled to around US$106 billion, yet the amount allocated to R&D for drugs to treat 90 per cent of the global disease burden has risen by a mere US$0.3–0.5 billion, to around US$3.5 billion, mainly due to contributions from private foundations, governments, and charities. Thus, the 10/90 gap doesn’t just persist; in percentage terms it shows alarming growth over the last decade.
A recent study shows that of 1,393 new chemical entities marketed between 1975 and 1999, only 16 were for tropical diseases and tuberculosis. There is a 13-fold greater chance of a drug being brought to market for central-nervous-system disorders or cancer than for a neglected disease.(Trouiller et al., 2002) The pharmaceutical industry argues that research and development is too costly and risky to invest in low-return neglected diseases. But pharmaceutical companies massively inflate their claims about the costs of developing new drugs – they like to quote an estimate of around $800m to bring a drug to market, but the actual cost falls to between $71m and $118m when proper accounting criteria are applied.(Relman and Angell, 2002) Such claims are used to justify inflated prices and extended patents that place drugs out of reach of poorer countries – dramatically highlighted by the bill of indictment laid by multinational pharmaceutical companies against Nelson Mandela and his government in South Africa for ignoring or violating patents on antiretrovirals. Global inequalities in health have now come onto the political agenda in a big way. Yet the gulf between the problems of this world and the promises of high-tech biomedicine seems immense.
In the first universe to which I referred, the prospects raised by biomedicine have caused consternation amongst politicians, regulators, theologians, philosophers and others. A whole new profession of bioethics has been invented to debate them. These debates have been particularly intense where they involve ‘the right to life’ – the role of embryonic stem cells in the US elections is the most obvious example. Some governments have enacted laws to limit some of these developments, especially those relating to human reproduction. Many have set up committees and commissions to deliberate over where the line should be drawn between the permitted, the regulated and the forbidden. Some pressure groups have campaigned for restrictions to be overthrown to allow the research that might bring hope to their loved ones. Others have campaigned for restrictions to be tightened, notably those that seek to protect the ‘sanctity of life’ of the ovum from fertilization or even before. Some hope to settle these debates by appeal to a transcendental religious morality or an equally transcendental human ontology. For some, the key issues concern individual autonomy and informed consent – who should have the power to make decisions in each troubling situation where a decision has to be made about the selection of an embryo, the conduct of an experiment, the licensing of a drug, the termination of a life? For others, the issues are consequential, moral, sometimes spiritual – what kinds of society do we want, what is a proper ‘human’ form of life? Many prominent intellectuals have been drawn into this debate. Francis Fukuyama, Leon Kass and Jürgen Habermas each argue that biomedicine is in danger of violating human dignity, human identity, human nature itself – for them, we tamper with our ‘nature’ at enormous risk, ultimately, to the human soul (Habermas, 2003, Fukuyama, 2002, Kass, 2002, President's Council on Bioethics (U.S.) and Kass, 2003).
Yet bioethical debate in Britain, Europe and the USA has not been much exercised by global health inequalities. The recent special feature on global inequalities in public health in The Lancet contained few contributions by bioethicists; one bioethicist who did contribute remarks on this lack of conversation between ‘frontier bioethics’ and ‘everyday bioethics’ – while bioethicists concentrate on individual autonomy, rights and protections in high tech medicine, they seldom address the ethical issues raised by the mundane, routine, global depredations of illness and premature death.(Berlinguer, 2004) This global letting die on a massive scale somehow does not seem to register as a bioethical problem. I will come back to these normative questions at the end of my talk today. I am not a bioethicist but a sociologist. So what, then, of the sociologists?
Most of those in my own discipline have tended to cast a jaundiced eye on developments in high-tech biomedicine in our first universe. They tend to see it as one more stage in the long tale of medicalization. They say that medicalization individualizes, that it turns our attention away from the social causes of, and social solutions to, ill health. They term its current form ‘geneticization’ – a view of the implacable genetic determinants not only of diseases but of other human characteristics and human inequalities. Some suggest that this is leading to a new eugenics, seeking to eliminate those deemed genetically inferior. Others criticise the ways in which more and more everyday troubles are coming within the sphere of medicine, and technical fixes to misery and ill health are replacing an attack on social causes. Some of these points are well made, but many seem to me to be wide of the mark. They are fighting the battles of a previous war. In that old debate, biological and social explanations were implacably opposed and associated with contesting political and ethical positions – the biological was inevitably on the side of reaction, the social on the side of the progressives. Today, I think, we need a different perspective.
Take individualization. In fact, contemporary genetics does not individualize. It involves new ways of tracing connections and making connections. For example, information that I may find out about my genetic complement traces new connections, and imposes new obligations, between myself, my parents, relatives, brothers, sisters and my children, including those born through the donation of my sperm. Another example: a host of genealogy-tracing companies now offer to identify your roots and the
geographical origins of your ancestors on the basis of a sample of your DNA. Most significantly, perhaps, we can see new collectivities forming. Paul Rabinow, who studied those campaigning for genomic research on the dystrophies, coined the term ‘biosociality’ for such groups; we can see similar patterns in campaigns around many other genetic disorders.(Rabinow, 1996) Biosocial communities, often geographically dispersed, sometimes virtual, are brought into existence around a shared condition; they actively strive for research, for funds, for support, for therapies for ‘their diseases’. They educate one another in the disease mechanisms and practicalities of care, donate tissues and blood for genomic research, and seek to take control of their own biological destiny and to bend medical and scientific knowledge and expertise to their own ends. Some groups have even patented the genes at the root of their disease. I term those engaged with these new forms of activity ‘biological citizens’.(Rose and Novas, 2005) In the advanced liberal societies of ‘the west’, they govern themselves according to an ethic of active citizenship and are obliged to manage their own lives through choice, to take responsibility for their future and to maximise their own potentials.(Rose, 1999) This, of course, generates its own problems for those unable or unwilling to be active and responsible in this way.(Callon and Rabeharisoa, 2004) And it should not be confused with democracy, for only some diseases (especially those of children), and only some biological citizens, have the cultural capital for effective mobilization.
Unlike many sociologists, then, I do not think contemporary biomedicine is reactivating a fatalism in which individuals, or those who govern them, consider that one’s capacities and potential are given in one’s genes. Biology is destiny – so went the old adage. That may once have been so, but it is no longer the case at the molecular level at which living processes are now understood. Biology is no longer destiny but opportunity. Molecular biology and genomics are interventionist disciplines. To understand the nature of life at the molecular level is to open it up to intervention. In this style of thought, life can be reverse engineered, taken apart in the laboratory, its processes broken down into their elements, and then put back together again. Life becomes open to artifice at the molecular level. This is why I suggest we are involved in a ‘politics of life itself’.(Rose, 2001) A politics because all these developments are highly contested. And ‘life itself’ because it is not just illness that is involved, nor even the maximisation of health – it is the management of human vitality itself. To deem an aspect of human life biological today is to suggest that it can be transformed through technology.
The troubled discourse of bioethicists, popular science writers and social theorists in the developed world tends to be futuristic. It often rests on overstated claims about the marvels that bioscience and biomedicine are about to achieve. Contemporary biotechnology – no doubt following a pattern familiar from other technologies – thrives on such exaggerated expectations of epochal changes just around the corner. These claims generate publicity, inflate share prices, mobilise funding agencies, enhance careers and, no doubt, generate a sense of excitement and mission for those working in the field.(Brown, 2003) While it may be true that many of the phenomena of life – from reproduction to emotion – now seem to be understandable as mechanisms, in most cases we are a long way from being able to re-engineer them at will. Even for IVF (in vitro fertilisation), now often considered old technology, 75–80% of treatments fail in each cycle in the UK – with consequences being studied by my LSE colleague Karen Throsby.(Throsby, 2004) In the US – where private clinics vie with one another to claim a high success rate – recent research puts a woman's chance of getting pregnant with IVF at the age of 42 at about 4%. The cautionary note about exaggerated expectations recently struck by some social scientists is welcome. While it often appears as if our current limits are ‘merely technical’ and will soon be overcome, the evidence does not indicate a revolutionary change in the therapeutic capacities of our medical practitioners. But this does not mean that nothing is happening. Let me consider some examples. They certainly illustrate the differences between dream and reality. We are not in the midst of an epochal shift, on the brink of utopia or dystopia. But we are inhabiting what I term an ‘emergent form of life’.
First, predictive genomic medicine. The revolution ushered in by the sequencing of the human genome was initially thought to lie in the area of predictive and preventive medicine – identifying the DNA sequences that would lead to disease before symptoms emerged, in order to initiate preventive measures. Many of the specific mutations related to rare single-gene disorders have indeed been identified, though preventive therapeutic interventions have proved much harder to develop. But there is the technique of preimplantation genetic diagnosis, or PGD, which combines in vitro fertilization and genetic testing. Embryos are created outside the womb, a cell is removed and its gene sequences are examined for specific genetic diseases, and only those embryos free of the markers for these diseases are implanted. Will developments of this sort lead to a new ‘liberal eugenics’ – where those with qualities thought undesirable are eliminated before birth, increasing the stigmatisation of those with disabilities who do live?
Eugenics was the programme, initially articulated by Francis Galton in the late nineteenth century, that tried to improve the ‘quality’ of the population of a nation by acting upon individual reproduction, ensuring that those of the best stock reproduced themselves and passed their superior qualities down to their children, whilst those of weaker or defective stock bred less or, in some cases, were prevented from breeding at all. As we mark the 60th anniversary of the liberation of Auschwitz, I do not need to rehearse the murderous form eugenics took in Nazi Germany. But we should remind ourselves that it started with the elimination of inmates of mental asylums: deemed individually to have ‘lives not worthy of life’ and collectively to impose insupportable burdens on the healthy population of the Reich.(Proctor, 1988) Coercion was only one element in these strategies, which also sought to modify professional and public attitudes and individual judgements by education and counselling. Many German doctors took their own decisions on eugenic grounds; in the context of a widespread campaign of propaganda and public education, parents often requested eugenic measures for their own children.(Burleigh, 1994) The Nazis looked admiringly at the policies enacted in the United States to restrict immigration from the ‘lower races’ – Slavs, Southern Europeans – and to compulsorily sterilize many inhabitants of asylums. Eugenic policies of forced or coerced sterilisation of those considered threats to the quality of the population – notably inhabitants of mental hospitals, the ‘feeble-minded’ and those deemed incorrigibly immoral or anti-social – were put in place not only in the US and Germany, but in Switzerland, Denmark, Finland, Norway, Estonia, Iceland, Mexico, Cuba, Czechoslovakia, Yugoslavia, Lithuania, Latvia, Hungary and Turkey – to name but a few. Eugenic advice to parents and prospective marriage partners was widespread in these countries, as it was in the UK.
Sterilisation on eugenic grounds continued into the post-war period in a number of democratic nations. In Sweden, the sterilization laws stayed on the books from 1935 to 1975 - in a paternalistic welfare state, the good shepherd must be prepared to take harsh decisions in order to reduce the burden that sickly sheep would place upon the flock as a whole (Broberg and Roll-Hansen, 1996, c.f. Foucault, 2001). Up until the 1950s in Britain and the United States, eugenic considerations infused reproductive advice to prospective parents in the new profession of genetic counselling. (Novas, 2003)
Contemporary genetic counselling, and contemporary reproductive genetics, explicitly reject such directive ‘eugenic’ advice, which judges the worth of potential children from the perspective of their contribution to the national population. They assert the values of individual autonomy and informed choice. Sociological research suggests a more complicated picture – that despite the rhetoric of non-directive counselling, genetic counsellors do shape the choices that parents – in particular women – make, while devolving the responsibility for those fateful choices upon them. Nonetheless, what if some prospective parents, in the light of their own value judgements about the worth of different forms of life, take advantage of such techniques and decide against having
children with certain conditions? Is this ‘liberal eugenics’? I think not. Eugenics was a collective attempt, imposed by a state, to improve the quality of the population, in a geopolitical context often seen as a struggle between races. What we see today is something different.
Of course, in a sense, the very availability of genetic counselling to parents considered ‘at risk’ of having children with certain disabilities or medical conditions, together with the availability of therapeutic abortion, does indicate that some lives, potentially, are less desirable than others. Undoubtedly many parents, given the choice offered by PGD, choose against having children whose lives are likely to be painful and short because of inherited diseases known to be single-gene disorders – arising from a mutation at a specific genetic site. But here is an example that concerns early-onset dystonia, a painful but not terminal condition, whose genetic basis was discovered in 1997 and named DYT1. It appears under a happy picture of Art and Wendy Kessler and their newborn baby Benjamin:
Kessler, diagnosed at age 12 with early-onset dystonia, a genetic brain disease that causes involuntary muscle movements and forces the body into twisted, painful postures, refused to father a child at risk of having the disease and the condition he described as a nightmare. Now, because of the discovery of the DYT1 gene, genetic and prenatal testing, and a groundbreaking procedure called preimplantation genetic diagnosis (PGD), Kessler and his wife, Wendy, are the parents of dystonia-free Benjamin … the first child ever to be born using PGD to prevent another life from being burdened with dystonia…. [Kessler says] Wendy and I are elated…. Benjamin means this is the end of dystonia in our family. It’s great!
I have, not accidentally, taken this example from the website of the Chicago Jewish Community Online. The site informs us that dystonia is one of several ‘Jewish’ genetic disorders – the term adopted by many Jewish organizations because of their high prevalence among Ashkenazi Jews, although they are by no means exclusive to them. Jewish organizations in the United States have been very active in campaigns and research to find, screen for and eventually eliminate the genes for these disorders in their communities. Hence the irony, for the critics who brand this eugenics. But do these attempts to eliminate such single-gene disorders indicate that those born with such conditions are deemed ‘lives less worthy of life’, less worthy of our care and sustenance? I think not. Of course, as research by Sarah Franklin and her colleagues shows, this is not a matter of wanting ‘designer babies’. And there is no evidence to suggest that parents who do have children with these diseases think their lives are unworthy, or love or value them less. On the contrary, it is precisely because of that love that they strive to avoid more children having such painful and/or shortened lives. Nor do I think that children born by such means will consider themselves, or be considered by others, as in some way ‘less than human’ because they arise from choice rather than chance, as suggested by the social theorist Jürgen Habermas.(Habermas, 2003) Quite the reverse – as in the case of children chosen to be ‘saviour siblings’, tissue-matched so that they can donate tissue to an existing child with a terminal disease. I think the ethical issue is different. I don’t mean to make a cheap point, but in the context of the mass letting die of millions of children, we can note that the procedures to produce Benjamin cost the Kesslers $20,000. Perhaps it is not eugenics or the threat to our species ethic that should animate our bioethicists, but this differential value of life.
Private clinics in the US offer PGD services for a whole range of conditions. The Institute for Reproductive Medicine and Genetic Testing, for example, lists 57 such diseases on its website, from Adrenoleukodystrophy to Von Willebrand Disease. These include sex-linked diseases, where PGD is used to ensure that only male or female embryos are implanted, despite there being no certainty that any specific embryo of the other sex will carry the mutations for the disorder. In the UK, this area is regulated by the Human Fertilisation and Embryology Authority, which has to issue a licence to clinics to use PGD in relation to certain conditions where the embryo is considered at risk of developing serious conditions associated with great suffering, for which no effective therapy is available. But the boundaries are fuzzy. In November 2004, the HFEA issued a licence to University College Hospital for a form of severe inherited bowel cancer. Familial Adenomatous Polyposis Coli (FAP) is a very serious condition, leading to multiple colon cancers in early adulthood; many of those affected have prophylactic surgery in their teens to remove the colon. Few would argue with attempts to eliminate this condition, though many live into adulthood with it. But what about breast cancer – where the genetic markers BRCA1 and BRCA2 are linked not to certainty, but to an increased risk of developing cancer, around 70% as opposed to 10% – should PGD be used in such cases to implant only male embryos? What about achondroplasia, which arises from an abnormality in a gene located on chromosome 4 – are short legs and arms a condition that causes severe suffering and should be selected against? What if these choices were to be offered to families with, say, a history of manic depression? These are certainly difficult issues, but I don’t think we understand them through the rhetorical invocation of eugenics.
Rather, they indicate the kinds of ethical choices that are created, not by our modern technologies of life themselves, but by the hopes we invest in them. Drawing on a term used by Rayna Rapp in her study of women facing amniocentesis,(Rapp, 1999) I term those in these situations ‘ethical pioneers’.(Rose and Novas, 2005) In their relation with their bodies, with experts, with others in similar situations, and with their destiny, they have to create new ways of understanding, judging and acting on themselves, and on those to whom they owe responsibilities – their children, their kin, their medical helpers, their co-citizens, their community, their society. They are at the frontiers of the practical ethical dilemmas that
will face more and more of us in the years to come.
In this future, more and more of us will have to make such fateful decisions in conditions of considerable uncertainty. For genomic research has identified the mutations for many rare and devastating conditions, but has been far less successful in identifying genomic sequences that give clear predictions about the likelihood of individuals developing any of the common complex disorders – stroke, heart disease, diabetes and most forms of cancer – let alone mental disorders. However, genomic variations at the level of single nucleotides have been identified, and can be tested for, that increase the probability that the individual carrying them will develop a particular disease – as with the BRCA mutations linked to breast cancer that I mentioned earlier – but even then, probability is not certainty, and population data cannot predict individual cases. Outside the rare conditions that I have already discussed, the ‘gene for’ paradigm – one that sought the ‘cause’ of a disease in one or two mutations in one or two genes – has largely been abandoned in favour of a model of complexity, where susceptibility to a disorder is the result of the interaction of multiple variations at many sites in the genome, some of which are protective and some of which, in certain environmental and other circumstances, may increase the risk of a disease developing. In most cases, that is to say, susceptibility testing does not read the implacable medical fate of an embryo, or of a born human being, in their genes, but can suggest an increased risk of developing a disease, although seldom when, how acutely, or with what consequences. This does not generate fatalism and resignation – on the contrary, it adds the obligations of genetic knowledge, genetic responsibility and genetic prudence to us ‘active citizens’ in the advanced liberal societies of the west.
Let me turn to consider another issue that has generated much debate in the newly christened profession of ‘neuroethics’ – enhancement. Some worry that we will soon be able to alter our moods, emotions, desires and intellectual capacities at will through the use of smart drugs, without the hard work of the self on itself that is currently required. Leon Kass, Francis Fukuyama and their colleagues on the US President’s Council on Bioethics write:
The growing power to manage our mental lives pharmacologically threatens our happiness by estranging us not only from the world but also from the sentiments, passions, and qualities of mind and character that enable us to live in it well… the creating of calmer moods and moments of heightened pleasure or self-satisfaction that bear no relation to our actual undertakings threatens to erode our sentiments, passions, and virtues. What is to be particularly feared about the increasingly common and casual use of mind-altering drugs, then, is not that they will induce us to dwell on happiness at the expense of other human goods, but that they will seduce us into resting content with a shallow and factitious happiness. (President's Council on Bioethics (U.S.) and Kass, 2003: 266-7).
Prozac is the usual example here: Peter Kramer introduced the term ‘cosmetic psychopharmacology’ when he suggested that some of the patients whom he had put on the drug had become ‘better than well’. Many millions of people worldwide have taken Prozac or its sister SSRI drugs, and my own study of psychiatric drugs shows that, in Europe, antidepressant prescribing per 1,000 population doubled from 1993 to 2002, and the use of SSRIs increased tenfold.(Rose, 2004) Yet we do not seem to have witnessed a general increase in geniality, well-being, conviviality or any of the rest of it. In fact, these drugs do not allow individuals to manipulate their moods at will – they do so less markedly and less reliably than the older and rather un-smart drugs such as alcohol and marijuana. Indeed they are not sold on this promise but on another, more familiar one – not to make yourself something new, but to ‘feel like yourself again’, get your life back, become the author of your own narrative. It is not the novel ethic of enhancement but the familiar ethic of authenticity – familiar from so many of our existing psychotherapies – that is engaged here. The neuroethicists’ fear of shallow happiness in a pill has the wrong target. And the image of SSRIs, like the minor and major tranquillisers before them, has now shifted from ‘miracle pills’ to ‘bitter pills’ as they enter the troubled zone of scandals, legal challenges, adverse effects and evidence of dependence.
Recently, concerns have shifted to ‘cognitive enhancement’ – pharmaceuticals that enhance mental functions. Harry Tracey, publisher of NeuroInvestment, is widely quoted as estimating, in 2004, that at least 40 potential cognitive enhancers were in clinical development. Ritalin, a stimulant, is already widely used in the US by students who have not been diagnosed with ADHD; Cephalon’s Provigil was developed for the treatment of sleep disorders but may also increase alertness and mental energy; and drugs initially developed for the treatment of age-related memory loss, ‘Mild Cognitive Impairment’ and early Alzheimer’s Disease may be used ‘off label’ to improve memory. But what is new here? Humans have been trying to enhance our mental capacities for centuries – by eating brain foods, doing crosswords, going to crammers, and indeed enrolling at the LSE. There is a massive market in sales of nutritional products that claim to enhance our mental capacities. Again, I think the ethicists are addressing the wrong question. Instead, we should ask why, in the west, we have become ‘psychopharmacological’ societies. The European market for psychiatric drugs in 2000 had a value (at ex-manufacturers’ prices) of $4,741 million – up from $2,110 million in 1990 – and the US market one of $11,619 million – up from $2,502 million in 1990.(Rose, 2004) In many different contexts, in different ways, in relation to a variety of problems, by doctors, psychiatrists, parents and by ourselves, human subjective capacities have come to be routinely re-shaped by psychiatric drugs. This certainly raises important questions about how we configure the boundaries of the normal and the pathological, the treatable and the acceptable. It does indeed raise questions about the kinds of humans we want to be, and about the role of the market in this reshaping of ourselves as ‘neurochemical selves’. But these questions aren’t going to be resolved by an appeal to human nature, dignity, or a rejection of artificiality.
Neither humans nor nature have ever been ‘natural’: we only need to look at social and historical variations in such ‘natural’ phenomena as life expectancy, morbidity, fertility and much else besides. An appeal to nature doesn’t help us much – the limits of the natural are precisely what have been shifted.
Perhaps what should most concern us with such drugs is not enhancement but control. In the developed world, risk management and the prudential principle reign supreme. Even without genomics, the most profitable pharmaceuticals are those that treat not disease but risk – the statins for reducing the risk of coronary heart disease are the best-known example. So we are likely to see calls for psychiatric screening and preventive pharmaceutical intervention based on risks and probabilities. Some of you may have read of a proposal made recently by US President George W. Bush’s New Freedom Commission on Mental Health (Lenzer, 2004). It proposed a programme of widespread screening of “consumers of all ages” for undiagnosed psychiatric disorders, starting with the 52 million students and 6 million adults who work in the schools. It coupled this with a programme initiated in Texas, in which those found to be ‘at risk’ by such screening were given preventive treatment with psychiatric drugs even though they were not currently, in any sense, ‘ill’. The Texas scheme was widely criticised, partly because of the financial links between the politicians proposing it and the pharmaceutical companies who part-funded it and stand to benefit from it. Such preventive screening programmes, which I think will become more common, will undoubtedly expand the remit of medicine and the market for the pharmaceutical companies. Such screening in US schools, encouraged by various incentives, has been central to the widespread diagnosis of Attention Deficit Hyperactivity Disorder and the use of Ritalin or Adderall. I worry less about the possibility of factitious happiness, or enhanced cognitive capacities, and more about the apparent acceptability of these programmes of presymptomatic diagnosis of risky behaviour coupled with incentives or obligations to prescribe pharmaceuticals.
Where, then, do we stand on the implications of biomedicine in the developed world? Many of the promises and predictions that worried social theorists and bioethicists have proved to be unfounded, or at least premature. As Nightingale and Martin have argued, ‘biological knowledge derived in the laboratory is not easily translated into useful clinical practices’ (Nightingale and Martin, 2004: 567): many obstacles have to be overcome before advances in basic biological knowledge generate new medical technologies. We can see this clearly in pharmacogenomics – the promise of personalised medicine, in which a genomic test would ensure that each individual received the right drug in the right dose for their precise condition and metabolism. It seemed that, very soon, if you went into your GP’s surgery and she diagnosed you with depression, she would administer such a test before deciding which of the twenty or so antidepressants to prescribe, and at what dose, ensuring efficacy and avoiding adverse effects. BIOS is engaged in research in this area, but it is already clear that the claims for personalised medicine are exaggerated. At best, all that a genomic test will do is place you in a risk group, not too different from those already familiar from epidemiology and family history – you might be in a group with a 20% chance or an 80% chance of responding well or badly to a drug. This may assist doctors in their initial choice of drug. It will present opportunities for drug companies, who will market some drugs with the diagnostic tests required to prescribe them. The costs to health services are obvious, but the benefits to the patient are unproven.
Epochal thinking, utopian or dystopian pronouncements, and dire warnings of slippery slopes don’t help us here – they should themselves be part of what we study. This does not mean that nothing new is happening. The beliefs, hopes, expectations and investments that we see all around us themselves testify to the centrality of health and illness to contemporary politics, economics and ethics. Perhaps, as some believe, the benefits of this high-tech biomedicine for the few will ‘trickle down’ to the many – but as with ‘trickle down’ economics, things don’t always work out this way. No doubt, by the time some of these developments are translated into the clinic, the medical possibilities will seem as routine and uncontentious as in vitro fertilisation appears today, a far cry from the febrile debates over ‘test tube babies’ sparked by the birth of Louise Brown in July 1978.
Indeed, there are signs that this message is becoming evident to biocapitalism. Ernst & Young report that 2003 was a difficult year, in which “a more sober mood characterizes the [biotech] sector’s condition as it matures” (Ernst & Young, 2003: 1). The net losses of US biotech companies in 2003 increased by 71.2% over 2002. Venture capitalists and life sciences investors seem increasingly aware of the divergence between promissory biotechnology and its substantive results. As Frank Baldino, CEO of Cephalon, writes:
…the appeal of technologies that hold the promise to lead to products in a decade had dwindled…. Over the past 25 years, since the founding of Genentech, only a handful of companies have achieved profitability…to garner the interest of Wall Street today, companies need to have products in late-stage clinical development or very near the market.(Ernst & Young, 2003: 2)
Promissory capitalism demands results in the short term. And the absence of such results is likely to make the biotech industry even less responsive to demands that it direct some of its R&D to the health needs of developing countries. This leads me back, in conclusion, to the relation between the two universes that I pictured at the outset.
Of course, this picture was misleading – they are not as distinct as might at first appear. The two universes are, in fact, linked by multiple circuits of collaboration, exchange and exploitation, also being researched in BIOS: circuits of tissues (the global trade in organs); of research (researchers collecting DNA from populations in isolated regions in the search for the genomic basis of diseases); and of scientists and knowledge themselves (biomedical science being a truly global activity). And, of course, they are linked by the ways in which pharmaceuticals are licensed and exported from the developed to the less developed world.
And while multinational pharma and biotech in the developed world have not engaged significantly with the problems of the less developed world, governments, NGOs and philanthropists have. To take just one example, the Bill and Melinda Gates Foundation has given more than $1.5 billion to projects focussing on the prevention and control of infectious disease, notably through support for GAVI – the Global Alliance for Vaccines and Immunization. In its first five years of operation, GAVI immunized 4 million children against diphtheria, tetanus and pertussis, and more than 24 million against hepatitis B. The foundation also made a $42.6 million donation to the Institute for OneWorld Health, the first nonprofit pharmaceutical company in the United States, to develop affordable cures for malaria, which kills more than a million children each year.
But, lastly, the less developed world is not passive; indeed, the competitive implications of developments in Asia are causing Western governments and companies particular concern. The report of a UK government mission to India in 2003 is headed with a quote from the then Indian Prime Minister Atal Behari Vajpayee: “Biotechnology is a frontier science with a high promise for the welfare of humanity”. At that time there were 160 biotechnology companies in India with combined revenues of US$150 million, driven by developments in the healthcare sector; the industry was expected to grow to US$4.5 billion by 2010 and to generate a million or more jobs. Singapore’s revenues from biomedical manufacturing are projected to reach $7 billion by 2005. In China, the world number three in terms of overall R&D spend by 2003, the government spent about $180 million building a biotech industry from 1996 to 2002; over the next three years, this spending will triple. Despite, or because of, its one-child policy, China has an active sector of reproductive medicine, and IVF and PGD are widespread. China is also a world leader in research on stem cells, with its own stem cell lines, and is already involved in clinical trials. The Stem Cell Research Centre in South Korea has guaranteed government funding of US$7.5 million for the next ten years. In Asia, such developments are underpinned by long-term government funding and investment in infrastructure: they are in it for the long term.
Africa, of course, is the exception. But the fulcrum, in biomedicine as in so many other areas, is shifting to the East. Not that we should regard the economic or political regimes of these regions as inherently more concerned with social justice or international equity. But maybe the highly individualistic concerns of Euro-American bioethics might be offset by a deeper concern with collective well-being, and with the ethical problems raised by the morbidity of the many rather than the lives of the few.
So will biomedicine transform society? Being a sociologist, my answer of course is ‘yes’ and ‘no’. Or rather, ‘no’, ‘no’ and ‘yes’. No: there will be no new Eden, no end to our human-ness, no posthuman future. We will remain human, all too human. No: we cannot rely on advanced biomedicine, in its current corporate form, to help put an end to the scandalous inequities in global health. This will remain a matter not for medicine but for politics. But yes: in a multitude of small ways, minor shifts, new choices and dilemmas in our everyday existence, we are inhabiting an emergent form of life.
BERLINGUER, G. (2004) Bioethics, health, and inequality. Lancet, 364, 1086-1091.
BROBERG, G. & ROLL-HANSEN, N. (1996) Eugenics and the welfare state: sterilization policy in Denmark, Sweden, Norway, and Finland, East Lansing, Michigan State University Press.
BROWN, N. (2003) Hope against hype - accountability in biopasts, presents and futures. Science Studies.
BURLEIGH, M. (1994) Death and deliverance: 'euthanasia' in Germany c.1900-1945, Cambridge, Cambridge University Press.
CALLON, M. & RABEHARISOA, V. (2004) Gino's lesson on humanity: genetics, mutual entanglement and the sociologist's role. Economy and Society, 33, 1-27.
ERNST & YOUNG (2003) Beyond Borders: The Global Biotechnology Report.
ERNST & YOUNG (2003) Resilience: America's Biotechnology Report.
FOUCAULT, M. (2001) '"Omnes et Singulatim": Towards a Critique of Political Reason'. IN RABINOW, P. (Ed.) Power: The Essential Works. London, Allen Lane.
FRANKLIN, S. (1997) Embodied progress: a cultural account of assisted conception, London, Routledge.
FUKUYAMA, F. (2002) Our posthuman future: consequences of the biotechnology revolution, London, Profile Books.
HABERMAS, J. (2003) The Future of Human Nature, Cambridge, Polity.
KASS, L. (2002) Life, liberty, and the defense of dignity: the challenge for bioethics, San Francisco, Encounter Books.
LENZER, J. (2004) Bush plans to screen whole US population for mental illness. BMJ, 328, 1458.
NIGHTINGALE, P. & MARTIN, P. A. (2004) The myth of the biotech revolution. Trends in Biotechnology, 22, 564-569.
NOVAS, C. (2001) The political economy of hope: patients' organisations, science and biovalue. Paper presented at the Postgraduate Forum on Genetics and Society, University of Nottingham, June 21-
NOVAS, C. (2003) Governing Risky Genes. Thesis submitted for the degree of PhD. London, University
PRESIDENT'S COUNCIL ON BIOETHICS (U.S.) & KASS, L. (2003) Beyond therapy: biotechnology and the pursuit of happiness, New York, ReganBooks.
PROCTOR, R. (1988) Racial hygiene: medicine under the Nazis, Cambridge, Mass.; London, Harvard University Press.
RABINOW, P. (1996) Artificiality and Enlightenment: From Sociobiology to Biosociality. IN Essays on the Anthropology of Reason. Princeton, Princeton University Press.
RAPP, R. (1999) Testing women, testing the fetus: the social impact of amniocentesis in America, New York, Routledge.
RELMAN, A. S. & ANGELL, M. (2002) America's other drug problem. New Republic, 227, 27-41.
ROSE, N. (1999) Powers of freedom: reframing political thought, Cambridge; New York, Cambridge University Press.
ROSE, N. (2001) The politics of life itself. Theory, Culture & Society, 18, 1-30.
ROSE, N. (2004) Becoming neurochemical selves. IN STEHR, N. (Ed.) Biotechnology, Commerce and Civil Society. New York, Transaction Press.
ROSE, N. & NOVAS, C. (2005) Biological citizenship. IN ONG, A. & COLLIER, S. (Eds.) Global Assemblages: Technology, Politics and Ethics as Anthropological Problems. Malden, MA, Blackwell.
THROSBY, K. (2004) When IVF Fails, Basingstoke, Palgrave Macmillan.
TROUILLER, P., OLLIARO, P., TORREELE, E., ORBINSKI, J., LAING, R. & FORD, N. (2002) Drug development for neglected diseases: a deficient market and a public-health policy failure. Lancet.
WALDBY, C. (2000) The Visible Human Project: informatic bodies and posthuman medicine, London; New York, Routledge.
WORLD HEALTH ORGANIZATION (2002) Genomics and World Health. Geneva, World Health Organization.
CLIFFORD BARCLAY LECTURE, 2 February 2005, London School of Economics
Global Forum For Health Research, at http://www.globalforumhealth.org/pages/index.asp, posted in January 2004
e.g. in ‘Supercharging the brain’, The Economist, 16 September 2004
I would like to thank all my BIOS colleagues for help in preparing this talk, especially Sarah Franklin for her comments and suggestions, and Linsey McGoey for research assistance.