Health care

David Brooks on deference to incompetent authority in the wake of Ebola fear

Mark D. White

David Brooks' New York Times column this morning, titled "The Quality of Fear," makes a number of claims about the source of the panic surrounding the Ebola virus. As usual, he offers useful and insightful points, but he falls a bit flat when he tries to tie this episode into his persistent theme of deference to authority, especially when this episode—as he describes it—reinforces the very skepticism he laments.

His opening passage about Ebola lays out this dilemma:

In the first place, we’re living in a segmented society. Over the past few decades we’ve seen a pervasive increase in the gaps between different social classes. People are much less likely to marry across social class, or to join a club and befriend people across social class.

That means there are many more people who feel completely alienated from the leadership class of this country, whether it’s the political, cultural or scientific leadership. They don’t know people in authority. They perceive a vast status gap between themselves and people in authority. They may harbor feelings of intellectual inferiority toward people in authority. It becomes easy to wave away the whole lot of them, and that distrust isolates them further. “What loneliness is more lonely than distrust,” George Eliot writes in “Middlemarch.”

So you get the rise of the anti-vaccine parents, who simply distrust the cloud of experts telling them that vaccines are safe for their children. You get the rise of the anti-science folks, who distrust the realm of far-off studies and prefer anecdotes from friends to data about populations. You get more and more people who simply do not believe what the establishment is telling them about the Ebola virus, especially since the establishment doesn’t seem particularly competent anyway.

His point about isolation within social classes is a familiar one (although somewhat redundant, given what social class means), but more troubling is his transition to leadership and authority. Maybe I'm too young, but at what point in our nation's history have people known or felt "one with" those in authority? Aside from the elites in government, business, and the media, I doubt many Americans have ever considered an elected leader or appointed bureaucrat to be "one of us." After all, it is very difficult for people who have no power to connect with people who have power.

(When he writes of the changing perception of authority, perhaps Mr. Brooks is thinking of the increase in distrust in government following Watergate, but this is a separate issue from feeling connected with authority. I would also add that, given what we now know about how government operated before Nixon, we would have been wise to be more distrustful back then as well. Trust based on ignorance is hardly a virtue.)

I would have preferred Mr. Brooks to end the piece with his last sentence above: "You get more and more people who simply do not believe what the establishment is telling them about the Ebola virus, especially since the establishment doesn’t seem particularly competent anyway." In my opinion, that's the core issue: incompetence. I'm sure the American people would love to be able to trust their elected leaders to have a handle on crises and a plan to deal with them—and to tell us when a crisis is not in fact a crisis. But we have seen little such competence from government leaders in a long time. Of course, the people behind the scenes, the (mostly) apolitical researchers and scientists and analysts who toil in anonymity for presidents and Congress, are not incompetent. But when their messages are filtered through political interests (especially so nakedly and shamelessly) before they reach the people, those messages become suspect and unreliable. As a result, many people turn to television and the internet to listen to speakers who seem to talk directly to them, with no apparent agenda, even if what they say is hyperbole or simply utter nonsense.

(Brooks touches on the role of the media later in his piece, stressing how they intensify news and cause disproportionate panic. This is true, of course—but this would not have such an impact if people could rely on the true authorities to give them the information they need without having to doubt their motivations almost by reflex.)

Mr. Brooks makes his best point near the end of the article, but again I read it as giving more reason to be skeptical of authority, not less:

The Ebola crisis has aroused its own flavor of fear. It’s not the heart-pounding fear you might feel if you were running away from a bear or some distinct threat. It’s a sour, existential fear. It’s a fear you feel when the whole environment seems hostile, when the things that are supposed to keep you safe, like national borders and national authorities, seem porous and ineffective, when some menace is hard to understand.

In these circumstances, skepticism about authority turns into corrosive cynicism. People seek to build walls, to pull in the circle of trust. They become afraid. Fear, of course, breeds fear. Fear is a fog that alters perception and clouds thought. Fear is, in the novelist Yann Martel’s words, “a wordless darkness.”

Of course people are frightened, and Mr. Brooks is correct to point out that it is an amorphous, "existential" fear. We often make a distinction between risk and uncertainty, in which risk deals with known probabilities (such as the roll of a fair die) while uncertainty deals with unknown probabilities (such as keeping your job). But our current fears reflect another level of uncertainty altogether: uncertainty not only about what is likely to happen, but about what can possibly happen at all.
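
To put the risk/uncertainty distinction in rough formal terms (a gloss of my own, not anything Mr. Brooks offers): under risk, the possible outcomes x_1, ..., x_n and their probabilities p_1, ..., p_n are known, so an expected value can be computed,

E[X] = \sum_{i=1}^{n} p_i x_i,

and one can plan or insure against it. Under uncertainty in this sense, the outcomes may be known but the probabilities are not, so no such calculation is available. The fear I am describing goes a step further still: even the set of possible outcomes is unknown.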

Just think of the things people worry about these days (reasonably or not). Ebola. ISIS. Climate change. Economic inequality. Human trafficking. Civil war. Terrorism. Not an exhaustive list, and obviously skewed by my perspective, but I hope it gets the idea across, which is that these are not risks that can be insured against or "mere" uncertainties that can be planned for. These are perceived threats, many of which could not have been imagined before they occurred, which have unknown and potentially catastrophic consequences, and which have no clear solution. As a result, they all speak to the fragility at the core of human existence—they merit a certain level of fear that is not easily assuaged by political statements from authorities who do not seem to appreciate their gravity or the trepidation they reasonably cause.

As Mr. Brooks wrote, "It’s a fear you feel when the whole environment seems hostile, when the things that are supposed to keep you safe, like national borders and national authorities, seem porous and ineffective, when some menace is hard to understand." In such conditions, I think skepticism about authority is entirely justified, and should not be reversed until authority shows the people it deserves to be trusted. When Mr. Brooks writes that Ebola "exploits the weakness in the fabric of our culture," I think he is spreading the blame too widely. When authority tries to respond to such existential threats but cannot do so except through an explicitly political lens, the message, as valuable as it might be, becomes soiled, and people turn elsewhere for information (and misinformation). But can we blame them?

I fear I will never understand David Brooks' blind appeals to authority and his unshakeable trust in people with power to use that power responsibly. Then again, I was raised to be distrustful of authority (an attitude he would likely attribute to my class upbringing). I have not yet had reason to change my mind, though, and the incompetence he himself identifies in this recent episode is hardly going to give me one.


Bioethics and Disagreement (in Journal of Medicine and Philosophy)

Mark D. White

Thanks to Jan Henderson's terrific blog The Health Culture, I bring you the latest issue of The Journal of Medicine and Philosophy (39/3, June 2014), which focuses on "Bioethics and Disagreement: Organ Markets, Abortion, Cognitive Enhancement, Double Effect, and Other Key Issues in Bioethics," and includes articles by James Stacey Taylor, Walter E. Block, Rob Goodman, and more. In fact, just check out Henderson's blog for the titles and abstracts--thanks, Jan!

 


How must military medical ethics adapt to the realities of modern warfare?

Mark D. White

The latest issue of Bioethics (27/3, March 2013) features a brief but provocative paper by Steven H. Miles (University of Minnesota in Minneapolis) titled "The New Military Medical Ethics: Legacies of the Gulf Wars and the War on Terror":

United States military medical ethics evolved during its involvement in two recent wars, Gulf War I (1990–1991) and the War on Terror (2001–). Norms of conduct for military clinicians with regard to the treatment of prisoners of war and the administration of non-therapeutic bioactive agents to soldiers were set aside because of the sense of being in a ‘new kind of war’. Concurrently, the use of radioactive metal in weaponry and the ability to measure the health consequences of trade embargos on vulnerable civilians occasioned new concerns about the health effects of war on soldiers, their offspring, and civilians living on battlefields. Civilian medical societies and medical ethicists fitfully engaged the evolving nature of the medical ethics issues and policy changes during these wars. Medical codes of professionalism have not been substantively updated and procedures for accountability for new kinds of abuses of medical ethics are not established. Looking to the future, medicine and medical ethics have not articulated a vision for an ongoing military-civilian dialogue to ensure that standards of medical ethics do not evolve simply in accord with military exigency.


Highlights of ASSA (pt. 3): Costly Posturing in China

Jonathan B. Wight

Xi Chen (Yale University) presented a fascinating paper, co-authored with Xiaobo Zhang (International Food Policy Research Institute and Peking University), at the recent ASE/ASSA meetings in San Diego.

"Costly Posturing: Relative Status, Ceremonies and Early Child Development" explores the relationship between social behaviors and economic and health outcomes. In particular, it examines how public ceremonies such as funerals, weddings, home blessings, and other events negatively affect substantive measures of human well-being -- specifically by caloric intake and malnutrition. People feel intense social pressure to participate in these social rituals even when it detracts from the well-being of their own children.

The authors present evidence that in rural areas of China, poor families spend more on gifts than do the richest families -- creating what they call "squeeze effects." The impact is observed statistically in children who were in utero at the time of the ceremonies.

This is counter to what one normally thinks, which is that social events tend to be redistributive. For example, in the highlands of Guatemala, ceremonies are paid for disproportionately by wealthier villagers. Such ceremonies serve to redistribute wealth in society according to a cosmic vision of what promotes justice in the circumstances. (See Blevins, Ramirez, and Wight, "Ethics in the Mayan Marketplace," in Mark D. White, ed., Accepting the Invisible Hand: Market-Based Approaches to Social-Economic Problems (Palgrave Macmillan, 2010), pp. 87-110.)

These findings also appear to contradict Confucian beliefs about the duty of a leader to provide for those in a lower hierarchy. It may be that these data can be explained by arguing that poor people have to try harder to make an impression and gain status. Hence, they give larger gifts.

Adam Smith observed that it is not simply the rich who are interested in status. Writing in The Wealth of Nations, he noted:

"By necessaries I understand, not only the commodities which are indispensably necessary for the support of life, but whatever the custom of the country renders it indecent for creditable people, even of the lowest order, to be without. A linen shirt, for example, is, strictly speaking, not a necessary of life…. But in the present times, through the greater part of Europe, a creditable day-labourer would be ashamed to appear in publick without a linen shirt, the want of which would be supposed to denote that disgraceful degree of poverty, which, it is presumed, nobody can well fall into without extreme bad conduct. (566)"

What is not in the paper is a broader analysis that would examine whether social affiliations provide important paybacks to the wider group over many decades. That is, social events during hard times may injure a baby in utero, but being part of the social group may confer advantages on other siblings in terms of jobs and marriages.

This was a highly stimulating paper and a remarkable attempt to understand the link between status spending and negative health indicators in poor communities.


Health Care and the Elderly

Jonathan B. Wight

America's health care crisis is very much a crisis of how we treat elderly people. (The chart that accompanied this post showed health care spending by age.)

Note that the surge in American spending starts before Medicare kicks in.

Americans do "heroic" spending to add an extra few months to a life without making the quality of that life better, and often making it worse.

This has been my experience with various family members, who endured painful procedures and expensive hospitalizations. Without these they might have passed to the new world a few months sooner, but their enjoyment of life in this world might have been much better--at home, hooked to a morphine pump.

How do we make the transition to death with dignity?

One option is to pay people! A hospice worker for the insurance company could offer this deal: "If we do all the fancy modern interventions, your last four months of life will cost $400,000. On the other hand, we could split that with you. We'll give you $200,000 to simply walk away (with marijuana brownies to cover the pain). We'll save money and your heirs will, too."

It sounds crude and crass to bribe someone to die earlier. But it could instead be viewed as a reward for dying with dignity.

If it were me I would love to die thinking I could donate a large chunk of cash to worthwhile people and causes. What a way to go!


"Property in Human Biomaterials—Separating Persons and Things?"

Mark D. White

Muireann Quigley (Centre for Social Ethics & Policy, School of Law, University of Manchester) has a fascinating paper in the latest issue of Oxford Journal of Legal Studies (32/4, Winter 2012) titled "Property in Human Biomaterials—Separating Persons and Things?":

The traditional ‘no property’ approach of the law to human biomaterials has long been punctured by exceptions. Developments in the jurisprudence of property in human tissue in English law and beyond demonstrate that a variety of tissues are capable of being subject to proprietary considerations. Further, among commentators, there are few who would deny, given biotechnological advances, that such materials can be considered thus. Yet, where commentators do admit human biomaterials into the realm of property, it is often done with an emphasis on some sort of separation from the person who is the source of those materials. One line of argument suggests that there is a difference between persons and things, which constitutes a morally justifiable distinction when it comes to property. This article examines whether the idea of separability can do the work of demarcating those objects that ought to be considered property from those that ought not to be. It argues that, despite the entailment of a separability criterion inherent in both the statutory and common law positions, and the support given to this by some commentators, it is philosophically problematic as the basis for delineating property in human tissue and other biomaterials. 


Special issue of Health Economics, Policy and Law on end-of-life care

Mark D. White

The latest issue of Health Economics, Policy and Law (7/4, October 2012) is a special issue on the topic of end-of-life care, stemming from a workshop held by the London School of Economics/Columbia Health Policy Group in December 2010:

Introduction (Adam Oliver)

Comparing the United States and United Kingdom: contrasts and correspondences (Rudolf Klein)

The conventionally antithetical stereotypes of the United Kingdom and United States health care systems needs to be modified in the case of the elderly. Relative to the rest of the population, the over-65s in the United States are more satisfied with their medical care than their UK counterparts. There is also much common ground: shared worries about the quality of elderly care and similar attitudes towards assisted death. Comparison is further complicated by within country variations: comparative studies should take account of the fact that even seemingly polar models may have pools of similarity.

Evidence and values: paying for end-of-life drugs in the British NHS (Kalipso Chalkidou)

In January 2009, Britain's National Institute for Health and Clinical Excellence (NICE), following a very public debate triggered by its decision, six months earlier, provisionally to rule against the adoption by the National Health Service (NHS) of an expensive drug for advanced renal cancer, introduced a new policy for evaluating pharmaceuticals for patients nearing the end of their lives. NICE's so-called end-of-life (EOL) guidance for its Committees effectively advises them to deviate from the Institute's threshold range and to value the lives of (mostly) dying cancer patients more than the lives of those suffering from other, potentially curable, chronic or acute conditions. This article tells the story of the EOL guidance. Through looking at specific EOL decisions between 2009 and 2011 and the reactions by stakeholders to these decisions and the policy itself, it discusses the triggers for NICE's EOL guidance, the challenges NICE faces in implementing it and the policy's putative implications for the future role of NICE in the NHS, especially in the context of value-based reforms in the pricing and evaluation of pharmaceuticals, currently under consideration.

Valuing end-of-life care in the United States: the case of new cancer drugs (Corinna Sorenson)

New cancer therapies offer the hope of improved diagnosis to patients with life-threatening disease. Over the past 5–10 years, a number of specialty drugs have entered clinical practice to provide better systemic therapy for advanced cancers that respond to few therapeutic alternatives. To date, however, such advances have been only modestly effective in extending life and come with a high price tag, raising questions about their value for money, patient access and implications for health care costs. This article explores some of the key issues present in valuing end-of-life care in the United States in the case of advanced cancer drugs, from the difficult trade-offs between their limited health benefits and high costs to the technical, political and social challenges in assessing their value and applying such evidence to inform policy and practice. A number of initial steps are discussed that could be pursued to improve the value of advanced cancer care.

Setting priorities in and for end-of-life care: challenges in the application of economic evaluation (Charles Normand)

Health technology assessment processes aim to provide evidence on the effectiveness and cost-effectiveness of different elements of health care to assist setting priorities. There is a risk that services that are difficult to evaluate, and for which there is limited evidence on cost-effectiveness, may lose out in the competition for resources to those with better evidence. It is argued here that end-of-life care provides particular challenges for evaluation. Outcomes are difficult to measure, can take place over short time scales, and services can be difficult to characterise as they are tailored to the specific needs of individuals. Tools commonly used to measure health care outcomes do not appear to discriminate well in the end-of-life care context. It is argued that the assumption that units of time of different quality of life can simply be added to assess the overall experience at the end of life may not apply, and that alternative perspectives, such as the Peak and End Rule, might offer useful perspectives.

Delivering better end-of-life care in England: barriers to access for patients with a non-cancer diagnosis (Rachael Addicott)

The End of Life Care Strategy (Department of Health, 2008) radically raised the profile of end-of-life care in England, signalling the need for development in planning and delivery, to ensure that individuals are able to exercise genuine choice in how and where they are cared for and die. Research has indicated that there have been continuing difficulties in access to high-quality and appropriate support at the end of life, particularly for patients with a diagnosis other than cancer. This article uses research findings from three case studies of end-of-life care delivery in England to highlight some of the barriers that continue to exist, and understand these challenges in more depth. Access to high-quality and appropriate end-of-life care has been a challenge for all patients nearing the end of life. However, the findings from this research indicate that there are several interrelated reasons why access to end-of-life care services can be more difficult for patients with a non-cancer diagnosis. These issues relate to differences in disease trajectories and subsequent care planning, which are further entrenched by existing funding arrangements.

US health care: the unwinnable war against death (Daniel Callahan)

For well over 40 years, the United States has struggled to improve end-of-life care. This effort, heavily focused on living wills, hospice and improved doctor–patient communications and palliative care, has been a modest success only. Both doctors and patients are often unwilling to accept the fact that death is on the way – only 25% of Americans have an advance directive. Advances in medical technology have provided more ways of keeping dying patients alive, making the line between living and dying harder to discern. The way physicians are paid promotes the use of technology not for talking with patients. Underlying these practical problems is a culture of American medicine with deep historical roots: that medical progress should be unending and is a moral imperative, that death is the greatest enemy and that cure, not care, is the primary goal. A better balance between care and cure is needed.

Stealing on insensibly: end of life politics in the United States (Lawrence D. Brown)

Because the United States often seems (and seems eager to present itself as) the home of the technological imperative and of determination to brand all challenges to it in end-of-life care as a descent into death panels, the prospects look unpromising for progress in US public policies that would expand the range of choices of medical treatments available to individuals preparing for death. Beneath this obdurate and intermittently hysterical surface, however, the diffusion across US states and communities of living wills, advanced directives, palliative care, hospice services and debates about assisted suicide is gradually strengthening not so much ‘personal autonomy’ as the authority, cultural and formal, of individuals and their loved ones not merely to shape but to lead the inevitably ‘social’ conversations on which decisions about care at the end of life depend. In short, the nation appears to be (in terms taken from John Donne's mediations on death) ‘stealing on insensibly’ – making incremental progress toward the replacement of clinical and other types of dogma with end-of-life options that honor the preferences of the dying.

End-of-life care for patients with dementia in the United States: institutional realities (Michael Gusmano)

Few are satisfied with end-of-life care in the United States. For families and friends of people with dementia, end-of-life care is particularly frustrating. Providing better end-of-life care to people with dementia is urgent because the prevalence of the disease is increasing rapidly. Dementia is currently the seventh leading cause of death in the United States and fifth leading cause of death among people aged 65 years and older. By 2050, there will be around 19 million people with Alzheimer's disease. This article reviews ethical and policy challenges associated with providing end-of-life care for people with dementia in the United States. I explain how disagreements about the meaning of futility lead to poor care for people with dementia. Most people agree that we should not provide care that is futile, but there is little agreement about how futility should be defined. US policies and politics clearly tip the balance in the direction of treatment, even in the face of strong evidence that such care does more harm than good. Although we may never reach a consensus, it is important to address these questions and think about how to develop policies that respect the different values.

Dementia, death and advance directives (Jonathan Wolff)

This article considers the ethics of advance directives, especially in relation to conditions such as dementia. For some choices, such as over whether one's life should end at home or in a hospice, advance directives can be very enlightened and helpful. For others, such as those to end the life of an autonomous subject, against their will, have no moral appeal and would rightly be ignored. In a wide range of intermediate cases, given our typical lack of insight into how changes in our health condition will affect us in other ways, we should be very cautious indeed in promoting the use of advance directives in end-of-life decisions, at least where a reasonable quality of life remains. There may be some reasons for giving priority to the earlier autonomous self over a later, contented but non-autonomous self, but these reasons seem far from compelling.


Symposium on Obamacare in The Journal of Law, Medicine & Ethics

Mark D. White

The latest issue of The Journal of Law, Medicine & Ethics (40/3, Fall 2012) features a symposium titled "The Health Care Reform Law (PPACA): Controversies in Ethics and Policy," based on a conference held at the Medical University of South Carolina in October 2011 and organized as a group of point-counterpoint discussions focusing on "the responsibilities of individuals versus those of society to provide health care, the morality of market-based health care reforms, the effectiveness of consumer-driven health care reforms, and the role of the principle of justice in grounding health care reform" ("Introduction," p. 523).

Introduction (Robert M. Sade)

Physicians Have a Responsibility to Meet the Health Care Needs of Society (Allan S. Brett)

Medical Responsibility (Ronald Hamowy)

Market-Based Reforms in Health Care Are Both Practical and Morally Sound (James Stacey Taylor)

Government Intervention in Health Care Markets Is Practical, Necessary, and Morally Sound (Len M. Nichols)

Expanding Choice through Defined Contributions: Overcoming a Non-Participatory Health Care Economy (Robert E. Moffit)

Cost-Sharing under Consumer-Driven Health Care Will Not Reform U.S. Health Care (John P. Geyman)

Justice and Fairness: A Critical Element in U.S. Health System Reform (Paul T. Menzel)

No Theory of Justice Can Ground Health Care Reform (Griffin Trotter)


Nutritional labeling, nudges, and a "cynical view of human nature"

Mark D. White

New today from associate editor Brian Fung at The Atlantic is a piece on an experimental nutritional labeling system modeled on traffic lights. In use in the United Kingdom (where it was instituted by the British government's "nudge unit"), the revised nutrition labels would have color-coded icons for fat, calories, and other aspects of food products according to whether the levels are considered healthy or unhealthy. Mr. Fung reports the results of a study from Massachusetts General Hospital showing that--unsurprisingly--such labels increase the amount of healthy food consumed and lower the amount of unhealthy food consumed.
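
To make the mechanics concrete, here is a minimal sketch in Python of how such a traffic-light classification might be computed. The nutrients, cutoff values, and function names are my own illustrative assumptions, not the official UK thresholds or anything from Mr. Fung's article or the study; the point is simply that each nutrient level gets compared against a "healthy" and an "unhealthy" cutoff and mapped to a color.

# A minimal sketch of a traffic-light nutrition label.
# The nutrients and cutoffs are illustrative assumptions, not the
# official UK front-of-pack thresholds.

# (green_max, red_min) per 100g of product -- hypothetical values
THRESHOLDS = {
    "fat_g": (3.0, 17.5),
    "sugar_g": (5.0, 22.5),
    "salt_g": (0.3, 1.5),
}

def light(nutrient: str, amount: float) -> str:
    """Return 'green', 'yellow', or 'red' for one nutrient amount."""
    green_max, red_min = THRESHOLDS[nutrient]
    if amount <= green_max:
        return "green"
    if amount >= red_min:
        return "red"
    return "yellow"

def label(per_100g: dict) -> dict:
    """Color-code every nutrient for which we have a threshold."""
    return {n: light(n, v) for n, v in per_100g.items() if n in THRESHOLDS}

# A hypothetical ready meal: high in fat, low in sugar, middling salt
print(label({"fat_g": 21.0, "sugar_g": 4.2, "salt_g": 0.9}))
# {'fat_g': 'red', 'sugar_g': 'green', 'salt_g': 'yellow'}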

I discuss labeling systems such as these in my upcoming book, The Manipulation of Choice: Ethics and Libertarian Paternalism, in which I differentiate between the information provided by such labels--which allows people to make better decisions according to their own interests--and schemes like the traffic-light one, which nudge people toward some foods and away from others based on bureaucrats' judgment of what is healthy and what is not. (I also discussed nutrition labeling in an earlier blog post.) As Mr. Fung acknowledges, "Bickering over what red, yellow, and green actually mean is likely to be as difficult -- if not more so -- than actually putting the system in place." Some of this bickering may be political, of course, but some will be due to disagreements among health experts over what a proper diet consists of--a debate unlikely to be settled any time soon among the experts, much less by government fiat!

But what I found most interesting about Mr. Fung's article was the irony in the subheading:

If soda bans take an implicitly cynical view of human nature, food labels that give consumers the impression of freedom might be their opposite.

I don't know what could reflect a more cynical view of human nature than trumpeting proudly the prospect of "giving consumers the impression of freedom." These two approaches to paternalistic regulation are not opposites--the only difference is that one is clumsy and the other is "clever." This attitude continues as the article begins (emphasis mine):

From New York City's point of view, humans are notoriously bad at making good decisions. That's what makes a ban on large sodas necessary: the idea that Americans can't be trusted with their own health. But maybe there's a middle ground between letting people gorge themselves on junk food and making it illegal. The key to making it all work is creating an environment where consumers still believe they're in control.

No, there's no cynical view of human nature on display there.

Finally, as the article ends, Mr. Fung writes:

New York's faith in humanity must be low indeed if it thinks only the most blatant coercion can get people behaving differently. Whether collectively or alone, people are hopelessly incompetent, is the message Bloomberg's soda ban sends. A more accurate way to put it might be that people are incredibly malleable, open to having their decisions swayed in terrible ways by factors that are out of their hands. The difference is slight, but in the small gap between those two statements lies an opportunity to move people in the right direction without taking away their freedom.

As above, I disagree with Mr. Fung: the difference is not slight; it is nonexistent. In my view, all paternalists have little faith in humanity, as shown by their willingness to substitute their own judgment for that of the people they claim to help, based on an overly simplistic view of decision-making and interests. And if you "move people in the right direction" by manipulation rather than by reasoned persuasion--subverting their deliberative processes rather than engaging them--you are taking away their freedom, little by little.

But as long as they're left with the "impression" of their freedom, as long as they "still believe they're in control," I guess that's OK.


Is health or health care a public concern, a right, or a need?

Mark D. White

One of the topics that fascinates me, but which I never seem to have time to catch up on, is the moral/political status of health and health care. In most cases (other than particularly infectious or contagious diseases), I consider health and health care to be matters of personal choice and responsibility, but I'm eager to hear the arguments on the other side as well.

Two articles in the latest issue of The Journal of Law, Medicine & Ethics (40/2, Summer 2012), part of a symposium on pharmaceutical firms and the right to health, address this issue:

"Health as a Basic Human Need: Would This Be Enough?" by Thana Cristina de Campos

Although the value of health is universally agreed upon, its definition is not. Both the WHO and the UN define health in terms of well-being. They advocate a globally shared responsibility that all of us — states, international organizations, pharmaceutical corporations, civil society, and individuals — bear for the health (that is, the well-being) of the world's population. In this paper I argue that this current well-being conception of health is troublesome. Its problem resides precisely in the fact that the well-being conception of health, as an all-encompassing label, does not properly distinguish between the different realities of health and the different demands of justice, which arise in each case. In addressing responsibilities related to the right to health, we need to work with a more differentiated vocabulary, which can account for these different realities. A crucial distinction to bear in mind, for the purposes of moral deliberation and the crafting of political and legal institutions, is the difference between basic and non-basic health needs. This distinction is crucial because we have presumably more stringent obligations and rights in relation to human needs that are basic, as they justify stronger moral claims, than those grounded on non-basic human needs. It is important to keep this moral distinction in mind because many of the world's problems regarding the right to health relate to basic health needs. By conflating these needs with less essential ones, we risk confusing different types of moral claims and weakening the overall case for establishing duties regarding the right to health. There is, therefore, a practical need to reevaluate the current normative conception of health so that it distinguishes, within the broad scope of well-being, between what is basic and what is not. My aim here is to shed light onto this distinction and to show the need for this differentiation. I do so, first, by providing, on the basis of David Miller's concept of basic needs, an account of basic health needs and, secondly, by mounting a defense of the basic needs approach to the right to health, arguing against James Griffin who opposes the basic needs approach.

"A Right to Health Care" by Pavlos Eleftheriadis

What does it mean to say that there is a right to health care? Health care is part of a cooperative project that organizes finite resources. How are these resources to be distributed? This essay discusses three rival theories. The first two, a utilitarian theory and an interest theory, are both instrumental, in that they collapse rights to good states of affairs. A third theory, offered by Thomas Pogge, locates the question within an institutional legal context and distinguishes between a right to health care that results in claimable duties and other dimensions of health policy that do not. Pogge's argument relies on a list of “basic needs,” which itself, however, relies on some kind of instrumental reasoning. The essay offers a reconstruction of Pogge's argument to bring it in line with a political conception of a right to health care. Health is a matter of equal liberty and equal citizenship, given our common human vulnerability. If we are to live as equal members in a political community, then our institutions need to create processes by which we are protected from the kinds of suffering that would make it impossible for us to live as equal members.

But what I most look forward to reading is What Makes Health Public?: A Critical Evaluation of Moral, Legal, and Political Claims in Public Health by John Coggon, whom I had the pleasure of meeting and listening to at the "Regulating Bodies and Influencing Health" symposium in Rotterdam in June.

John Coggon argues that the important question for analysts in the fields of public health law and ethics is 'what makes health public?' He offers a conceptual and analytic scrutiny of the salient issues raised by this question, outlines the concepts entailed in, or denoted by, the term 'public health' and argues why and how normative analyses in public health are inquiries in political theory. The arguments expose and explain the political claims inherent in key works in public health ethics. Coggon then develops and defends a particular understanding of political liberalism, describing its implications for critical study of public health policies and practices. Covering important works from legal, moral, and political theory, public health, public health law and ethics, and bioethics, this is a foundational text for scholars, practitioners and policy bodies interested in freedoms, rights and responsibilities relating to health.