Op-eds

Cost effectiveness is not the problem — government control of health care is.

Mark D. White

In today's "The Upshot" in The New York Times, economist Aaron E. Carroll bemoans the fact that health policymakers, regulators, and spokespeople are reluctant, and sometimes even forbidden, to discuss and make use of information regarding the cost effectiveness of particular treatments. The fear is that they will invoke the spectres of rationing and "death panels," or more generally, medical decisions made on the basis of money alone and not the needs or interests of patients and their loved ones.

I agree with Carroll that cost effectiveness is an essential and necessary topic for discussion; after all, health care has to be paid for by someone, and whoever pays is responsible for making sure that scarce resources are used in the most beneficial way possible. And I think most people understand this principle as well, even if they don't want to acknowledge it at times of tragedy and impending loss.

If people are afraid of calculations of cost effectiveness, it's because they don't want some distant, faceless bureaucracy using cold data to make decisions that affect such an intensely personal aspect of their lives. But the problem isn't the numbers themselves—it's who is using them to make the critical decisions.

If health care decisions were not centralized under the Affordable Care Act (or a similar plan), but instead left in the hands of doctors, patients, and insurance companies unbound by government mandates regarding coverage, these parties could use cost effectiveness numbers in a way that worked for each patient. Each patient, together with his or her doctor and loved ones, could treat that information as one input into a specific decision, balancing it against his or her interests, available resources, and insurance coverage in a way that furthered his or her overall interests.

I wrote about this aspect of private health care in "Markets and Dignity: The Essential Link (With an Application to Health Care)," my chapter in my edited volume Accepting the Invisible Hand: Market-Based Approaches to Social-Economic Problems (Palgrave Macmillan), on pp. 13-14:

The possibility of making private decisions regarding the benefits and costs of various treatment options, whether for minor illness or chronic disease, puts the choice in the patient’s hands (as well as with her doctor and whomever else the patient wants to join the process, such as family or friends). In consultation with her doctor, the patient can assess the value of various treatments, considering the merits compared not only to their costs, and the benefits and costs of alternative options, but also other uses towards which those resources can be devoted, which are all subjective valuations. Perhaps she will choose not to undergo the premium treatment, even if she could afford it, because she wants to leave the money for her children, or take a cruise in the final months of her life; or perhaps she will sell her house to pay for a little more time on life support and with her grandchildren. In a market setting, this choice is hers, along with its benefits, costs, and other consequences.

I am not denying that the patient may not be able to afford the premium treatment because she does not have the resources for it; this is tragic, to be sure, but unavoidable in a world of scarcity. If she is not making these decisions, someone else is; an insurance company or HMO may also refuse her the premium treatment based on costs, and a government-run health plan may do the same. But in these cases, the decision would be made for her, according to someone else’s calculation of whether the treatment was “worthwhile” in terms of costs and benefits for the hospital, insurance company, or government health program, all of whom have scarce resources that must be allocated somehow. In a market context, the decision would be hers, even if it seemed she had no decision at all because she does not possess the resources, either due to bad luck or bad planning, or other choices made through her life.

All is not lost, necessarily; just because the premium treatment is out of reach does not mean there are not lesser, less expensive treatments that will also be of benefit. In a market system, this is the patient’s choice, just as she can choose what size house to buy, what model car to lease, what size TV to own. Every person prioritizes the various interests in her life; some forego the large house to take frequent vacations, some do the opposite. Some may opt for the cheaper treatment option to retain more resources for another goal in life, or to give more to others rather than spend it on premium care for herself. And certainly, past choices will constrain or expand her present options; one who spends her income on lavish toys throughout life should not expect sympathy when she cannot afford top-line treatment at the end of it. But these are her choices, while in any other system, this decision may be made for her, according to calculations based on the imputed value of her life and her well-being compared to other persons. Not every person can afford to have the premium treatment, but this fact is due to scarcity of resources, not the way in which they are allocated or distributed, and it will be true under a state-controlled system as well as a market system. A state system focused on efficiency cannot allow everyone to have the premium treatment either, and the choice of who (if anybody) undergoes it will be truly arbitrary, with no role for choice on the part of the patient or her family. Choices that so closely affect a person’s life should be made by that person alone (or other persons to whom she delegates—or sells—that authority); they should not be made by another party that either presumes to know her “true interests” or serves the collective weal in the name of efficiency.


More confusion about individualism in The New York Times

Mark D. White

In this morning's installment of The Stone in The New York Times, anthropologist John Edward Terrell inveighs against the individualist strain in modern politics, especially among "Republicans, especially libertarians and Tea Party members on the ideological fringe." But, as regular New York Times columnist David Brooks often does, Professor Terrell conflates individualism with self-interest, ultimately attacking a straw man.

Most of Terrell's piece is uncontroversial. He surveys ancient philosophers who emphasized the social nature of persons and the modern science that supports them. (He finds this ironic, implying that some would disagree; who, exactly, remains to be seen.) He discusses religious traditions that emphasize community and responsibility, and contrasts these with Enlightenment thinkers who emphasized the individual (each in his own nuanced way).

Near the end of the piece, though, he stakes a bold claim: "the sanctification of the rights of individuals and their liberties today by libertarians and Tea Party conservatives is contrary to our evolved human nature as social animals." This is a false dichotomy, for there is no contrast at all. Rights and liberties are necessary (if not sufficient) for a functioning civil society. Rights and liberties enable individuals to pursue their own interests broadly defined, which may and often do include their own well-being, the well-being of others, and ideals such as justice and equality. Libertarians and Tea Party conservatives may place more emphasis on rights and liberties because they see support for them declining, but this does not imply that rights and liberties are their only concern, or that they take them to be the sole metric of human progress and well-being.

Terrell also writes, "the thought that it is both rational and natural for each of us to care only for ourselves, our own preservation, and our own achievements is a treacherous fabrication." I agree, it is a fabrication, but in the sense of a straw man fabricated by Terrell himself, not any prominent conservative or libertarian thinker.

I'll end where Terrell began: politics. He writes that part of the divide between left and right in the US is over the "role of the individual," with the left "more likely to embrace the communal nature of individual lives" while the right (and libertarians) favor rapacious self-interest. (I paraphrased a bit there.)

Let me offer an alternative, although it doesn't strike such a stark tone. Both left and right appreciate and value the social nature of individuals and their responsibilities to one another. Where they differ is in the role of the state in executing those responsibilities. The left believes the state should take care of the needy, through social programs and redistribution, while the right (and libertarians) believe individuals, acting alone or through voluntary organizations, should help each other. (And they do, as numerous studies have shown.) In other words, those whom Terrell accuses of worshipping at the altar of self-interest are actually expressing their responsibility toward other individuals as an exercise of the rights and liberties they value so highly.

In short, rights and liberties are not always used to further self-interest, and the institutions of government often are. Individualism is not self-interest—on the contrary, the most noble and admirable acts of charity are those that result from the free actions of individuals acting on their sense of social responsibility.

There is no contrast here—let's not fabricate one.


David Brooks on deference for incompetent authority in the wake of Ebola fear

Mark D. White

David Brooks' New York Times column this morning, titled "The Quality of Fear," makes a number of claims regarding the source of the panic surrounding the Ebola virus. As usual, he makes useful and insightful points, but he falls a bit flat when he tries to tie this episode into his persistent theme of deference to authority, especially when this episode—as he describes it—reinforces the very skepticism he laments.

His opening passage about Ebola illustrates this dilemma:

In the first place, we’re living in a segmented society. Over the past few decades we’ve seen a pervasive increase in the gaps between different social classes. People are much less likely to marry across social class, or to join a club and befriend people across social class.

That means there are many more people who feel completely alienated from the leadership class of this country, whether it’s the political, cultural or scientific leadership. They don’t know people in authority. They perceive a vast status gap between themselves and people in authority. They may harbor feelings of intellectual inferiority toward people in authority. It becomes easy to wave away the whole lot of them, and that distrust isolates them further. “What loneliness is more lonely than distrust,” George Eliot writes in “Middlemarch.”

So you get the rise of the anti-vaccine parents, who simply distrust the cloud of experts telling them that vaccines are safe for their children. You get the rise of the anti-science folks, who distrust the realm of far-off studies and prefer anecdotes from friends to data about populations. You get more and more people who simply do not believe what the establishment is telling them about the Ebola virus, especially since the establishment doesn’t seem particularly competent anyway.

His point about isolation within social classes is a familiar one (although somewhat redundant, given what social class means), but more troubling is his transition to leadership and authority. Maybe I'm too young, but at what point in our nation's history have people known or felt "one with" those in authority? Aside from the elites in government, business, and the media, I doubt many Americans have ever considered an elected leader or appointed bureaucrat to be "one of us." After all, it is very difficult for people who have no power to connect with people who have power.

(When he writes of the changing perception of authority, perhaps Mr. Brooks is thinking of the increase in distrust in government following Watergate, but this is a separate issue from feeling connected with authority. I would also add that, given what we now know about how government operated before Nixon, we would have been wise to be more distrustful back then as well. Trust based on ignorance is hardly a virtue.)

I would have preferred Mr. Brooks to end the piece with his last sentence above: "You get more and more people who simply do not believe what the establishment is telling them about the Ebola virus, especially since the establishment doesn’t seem particularly competent anyway." In my opinion, that's the core issue: incompetence. I'm sure the American people would love to be able to trust their elected leaders to have a handle on crises and a plan to deal with them—and to tell us when a crisis is not in fact a crisis. But we have seen little such competence from government leaders in a long time. Of course, the people behind the scenes, the (mostly) apolitical researchers and scientists and analysts who toil in anonymity for presidents and Congress, are not incompetent. But when their message is filtered through political interests (especially so nakedly and shamelessly) before it gets to the people, it becomes suspect and unreliable. As a result, many people turn to television and the internet to listen to speakers who seem to talk directly to them, with no apparent agenda, even if what they say is hyperbole or simply utter nonsense.

(Brooks touches on the role of the media later in his piece, stressing how they intensify news and cause disproportionate panic. This is true, of course—but this would not have such an impact if people could rely on the true authorities to give them the information they need without having to doubt their motivations almost by reflex.)

Mr. Brooks makes his best point near the end of the article, but again I read it as giving more reason to be skeptical of authority, not less:

The Ebola crisis has aroused its own flavor of fear. It’s not the heart-pounding fear you might feel if you were running away from a bear or some distinct threat. It’s a sour, existential fear. It’s a fear you feel when the whole environment seems hostile, when the things that are supposed to keep you safe, like national borders and national authorities, seem porous and ineffective, when some menace is hard to understand.

In these circumstances, skepticism about authority turns into corrosive cynicism. People seek to build walls, to pull in the circle of trust. They become afraid. Fear, of course, breeds fear. Fear is a fog that alters perception and clouds thought. Fear is, in the novelist Yann Martel’s words, “a wordless darkness.”

Of course people are frightened, and Mr. Brooks is correct to point out that it is an amorphous, "existential" fear. We often make a distinction between risk and uncertainty, in which risk deals with known probabilities (such as the roll of a fair die) while uncertainty deals with unknown probabilities (such as keeping your job). But our current fears reflect another level of uncertainty altogether: not only uncertainty about what is likely to happen, but what can possibly happen at all.

Just think of the things people worry about these days (reasonably or not). Ebola. ISIS. Climate change. Economic inequality. Human trafficking. Civil war. Terrorism. Not an exhaustive list, and obviously skewed by my perspective, but I hope it gets the idea across: these are not risks that can be insured against or "mere" uncertainties that can be planned for. These are perceived threats, many of which could not have been imagined before they occurred, which have unknown and potentially catastrophic consequences and no clear solution. As a result, they all speak to the fragility at the core of human existence—they merit a certain level of fear that is not easily assuaged by political statements from authorities who do not seem to appreciate their gravity or the trepidation they reasonably cause.

As Mr. Brooks wrote, "It’s a fear you feel when the whole environment seems hostile, when the things that are supposed to keep you safe, like national borders and national authorities, seem porous and ineffective, when some menace is hard to understand." In such conditions, I think skepticism about authority is entirely justified, and it will not abate until authority shows the people it deserves to be trusted. When Mr. Brooks writes that Ebola "exploits the weakness in the fabric of our culture," I think he is spreading the blame too widely. When authority tries to respond to such existential threats but cannot do so outside an explicitly political lens, the message, as valuable as it might be, becomes tainted, and people turn elsewhere for information (and misinformation). But can we blame them?

I fear I will never understand David Brooks' blind appeals to authority and his unshakeable trust in people with power to use that power responsibly. Then again, I was raised to be distrustful of authority (an attitude he would likely attribute to my class upbringing). I have not yet had reason to change my mind, though, and the incompetence he himself identifies in this recent episode is hardly going to give me one.


Mehmet Cangul on an upside to a reduction in employment

Mark D. White

Much has been written recently regarding Obamacare's predicted effect on employment and, even more recently, on the CBO's report on the effect of increasing the minimum wage on the same—see, for instance, Ross Douthat's latest column, "When Work Disappears."

As it happens, I was fortunate enough to see an advance copy of Mehmet Cangul's upcoming book Toward a Future Beyond Employment, in which he argues that there can be an upside to a gradual reduction in employment, but that society needs to re-evaluate its ideas about work, consumption, and leisure in order for that to happen. (I have an older piece at Psychology Today along the same lines, so I was drawn to Mehmet's arguments.)

I asked Mehmet if he would write a short piece for Economics and Ethics addressing the recent Obamacare controversy, and he graciously agreed. Below is what he wrote:

-----

The recent Congressional Budget Office report, revealing that Obamacare would cause more than 2 million job losses, has caused quite a stir. Republicans have been quick to point out that they were right all along about Obamacare’s costs to jobs and business. But the White House defended the result, arguing that much of the job loss will come from people choosing not to work and instead focus on their “dreams.” Their reasoning is that healthcare subsidies for the lower rungs of the income scale would enable workers to “escape” jobs that they would otherwise stay in only to keep their healthcare coverage. Some on the right have been quick to ridicule this argument about expanded choice, framing it as a last-ditch political effort to make the best of an embarrassing revelation.

However, we should step back from politics and ask an intuitive question: does it make sense that people would continue to work at jobs they would rather quit just so they can have affordable healthcare? Such a lock-in is in fact a severe distortion, one that prevents the full realization of what the American economy has already inherently achieved: more choice.

This is one of the core ideas of the book I wrote, Toward a Future Beyond Employment, which will be published by Palgrave this April. My main argument is that the Western economies that have been able to incorporate their technological progress structurally into their economic production should be able to afford more free time for their workers. Due to certain economic inefficiencies and cultural biases, however, the system is not able to fully internalize this opportunity. If Obamacare will give workers more choice, and ultimately more time, this should be welcomed, not attacked on the basis of a dogmatic clinging to political correctness about job loss.

Some have argued that a declining work force would pose problems for economic production as jobs would increasingly be harder to fill. However, the trend of technology and automation points otherwise. The more sophisticated and nuanced automation becomes, the faster we will converge toward a paradigm where the demand for human labor will become either irrelevant or severely reduced (in terms of both laborers and hours) even in areas where we would never imagine robots could toil on our behalf. Just as the technological shift of manufacturing eliminated jobs in physical production, a parallel structural shift is taking place in non-tangible jobs such as administration. Increasingly more sophisticated software technology is rendering mental labor less relevant as well.

Does this mean more people will be idle without a purpose? This is a caricature. In truth, it only means that society will have to translate the time savings from this labor elimination toward alternatives that give individuals more choice and creative satisfaction while certain industries and their potential for traditional job generation face a natural decline. The economy is no longer one of industrial and material production, but instead operates on the basis of the production of ideas and concepts. More time away from declining traditional work structures should naturally enable more people to contribute to the production of ideas on an individualized basis.

In my book I argue that this is the next stage of economic advancement that Western economies face, and will result in higher welfare based on people having more time to use as they wish. Public policy that accommodates this evolution by expanding choice should therefore be encouraged. While Obamacare will continue to be debated on multiple grounds, its impact on jobs has to be considered more thoughtfully beyond headline numbers and short-term political gain.


Do we need different types of tenure? On Adam Grant in The New York Times

Mark D. White

In an op-ed in today's The New York Times, Adam Grant, bestselling author of Give and Take: A Revolutionary Approach to Success (and fellow blogger at Psychology Today), examines the current tenure system in American universities and the skewed incentives it provides for continued work after tenure:

It's no secret that tenured professors cause problems in universities. Some choose to rest on their laurels, allowing their productivity to dwindle. Others develop tunnel vision about research, inflicting misery on students who suffer through their classes.

...

Instead of abolishing tenure, what if we restructured it? The heart of the problem is that we’ve combined two separate skill sets into a single job. We ask researchers to teach, and teachers to do research, even though these two capabilities have surprisingly little to do with each other.

Later in the piece he recommends three different kinds of tenure: research-only, teaching-only, and research-and-teaching, each tailored to a professor's talents and drives.

Some universities currently have similar positions: some have research professors, for instance, and most have some version of a lecturer. As far as I'm aware, however, the lecturer position, while it may carry some form of tenure, is rarely considered equivalent to professor positions, which devalues devoting one's time primarily to teaching. So Grant's proposal would certainly be an improvement over the status quo in this regard.

Having three types of tenure would allow for more delineation of job responsibilities, improve the targeting of motivation (as Grant argues), and provide more precise guidance to committees charged with granting reappointment, tenure, and promotion. (As chair of my department I serve on such a committee at my college.)

Ideally, however, a scheme like this would not be necessary. Since evaluation occurs at the level of departmental and college-wide tenure-and-promotion committees, they can lead in reforming this process (with changes in motivation flowing down from there). They should allow for faculty to have different orientations regarding teaching, research, and service (the often-forgotten aspect of a professor's job). They should be willing to assess each faculty member according to his or her particular mix, as long as each remains "productive" in whatever way he or she chooses to further the mission of the college or university. Fantastic instructors should be valued as much as prolific and acclaimed researchers, as well as active campus citizens and the faculty members who successfully combine two or even all three of these roles.

As I advocate in my committee, all three roles are essential to a flourishing university, but not every faculty member should be expected to excel in all of them. As economists know, there can be enormous benefits to specialization of labor, and I would like to think these benefits can be realized without creating additional bureaucracy and a proliferation of tenure tracks. While I agree with Grant's concerns, I would prefer to encourage plurality within the existing tenure system rather than making it more complicated. But this relies on those responsible for making personnel decisions adopting this pluralistic mindset—and if they can't (or won't), then multiple tenure tracks may be the next best option.

(By the way, for a humorous look at this topic, see The Onion's recent post here.)


"Is Economics a Science?" Why I Couldn't Care Less

Mark D. White

There’s been a lot of discussion of late regarding economics’ claim to be a science; Harvard economist Raj Chetty recently answered this question in the affirmative in The New York Times in response to mutterings about Robert Shiller and Eugene Fama sharing the 2013 Nobel Prize (with Lars Peter Hansen) despite having different views on the efficiency of financial markets. Several months ago, Phil Mirowski (Notre Dame) made headlines criticizing neoclassical economics and its claims to be a science while discussing his book, Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown.

All of this makes me wonder: why is it so important to decide whether economics qualifies as a Science anyway? (The pretentious, superfluous capitalization is intentional, by the way, representing the quasi-religious importance placed on this title.) Some thoughts follow below the fold…

Continue reading ""Is Economics a Science?" Why I Couldn't Care Less" »


An Answer to "Questions for Free-Market Moralists"

Mark D. White

I read with great interest Amia Srinivasan's contribution to the New York Times' philosophy column "The Stone" titled "Questions for Free-Market Moralists." After introducing the political philosophies of John Rawls and Robert Nozick, she states that "on the whole, Western societies are still more Rawlsian than Nozickian: they tend to have social welfare systems and redistribute wealth through taxation. But since the 1970s, they have become steadily more Nozickian." Then she presents four statements that she claims describe Nozick's minimal state -- and are representative of what she terms "free-market moralism" -- with which she assumes most people will not be comfortable. (Certainly not readers of The New York Times, at any rate.) But I'm not so sure, especially once we clarify what the four statements are talking about.

The four statements are:

1. Is any exchange between two people in the absence of direct physical compulsion by one party against the other (or the threat thereof) necessarily free?

2. Is any free (not physically compelled) exchange morally permissible?

3. Do people deserve all they are able, and only what they are able, to get through free exchange?

4. Are people under no obligation to do anything they don’t freely want to do or freely commit themselves to doing?

For each statement, Ms. Srinivasan provides an example of what such a world would look like: for instance, after statement #2, she suggests the following. (Note that this example also invokes statement #3 about inherited wealth.)

Suppose that I inherited from my rich parents a large plot of vacant land, and that you are my poor, landless neighbor. I offer you the following deal. You can work the land, doing all the hard labor of tilling, sowing, irrigating and harvesting. I’ll pay you $1 a day for a year. After that, I’ll sell the crop for $50,000. You decide this is your best available option, and so take the deal. Since you consent to this exchange, there’s nothing morally problematic about it.

This example points out my problem with Ms. Srinivasan's argument: she conflates political philosophy with moral philosophy. It is perfectly consistent to maintain, as in statement #2, that free exchanges are morally permissible while also believing that there is something morally problematic with the situation described above -- as long as you don't subscribe to a perfectionist system of morality that fails to distinguish between forbidden and merely "problematic" actions.

But there's more. Statement #2 really isn't speaking to morality -- instead, it's talking about legality that's simply based on a certain morality. How statement #2 should be read (based on my understanding of Nozick, at any rate) is as saying that the state has no moral basis to question free exchanges. Of course, the situation above is distasteful to most, but does this mean it should be forbidden by law? This is a different issue than the one Ms. Srinivasan addresses in her example -- and I suspect many would answer "no, it shouldn't be illegal" even if they regard the landowner's behavior as despicable. This doesn't imply a moral free-for-all, but simply a state that stops short of legislating all moral (or immoral) behavior.

Consider also Ms. Srinivasan's example for statement #4 regarding forced obligation:

Suppose I’m walking to the library and see a man drowning in the river. I decide that the pleasure I would get from saving his life wouldn’t exceed the cost of getting wet and the delay. So I walk on by. Since I made no contract with the man, I am under no obligation to save him.

The problem of duties of beneficence is an old and well-worn one in moral philosophy: while most would say we do have a general obligation to help those in need when it would come at little cost to ourselves, not as many would be willing to make that a strict requirement, much less a legal one (though some jurisdictions have). Ms. Srinivasan seems to draw an extreme and false dichotomy between coerced beneficence and rapacious self-interest -- I would like to think that no matter what kind of state we live in, people would still extend a hand to those in need when they can. (Furthermore, I see no reason to believe this would be any more likely to occur in a Rawlsian system where the state, not the individual, is the party understood to do most of the helping.)

As I understand him, Nozick was describing a state that enables people to make choices when they don't wrongfully harm others, and the market was but one framework in which they could do that. (For that reason, I disagree with the term "free-market moralist," but that's of little concern.) He did not, as Ms. Srinivasan writes, maintain that "the market can take care of morality for us," nor did Rawls hold that morality was the sole responsibility of the state. Fundamentally, Rawls and Nozick differed on the degree to which the state should exercise individuals' collective responsibility to each other on their behalf. Neither Rawls nor Nozick denies a role for private morality outside of the state. But Nozick and the "free-market moralists" believe that individuals, as parts of families and communities, bear the bulk of the responsibility to take care of one another, a responsibility borne voluntarily and, yes, imperfectly (unlike how perfectly the state conducts it, of course).

Ms. Srinivasan also holds Nozick's system to an incredibly high standard, arguing that to concede any weakness in any of the four statements "is to concede that the entire Nozickian edifice is structurally unsound. The proponent of free market morality has lost his foundations." But she neglects to mention the problems with Rawls' system, especially the very particular psychological assumptions that ground the "results" of the veil-of-ignorance exercise -- a brilliant metaphor that also appears in the work of other philosophers, who derive from it very different terms of the social contract.

Ms. Srinivasan states clearly that she believes that Western societies should be tilting back towards Rawls (I would say "further" rather than "back," but that's a difference of interpretation) and away from Nozick. Fair enough -- we disagree on that. But she makes Nozick's system an all-or-nothing proposition while ignoring problems with Rawls, and further misinterprets Nozick's work as describing the whole of morality rather than the operation of the state alone. In the end, her article shows a troubling lack of faith in people to care for each other outside the confines of the state -- and an overly optimistic belief in the power of the state to do the same.


David Brooks on libertarian paternalism and "nudge"

Mark D. White

In today's New York Times, David Brooks comments on libertarian paternalism in "The Nudge Debate." There is not a lot in his article that is surprising or unreasonable, but it does suffer from some vagueness and misunderstandings. For instance, Mr. Brooks conflates interventions of a paternalistic nature (such as nudging people into retirement plans) with those of a nonpaternalistic nature (such as nudging people into registering for organ donation). While the mechanisms in both cases are similar—and raise the same issues of unconscious manipulation and subversion of rational decision-making processes—the purposes and motivations are very different: only in the former do policymakers substitute their own conception of people's interests for those of the decision-makers themselves.

Of more concern is Mr. Brooks' contention that libertarian paternalism does not involve value substitution. He writes,

Do we want government stepping in to protect us from our own mistakes? Many people argue no. This kind of soft paternalism will inevitably slide into a hard paternalism, with government elites manipulating us into doing the sorts of things they want us to do.

As I explain in The Manipulation of Choice, there is no way for the government to know what we value well enough to help us make decisions in our own interests. Because they lack this information, policymakers necessarily impose their idea of people's interests on them when they design nudges. Policymakers think that it's in our interests to save more; policymakers think that it's in our interests to drink less soda. These are not unreasonable assumptions, of course, but they are assumptions nonetheless, and it is pure hubris on the part of policymakers to presume that they bear any necessary relationship to people's actual interests.

Because Mr. Brooks apparently doesn't recognize this, he concedes the "theoretical" point but dismisses any real-world concerns:

I’d say the anti-paternalists win the debate in theory but the libertarian paternalists win it empirically. In theory, it is possible that gentle nudges will turn into intrusive diktats and the nanny state will drain individual responsibility.

But, in practice, it is hard to feel that my decision-making powers have been weakened because when I got my driver’s license enrolling in organ donation was the default option. It’s hard to feel that a cafeteria is insulting my liberty if it puts the healthy fruit in a prominent place and the unhealthy junk food in some faraway corner. It’s hard to feel manipulated if I sign up for a program in which I can make commitments today that automatically increase my charitable giving next year. 

This last paragraph is illuminating, because it conflates three different types of nudges. The first, organ donation, is a social issue; such a nudge is not paternalistic and therefore does not raise any issues of value substitution (though, as I said above, the mechanism still subverts rational processes). The third, self-commitment, is vague; there is nothing manipulative in the concept of commitment, but if such commitment is elicited using a nudge that bypasses a person's rational decision-making faculties, then it's a problem. Only the cafeteria example is by definition a paternalistic intervention; Mr. Brooks may not be insulted by the management of the cafeteria putting their idea of his interests above his own and manipulating his actions in those imposed interests, but that does not justify an action which would insult many others.

Finally, I do not see the issue of libertarian paternalism as one of theory versus empirics—in the case of paternalistic interventions, the theory itself discounts any attempts to measure its success. Mr. Brooks finishes the paragraph above with this sentence: "The concrete benefits of these programs, which are empirically verifiable, should trump abstract theoretical objections." In the case of paternalistic interventions, the "theoretical objections" render any "concrete benefits" questionable and inherently unverifiable. How do you measure the "concrete benefits" of an action meant to improve people's choices according to their own interests if you have no way to ascertain those interests? Such knowledge is necessary in order to "verify" any benefits from such a program. With socially-motivated nudges, like automatic enrollment in organ donation programs, this makes some sense, but with measures explicitly intended to "help" people make better decisions in their own interests, the idea of verifying "concrete benefits" makes no sense whatsoever, given the inherent subjectivity of those interests.

Rather than an issue of theory versus evidence, the nudge debate is a matter of autonomy. Each person's right to further his or her own interests, in a way consistent with all others doing the same, is violated by policymakers who impose their own conception of people's interests on them and then design policy tools that subvert people's rational decision-making processes to steer them towards those imposed interests. Given Mr. Brooks' antipathy towards individualism, I am not surprised that he disregards concerns about autonomy as an "abstract theoretical objection." To some, however, the right to pursue their own interests without the government questioning them is a very "concrete benefit" to living in a free society.

Then again, if policymakers really knew our true interests, they'd know that already, wouldn't they?


David Brooks on same-sex marriage, freedom, and individualism in The New York Times

Mark D. White

In his New York Times column today, David Brooks hails the movement for same-sex marriage as an admirable step away from personal freedom and autonomy:

...last week saw a setback for the forces of maximum freedom. A representative of millions of gays and lesbians went to the Supreme Court and asked the court to help put limits on their own freedom of choice. They asked for marriage.       

Marriage is one of those institutions — along with religion and military service — that restricts freedom. Marriage is about making a commitment that binds you for decades to come. It narrows your options on how you will spend your time, money and attention.

Consistent with his views of individualism (which I've critiqued here and here), Mr. Brooks seems to have an overly simplistic view of freedom and autonomy, such as when he writes that "far from being baffled by this attempt to use state power to restrict individual choice, most Americans seem to be applauding it." Certainly, by marrying, people do give up some basic liberties to each other, but this is a choice freely made—and it is a choice to which gays and lesbians want access just as straights have long enjoyed. In other words, gays and lesbians want the higher-level freedom to restrict their own lower-level freedom (recalling Harry Frankfurt's conception of freedom of the will, in which persons constrain their first-order desires based on their second-order ones). Marriage doesn't represent a diminution of freedom: it is a higher level of it.

He goes on to say, "Americans may no longer have a vocabulary to explain why freedom should sometimes be constricted, but they like it when they see people trying to do it." Perhaps if Mr. Brooks expanded his conception of individual freedom to encompass the choice to constrain yourself, he'd see that Americans understand it extremely well—when that choice is ours. We choose to marry (or form long-lasting relationships), take jobs, enter into contracts, enroll in college, and make all types of commitments to family, friends, and community, all of which restrict our personal freedom. But they are choices that we freely make for any number of reasons, some out of self-interest and others out of a broader morality, and we welcome the opportunity to make these choices—a choice, in the case of marriage, that not all Americans currently enjoy.

The conclusion of Mr. Brooks' column conflates individual choices to make commitments with social pressure to do so:

And, who knows, maybe we’ll see other spheres in life where restraints are placed on maximum personal choice. Maybe there will be sumptuary codes that will make lavish spending and C.E.O. salaries unseemly. Maybe there will be social codes so that people understand that the act of creating a child includes a lifetime commitment to give him or her an organized home. Maybe voters will restrain their appetite for their grandchildren’s money. Maybe more straight people will marry.       

The proponents of same-sex marriage used the language of equality and rights in promoting their cause, because that is the language we have floating around. But, if it wins, same-sex marriage will be a victory for the good life, which is about living in a society that induces you to narrow your choices and embrace your obligations.

My idea of the good life derives from Immanuel Kant's kingdom of ends, a world in which each of us embraces obligations to each other while we pursue our own interests, narrowing our choices as each of us chooses, not as society "induces" us. Mr. Brooks' alternate vision reflects his limited view of individualism as base self-interest, in which moral imperatives must be imposed from outside, not necessarily by government but through societal pressure. The question, of course, remains why individuals should trust the wisdom of the crowd for their moral guidance.