Sentimentalism

Vernon Smith on Adam Smith at the Social Economics Blog and Forum for Social Economics

Mark D. White

The Social Economics Blog (the blog of the Association for Social Economics, of which Jonathan B. Wight and I are currently president and president-elect, respectively) is featuring an article from the Forum for Social Economics by Nobel laureate Vernon L. Smith on Adam Smith, plus comments from three social economics luminaries (including Jonathan himself). The Forum's publisher, Taylor & Francis, has graciously made the Smith-on-Smith article and comments available free of charge to encourage open and wide discussion.

The abstract to Smith's paper, "Adam Smith: From Propriety and Sentiments to Property and Wealth," follows:

“Why return to Adam Smith?” Because we learn that he had fresh-for-today insights, derived from a modeling perspective that was never part of economic analysis. Smith wrote two classics: The Theory of Moral Sentiments (1759; hereafter Sentiments); and An Inquiry into the Nature and Causes of the Wealth of Nations (1776; hereafter Wealth). In Sentiments it is argued that human sociability in close-knit groups is governed by the “propriety and fitness” of conduct based on sympathy. This non-utilitarian model provides new insights into the results of 2-person experimental “trust” and other games that defied the predictions of traditional game theory in the 1980s and 90s, and offers testable new predictions. Moreover, Smith shows how the civil order of “property” grew naturally out of the rules of propriety. Property together with what I call Smith's Discovery Axiom then enabled his break-through in Wealth that defined the liberal intellectual and practical foundation of two centuries of Western economic growth.


Another Ultimatum Game – with a Twist

Jonathan B. Wight

The current AER (December 2011) has an interesting article by Steffen Andersen, Seda Ertaç, Uri Gneezy, Moshe Hoffman, and John A. List entitled, "Stakes Matter in Ultimatum Games."

The authors set up high-stakes Ultimatum Games in eight villages in northeast India. The stakes varied from 20 to 20,000 rupees. At the time of the study, 20,000 rupees equaled about $410, or roughly 200 days of labor at prevailing wages. That's a lot of cash!

One of the problems with determining whether stakes matter to choice outcomes is that, historically, experiments produce very few low-ball offers. To manufacture more low-ball offers artificially, the authors added this explicit bit of framing to the instructions for proposers:

Notice that if the responder's goal is to earn as much money as possible from the experiment, he/she should accept any offer that gives him/her positive earnings, no matter how low. This is because the alternative is to reject, in which case he/she will not receive any earnings. If the responder is expected to behave in this way and accept any positive offer, a proposer should offer the minimum possible amount to the responder in order to leave the experiment with as much money as possible. That is, if the responder that you are matched with aims to earn as much money as possible, he/she should accept any offer that is greater than zero. Given this, making the offer that gives the lowest possible earnings to the responder will allow you to leave the experiment with as much money possible. (p. 3430)

The authors point out: "This frame informs proposers that the rational decision, if both parties aim to maximize earnings, is to offer the lowest possible amount" (Ibid., emphasis added).
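To make that prediction concrete, here is a minimal sketch of the game's payoff logic (my own illustration, not code from the paper; the 1% low-ball offer and the 20% fairness threshold are purely hypothetical numbers):

```python
# Ultimatum game payoff logic: the proposer splits a stake, the responder
# accepts (split stands) or rejects (both get nothing). Stakes follow the
# article; the responder rules below are illustrative only.

def ultimatum_payoffs(stake, offer, accepted):
    """Return (proposer_payoff, responder_payoff) for one play of the game."""
    if accepted:
        return stake - offer, offer
    return 0, 0  # rejection destroys the whole pie

def money_maximizer(offer, stake):
    """The 'rational' responder described in the instructions: take anything positive."""
    return offer > 0

def fairness_minded(offer, stake, threshold=0.2):
    """A responder who rejects offers below some share of the stake.
    The 20% threshold is purely illustrative."""
    return offer >= threshold * stake

stake = 20_000   # rupees, the largest stake in the study
low_offer = 200  # a hypothetical 1% low-ball offer

print(ultimatum_payoffs(stake, low_offer, money_maximizer(low_offer, stake)))
# (19800, 200): the money-maximizer takes anything positive
print(ultimatum_payoffs(stake, low_offer, fairness_minded(low_offer, stake)))
# (0, 0): the fairness-minded responder pays 200 rupees to punish
```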

Note that two things are going on here. The first is that a key insight of Ultimatum Games is that in many cases people do NOT consider "gaining the most money" to be the most important objective. We know this because respondents routinely choose to punish anonymous others at a cost to themselves. But including these instructions creates a not-so-subtle frame suggesting that the objective "should" be to earn as much money as possible.

The second point is that the authors suggest that people "ought" to make "rational" decisions, that is, to use logic to calculate gains and losses using a consequentialist ethical approach. But people may in fact use their moral sentiments or feelings to make such decisions. By framing both the method and the goal, the authors attempt to produce an anomalous result, and they succeed: their main finding is that the demand curve for justice is negatively sloped—if sacrifice becomes more costly, there will be less sacrifice.
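To see why raising the stakes should, on this logic, produce fewer rejections, here is a back-of-the-envelope sketch of the "demand curve for justice" (again my own illustration, not the authors' model; the fixed 10-rupee "sting" of accepting an insulting offer is an assumed parameter):

```python
# Suppose a responder feels a roughly fixed psychological cost from accepting
# an insulting 1% offer. At low stakes the forgone money is tiny, so punishing
# is nearly free; at high stakes the same 1% offer is real money, so rejection
# becomes expensive and justice is "bought" less often.

UNFAIRNESS_COST = 10  # sting of accepting a 1% offer, in rupee-equivalents (illustrative)

def rejects_low_ball(stake, offer_share=0.01, unfairness_cost=UNFAIRNESS_COST):
    offer = offer_share * stake
    utility_accept = offer - unfairness_cost  # money received minus the sting
    utility_reject = 0                        # punish, walk away with nothing
    return utility_reject > utility_accept

for stake in (20, 200, 2_000, 20_000):
    print(stake, "rejects" if rejects_low_ball(stake) else "accepts")
# 20 rejects      (1% of 20 is 0.2 rupees: punishment costs almost nothing)
# 200 rejects
# 2000 accepts    (20 rupees forgone now outweighs the fixed 10-rupee sting)
# 20000 accepts
```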

This study is important, first, because it demonstrates that price does matter (an important criticism of virtue ethics that we need to consider) and, second, because it shows the powerful effects of framing. This is a cautionary note to all econ teachers who claim they are only doing "science" when they frame the economic question and its method (see the previous post on ethical principles for the classroom).


David Brooks has it right on "The Limits of Empathy"

Mark D. White

In his column in today's New York Times, David Brooks explores "The Limits of Empathy," arguing that empathy may help us feel for other people, but it is not enough to actually spur us to action and help us make tough ethical decisions, and in the end may amount to little more than a self-satisfying crutch:

These days empathy has become a shortcut. It has become a way to experience delicious moral emotions without confronting the weaknesses in our nature that prevent us from actually acting upon them. It has become a way to experience the illusion of moral progress without having to do the nasty work of making moral judgments. In a culture that is inarticulate about moral categories and touchy about giving offense, teaching empathy is a safe way for schools and other institutions to seem virtuous without risking controversy or hurting anybody’s feelings.

Brooks is right when he says people need something more to actually move them to action, some sense of duty or commitment--a code, in his terms:

Think of anybody you admire. They probably have some talent for fellow-feeling, but it is overshadowed by their sense of obligation to some religious, military, social or philosophic code. They would feel a sense of shame or guilt if they didn’t live up to the code. The code tells them when they deserve public admiration or dishonor. The code helps them evaluate other people’s feelings, not just share them. The code tells them that an adulterer or a drug dealer may feel ecstatic, but the proper response is still contempt.

But that still leaves the question: why should we presume someone is moved to action more reliably by a code than by empathy? Brooks' answer is spot on:

The code isn’t just a set of rules. It’s a source of identity. It’s pursued with joy. It arouses the strongest emotions and attachments. Empathy is a sideshow. If you want to make the world a better place, help people debate, understand, reform, revere and enact their codes. Accept that codes conflict.

A person's code is part of his or her identity, and our interest in maintaining our identity as moral persons can prompt us to moral action and guide us in instances of struggle and temptation. I'm not sure if Brooks was implying this, but while adhering to a code certainly does arouse emotions, those emotions should not be the primary motivating factor behind it. (As Kant wrote, we should feel good because we're moral, but we should not be moral simply because it feels good.)

To be fair, I think empathy is enough to motivate some people to moral action, and it is essential for any moral system to work. But Brooks is right to point out that empathy is at risk of becoming a buzzword, a verbal lapel ribbon for those who wish to appear to care for other people without having to back it up with action.


David Brooks on moral individualism: The false dichotomy lives on

Mark D. White

In today's New York Times, David Brooks writes in "If It Feels Right" about a recent study of young adults in America that reveals their incapacity to think in moral terms:

When asked to describe a moral dilemma they had faced, two-thirds of the young people either couldn’t answer the question or described problems that are not moral at all, like whether they could afford to rent a certain apartment or whether they had enough quarters to feed the meter at a parking spot.

“Not many of them have previously given much or any thought to many of the kinds of questions about morality that we asked,” Smith and his co-authors write. When asked about wrong or evil, they could generally agree that rape and murder are wrong. But, aside from these extreme cases, moral thinking didn’t enter the picture, even when considering things like drunken driving, cheating in school or cheating on a partner. “I don’t really deal with right and wrong that often,” is how one interviewee put it.

The default position, which most of them came back to again and again, is that moral choices are just a matter of individual taste. “It’s personal,” the respondents typically said. “It’s up to the individual. Who am I to say?”

This is horrible but hardly surprising--anyone who has taught an introductory ethics class knows that most college students enter the class woefully unprepared to discuss ethical issues in anything but the most uninformed and vague terms. This is not to say, however, that they have no moral sense; Intro to Ethics 101 is hardly required to be a good person, even if it does help one to talk about it. But the inability to discuss one's moral beliefs suggests that those beliefs may not be well considered or well formed, and that remains a serious concern.

Brooks chalks this up to moral individualism:

In most times and in most places, the group was seen to be the essential moral unit. A shared religion defined rules and practices. Cultures structured people’s imaginations and imposed moral disciplines. But now more people are led to assume that the free-floating individual is the essential moral unit. Morality was once revealed, inherited and shared, but now it’s thought of as something that emerges in the privacy of your own heart.

Unfortunately, Brooks is falling into the false dichotomy between individualism and sociality again. (See my earlier posts here and here for more on Brooks and this issue.) Morality doesn't have to come from society in order to focus on society. As Immanuel Kant wrote, the individual can and should realize, independently of external authority (though never completely separate from it), that he or she has duties and obligations to other people. The ideal source of a person's moral code is her own reason (not her "heart"), but the content of that code is nonetheless eminently social.

(As always, for more on the compatibility of individuality and sociality, see Chapter 3 of my book Kantian Ethics and Economics: Autonomy, Dignity, and Character.)


David Brooks on individuality and sociality: A Kantian perspective

Mark D. White

David Brooks has a fascinating article on new research on human nature in today's New York Times (a condensation, of sorts, of his wonderfully written piece in The New Yorker in January--and, apparently, his new book, The Social Animal: The Hidden Sources of Love, Character, and Achievement, which was reviewed recently in The Wall Street Journal). He shares the opinion of many of us here at this blog that most conceptions of human nature and choice in the social sciences are misguided, which inevitably leads to policy failures when people do not act as policymakers expected them to. As Brooks writes in the Times piece, "Many of our public policies are proposed by experts who are comfortable only with correlations that can be measured, appropriated and quantified, and ignore everything else." Exactly.

His preferred remedies for this shortcoming, however, I find more questionable. He goes on to say:

Yet while we are trapped within this amputated view of human nature, a richer and deeper view is coming back into view. It is being brought to us by researchers across an array of diverse fields: neuroscience, psychology, sociology, behavioral economics and so on.

This growing, dispersed body of research reminds us of a few key insights. First, the unconscious parts of the mind are most of the mind, where many of the most impressive feats of thinking take place. Second, emotion is not opposed to reason; our emotions assign value to things and are the basis of reason. Finally, we are not individuals who form relationships. We are social animals, deeply interpenetrated with one another, who emerge out of relationships.

The first insight, the power of the unconscious mind, I believe is unquestionable. The second insight I agree with in spirit, though I would quibble over the precise relationship of emotion and reason (as Jonathan and I have done on this blog in terms of Adam Smith--whom Brooks alludes to, and Jonathan discusses here--and Immanuel Kant). But the third insight I very much disagree with, as I discuss in chapter 3 of my book, Kantian Ethics and Economics: Autonomy, Dignity, and Character, published next month by Stanford University Press (a summary of which I presented at the recent Eastern Economic Association meetings in New York).

In that chapter, I make the case that a person is best regarded as individual in essence, social in orientation. As Christine Korsgaard writes in the first line of her book Self-Constitution: Agency, Identity, and Integrity, "Human beings are condemned to choice and action." Since each person's faculty of choice--however you choose to model or represent it--is her own, she is essentially individual. This does not mean, as most mainstream economists implicitly assume and most heterodox economists fear, that a person does not, or cannot, take external influences and concerns into account. A person's thought processes, by necessity, are atomistic--they happen inside her head, after all, and no one else's--but the substance of those thoughts is not. And Kantian autonomy implies both: the capacity for independent thought and the responsibility to be social, that is, to take other people's needs and wants into account.

So contrary to Mr. Brooks' argument, we do not emerge out of our relationships, nor are we defined by them. Instead we choose or endorse them in the process of what Korsgaard calls self-constitution, creating the persons we want to be, based on what I call character, composed of judgment and will. Although we have little control over our social world when we are young, upon reaching maturity we are responsible for choosing, managing, and rejecting our social networks, by reflecting on what they imply about who we are and who we want to be.

As I write in my book (pp. 101-102), with respect to a person's social network:

To be sure, social roles, links, and responsibilities also enter into this deliberative self-constituting process, and as with other experiences and choices, the agent is not a passive subject of her social identities. As Korsgaard writes,

you are a human being, a woman or a man, an adherent of a certain religion, a member of an ethnic group, a member of a certain profession, someone’s lover or friend, and so on. And all of these identities give rise to reasons and obligations. Your reasons express your identity, your nature; your obligations spring from what that identity forbids. (Korsgaard, The Sources of Normativity, p. 101)

But before these identities can become a part of an agent’s practical identity, her sense of self (or character) from which she acts, she must take an active role in endorsing these roles by choosing what groups to join, what people to associate with, and what social responsibilities to assume. Even the aspects of your social identity you are born into—being a child of your parents, a member of your community, a citizen of your nation—must be endorsed by you before they become part of you and reasons on which you can act autonomously. However the social identities come about, they “remain contingent in this sense: whether you treat them as a source of reasons and obligations is up to you. If you continue to endorse the reasons the identity presents to you, and observe the obligations it imposes on you, then it’s you” (Korsgaard, Self-Constitution, p. 23). So like preferences, social identities, along with their constituent roles and responsibilities, are subject to the endorsement of an agent’s judgment based on the moral law; as important as those features are to the agent’s life, they are nonetheless secondary to her character.

So I believe Brooks sets up a false dichotomy: the choice is not between being an isolated individual and a social animal. We are essentially individuals, but we necessarily operate in a social world, which in turn affects and influences us, but only to the extent that we allow it to.


Mirror neurons, Adam Smith, and sympathy (at Knowledge Problem)

Mark D. White

Over at Knowledge Problem, Lynne Kiesling talks about mirror neurons, Adam Smith, and her new paper on both, titled "Mirroring and the Sympathetic Process: Some Implications of Mirror Neuron Research for Sympathy and Institutions in Adam Smith":

In The Theory of Moral Sentiments, Adam Smith asserts that humans have an innate interest in the fortunes of other people and desire for sympathy with others. Recent neuroscience research on mirror neurons has now provided evidence consistent with Smith’s assertion, suggesting that humans have an innate capability to understand the mental states of others at a neural level. This capability provides an important foundation for the Smithian sympathetic process, which has three components: sympathy as a synthesis of empathy with reason-based judgment, an external spectatorial perspective on the actions of others (and one’s own actions), and an innate imaginative capacity that enables an observer to imagine herself in the situation of the agent. This sympathetic process, and the neural framework that the mirror system appears to provide for it, predisposes individuals toward coordination of the expression of their emotions and of their actions. In Smith’s model this decentralized coordination leads to the emergence of social order, bolstered and reinforced by the emergence and evolution of informal and formal institutions grounded in the sympathetic process. This paper presents an argument that a sense of interconnectedness and the shared meaning of actions are essential foundations for the Smithian sympathetic process and the resulting decentralized coordination and emergent social order. The mirror neuron system appears to provide a neural framework for those capabilities.


Practical Equilibrium, Reflective Equilibrium, and Moral Choice

Mark D. White

Also in the new issue of Mind (July 2010) is an article by Ben Eggleston titled "Practical Equilibrium: A Way of Deciding What to Think about Morality," which (oddly enough) is again very relevant to the nascent discussion between me and Jonathan here:

Abstract: Practical equilibrium, like reflective equilibrium, is a way of deciding what to think about morality. It shares with reflective equilibrium the general thesis that there is some way in which a moral theory must, in order to be acceptable, answer to one’s moral intuitions, but it differs from reflective equilibrium in its specification of exactly how a moral theory must answer to one’s intuitions. Whereas reflective equilibrium focuses on a theory’s consistency with those intuitions, practical equilibrium also gives weight to a theory’s approval of one’s having those intuitions.

This parallels fairly closely my comment to Jonathan; as I understand it, Smith's impartial spectator is more like (Rawls') reflective equilibrium, in which a person facing a moral dilemma tries to take a view detached from personal circumstances, but nonetheless based on his or her moral sentiments (or, in a sense, intuitions). But Eggleston's practical equilibrium recognizes the need for some outside substantive theory of morality if the person's choice (and, by extension, his or her intuitions) is to be morally justified.


Neurosentimentalism and Moral Agency

Mark D. White

As it happens, the new issue of Mind (July 2010, go figure) has a paper that somewhat ties into Jonathan's post from yesterday (insofar as it focuses on sentimentalism, not his evolutionary account thereof) as well as my work on agency and choice:

Neurosentimentalism and Moral Agency

Philip Gerrans and Jeanette Kennett

Abstract: Metaethics has recently been confronted by evidence from cognitive neuroscience that tacit emotional processes play an essential causal role in moral judgement. Most neuroscientists, and some metaethicists, take this evidence to vindicate a version of metaethical sentimentalism. In this paper we argue that the ‘dual process’ model of cognition that frames the discussion within and without philosophy does not do justice to an important constraint on any theory of deliberation and judgement. Namely, decision-making is the exercise of a capacity for agency. Agency, in turn, requires a capacity to conceive of oneself as temporally extended: to inhabit, in both memory and imagination, an autobiographical past and future. To plan, to commit to plans, and to act in accordance with previous plans requires a diachronic self, able to transcend the present moment. While this fact about agency is central to much of moral philosophy (e.g. in discussions of autonomy and moral responsibility) it is opaque to the dual process framework and those meta-ethical accounts which situate themselves within this model of cognition. We show how this is the case and argue for an empirically adequate account of moral judgement which gives sufficient role to memory and imagination as cognitive prerequisites of agency. We reconsider the empirical evidence, provide an alternative, agentive, interpretation of key findings, and evaluate the consequences for metaethics.

Very interesting!