Isaac Asimov

Which is "harder": social science or physical science?

Mark D. White

Yesterday, Kevin Drum at Mother Jones spoke up for social science following an editorial in Nature arguing against the NSF's proposed defunding of political science research. Here's a bit of the editorial:

Part of the blame must lie with the practice of labelling the social sciences as soft, which too readily translates as meaning woolly or soft-headed. Because they deal with systems that are highly complex, adaptive and not rigorously rule-bound, the social sciences are among the most difficult of disciplines, both methodologically and intellectually. They suffer because their findings do sometimes seem obvious. Yet, equally, the common-sense answer can prove to be false when subjected to scrutiny. There are countless examples of this, from economics to traffic planning. This is one reason that the social sciences probably unnerve some politicians, some of whom are used to making decisions based not on evidence but on intuition, wishful thinking and with an eye on the polls.

...As Washington Post columnist Charles Lane wrote in a recent article that called for the NSF not to fund any social science: “The 'larger' the social or political issue, the more difficult it is to illuminate definitively through the methods of 'hard science'.”

In part, this just restates the fact that political science is difficult. To conclude that hard problems are better solved by not studying them is ludicrous. Should we slash the physics budget if the problems of dark-matter and dark-energy are not solved? Lane's statement falls for the very myth it wants to attack: that political science is ruled, like physics, by precise, unique, universal rules.

And here's some of what Mr. Drum added to it:

The public commonly thinks of disciplines like physics and chemistry as hard because they rely so heavily on difficult mathematics. In fact, that's exactly what makes them easy. It's what Eugene Wigner famously called the "unreasonable effectiveness" of math in the natural sciences: the fact that, for reasons we don't understand, the natural world really does seem to operate according to strict mathematical laws. Those laws may be hard to figure out, but they aren't impossible. ...

Hari Seldon notwithstanding, the social sciences have no such luck. Human communities don't obey simple mathematical laws, though they sometimes come tantalizingly close in certain narrow ways — close enough, anyway, to provide the intermittent reinforcement necessary to keep social scientists thinking that the real answer is just around the next corner. And once in a while it is. But most of the time it's not. It's decades of hard work away. Because, unlike physics, the social sciences are hard.

Bonus points for the Foundation mention!

(I don't have much to add; I made a similar point in this post, comparing the complexity of macroeconomic forecasting models to meteorological weather-forecasting models.)


On Artificial Intelligence and Personhood (with thanks to Isaac Asimov)

Mark D. White

Thanks to Larry Solum's Legal Theory Blog, I became aware of F. Patrick Hubbard's new paper "'Do Androids Dream?': Personhood and Intelligent Artifacts," forthcoming in Temple Law Review, which considers the issue of granting the status of personhood to an artificial intelligence:

This Article proposes a test to be used in answering an important question that has never received detailed jurisprudential analysis: What happens if a human artifact like a large computer system requests that it be treated as a person rather than as property? The Article argues that this entity should be granted a legal right to personhood if it has the following capacities: (1) an ability to interact with its environment and to engage in complex thought and communication; (2) a sense of being a self with a concern for achieving its plan for its life; and (3) the ability to live in a community with other persons based on, at least, mutual self interest. In order to develop and defend this test of personhood, the Article sketches the nature and basis of the liberal theory of personhood, reviews the reasons to grant or deny autonomy to an entity that passes the test, and discusses, in terms of existing and potential technology, the categories of artifacts that might be granted the legal right of self ownership under the test. Because of the speculative nature of the Article's topic, it closes with a discussion of the treatment of intelligent artifacts in science fiction.

Skimming through this fascinating paper, I am especially grateful for the extended treatment (pp. 82-88) of Isaac Asimov and his conception of robotic artificial intelligence from his R. Daneel Olivaw novels (as well as his many short stories on robots), a longtime devotion of mine. (Did reading about the Three Laws of Robotics lead to my embrace of Kant later in life? Who knows...)


Ethical robots?

Mark D. White

Thanks to Orly Lobel at Prawfsblawg for pointing out this New York Times Magazine piece on new ideas. The one she points out in particular involves "ethical robots" (scroll down in the piece a few items), which will be programmed with basic ethical tenets and will perform more reliably (according to this programming) on the battlefield than humans would.

The idea that robots can be programmed for ethical behavior is based on the false impression that morality boils down to rules, a view that Deirdre McCloskey lampoons so well with her 3x5 index card metaphor. (The fact that the writer of the article mentions Kant's categorical imperative, often mistakenly interpreted as generating easily applicable rules, serves to reinforce this.) Anyone who has read Isaac Asimov's R. Daneel Olivaw novels knows that even a handful of "simple" rules (such as his Three Laws of Robotics) creates endless conflicts and conundrums that require judgment to resolve - and even Asimov's robots, with their advanced positronic brains, struggled with judgment.
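To make that concrete, here is a minimal toy sketch of my own - not anything from the article, and certainly not how Asimov's positronic brains work - showing how a literal reading of even the First Law alone can deadlock. The option names and predicates below are invented stand-ins, not a real ethics engine.

    # Toy sketch only: a literal-minded check against the First Law
    # ("A robot may not injure a human being or, through inaction,
    # allow a human being to come to harm"), applied to a case where
    # every available option fails it in some way.

    def first_law_violations(option):
        """Return the ways a single option violates a literal First Law (toy logic)."""
        violations = []
        if option["injures"]:
            violations.append("injures a human (" + ", ".join(option["injures"]) + ")")
        if option["allows_harm_to"]:
            violations.append("allows harm through inaction (" + ", ".join(option["allows_harm_to"]) + ")")
        return violations

    # Two humans in danger; the robot can shield only one of them.
    options = [
        {"name": "shield_person_A", "injures": [], "allows_harm_to": ["person B"]},
        {"name": "shield_person_B", "injures": [], "allows_harm_to": ["person A"]},
        {"name": "do_nothing", "injures": [], "allows_harm_to": ["person A", "person B"]},
    ]

    for option in options:
        violations = first_law_violations(option)
        if violations:
            status = "violates First Law: " + "; ".join(violations)
        else:
            status = "permitted"
        print(option["name"], "->", status)

    # Every option violates the First Law as literally stated, so the rule
    # by itself selects nothing; choosing among the violations is exactly
    # the kind of judgment the rules were supposed to make unnecessary.

The point of the sketch is not that such conflicts can't be patched (Asimov's stories are full of patches), but that each patch is itself a judgment call smuggled back into the "rules."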

The article does say that ethical robots would work "in limited situations," which suggests that the researchers have some idea of the minefield (pun intended) that they're getting into. But my concern is that people will read this piece, appreciate (as I do) what the researchers are trying to do to improve battlefield conditions (though I remain skeptical about the real-world prospects), and come away with the "morality-as-rules" idea of ethics reinforced, along with the notion that the only reason people fail to follow these "rules" is weakness of will, not that ethical dilemmas are complicated, contentious, and often irresolvable.

Even more curiously, the article claims that the robots are programmed to "feel" guilt, in order "to condemn specific behavior and generate constructive change." Certainly, guilt (like emotions in general) is essential to reinforcing moral behavior in imperfect humans (as well as being an integral part of the human experience), but why would robots need it - are they going to be tempted to resist their programming? One would think the point of developing robots was to guarantee "ethical" rule-based behavior - so where does the guilt come in?