
Ethical robots?

Mark D. White

Thanks to Orly Lobel at Prawfsblawg for pointing out this New York Times Magazine piece on new ideas. The one she points out in particular involves "ethical robots" (scroll down a few items in the piece), which will be programmed with basic ethical tenets and will perform more reliably (according to this programming) on the battlefield than humans would.

The idea that robots can be programmed for ethical behavior rests on the false impression that morality boils down to rules, a view that Deirdre McCloskey lampoons so well with her 3x5 index card metaphor. (The fact that the writer of the article mentions Kant's categorical imperative, often mistakenly interpreted as generating easily applicable rules, only reinforces this.) Anyone who has read Isaac Asimov's R. Daneel Olivaw novels knows that even a handful of "simple" rules (such as his Three Laws of Robotics) creates endless conflicts and conundrums that require judgment to resolve - and even Asimov's robots, with their advanced positronic brains, struggled with judgment.

The article does say that ethical robots would work "in limited situations," which suggests that the researchers have some idea of the minefield (pun intended) they're getting into. But my concern is that people will read this piece, appreciate (as I do) what the researchers are trying to do to improve battlefield conditions (though I remain skeptical about the real-world prospects), and come away with the "morality-as-rules" idea of ethics reinforced - the notion that the only reason people fail to follow these "rules" is weakness of will, not that ethical dilemmas are complicated, contentious, and often irresolvable.

Even more curiously, the article claims that the robots are programmed to "feel" guilt, in order "to condemn specific behavior and generate constructive change." Certainly, guilt (like emotions in general) is essential to reinforcing moral behavior in imperfect humans (as well as being an integral part of the human experience), but why would robots need it - are they going to be tempted to resist their programming? One would think the point of developing robots was to guarantee "ethical" rule-based behavior - so where does the guilt come in?

Comments


Even Kant got messed up when he tried to make a simple rule out of the CI: see lying.

That's my point, Jeff - there are no simple rules that come out of the CI, nor did Kant think there were, but merely guidelines for judgment. But programming ethical robots - which may be fine for the limited use for which they're intended - threatens to obscure this already obscure point.

