Normative Ethics

CFP: Naturalisms in Ethics

There’s a call for papers for a conference on ethical naturalism. It addresses the multiplicity of ethical theories that fall under the label ‘naturalism’, hence the conference title, ‘Naturalisms’. The organizers encourage abstracts related in any way to the topic. Conference details are as follows:

  • When: Thursday, July 14th – Friday, July 15th, 2011
  • Where: University of Auckland
  • Abstract Submission Deadline: May 1st, 2011

For more details, consult the Pea Soup post or the conference website. Note: This conference occurs right before the APRA conference.

CFPs: Reasons, Gauthier, SLACRR

Several noteworthy calls for papers (CFPs) have recently been announced. The opportunities are as follows:

  • Reasons of Love: This conference is in Leuven (Belgium) from May 30 – June 1, 2011. The deadline for submission is December 1, 2010. Click here for more details.
    • Some conference questions: Do ‘the reasons of love’ constitute a genuine, distinctive category of reasons? Are different kinds of love related to different kinds of reasons? What are the requirements of love, as opposed to the requirements of duty? Are love’s reasons rational or non-rational?
  • Gauthier Conference: To celebrate the 25th anniversary of Morals by Agreement, a conference will be held May 13-15, 2011. The deadline to submit your abstract is January 15, 2011. You can submit an abstract on anything related to rational choice contractarianism. Click here for more details, including a list of participants.
  • SLACRR: The St. Louis Annual Conference on Reasons and Rationality (SLACRR) is May 22-24, 2011. Your abstract is due December 31, 2010. Jamie Dreier is the keynote speaker. Click here to access the conference website.

Bonevac’s “Reflection Without Equilibrium”

In “Reflection without Equilibrium,” Dan Bonevac argues against reflective equilibrium (RE) as outlined by Rawls. Dan concludes that RE is a procedure that never terminates in a finite amount of time and that, as a result, it must be revised in a pragmatic, intuitionist direction. I will argue that this conclusion rests on a misunderstanding of Rawls’ constructivist model and of how that model solves what’s known as the priority problem: assigning weight to a plurality of competing values.

First, let’s set up one of Dan’s arguments. In A Theory of Justice (p. 45) Rawls contrasts intuitionist and constructivist solutions to the priority problem. Dan takes Rawls’ comments to imply that people can choose to make the moral facts as simple or as complex as they want. If this were true, Dan argues, wanting the priority problem to have a solution would make it have a solution. This would be a strike against the intuitionist, who believes that the plurality of competing values cannot be simplified: there are no higher-order principles that can be ranked and consulted to settle the issue between competing values or moral facts. On this reading, then, we could simply choose to make the moral facts simple, thereby defeating the intuitionist’s complexity thesis. From this line of reasoning the following argument is laid out on page 19:

  1. Choices in the original position determine the principles of justice.
  2. The principles of justice chosen in the original position determine the moral facts.
  3. People in the original position would choose to make the moral facts simple.
  4. So, moral facts are simple.

To show how deficient such reasoning is, Dan runs a parallel argument about a batter wanting to hit a home run: (i) a batter’s choices determine the trajectory of the bat, and (ii) the trajectory of the bat determines whether the ball goes over the wall; (iii) every batter would choose to hit a home run on every pitch; so (iv) every batter would hit a home run on every pitch. Dan analyzes premises (1) and (i) and shows how many factors bear on whether a batter’s choices determine the trajectory of the bat, or whether choices in the original position determine the principles of justice. He weakens this premise to account for these factors (i.e., choices, under ideal conditions, contribute to determining…).

Next, Dan skips premise (2) and argues against premise (3). This is a mistake because (2), while stated explicitly by Rawls, is taken out of context and misused by Dan. (2) is not a premise Rawls uses in an antirealist argument against the independent existence of moral facts. Rawls is not a constructivist in the antirealist sense; rather, he is a constructivist along Kantian lines. He believes certain criteria can be used by rational people in deliberation to reach agreement or disagreement about what is the case. Rawls thought people in the original position could reach agreement in judgments. The principles chosen in the original position, together with their coherence with moral judgments, produce an outcome that reflects our moral sensibility.

Rawls was not arguing that people can fashion the moral facts in any way they see fit. The moral facts are not constructed in the sense of being infinitely pliable, able to be made simple or complex depending on how we want them to turn out. Instead, the moral facts are constructed because simple facts, whose existence neither Rawls nor Kant would deny, are determined to be moral facts by principles that assign them weight as reasons. This assigning of weight solves the priority problem. The moral facts are then used, within RE or the categorical imperative procedure, to support certain conclusions. Moral facts are neither infinitely pliable nor fixed ethical truths; they are simple facts that principles have selected as relevant from the moral point of view. I would reformulate the argument as follows:

  1. Choices in the original position determine the principles of justice.
  2. The principles of justice chosen in the original position determine the moral facts.
  3. People in the original position can agree (or disagree) about the moral facts.
  4. So, moral facts are objective.

Another consequence of Dan’s misconstrual of the form of constructivism Rawls proposes is that his argument against RE ever reaching equilibrium stands on less sure footing. That is, it is wrong to associate the set of stable judgments with ethical truths. The target of reasoning for Rawls is not ethical truth but objectivity in the sense of agreement that mirrors our sense of justice.

Moral Heuristics

In my last post I claimed that intuitions are often used (or should be used) as heuristic devices. As cognitive shortcuts, intuitions function as intellectual seemings that quickly move the agent from perception to judgment. This leap occurs without explicit analysis or sifting through evidence; instead, the agent references a rule of thumb: a generalization about what to do, think, or feel in a certain situation. These generalizations are often highly intuitive and have the attraction of seeming like common sense. The problem is that intuitions generated by, and used like, heuristics often reflect errors and biases.
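To make the shortcut-versus-analysis contrast concrete, here is a minimal sketch in Python. The rule, the case representation, and the function names are all invented for illustration; the example only mirrors the kind of case Sunstein discusses, where a lie that saves a life is plausibly obligatory:

```python
# Toy illustration of a moral rule of thumb misfiring outside its
# normal range. All names and cases are invented for illustration.

def heuristic_verdict(action):
    """Rule of thumb: lying is wrong. No weighing, just pattern matching."""
    return "wrong" if action["kind"] == "lie" else "permissible"

def reflective_verdict(action):
    """Explicit weighing: a lie that saves a life is not wrong."""
    if action["kind"] == "lie" and action.get("saves_life"):
        return "obligatory"
    return heuristic_verdict(action)

everyday = {"kind": "lie", "saves_life": False}
edge = {"kind": "lie", "saves_life": True}

heuristic_verdict(everyday)   # "wrong" -- the heuristic succeeds
heuristic_verdict(edge)       # "wrong" -- the heuristic misfires
reflective_verdict(edge)      # "obligatory"
```

The heuristic is cheap and usually right, which is exactly why it is attractive; the edge case is where its generalization breaks down.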

My research is primarily in the area of reflective equilibrium. This method of moral justification is often charged with being intuition-laden, so the growing literature on experimental philosophy and intuition is of keen interest. In the journal Behavioral and Brain Sciences I came across an article on moral heuristics. It includes a target article by Cass R. Sunstein and extensive peer commentary. Sunstein discusses heuristics and reflective equilibrium. Commentaries related to this topic include Peter Singer’s “Intuitions, Heuristics, and Utilitarianism,” Edward Stein’s “Wide Reflective Equilibrium as an Answer to an Objection to Moral Heuristics,” and Philip Tetlock’s “Gauging the Heuristic Value of Heuristics.” In discussing the link between heuristics and morality, Sunstein states:

Much of everyday morality consists of simple, highly intuitive rules that generally make sense, but that fail in certain cases. It is wrong to lie or steal, but if a lie or a theft would save a human life, lying or stealing is probably obligatory. Not all promises should be kept. It is wrong to try to get out of a longstanding professional commitment at the last minute, but if your child is in the hospital, you may be morally required to do exactly that (p. 531).

I plan on posting on this article in the future, but for now I wanted to bring it to your attention. The article can be found at the following site within Cambridge journals.

Reflective Equilibrium and Machine Ethics

Anderson and Anderson have designed a decision procedure to make a robot act ethically. The project is detailed in a paper found here. In this post, I will argue that Anderson and Anderson’s misappropriation of Rawls’s reflective equilibrium brings out an important objection to their project.

First, I will provide a rough outline of their project. For Anderson and Anderson, the best way to implement ethics in a machine involves ethical principles. In the quest for a principle that could guide a robot to act ethically in cases where prima facie duties conflict, the system is presented with test cases, which serve as training data. The cases are ethical dilemmas about which ethicists agree on the right course of action. From these cases the machine generates a principle thought to cover the rest of the possible cases (in a domain comprising three duties and two choices). If the machine provides the correct answer for the remaining cases, measured against agreement among ethicists, then the decision principle is complete and consistent. After stating the principle the machine discovered, Anderson and Anderson concluded:

This philosophically interesting result lends credence to Rawls’ “reflective equilibrium” approach—the system has, through abstracting and refining a principle from intuitions about particular cases, discovered a plausible principle that tells us which action is correct when specific duties pull in different directions in a particular type of ethical dilemma (p. 2).

Have Anderson and Anderson really discovered a result that “lends credence” to reflective equilibrium, or have they, by quasi-appropriating reflective equilibrium, attracted attention to a flaw in their approach? 

One of the objections to reflective equilibrium is that it is a rigged procedure (see Brandt, 1979). That is, lending initial credibility to intuitions about specific cases is thought to taint the outcome; the credence placed in the input data, the considered judgments about specific cases, determines the outcome of the procedure. This problem finds acute expression in Anderson and Anderson’s absolute confidence in the agreement of ethicists about particular dilemmas. The cases are fixed starting points and final arbiters of the success of the procedure.

To escape the starting-point dependence objection to reflective equilibrium, philosophers emphasize that judgments about particular cases are vulnerable to revision throughout the procedure (see Scanlon, 2003). Belief revision involves “working back and forth” between the judgments and the principles to reach agreement between them, sometimes revising judgments and sometimes revising principles. Because Anderson and Anderson used fixed intuitions, and sought to accommodate those intuitions by bootstrapping up to a principle, the decision procedure they employed does not even count as an example of narrow reflective equilibrium, in which judgments and principles are altered to achieve the best ‘fit’ between them. Anderson and Anderson have used a one-way arrow of revision, holding the judgments of experts immune to revision. Hence they rightly observe about the output of the procedure, “clearly, this rule is implicit in the judgments of the consensus of ethicists” (p. 2). The result is that the machine has not engaged in rational deliberation about moral situations; it has merely codified what was already implicit in the intuitions about the cases. Thus, the procedure begs the question in favor of its starting points. Does such a method represent sound ethical deliberation? Or could a method other than principle-based reasoning about duties provide a better alternative?
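The contrast between a one-way arrow of revision and genuine back-and-forth can be sketched in a few lines of Python. Everything here is an illustrative assumption (toy verdict labels, confidence scores, and a crude weighted-majority ‘principle’), not Anderson and Anderson’s actual learning system:

```python
# Toy contrast: one-way "bootstrapping" from fixed judgments versus
# two-way revision toward narrow equilibrium. Judgments are modeled as
# (case, verdict, confidence) triples; these representations are
# illustrative assumptions only.

def induce_principle(judgments):
    """Fit a crude 'principle': the confidence-weighted majority verdict."""
    weights = {}
    for _case, verdict, confidence in judgments:
        weights[verdict] = weights.get(verdict, 0.0) + confidence
    return max(weights, key=weights.get)

def one_way(judgments):
    """One-way arrow: judgments are immune to revision, so the output
    merely codifies what the cases already contain."""
    return induce_principle(judgments), judgments

def narrow_equilibrium(judgments, threshold=0.5, max_rounds=20):
    """Work back and forth: revise low-confidence judgments that conflict
    with the current principle, re-fit, and stop at a fixed point."""
    for _ in range(max_rounds):
        principle = induce_principle(judgments)
        revised = [
            (case, principle, confidence)
            if verdict != principle and confidence < threshold
            else (case, verdict, confidence)
            for case, verdict, confidence in judgments
        ]
        if revised == judgments:  # judgments and principle now cohere
            break
        judgments = revised
    return induce_principle(judgments), judgments
```

In `one_way` the arrow of revision points only from cases to principle; in `narrow_equilibrium` a weakly held recalcitrant judgment can itself be revised, which is the minimal sense in which judgments and principles are adjusted to fit one another.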

I explored using utilitarianism as an impetus for designing an ethical robot in a paper found here. I have also considered using utilitarianism to inform the design of better, more ethical, decision-theoretic machines in the draft of a paper found here. Yet I am not convinced either of these approaches is the best way forward for machine ethics. Anderson and Anderson are onto something in wanting to appropriate reflective equilibrium in designing ethical agents. Perhaps the most robust approach to implementing ethics in a machine would involve a faithful appropriation of reflective equilibrium, one that achieved narrow equilibrium and progressed toward wide equilibrium. Wide reflective equilibrium involves consulting background theories thought to shed light on the strengths and weaknesses of the principles and judgments in one’s belief set (see Daniels, 1996). As Rawls has characterized the procedure, “one seeks the conception, or plurality of conceptions, that would survive the rational consideration of all feasible conceptions and all reasonable arguments for them” (Collected Papers, p. 289). This may seem an ambitious approach to machine ethics, but what other methodology could be more justified?