Reflective Equilibrium and Machine Ethics

Anderson and Anderson have designed a decision procedure to make a robot act ethically. This project is detailed in a paper found here. In this post, I will claim that Anderson and Anderson’s misappropriation of Rawls’s reflective equilibrium brings out an important objection to their project.

First, I will provide a rough outline of their project. For Anderson and Anderson, the best way to implement ethics in a machine is through ethical principles. In the quest for a principle that could guide a robot to act ethically in cases where prima facie duties conflict, the system is presented with test cases, which serve as training data. Each case is an ethical dilemma about which ethicists agree on the right course of action. From these cases the machine generalizes a principle meant to cover the rest of the possible cases (in a situation involving three duties and two choices). If the machine provides a correct answer for the remaining cases, measured against the agreement among ethicists, then the decision principle is judged complete and consistent (a toy sketch of this inductive step follows the quotation below). After stating the principle the machine discovered, Anderson and Anderson concluded:

This philosophically interesting result lends credence to Rawls’ “reflective equilibrium” approach—the system has, through abstracting and refining a principle from intuitions about particular cases, discovered a plausible principle that tells us which action is correct when specific duties pull in different directions in a particular type of ethical dilemma (p. 2).
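To make the procedure concrete, here is a minimal sketch, in Python, of the kind of inductive step described above. This is my own toy reconstruction, not Anderson and Anderson's actual learning algorithm: I assume each dilemma offers two actions scored on three prima facie duties, and that the expert consensus labels which action is right; the learner then searches for duty weights that reproduce every labeled judgment.

```python
from itertools import product

# Hypothetical training cases, not Anderson and Anderson's data:
# (duty scores for action A, duty scores for action B, expert verdict).
# Each action is scored on three prima facie duties, from -2 to 2.
CASES = [
    ((2, -1, 0), (0, 1, 0), "A"),
    ((-1, 2, 1), (1, -2, 0), "A"),
    ((0, 0, -2), (0, 0, 1), "B"),
]

def preferred(weights, a, b):
    """Return the action favored by a weighted-sum principle."""
    score = sum(w * (x - y) for w, x, y in zip(weights, a, b))
    return "A" if score > 0 else "B"

def learn_principle(cases, max_weight=3):
    """Search small integer duty weights for a principle that
    reproduces every expert judgment in the training cases."""
    for weights in product(range(1, max_weight + 1), repeat=3):
        if all(preferred(weights, a, b) == verdict
               for a, b, verdict in cases):
            return weights
    return None

print(learn_principle(CASES))  # (2, 1, 1) on this toy data
```

Note the direction of fit: the expert labels stay fixed, and only the principle moves to accommodate them. That asymmetry is exactly what the objection below targets.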

Have Anderson and Anderson really discovered a result that “lends credence” to reflective equilibrium, or have they, by quasi-appropriating reflective equilibrium, drawn attention to a flaw in their approach?

One of the objections to reflective equilibrium is that it is a rigged procedure (see Brandt, 1979). That is, lending initial credibility to intuitions about specific cases is thought to taint the outcome; the credence placed in the input data, the considered judgments about specific cases, determines the result of the procedure. This problem finds acute expression in Anderson and Anderson’s absolute confidence in the agreement of ethicists about particular dilemmas. The cases are both fixed starting points and the final arbiters of the procedure’s success. To escape this starting-point dependence objection, philosophers emphasize that judgments about particular cases remain vulnerable to revision throughout the procedure (see Scanlon, 2003). Belief revision involves “working back and forth” between the judgments and the principles to reach agreement between them, sometimes revising judgments and sometimes revising principles. Because Anderson and Anderson used fixed intuitions, and sought to accommodate those intuitions by bootstrapping up to a principle, the decision procedure they employed does not even count as an instance of narrow reflective equilibrium, in which judgments and principles are altered to achieve the best ‘fit’ between them. Anderson and Anderson have used a one-way arrow of revision, holding the judgments of the experts immune to revision. Hence they rightly observed of the output of the procedure, “clearly, this rule is implicit in the judgments of the consensus of ethicists” (p. 2). The result is that the machine has not engaged in rational deliberation about moral situations; it has merely codified what was already implicit in the intuitions about the cases. The procedure thus begged the question: its outcome was guaranteed by its inputs. Does such a method represent sound ethical deliberation? Or could a method other than principle-based reasoning about duties provide a better alternative?
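Before turning to alternatives, the contrast is worth making concrete. Below is a hedged sketch, continuing the toy setup above, of what one step of two-way revision might look like: if no principle fits every judgment, the procedure flags a judgment as a candidate for revision instead of treating the case labels as immune. None of this comes from Anderson and Anderson’s system; the scoring convention and the revision heuristic are assumptions of mine.

```python
from itertools import product

def fits(weights, case):
    """Check whether a weighted-sum principle agrees with one judgment."""
    a, b, verdict = case
    score = sum(w * (x - y) for w, x, y in zip(weights, a, b))
    return ("A" if score > 0 else "B") == verdict

def learn_principle(cases, max_weight=3):
    """As in the sketch above: search weights fitting every judgment."""
    for weights in product(range(1, max_weight + 1), repeat=3):
        if all(fits(weights, c) for c in cases):
            return weights
    return None

def equilibrium_step(cases, max_weight=3):
    """One 'working back and forth' step: fit a principle if possible;
    otherwise flag a judgment whose removal restores consistency."""
    weights = learn_principle(cases, max_weight)
    if weights is not None:
        return weights, None        # principle revised to fit judgments
    for i, case in enumerate(cases):
        if learn_principle(cases[:i] + cases[i + 1:], max_weight):
            return None, case       # judgment flagged for reconsideration
    return None, None               # deeper revision needed on both sides

# Two jointly inconsistent judgments: the same comparison, with the
# expert verdict favoring each side in turn.
CASES = [
    ((2, -1, 0), (0, 1, 0), "A"),
    ((0, 1, 0), (2, -1, 0), "A"),
]
print(equilibrium_step(CASES))  # (None, <first case>): a judgment is queried
```

Even this crude back-and-forth is only narrow equilibrium; which side should yield in a given conflict is itself a substantive question, and that is where the background theories of wide equilibrium come in.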

I explored using utilitarianism as an impetus for designing an ethical robot in a paper found here. I have also considered using utilitarianism to inform the design of better, more ethical, decision-theoretic machines in the draft of a paper found here. Yet I am not convinced that either of these approaches is the best way forward for machine ethics. Anderson and Anderson are onto something in wanting to appropriate reflective equilibrium in designing ethical agents. Perhaps the most robust approach to implementing ethics in a machine would involve a faithful appropriation of reflective equilibrium: one that achieved narrow equilibrium and progressed toward wide equilibrium. Wide reflective equilibrium involves consulting background theories thought to shed light on the strengths and weaknesses of the principles and judgments in one’s belief set (see Daniels, 1996). As Rawls characterized the procedure, “one seeks the conception, or plurality of conceptions, that would survive the rational consideration of all feasible conceptions and all reasonable arguments for them” (Collected Papers, p. 289). This may seem an ambitious approach to machine ethics, but what other methodology could be more justified?
