Tony Milligan Gives Lecture on Virtue Ethics

On Wednesday 25 September, GCRI hosted an online lecture by Tony Milligan entitled ‘Virtue, Risk and Space Colonization’. Milligan is a Lecturer in the Philosophy Department at the University of Hertfordshire.

Discussions of the ethics of risk and space colonization are often dominated by consequentialist ethics. This includes GCRI’s recent ethics lecture by Nick Beckstead as well as my own writings on the topic (e.g., [1-2]). Milligan’s lecture covers the same risk and space topics, but in terms of virtue ethics. Regardless of one’s views (and I remain a committed consequentialist), this is a welcome contribution to the conversation and an opportunity to challenge and refine our thinking.

To set the stage, Milligan begins with the following hypothetical scenario. I paraphrase:

Imagine an upstanding citizen (we’ll call him Greg) who learns that Earth’s habitable lifetime is finite [3], meaning all life on Earth will eventually end. He concludes that this is an issue of overriding concern. He tries to persuade his family of this, but fails. Over the following weeks, the family notices that certain items are missing: the television, some jewelry, the wine collection. Then comes the big whammy: real estate agents arrive to show the house to prospective buyers. The family puts its collective foot down: “Enough!” Greg reveals that he has been selling assets to donate to space colonization research, in the hope that this will help Earth-life survive beyond Earth’s habitable lifetime.

As Milligan explains, Greg may seem like the perfect consequentialist. He’s clearly acting to improve consequences for the future of humanity. But he has also lost his grasp on the good life, which is the core focus of virtue ethics. For virtue ethics, the primary goal is to cultivate virtues in ourselves and to facilitate them in others. This yields a livable, psychologically available ethics, in contrast with Greg’s consequentialism. Even if we nominally support consequentialism, in our daily lives we don’t act accordingly, especially not in our loving relationships. Or at least this is what Milligan maintains; more on this later.

Virtue ethics suggests a different understanding of risk. The standard consequentialist conception of risk is the possibility of bad consequences, typically evaluated in terms of the consequences’ probabilities and magnitudes. Virtue ethics looks at risk differently [4], asking such questions as, “Are risks expressive of the relevant virtues?” For example, the virtue of justice would focus attention on who is causing the risk and who would be harmed by it. To demonstrate this concept, Milligan provides the example of nuclear power, which poses risks of reactor meltdown and of contamination from nuclear waste. The harms from a meltdown go to (approximately) the same local community that gets the benefits of the electricity, whereas the harms from waste contamination (generally) go to different people, including the many future generations that could be exposed to radiation. Thus the waste contamination risk may be less just than the meltdown risk, even if contamination is the smaller risk in consequentialist probability-times-magnitude terms. (I would add that by this logic, global catastrophic risk may be especially important under both frameworks. Global catastrophes affect people worldwide and into future generations, making them very unjust. And they are the biggest risks in standard probability-times-magnitude terms.)
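To make the probability-times-magnitude comparison concrete, consider some purely hypothetical numbers of my own (not Milligan’s). Suppose a meltdown has a 1-in-1,000 chance of harming 10,000 people in the local community, for an expected harm of 10 people, while waste contamination has a 1-in-10,000 chance of harming 50,000 people spread across distant communities and future generations, for an expected harm of 5 people. In standard consequentialist terms the meltdown is the bigger risk, yet a justice-focused virtue ethicist may still judge the contamination risk worse, because those who would bear its harms receive none of the electricity’s benefits.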

Virtue ethics likewise suggests a different treatment of space colonization. For the consequentialist, space colonization is important because it can lead to very big (good or bad) consequences, given how big space itself is. (One might note that not all forms of consequentialism would treat space colonization in this way, but this is a common treatment.) In contrast, virtue ethics tends to place less emphasis on larger quantities of good. Perhaps there is a duty to preserve life, in which case space colonization would eventually be required. But merely preserving life suggests a much smaller space colony than maximizing the total quantity of life, or of some other measure of the good. Similarly, as Milligan points out, a duty to preserve species suggests an urgent immediate focus not on space colonization but on maintaining biodiversity, given the many species going extinct right now. (Milligan endorses a species neutrality that cuts across the virtue/consequentialism divide. I’m with him on this.)

The lecture sparked a lively discussion. A point I raised is that, upon closer inspection, maybe consequentialism and virtue ethics aren’t so far apart. In the opening story, Greg might seem like a devout consequentialist, but he’s probably not doing a very good job of it. He is making an outcast of himself, which will impede his future efforts to promote good consequences. More generally, those who ignore the virtues are unlikely to succeed as consequentialists. Similarly, the virtuous should probably also mind the consequences of their actions. Milligan cites Julia Driver as someone working at the virtue-consequentialism intersection [5]. It may even be possible to craft specific virtue ethics and consequentialist frameworks such that the two yield equivalent recommendations. That said, most virtue ethics and consequentialist frameworks still have (often important) differences between them.

An interesting point raised by another attendee concerns how virtue ethics responds to circumstance: as circumstances change, should the important virtues also change? For example, in earlier times, an important virtue was physical courage: acting in the face of physical, bodily danger. Physical courage is less relevant in today’s automated, sedentary world, though, as Milligan notes, moral courage remains highly relevant. Milligan suggests that virtues should indeed change with circumstance. An especially meaningful change would be if moral agents were to gain or lose autonomy, in which case the practical wisdom at the heart of virtue ethics could gain or lose importance.

As mentioned above, the lecture hasn’t shifted my own support for consequentialism. But I am grateful for the deeper understanding of virtue ethics and of how it relates both to consequentialism and to the topics of risk and space colonization. This is a dialog well worth maintaining. GCRI is in a good position to help facilitate it, as GCRI does not take positions on these questions but instead seeks to promote open discussion so that all involved can improve and share their thinking.

Here is the full abstract of the talk:

Virtue ethics is one of the key components of contemporary ethical theory. It shifts our focus in the direction of living well and acting in line with admirable traits of character, traits such as practical wisdom. The practically wise agent will be just, courageous and a good decision-maker. They will recognize the ineradicability of our human vulnerability and will also tend to recognize which risks are most salient and are most in need of a precautionary response. However, it has been (repeatedly) suggested that plans for space colonization do not express such practical wisdom (or the associated cluster of admirable traits) but instead express an escapist unwillingness to face up to far more immediate Earthly problems. In response I will (1) concede that this often has been (and continues to be) the case. A good deal of the enthusiast literature on space colonization, from Tsiolkovsky to O’Neill and beyond, has always had a strong utopian and escapist dimension. However, I will also suggest (2) that those who press such a charge of escapism too far may, nonetheless, find it difficult to avoid falling foul of their own charge. That is to say, they are in danger of a flight from genuine, albeit long-range, risks. Finally, I will argue that (3) practical wisdom in the context of space colonization involves neither the ignoring of long-range risks nor their over-prioritization but rather an acceptance that our human predicament, in the face of multiple risks which are simultaneously worthy of attention, is fundamentally (and inconveniently) dilemmatic.

The presentation was hosted online via Skype; no slides were used. Attendees included Luke Haqq, a PhD student in Jurisprudence at UC Berkeley; Jess Riedel, a post-doctoral researcher in quantum information at IBM Research; Carl Shulman, an independent scholar and Research Associate of the Oxford Future of Humanity Institute; Christian Tarsney, a PhD student in philosophy at the University of Maryland; and Seth Baum, Jacob Haqq-Misra, Tim Maher, and Grant Wilson of GCRI.

[1] Baum SD, 2009. Cost-benefit analysis of space exploration: Some ethical considerations. Space Policy 25(2) 75-80.

[2] Baum SD, Wilson G, forthcoming. The ethics of global catastrophic risk from dual-use bioengineering. Ethics in Biology, Engineering and Medicine, DOI: 10.1615/EthicsBiologyEngMed.2013007.

[3] Earth’s finite habitable lifetime has been in the news recently, following the publication of this paper: Rushby AJ, Claire MW, Osborn H, Watson AJ, 2013. Habitable zone lifetimes of exoplanets around main sequence stars. Astrobiology 13(9) 833-849.

[4] For one of the few virtue ethics accounts of risk, see Athanassoulis N, Ross A, 2010. A virtue ethical account of making decisions about risk. Journal of Risk Research 13(2) 217-230.

[5] Driver J, 2001. Uneasy Virtue. Cambridge University Press.

This post was written by Seth Baum, Executive Director of the Global Catastrophic Risk Institute.