Values in Science & Science in Normative Theorising
Last year, Kevin C. Elliott published three new books on ‘values in science’, including A Tapestry of Values: An Introduction to Values in Science and Exploring Inductive Risk.
Given that empirical research is often used by moral, social, and political philosophers in scholarship on questions of justice, we thought it would be interesting to chat to Kevin about his recent work and its implications for moral, social, and political philosophy.
Erin Nash: Hi Kevin, thanks for agreeing to be interviewed by Justice Everywhere. To start us off, perhaps you could tell us about how philosophers of science understand the term ‘values’, and how values influence science.
Kevin Elliott: Thanks, Erin. Briefly, I think values are qualities that are desirable or worthy of pursuit, but it’s important to keep in mind that many different things fall under this description. For example, when theories have qualities like explanatory power, wide scope, or predictive success, we typically consider these things to be values because they tend to indicate that a theory is true or reliable. Some label these ‘epistemic values’. Other qualities of theories, such as their tendency to promote environmental protection, public health, economic growth, or gender equality, are also typically regarded as values (often called ‘non-epistemic values’) insofar as they help us to achieve ethical or social goals. The question that permeates all my recent books is what role, if any, ethical and social values like the promotion of environmental protection or gender equality should play in scientific research.
For example, values can subtly influence the questions that get asked in socially relevant fields like agricultural research. There are many ways of trying to improve agricultural production so as to benefit poor farmers, produce more food, and lessen our impacts on the environment; they involve efforts to develop higher-yielding seeds, more ecologically sensitive farming strategies, or more efficient markets for agricultural goods. Even if researchers are not consciously being influenced by values, decisions to focus on some of these questions or approaches rather than others are value-laden insofar as they serve the interests of some individuals and institutions rather than others.
Values can also affect the assumptions and choices that scientists make when they are analysing or interpreting their results. For example, chapter four of Tapestry discusses how economic predictions about the costs of climate change are influenced by value-laden decisions about how much to “discount” costs that occur in the future relative to costs that are borne at present. Values can also influence how much evidence scientists or policy makers demand before they are willing to draw conclusions – this is the basis for what is often called the ‘argument from inductive risk’. In the seventh chapter of Exploring Inductive Risk, Robin Andreasen and Heather Doty discuss studies designed to identify inequalities in the retention or promotion of women university faculty, and especially women faculty of colour. They note that even when the available data suggest that disparities could potentially be present, one still needs to decide how much evidence is sufficient to conclude that problematic forms of discrimination are genuinely occurring. Choosing what rule to use for inferring that discrimination is occurring depends on value-laden decisions about what mistakes we are most concerned to avoid.
Erin: What is unclear, though, is whether philosophers of science take non-epistemic values to play a role in all aspects of science or only in some. For instance, in Tapestry, on one hand you say things like “…scientific reasoning is thoroughly imbued with value influences” (pp. 166–167). But on the other hand, you use caveats such as “…it is often unrealistic to find a perfectly value-neutral way of communicating scientific information” (p. 133). I’ve found this sort of hedging to be common in the literature. But I think this can be quite confusing! If it is the case that value judgements can be avoided in some circumstances, we are left with at least two further questions: (1) How do we identify those circumstances? (2) Where non-epistemic values are currently playing a role in science, should they be, and should they be playing that role in the way, or to the extent, that they currently do?
Kevin: You make a very perceptive point. I don’t think I have this issue totally sorted out in my own mind. At present, my inclination is to say that non-epistemic values are always at least somewhat relevant to scientific reasoning, but I acknowledge that the extent of their relevance and the best ways of addressing them vary a great deal from case to case. One reason for insisting that values are always relevant is that all scientific research has at least some potential to influence society over the long term. So, given that scientists always run the risk of being incorrect when they draw their conclusions, social values are always somewhat relevant for deciding how much evidence they should be demanding. However, most of the work done in some fields, such as theoretical physics, does not have the sorts of immediate and obvious social impacts that we saw in the research I described earlier about gender discrimination, so in these cases it makes sense to consider social values more indirectly.
Erin: So how can we determine whether certain value influences are appropriate?
Kevin: In my Tapestry book, I suggest three criteria for determining whether value influences are appropriate: transparency, representativeness, and engagement. First, it is important for scientists to be as open as possible about the details of their work and the ways in which values might have influenced it so that others can recognise those influences. Second, when scientists make value judgements, those judgements should be informed by ethical principles and social priorities. Third, it is important to engage key stakeholders in efforts to identify important value judgements and to reflect on how to address them. However, much more needs to be said about the nature of these criteria and how they work together. For example, I don’t think we can draw the simple conclusion that whenever these three criteria are met, value influences are appropriate, or whenever they are not met, value influences are inappropriate. I see them more as rules of thumb that can help us to incorporate values in science more responsibly.
Erin: I like your criteria, but I have a few concerns. For instance, with regards to your second criterion, it doesn’t seem like we have broad common ground on ethical principles and social priorities within our societies. That isn’t necessarily a bad thing, especially if we endorse value pluralism. Indeed, as Liam Kofi Bright has recently explained, a commitment to democratic values, and a concern for the proper place of science and scientists in democratic societies, motivated W.E.B. Du Bois’ early defence of the value-free ideal for science. Along similar lines, Stephen John has argued that because people hold different values, science that is value-laden may fail to contribute to ‘public knowledge’. Do you find these arguments persuasive?
Kevin: I’m very sympathetic to the sorts of concerns that you are highlighting. This is partly why I suggest with Ted Richards in the concluding chapter of Exploring Inductive Risk that in some cases, scientists should try to avoid making controversial value judgements themselves. For example, when there is a great deal of disagreement about how to interpret the available scientific evidence, it might make the most sense for scientists to report the state of the evidence as clearly as possible, and let others decide what conclusions they want to draw. Similarly, sometimes it is possible to report more than one way of interpreting the available evidence so that decision makers can decide which approach fits better with their values. For example, economists studying climate change can report how their analyses of climate impacts differ based on the decision to use different discount rates. However, Ted and I would respond to the work of Bright and John by emphasising that even in these cases, an array of implicit value judgements have probably still played a role in how the available evidence was collected, analysed, and communicated. Thus, despite the limitations of my criteria, I don’t think we can dispense with them. Scientists need to acknowledge value judgements as best they can (transparency), make them as responsibly as possible (representativeness), and invite critical reflection about them from an array of perspectives (engagement).
Erin: What advice can be derived from the values in science literature for political philosophers, social theorists, and policymakers who use empirical research to support their normative arguments?
Kevin: It’s really important for scholars and practitioners who draw on scientific research, and perhaps social-science research in particular, to recognise the potential for this research to be subtly influenced by non-epistemic values. As I noted earlier, the questions scientists ask, the assumptions underlying their interpretation and analyses, the evidence they demand before drawing conclusions, and the ways their results are framed and communicated can all involve value-laden judgements. Thus, when this research is informing important decisions that will have social consequences, it is important to scrutinise potential value influences and recognise how they may have influenced the research and its communication.
Erin: So perhaps this debate is best thought of as being situated at the interface of philosophy of science and moral, social, and political philosophy? If this is the case, how do you think moral, social, and political philosophers might be able to contribute to, and help advance, this debate?
Kevin: I totally agree. We actually just had a discussion about the need to bring together moral, social, and political philosophy with the philosophy of science at a conference session devoted to my Tapestry book. An important theme was the fact that my criteria (transparency, representativeness, and engagement) need a good deal more elaboration, and this is the kind of work that moral, social, and political philosophers are well placed to do. For example, moral philosophers can help us think through the ethical principles that are most appropriate for guiding particular value judgements, such as decisions about what discount rates ought to be used when analysing the economic costs of climate change. Moreover, there will almost always be disagreements about these ethical principles, so we also need political philosophers to provide guidance about how to address these disputes. What forms of engagement should we employ for responding to disagreements about important value judgements? Which stakeholders should be involved in the deliberations? One of the most obvious lessons to be gleaned from the recent work on values in science is that philosophers of science desperately need guidance from moral and political philosophers, so I’m really grateful that you provided this opportunity to talk about my books on this blog!
A full review of Kevin Elliott’s A Tapestry of Values: An Introduction to Values in Science by Erin Nash can be found here.