The Moral Landscape: a review

In The Moral Landscape: How Science Can Determine Human Values, published in 2010, the author, New Atheist Sam Harris[1], argues for a naturalistic moral grounding which dissolves the distinction between facts and values, and thus makes objective moral truths discoverable - at least in principle, if not in practice - via scientific inquiry. In terms of moral theory, and setting aside certain issues of metaphysics as well as the book's diversions from moral theory, The Moral Landscape is a fine piece of rhetoric which argues its case well, and many of its claims are unobjectionable. However, it doesn't have much meat on its bones, and some of its claims are objectionable, with the objections neither well anticipated nor well handled. In practical terms, this book does not adequately address or answer four serious questions: firstly, how much can science really determine values, as opposed to philosophy; secondly, how obtainable is moral objectivity really, in practice, given differing subjective (philosophical) human values; thirdly, how damaging could the general idea of science objectively determining values be to the autonomy of those, such as involuntary "mental health" patients, already in precarious situations; and, fourthly, can the conclusions of a hypothetical moral science be generally accepted enough - or at least correctly interpreted and acted upon by the right people, especially on certain key questions - to be of general social value?

Before substantiating my introductory sentiments, here is some context and scoping: Sam and I hold fundamentally different basic beliefs in several respects. I am a dualist (provisionally, unless/until facts and reason convince me otherwise) in at least two senses - mind-body dualism, and philosophical Manichaeism, that which might otherwise be referred to as "ditheistic moral dualism" - whereas Sam is an atheist and, as best I can tell, a philosophical materialist/physicalist, who, whilst admirably admitting that he can't see how it could be true, nevertheless defaults to an emergent view of consciousness.

Sam seems to see religious beliefs in general (with perhaps an exception for certain aspects of Buddhist belief) as delusional and less than worthless, dangerous even; I see them as partial truths with significant value despite their flaws and limitations, before even getting into the benefits - despite the drawbacks - of practising a religion (although I am not myself a practitioner of any religion).

Too, I believe that we are likely to possess libertarian free will; Sam does not even believe in the coherence of free will as a concept. Given these several differences, it might be seen as quite remarkable that Sam and I agree on moral theory to the extent that we do, although I am not so sure that it is (remarkable). In any case, I do not in this review attempt to resolve or even debate our differences on matters of metaphysics. I even avoid critiquing the chapters and sections on religion and free will in the book at all, preferring to stick to its moral theory proper.

To ground the rest of this review, I start by sharing my own view of morality. In doing so, it becomes, I hope, clear just how much Sam and I agree, and why. It also gives me the basis from which to explain just where, how much, and why we do not agree.

In essence, I see morality as something of a "tree", or perhaps more of a "tree-like web", of principles, which at the extremities might be more properly termed "rules" than "principles". At the root is the fundamental, objective, and unobjectionable principle that we "ought" to promote well-being and to avoid harm - and in this I essentially agree with Sam that "a concern for well-being (defined as deeply and as inclusively as possible) is the only intelligible basis for morality and values", although I prefer, for reasons which for brevity I won't go into, to approach morality from the perspective of harm-avoidance, rather than, as Sam does, from the perspective of promotion of well-being. This is not a fatal difference.

From the root of the "moral tree" (or web) branch out subsidiary principles, and from those yet further subsidiary principles, until we get to the most specific "leaves" of the tree, which are better described as "rules". Some of these principles and rules are in harmony, and some conflict, sometimes only given the right circumstances. Some principles and rules, too, have exceptions. The exercise of judgement is necessary then in two respects: firstly, in particularising more specific principles/rules out of more general ones (branching), and, secondly, in deciding how conflicts between principles/rules ought to be resolved, and/or in deciding when an exception exists, and in which circumstance(s) the exception applies. Here I note Sam's chess analogy: generally, the principle that one ought not to lose one's queen holds, but occasionally, there are exceptions, and/or other principles conflict to the point at which resolution of the conflict requires abandoning the principle not to lose one's queen, in favour of a more preferable, circumstantially-appropriate principle/rule. This works very well with my notion of a "moral tree" (or web) in which certain principles/rules ("nodes") sometimes conflict with other principles/rules ("nodes"), such that a more general "node" might sometimes be overridden in certain circumstances by a more specific one, potentially having branched off quite a way up the tree/web - and/or that there are additional "exception nodes" which branch off or are associated with (qualify) "principle/rule nodes".
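For readers who think better in structures than in metaphors, the tree/web of principles can be sketched as a toy data structure. This is purely my own illustration - nothing of the sort appears in the book, and every name, node, and the resolution strategy here is invented for the purpose:

```python
# Toy model of the "moral tree" described above: each node is a principle or
# rule, children are more specific particularisations (branches), and
# "exceptions" name circumstances in which a node yields to its ancestors.
from dataclasses import dataclass, field

@dataclass
class Principle:
    statement: str
    children: list["Principle"] = field(default_factory=list)
    exceptions: list[str] = field(default_factory=list)

    def branch(self, statement: str) -> "Principle":
        """Particularise a more specific principle out of this one."""
        child = Principle(statement)
        self.children.append(child)
        return child

def resolve(node: Principle, circumstances: set[str]) -> Principle:
    """Walk towards the leaves, letting the most specific applicable
    principle override its ancestors, and skipping any node whose
    exception applies in the given circumstances."""
    for child in node.children:
        if not (set(child.exceptions) & circumstances):
            return resolve(child, circumstances)
    return node

root = Principle("avoid harm to sentient beings")
queen = root.branch("do not lose your queen")  # Sam's chess analogy
queen.exceptions.append("sacrifice wins the game")

print(resolve(root, set()).statement)                        # "do not lose your queen"
print(resolve(root, {"sacrifice wins the game"}).statement)  # falls back to the root
```

Even in this toy form, note that the exceptions and the traversal order are stipulated, not discovered: the structure encodes judgements; it does not generate them.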

And so I agree, too, with Sam that the supposed fallacy of being able to derive an "ought" from an "is" is itself a fallacy: such a derivation is possible and even elementary. In my view, the objectivity in the fundamental moral principle of harm avoidance (and/or promotion of well-being) derives from the nature of conscious experience: the fact that as sentient beings we can be harmed, and the knowledge of what it feels like to be harmed, leads a rational being to the conclusion that we ought to avoid harming others where possible (the "root" of the moral tree). Whilst this is not strictly speaking identical with that which Sam writes, a similar spirit can be seen in operation here (granting our difference in focus in that I am most concerned with harm avoidance): "I am simply saying that, given that there are facts—real facts—to be known about how conscious creatures can experience the worst possible misery and the greatest possible well-being, it is objectively true to say that there are right and wrong answers to moral questions", and here: "If this notion of “ought” means anything we can possibly care about, it must translate into a concern about the actual or potential experience of conscious beings (either in this life or in some other). For instance, to say that we ought to treat children with kindness seems identical to saying that everyone will tend to be better off if we do".

I also agree with Sam that a sense in which morality is at root objective is the epistemic sense in which we can arrive at it through being "free of obvious bias, open to counterarguments, cognizant of the relevant facts". I had, prior even to reading Sam's book, associated objectivity with "rationally" and "dispassionately" "consider[ing] the empirical facts of (especially sentient) existence", and I had suggested that in this case, "objectivity" means 'eliminating as much as possible our sense of self-importance which is due to our subjective perspective/experience, and trying to see "beyond ourselves" and "from a God's eye perspective"', and all of this seems to be essentially the same as that which Sam is driving at in his conception of epistemic objectivity. Unlike Sam, though, I also believe in the abstract existence - in some sense, whether or not strictly Platonic - of objective truths, which Sam seems to see as an "ontological objectivity" to be denied.

Sam and I also have a similar reaction to Jonathan Haidt's work. Sam writes that it "seems possible, for instance, that [Jonathan Haidt's] five foundations of morality are simply facets of a more general concern about harm". I had written, again, prior to reading Sam's book, in a remarkably similar spirit: "I'd suggest too though that if we wanted to be reductionist, we could go some way to reducing several of those other principles [as listed by Jonathan Haidt] into terms of harm avoidance".

This brings us to Sam's primary thesis, which is that, given a definition of well-being, which, Sam helpfully clarifies, "like the concept of “health,” is truly open for revision and discovery", "science can, in principle, help us understand what we should do and should want—and, therefore, what other people should do and should want in order to live the best lives possible".

This fits in nicely with my notion of a "moral tree". I have suggested that judgement is required in order to (1) particularise more specific (moral) principles/rules out of the general principles (i.e. to branch the tree), (2) identify (circumstantial) exceptions to these principles/rules, and (3) resolve (circumstantial) conflicts between principles. Scientific study is indeed one way in which we can enhance the quality of these judgements. I would suggest, though, and this is the first of a two-pronged critique, that philosophy is at least as important in this respect as - and very probably much more important than - science. Here's one example of why - an example of my own conceiving, before turning to those in the book:

By my definition, "well-being" includes the freedom to define one's own purposes and goals ("the principle of self-determination"). This alone is a philosophically- rather than scientifically-based principle, but bear with me because it goes deeper. There is a view prevalent in certain societies, including Australia from where I write, that an exception to this principle exists in the case of individuals thinking, feeling and behaving in ways which are unusual to the point of causing concern and even emotional distress to family members or others in society: in that case, if it can plausibly be asserted that the individual is likely to harm him or herself, such individuals can be physically coerced into the "mental health" system, and forcibly medicated, supposedly for their own ultimate benefit. Having been on the receiving end of such treatment multiple times, and thoroughly denouncing it, I pose the following question: can we answer as to what extent an individual ought to be free to self-determine in such cases through a moral "science" at all, or is the answer to this question, as I contend, one of moral philosophy?

I pose this example to raise another question too: to what extent would the notion of a moral "science" encourage authorities to override the will of individuals in such cases as this? Whilst Sam shows no sign of dictatorial tendencies in this book, I wonder nevertheless whether his general ideas could at least shift the balance towards paternalism in cases like this: "Now see here, lad, you are delusional, and our medical science shows that your delusions will be cured when you take this medication, and our moral science shows that, overall, this will be for your ultimate well-being. We know that you have a different view, but ... you are delusional, and so: science prevails over your mere personal will".

The possibility of this sort of problem seems to me to stem from Sam's lack of distinction between an individual's choices with respect to him or herself, and that individual's choices with respect to others. It seems to me that morality, particularly when it is promoted for the good of the general public, ought to emphasise concern with the latter (harms committed against others), if it even concerns the former ("harms" committed against oneself) at all. In other words, an individual ought to be imposed upon only when his/her choices cause harm to others, and not when some other individual or authority deems those actions to be causing harm to the individual him/herself. Of course, there is more nuance to this issue than I have allowed for, and this essay is not the place to elaborate on it; Sam's book, however, would have been (though, disappointingly, it turned out not to be).

I would, in any case, have liked to have seen more discussion of these sorts of issues - both the specific example I've provided and the questions it raises, including the relative scope of philosophy and science in the construction of ethical principles and in the resolution of conflicts between principles. I acknowledge, of course, that Sam asserts a permeable boundary between science and philosophy (more strictly, between science and "the rest of rational thought": "Science simply represents our best effort to understand what is going on in this universe, and the boundary between it and the rest of rational thought cannot always be drawn"), but I also acknowledge another reviewer's question: if science and philosophy are so mutually permeable, then why emphasise "science" in the book's title?

Sam raises plenty of cases himself where, in my view, philosophy is a more apt tool of judgement than science. For example, during a discussion of consequentialism, he asks, "do we have a moral obligation to come to the aid of wealthy, healthy, and intelligent hostages before poor, sickly, and slow-witted ones? After all, the former are more likely to make a positive contribution to society upon their release. And what about remaining partial to one’s friends and family? Is it wrong for me to save the life of my only child if, in the process, I neglect to save a stranger’s brood of eight?" If Sam genuinely believes that these are questions for science, then he ought to have actually argued that case (which he did not). If not, then where is his admission that philosophy plays a role too?

I think, though, that it is uncontroversial that science is incapable of resolving the admitted problems with Sam's preferred ethical approach: consequentialism. Here is the first of the problems to which he admits:

When thinking about maximizing the well-being of a population, are we thinking in terms of total or average well-being? The philosopher Derek Parfit has shown that both bases of calculation lead to troubling paradoxes. If we are concerned only about total welfare, we should prefer a world with hundreds of billions of people whose lives are just barely worth living to a world in which 7 billion of us live in perfect ecstasy. This is the result of Parfit’s famous argument known as “The Repugnant Conclusion.” If, on the other hand, we are concerned about the average welfare of a population, we should prefer a world containing a single, happy inhabitant to a world of billions who are only slightly less happy; it would even suggest that we might want to painlessly kill many of the least happy people currently alive, thereby increasing the average of human well-being.


Clearly, this proves that we cannot rely on a simple summation or averaging of welfare as our only metric. And yet, at the extremes, we can see that human welfare must aggregate in some way: it really is better for all of us to be deeply fulfilled than it is for everyone to live in absolute agony.

It disappoints me that Sam does not offer a (necessarily philosophical rather than scientific) rejoinder to this critique other than the broad, and unobjectionable, sweeping statement that "human welfare must aggregate in some way: it really is better for all of us to be deeply fulfilled than it is for everyone to live in absolute agony". No doubt, but that doesn't solve the problems raised, does it?

At least Sam has something of a rejoinder to another problem for consequentialism to which he admits, a problem that has, at least in significant part, kept me from identifying with consequentialism myself:

Some people worry that a commitment to maximizing a society’s welfare could lead us to sacrifice the rights and liberties of the few wherever these losses would be offset by the greater gains of the many. Why not have a society in which a few slaves are continually worked to death for the pleasure of the rest? The worry is that a focus on collective welfare does not seem to respect people as ends in themselves.

Sam's response?

[S]uch concerns clearly rest on an incomplete picture of human well-being. To the degree that treating people as ends in themselves is a good way to safeguard human well-being, it is precisely what we should do. Fairness is not merely an abstract principle—it is a felt experience.

OK, but to what extent ought we to treat people as ends in themselves when doing so conflicts with the maximisation of overall well-being? Can we really determine this "scientifically"? Or is this a philosophical weighing? See, I would really have loved to see a discussion to this effect in this book - and it could have been a very nuanced and subtle one - but, alas, I was disappointed.

Another problem in this respect is that the book is light on examples of the moral values science might help us to arrive at, and of how it might do so. I skimmed through the first two chapters and collected this rough listing:

  1. Scientific study has already established that corporal punishment diminishes overall well-being.
  2. Scientific study has already established that early childhood experience and emotional bonding are essential to a person's well-being and to his/her ability to form healthy relationships later in life.
  3. In principle, science can answer these questions: is it better to spend our next million dollars eradicating malaria or racism? Which is generally more harmful to our personal relationships: "white lies" or gossip?
  4. [Strongly implied] Science can give us an account of "whether, why, and to what degree human societies change for the better when they outgrow [the tendency to treat women as the property of men]".
  5. [Implied] Science can tell us whether or not societies which permit honour killings are peaks on the moral landscape.
  6. [Implied] Science can tell us whether or not forcing women and girls to wear burqas makes a net positive contribution to human well-being; whether it produces happier boys and girls; whether it produces more compassionate men and contented women; whether it makes for better relationships between women and men, boys and their mothers, and girls and their fathers.
  7. [Implied] Science can answer questions about how to solve homelessness, including policies.
  8. Experiments, such as in kibbutzim, have determined that ignoring parents' special attachment to their own children does not work well.

Of these, it seems to me that the second and eighth are the strongest and least controversial examples, followed by the first. Regarding the first, though, there is a question to be raised: how much does science add to a purely philosophical argument? No doubt, there are facts and figures with respect to the later well-being, defined in some reasonable way, of those who are punished corporally, as well as of those with whom they come into contact, and of society as a whole, but is not a philosophical argument about the right of sovereign beings to be free from harm even more persuasive than any consequentialist argument-from-scientific-study? Similar questions can be asked with respect to the fourth, fifth and sixth examples. The seventh arguably should have been left off the list, because it is not so much an example of science being used to determine our values as of science being used to solve problems in living in accordance with a pre-existing value: the value of housing the homeless.

This leaves the third, which, it seems to me, poses an initial question which is hopelessly unrealistic and simplistic. Why, out of all of the problems we could address with one million dollars, would we force ourselves into a choice between only malaria and racism? And why only one million dollars? What if we borrowed more - couldn't we run a cost-benefit analysis on the value of doing that? And even if we are forced into this dichotomous choice, couldn't we split our funds between the two problems? The more realistic scenario is orders of magnitude more complex: a vast array of problems, several means of obtaining funds - each with its own pros and cons - as well as many other budgeting concerns than solving such problems, and, finally, many ways in which the funds can be divvied up. Is such a complex optimisation problem possible to solve in principle? Sure, I don't see why not, but let's acknowledge the challenges in practice.

As for "white lies" versus "gossip": is this a particularly helpful question to ask in isolation, and, again, in such a dichotomous way? Wouldn't we be better served by asking a more general question, such as "How can we maximise the value of our personal relationships?" Once again, we see that a realistic question is much more challenging for science to answer. Is it answerable in principle? Again, sure, I don't see why not, but again, let's not make the task seem less daunting than it really is.

So, of the eight examples listed, it is questionable for half of them whether science has much to contribute over philosophy, one is problem-solving rather than value-generating, one is unrealistically simplistic, and only two seem like good examples of a realistic moral science. I'm not sure how well, then, Sam has advanced his case through examples. Just how much can science really contribute towards morality?

I do not dispute Sam's thesis that science can help us to determine moral values, I'm just questioning the extent of the role it can play, particularly in relation to the role of philosophy. In this respect, it is helpful, I think, to return to the notion of a moral tree/web of principles/rules: here, it is obvious that at the highest level, the root node is not derived "scientifically", but, rather, philosophically: there is no scientific experiment that we can perform to determine that we ought to value well-being and avoid harm; this is a philosophical principle. It seems obvious (to me, at least), too, that those nodes closest to the root node will also be philosophically- rather than scientifically-derived. Science, it seems to me, could enter the picture only some way down the tree, potentially quite a way down. I have started on fleshing out my own suggested highest-level principles, admittedly somewhat roughly, and so far have seen no scope for science in determining them at this level.

This brings me to the second prong of my critique, again based on the notion of a moral tree/web of principles/rules. It seems to me that the further we are from the root of the tree, the greater the exercise of philosophical judgement is required in deriving branches, and thus the greater scope for subjectivity to enter the picture. There seem to me to be cases where the nature or degree of "harm" is unclear without reference to some personal value.

Let's take an example to make the point clear: imagine a renunciate ascetic who sees this life as a spiritual test or quest in which we are called to respond to a higher power and reject earthly passions and attachment to the material. Now compare this person's moral view on the harm done through accidental damage to property with that of mainstream society: this person does not value the material at all, and does not feel that accidental damage merits any form of worry, stress or even compensation - whether s/he is the "harmed" or the "harmer". Mainstream society on the other hand values material goods very differently, and prescribes that the damager compensate for accidental damage to the property of others.

Can science or even philosophy decide which of these moral values is "objectively" correct? I don't think so. Certainly, arguments can be advanced for both positions, but I doubt that any of them would be decisive, and certainly not universally acknowledged as such (and legitimately so, in my view).

Thus, when we talk about the "objectivity" of morality, it is in a somewhat limited (but certainly valuable) sense: up near the root of the tree we are all likely to agree on what is "objectively" true; further down amongst the branches, "objectivity" is going to be harder to come by; in some cases, the best we might hope for is consensus.

That brings to an end the crux of my critique of this book, but I feel it necessary to add a warning about the interpretation of science, and following that three more minor criticisms.

Whilst Sam notes that his argument is about that which science can determine about morality in principle, regardless of whether science can determine any given moral values in practice, we nevertheless need to be concerned with practicalities; otherwise, the book has no real-world applicability. In real-world terms, it is unfortunately the case that a lot of science is contested even by scientists themselves, let alone by non-scientists. I think we ought to be skeptical of the idea of experts guiding us through a reliable moral science when so many existing, well-established scientific findings are not even accepted by "mainstream" scientists themselves: for example, the science of parapsychology has proven the existence of psi, and in particular telepathy, to a standard beyond that typical of any social or medical science, yet this finding is reflexively and routinely denied by many (most?) non-parapsychological scientists, who often have not even looked at the (compelling) data.

But let's say that you are a "skeptic", and you utterly reject what I just said of parapsychology: it remains the case that many people who are educated, well-read, and not fools do accept it. What does this say for the possibility of a generally-accepted "science of morality", which, surely, will be as contested as the science of parapsychology - or, for that matter, as the science of evolution? This is not strictly a criticism of Sam's thesis; it is simply a raising of one of the potential difficulties facing the endeavour of a moral science.

Moving on to my first minor criticism, which, I note, another reviewer has already made: there is a lot in this book which seems rather misplaced, almost cobbled together for the purpose of filling space rather than for contributing particularly effectively towards establishing the book's thesis. In particular, I think that to a large extent the discussions of religion, free will and (more arguably) cognitive biases are not particularly relevant to Sam's essential thesis that science can determine moral values, and that they would have been better replaced with discussions of such issues as I've raised above. And I get it, religion and free will in particular are matters of vital concern to Sam, but there is a time and a place to discuss them, and this book, it seems to me, was neither the time nor the place.

For the second critical point, I want to address something that Sam wrote towards the end of his book, regarding one of the potentially - in theory at least - successful challenges to his thesis (footnotes elided):

It is also conceivable that a science of human flourishing could be possible, and yet people could be made equally happy by very different “moral” impulses. Perhaps there is no connection between being good and feeling good—and, therefore, no connection between moral behavior (as generally conceived) and subjective well-being. In this case, rapists, liars, and thieves would experience the same depth of happiness as the saints. This scenario stands the greatest chance of being true, while still seeming quite far-fetched. Neuroimaging work already suggests what has long been obvious through introspection: human cooperation is rewarding. However, if evil turned out to be as reliable a path to happiness as goodness is, my argument about the moral landscape would still stand, as would the likely utility of neuroscience for investigating it. It would no longer be an especially “moral” landscape; rather it would be a continuum of well-being, upon which saints and sinners would occupy equivalent peaks.

It seems mistaken though to conceive of individuals as peaks (or troughs) on the moral landscape, because the landscape applies not to individuals but to the population as an aggregate, and conceiving things in this way seems to have led Sam to confuse the issue: he takes as his moral concern only the consequences of an individual's actions on that individual herself, and on this basis finds a seeming "moral" equivalence between sinners and saints, since the actions of each are equally pleasing to each. But what about their effects on the well-being of others? Surely, a sinner's choices are, in their effect upon the aggregate, going to tend towards a moral trough, whereas it will be the converse for a saint's choices, and here we restore our normal intuitions: that sinners and saints are far from morally equivalent, in which case this potential challenge seems to have been defused.

The confusion here seems to be a manifestation of the lack of distinction between self- and other-affecting choices which I raised earlier.

Now, Sam's notion that a society of equally-matched sadists and masochists would (could) be morally equivalent to a world of conventionally wired people is harder to defuse given his premises, but let's look at it a little critically: in fact, it doesn't take much to question whether the experience of a masochist is genuinely one of equal well-being with that of a normally-adjusted individual. Masochism inherently involves pain and/or suffering: can it plausibly be asserted that, being confounded by pain and/or suffering, the "enjoyment" of a masochist is of the same magnitude as the normally-adjusted (devoid of pain and suffering) enjoyment of a normally-adjusted individual? I don't think so. Too, and perhaps this is not quite compatible with Sam's consequentialist outlook, the very existence of individuals who mean avoidable harm to others - despite the fact that all available others "enjoy" that harm - seems to me to detract from overall well-being, even if simply because it is not a world in which I would like to live, even as a masochist. I expect that, in that situation, were it put to me that it was possible to live in a world in which people related to one another lovingly as equals, rather than in predator-prey relationships, no matter how superficially "happy" those predator-prey relationships were, I would desire to switch to the normally-adjusted world, or at least see it as preferable.

The third and final minor critical point that I want to make is one explicated well enough in a review by another reviewer, H. Allen Orr, that, rather than rehashing it, I will simply link to that review: The Science of Right and Wrong. The point Orr makes which I want to emphasise is that Sam's claim that there is no meaningful distinction between facts and values rests on a weak set of justifications, and, in my view, it is probably, as Orr suggests, false. I don't think, though, that the failure of this claim is fatal to the rest of Sam's claims.

This book's primary success is at the same time its primary failing: in taking a very broad view of morality, and rarely "getting into the nitty gritty", many of the claims it makes with respect to moral theory, including the fundamental ones, are uncontroversial and even self-evident. However, too many are not, and this approach also leaves too many questions unanswered, or inadequately answered, for the book to carry a great deal of weight.



[1] In this review, I refer to Sam by his first name, simply because the convention of referring to one another by surname in formal/academic writing strikes me as being impersonal to the point of rudeness. I hope that I have not swung too far in the opposite direction into rudeness through over-familiarity.