Against Moral Progress

Obert argues for the existence of moral progress by pointing to free speech, democracy, mass street protests against wars, the end of slavery… and we could also cite female suffrage, or the fact that burning a cat alive was once a popular entertainment… and many other things that our ancestors believed were right, but which we have come to see as wrong, or vice versa.

-Eliezer Yudkowsky, Whither Moral Progress?

A Reactionary such as myself obviously has some object-level objections to this short list, and I am certain we will in the future point our readers towards such arguments or present them ourselves. But the heart of the Dark Enlightenment is meta with regard to our assumptions and our institutions, so let us take a step up and ask why we should consider this sort of change of morality a good thing.1

Why do most people seem to consider the particular patterns of change we have supposedly been experiencing over the past decades, centuries and millennia as legitimate and desirable? What good reason do we have to believe in moral progress or “human moral development”? Shokwave of LessWrong responds to this question in good faith:

I can’t put it into words, but I feel like not having slaves and not allowing rape within marriage are both good things that are morally superior for reasons beyond simply “I believe this and people long-ago didn’t”.

The process whereby things like this occur is what I’d call “human moral development”.

Ah, again slavery! The shibboleth of right-mindedness for over a century, it tempts me to write some apologia just for the pleasure of it, but I digress. So we have a mysterious process that, with some deviations, has over time generally made values more like those we hold today. Looking back at the steps of change, we get the feeling that somehow this looks right.

Very well, but this doesn’t seem to take into account the formidable power of the human mind to construct, in hindsight, convincing narratives for nearly any difficult-to-predict sequence of events. Add to this that we have many documented examples of biases strong enough to give us that “morally superior for reasons beyond simply being different” feeling2, biases that do indeed give us such feelings on some other matters, and I hope I am not too bold to ask…

How exactly would one distinguish the universe in which we live from one in which human moral change was determined by something like a random walk through value space?3 Now, naturally, a random walk through value space doesn’t sound like something to which you are willing to outsource future moral and value development. But then why is unknown process X, which happens to make you feel sort of good because you like what it’s done so far, something that inspires so much confidence that you’d like a perhaps godlike AI to emulate its output quite closely? Indeed, why should you personally continue to allow it to edit your values until you have some proof that its workings stand up to your current morality?
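To make the hindsight illusion concrete, here is a minimal sketch, assuming for the sake of caricature that values can be represented as a point in a small numeric space (the dimensions, step sizes and seed are arbitrary choices of mine). Each past era of a purely random walk, when scored against wherever the walk happened to end, tends to look like a steady approach towards “our” values:

    import random

    def random_walk(steps=1000, dims=10, seed=0):
        """Caricature of a 'random walk through value space': each step
        nudges one randomly chosen value-dimension up or down."""
        rng = random.Random(seed)
        point = [0.0] * dims
        history = []
        for _ in range(steps):
            i = rng.randrange(dims)
            point[i] += rng.choice((-1.0, 1.0))
            history.append(list(point))
        return history

    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    history = random_walk()
    present = history[-1]  # "our" values: wherever the walk happened to stop

    # Judged from the endpoint, the walk seems to approach us over time,
    # not because it aimed at us, but because we measure from where it stopped.
    for t in range(0, len(history), 200):
        print(f"step {t:4d}: distance from present values = {distance(history[t], present):5.2f}")

Whatever endpoint the walk reaches, this backwards-looking measure of “distance from present values” tends to trend downwards, so the inhabitants of any endpoint would see what looks like progress towards themselves.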

Sure, it’s better in the Bayesian sense than a process whose output so far you wouldn’t have liked, but we don’t have any empirical comparison of results to an alternative process, or do we? Consider the other kind of change that Shokwave notes, the kind that feels right merely because it is more similar to us. Examples can be found in how we dress, in our system of writing or spelling, and in food preparation and etiquette. It seems plausible that these kinds of changes of values and morality might indeed be far more common than the former kind. Even if these changes are something our ancestors would have found neutral, which seems highly doubtful, they are clearly hijacking our attention away from the fuzzy category of “right in a qualitatively different way than just similar to my own” that is put forward as the basis for going with the current process.

But then again, perhaps I simply feel discomforted by such implicit narratives of moral progress, considering that North Korean society has demonstrably constructed a narrative with itself at the apex that feels just as good from the inside as ours does. How is it possible that this black-box-like process apparently inspires such confidence in us, while another process that has also given us comparably felicitous change, change that feels so right to us humans that we often invoke an omnipotent benevolent agent to explain the result, can terrify us once we think about it clear-headedly?

Will we be terrified by Moral Progress as well, once we think about it clear-headedly? I think Azathoth isn’t the only sanity-shattering Outer God waiting for us. With this series of essays I will endeavor to do precisely that kind of thinking. I would urge you to consider doing lighter drugs than the strange geometries we shall explore; they may, after all, permanently damage your current morality. Safety is not assured. I will begin in shallow waters, examining why you might want to hinder primordial terrors even when they seem to be doing something good.

Reworked from a comment originally posted on LessWrong.

1 This is more puzzling than it might seem at first. Nick Bostrom, in his paper The Superintelligent Will, makes the argument that value preservation or “goal-content integrity” is among the universally useful intermediate goals that agents pursuing any arbitrary final goal will converge on.

2 Among the many examples of this is the Halo Effect.

3 The Random Walk Through Morality Space example was first used, as far as I can tell, by Eliezer Yudkowsky. It is imperfect in that it can leave the reader unsure whether the more important point is the arbitrary nature of such an algorithm or the unpredictability of its output. The latter is important for some questions, but here I am primarily interested in the legitimacy we feel such an algorithm has. An alternative example would be the following: Write out a list of all the values you cherish that you can think up in five minutes. Done? Now cross out the even-numbered items on the list. You must now strive to cease caring for them in favour of the odd-numbered items. Repeat the last two steps. Once given the initial list, the final outcome is trivially predictable. Yet you probably wouldn’t be fine with this algorithm determining the moral changes of our civilization, despite this knowledge of the future.
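For concreteness, a minimal sketch of that alternative procedure (the example value names are mine, purely illustrative):

    def cross_out_even(values):
        """One step of the procedure: keep the odd-numbered items
        (1st, 3rd, 5th, ...) and discard the even-numbered ones."""
        return values[::2]

    def final_value(values):
        """Repeat until a single value remains. The outcome is fully
        determined by the initial list: the 1st item always survives."""
        while len(values) > 1:
            values = cross_out_even(values)
        return values[0]

    print(final_value(["honesty", "loyalty", "curiosity", "mercy", "courage"]))
    # prints: honesty

The outcome is knowable from the start, yet that predictability does nothing to make the procedure feel like a legitimate source of moral authority.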
