Diving deeper with Axiom M

I earlier made some rather dubitable statements that warrant a bit more careful consideration. Here’s the relevant text:

People ought to act in a way that will get them what they want.

Note that I am not suggesting that we should blindly follow whatever whims flutter across our minds. What I am suggesting is that we should endeavor to put ourselves in states that we prefer to all other feasible options. Giving precedence to transitory wants is a bad strategy for Getting What You Want since those desires are likely to change quickly, making them difficult to satisfy in any meaningful way. (As a general maxim, an important part of the project of Getting What You Want is wanting the right things in the first place.)

…it’s helpful to have desires that are actually achievable. It may even be worthwhile to intentionally train yourself to desire things that are more feasible.

For the sake of notational simplicity, let’s call the statement “I should act in a way that gets me what I want” axiom M (’cause it’s Mckay’s Moral axiom). In the rationalist manifesto, I made the claim that axiom M is the only sensible moral axiom. That was perhaps too strong of a statement – I’m open to the idea that there may be other moral axioms that I could get behind; I just haven’t encountered any yet. However, it’s clear from the quotations I’ve included here that there are some details about the deployment of this axiom that need to be sorted out.

Why this axiom?

Before diving into the weeds, I want to take a bit of space to explain why I like axiom M.

Typically, when people think about morality, what they have in mind is some sort of universal principle or set of laws that govern what we should do. For a lot of people, this moral law or principle comes with some sort of divine authority; that is, there is some godlike force or being that dictates what it means for something to be right or wrong. (Typically, X is good ⇔ God says X is good.) The problem I see with this way of thinking is that it’s not clear to me what force these moral laws are actually supposed to have: Sure, maybe God says that I shouldn’t do X, but what if I just don’t care? Oh, you say that God will punish me if I do X? Well, what if I don’t care about the punishment? What if being able to do X now is more important to me than the long-term consequence? What if my nature is such that I would actually prefer hell to heaven? If that is the case, saying that I still shouldn’t do X doesn’t seem to make much practical sense. Now, depending on your theology, you might have responses to these objections, but the point remains that I don’t have to care, and that point is enough to spur me to look a little bit deeper for a principle to base my idea of morality upon.

I’ve focused here on the idea that moral laws come from some sort of God entity, but my objections apply even (especially!) if you do not believe in a God in the traditional sense. I’ve singled out the “divine command” argument simply because it’s been my observation that people who hold that perspective tend to have the most vociferous objections to the kind of relativistic thinking entailed by something like axiom M, even though I don’t think the divine framing should really be relevant to the argument.

Let’s jump back a little bit and ask ourselves: why should the idea of morality have any force at all? Well, if you believe that morality comes from God, and that we should be good in order to receive God’s blessings and avoid God’s punishments, then what you are implicitly admitting is that your idea of morality has force because you want the positive consequences that God can impose and don’t want the negative ones. If you believe that morality is simply a feature of objective reality (which, as an aside, I personally have a really hard time wrapping my head around, but if you have a good argument for this, I’d be happy to listen), then again the only reason you would pay any attention to that particular feature of reality is that some part of you wants to take it seriously. And of course, if you think that morality is simply an emergent property of the human psyche, then ideas about morality can, without much difficulty, be conceptualized as you wanting things that you consider to be “good.”

It seems like we should take seriously the idea that our fascination with morality is based upon the underlying fact that, well, we want things. We have preferences: we have some idea of “good” and “bad,” and we want things that are good and don’t want things that are bad.

I hope I’ve made a good enough case that you can at least see where I’m coming from when I say that axiom M is sensible. Note that I am deliberate in my choice of the word “axiom.” I do not consider this to be an incontrovertible law of nature, or anything of the sort. It is merely a convenient place to start, a postulate that seems to take me in useful directions.

One last note before moving on: The thinking that leads me to axiom M is very similar to the thinking that leads to utilitarian theories of morality. In fact, axiom M and its corollaries are essentially a flavor of preference utilitarianism. I note this to show that I am far from the first person to wander down these paths of thought (and certainly not the most precise or eloquent expositor of these ideas). My intent here is not to reinvent the wheel, but rather to suggest how to put these ideas into practice and to show why thinking in this way is useful.

Now, let’s move onward and examine what axiom M actually entails.

Temporal dependence

For the sake of philosophical exploration, let’s hop inside my mind as I go through the process of making a decision.

Let’s suppose that I’m hungry. The decision in question is what to do about that hunger. To answer that question, we need to consider what it is I want in this moment. Well, I’m hungry, so of course I want to eat something. If I’m hungry enough, that desire could be so great that it monopolizes my mental space, making it difficult to substantially desire anything else. In such a case – according to our axiom – the thing to do is to satisfy that desire by finding something to eat; anything else would not be Getting What I Want.

Now let’s add a little bit of context. Let’s suppose that, for some reason, the only easily available food is a tub of ice cream. Eating that ice cream will satisfy my present hunger but, in the long run, might have negative consequences for my health. (Granted, eating ice cream once is probably not going to have noticeable consequences, but for the sake of argument let’s suppose that this particular ice cream is abnormally unhealthy. Maybe it’s sweetened with lead or something.) Should the health effects of the ice cream be a consideration in my decision-making process? Well, if I am so hungry that I literally desire nothing except to satisfy my hunger, then – according to our axiom – no. If my only desire is to satisfy my hunger, then I don’t care about the health effects. If I don’t care about the health effects, then why should they influence my decision?

Under normal circumstances, I do care about the health effects of my actions. In fact, if I go ahead and eat the ice cream, then after my hunger goes away I will probably be much more concerned about lead poisoning than about the temporary relief of my hunger. Does that mean that it was not the right thing to do? Well, in the moment I made the decision, it was the right thing to do, since it was the action that would Get Me What I Wanted. However, after the fact, with the desire to not be hungry no longer dominating my consciousness, that decision is no longer conducive to what I want. If I could go back in time, retaining the desires I have after making the decision the first time, it would no longer be the right thing to do.

I should note that my thinking about all this implicitly assumes that desires are highly temporally dependent. That is, the desires present in my mind at one moment are likely different from those present at another moment. The things I want today are at least slightly different from the things I want tomorrow. If you prefer to think that your desires are in some sense more permanent, whether based on a commitment to a more solid sense of personal identity or on a sense that preferences held at a subconscious level should be relevant, you will likely arrive at slightly different conclusions than the ones I draw here. In any case, my claim is that only the things I want right now are relevant to the decision I am making right now. This does not mean that only short-term considerations are relevant, as in the ice cream example. My health, for example, is a long-term consideration that is relevant to a lot of the decisions I make, but it can only be relevant to a particular decision if, on some level, I actually consciously process that desire while I am making the decision.

So what is the right thing for me to do after eating the ice cream and regretting having done so? Well, since at the current moment I do want to be healthy, and eating poisonous ice cream is inimical to my health, one good course of action is to act in a way that makes it less likely that I will eat poisonous ice cream again in the future. That could involve obvious steps like getting rid of the ice cream, or going and buying some other food. It could also involve intentionally retraining my mind so that when I get really hungry again, the desire to eat will not push aside all other desires.

One thing I want to point out is that, when making any given decision, short-term desires are not inherently more or less important than long-term desires. If my short-term desires are at some instant more intense than my long-term desires, I should weight my short-term desires more heavily, and the reverse is also true. However, if I am at a juncture where long-term desires are relevant, Getting What I Want may involve taking steps to make sure that those long-term desires remain relevant. (Being ruled by short-term desires may mean that you Get What You Want now at the expense of not Getting What You Want later. If all your desires are short-term, that doesn’t matter, since you don’t care about what happens to you later.) I do think that someone who takes axiom M seriously will take deliberate steps to keep long-term desires at the forefront of his or her consideration, if only because allowing short-term desires to be too prominent will likely frustrate long-term desires when they do show up.

Spatial dependence

Just as I cannot use axiom M to sensibly make claims about what I should want in the future (or should have wanted in the past) based on my current desires, it is not particularly meaningful to employ axiom M to make claims about what other people should or should not be doing given my personal preferences. I can say things like, “I don’t like the way Alice acts,” but statements like, “Bob should not act that way,” don’t make much sense in the framework I’ve tried to set up here, unless what I am implying is that Bob’s actions are not aligned with his preferences. In general, I cannot make statements about what is right for other people unless I know what it is they want.

Another point that I should make is that the degree to which other people are Getting What They Want is not inherently relevant to my decision making; if I don’t actually care about other people, axiom M does not require me to act to benefit them. That said, I want to state emphatically that just because the welfare of others is not inherently a moral consideration does not mean it cannot become one in light of the desires that almost all real people have. Yes, a psychopath need not be concerned about how his actions affect others, but most people are not psychopaths and have a strong interest in limiting and counteracting antisocial behavior. (That is, we should take action to mitigate the effects of other people’s actions when those actions stem from desires antithetical to our own.) Altruistic behavior is totally justifiable under axiom M. In fact, I stand strongly behind the notion that nurturing altruistic values is a good path toward Getting What You Want.

Feasible and unfeasible desires

Not all desires are equally feasible. It may be, in this moment, that I really, really, to the exclusion of all other desires, want to have a pet dragon (fire-breathing and all). If that is the case, then I should take actions that will make that desire a reality. The problem is that, given my current understanding of the way the world works, there is really nothing I can do that will make it so I can have a pet dragon. That desire is unfeasible. The fact that it is unfeasible doesn’t mean that I shouldn’t try to make it happen, if it is truly the only thing I want; it just means that I will almost certainly fail at Getting What I Want. If, right now, I want my future self to Get What He Wants, then I should take steps to make it less likely for my future self to have unfeasible desires.

In general, desires lie on a spectrum of feasibility. By this I mean that even the best attempt to satisfy a desire is never completely assured of success. This suggests that we need some way of dealing with the inherent uncertainty in the outcomes of our actions when considering which action to take. Short of constructing a continuous utility function and choosing the action with the highest expected utility (which, as I see it, does not match the way that human psychology actually works), I think a reasonable way to proceed is to consider your options, make your best estimate of their outcomes, then choose the action with the outcome that you prefer. We can sharpen the idea of feasibility by recognizing that outcomes are stochastic, which is to say that a given course of action will not lead to a particular outcome with 100% certainty. It may be the case, for example, that action x has a 70% probability of resulting in outcome y and a 30% chance of resulting in outcome z (with y and z being mutually exclusive). When the results of actions are uncertain in this way, you need to consider the entire stochastic space of possible outcomes when deciding which alternative you prefer.
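
To make the idea of a stochastic outcome space slightly more concrete, here is a minimal Python sketch. It is purely my own illustration: the actions, outcomes, and scores are hypothetical, and the preference_score function is just one crude way of comparing whole outcome sets; nothing above requires reducing preferences to a single number.

```python
# Sketch: an action as a distribution over mutually exclusive outcomes.
# All names, probabilities, and scores here are hypothetical illustrations.
from typing import Dict, List, Tuple

Action = List[Tuple[str, float]]  # (outcome, probability) pairs summing to 1

action_x: Action = [("outcome y", 0.70), ("outcome z", 0.30)]
action_w: Action = [("outcome y", 0.20), ("outcome z", 0.80)]

def preference_score(action: Action, desirability: Dict[str, float]) -> float:
    """One crude way to compare outcome sets: weight how much I want each
    outcome by how likely that outcome is under the given action."""
    return sum(p * desirability[outcome] for outcome, p in action)

# If I want outcome y much more than outcome z, the action that makes
# y more likely comes out ahead.
desirability = {"outcome y": 1.0, "outcome z": 0.1}
best = max([action_x, action_w], key=lambda a: preference_score(a, desirability))
print(best)  # action_x
```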

Let’s take the lottery as an example, assuming that it costs $1 to buy a lottery ticket, that the winnings are $1 million, and that the probability of winning is one in ten million. When making the decision of whether to buy a lottery ticket, the alternatives I choose from are:

Buy a ticket: a 1-in-10,000,000 chance of winning $1 million (netting $999,999 after the ticket price) and a 9,999,999-in-10,000,000 chance of simply being out $1.
Don’t buy a ticket: keeping my $1 with certainty.

You should choose whichever outcome set you want more. (Minute probabilities like 1/10,000,000 are, of course, very difficult to think about, but that’s a separate issue.) Since the probability of winning $1 million in the lottery is so low, we could call the desire to do so unfeasible, but if the desire is strong enough to offset the low probability, it might still be the right thing to do.
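
As a small aside of my own, here is the plain expected-dollar arithmetic for this example. It is not the decision procedure proposed above (that is about which outcome set you prefer), but it does put a number on just how unfeasible the desire to win is.

```python
# Expected-dollar arithmetic for the lottery example above (illustration only).
ticket_cost = 1.00        # dollars
prize = 1_000_000.00      # dollars
p_win = 1 / 10_000_000    # one in ten million

expected_prize = p_win * prize                # $0.10 of prize money per ticket, on average
expected_net = expected_prize - ticket_cost   # about -$0.90 per ticket
print(f"expected net result per ticket: {expected_net:.2f} dollars")
```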

Takeaways

As I’ve already stated, this way of thinking still needs some fleshing-out, and I don’t think it is the only reasonable way to approach things. That said, if we do accept axiom M (as I am inclined to do), there are at least a few practical takeaways from the above discussion:

A big part of Getting What You Want is wanting the right things in the first place; it can be worth deliberately training yourself toward desires that are feasible and that you will still endorse later.
Only the desires you actually process at the moment of decision can bear on that decision, so it pays to keep long-term desires at the forefront of your consideration rather than letting momentary desires crowd them out.
Axiom M does not license pronouncements about what other people should do based on my preferences; at most I can observe whether their actions seem aligned with their own.
The welfare of others is not inherently a moral consideration, but for almost everyone it becomes one, and nurturing altruistic values is a good path toward Getting What You Want.
Because the outcomes of actions are uncertain, deciding well means weighing the whole space of possible outcomes of each option, not just the one you are hoping for.
