A [very rough] moral theory of everything

I’ve written before about what I think is a good foundation for thinking systematically about morality, specifically, answering the question, “What should I do?” To summarize, the basic principle is that a person should take actions that will get that person what he or she wants. At its root, the sort of morality that emerges from that principle is more or less an egoistic preference utilitarianism. I do want to emphasize once again, though, that endorsing a concept of morality that revolves around the fulfillment of personal desires is not the same thing as saying that we should go around lying, cheating, stealing, killing, and otherwise giving no regard to the consequences of our actions on others and on future events. I personally (and I think this is true for most people) find that focusing on the well-being of others leads to highly desirable outcomes; I am also a big proponent of long-term thinking.

The idea of morality begs to be extended beyond just the self, and not just in the sense of considering how your actions impact those around you, which is what I have considered before, but also in the sense of moving past the question, “What should I do?” and answering the question, “What should we do?” Taking that step is the goal of this essay. Keep in mind that the conclusions I draw here only hold if you’re willing to go along for the ride and first accept the premises I adopt.

Collective consciousness, collective desires and collective action

Before diving in too quickly, it’s helpful to be clear about who the agent is that we’re considering. In answering the question, “What should I do?” it’s reasonable to think of the agent as my conscious self (there is, of course, a lot more going on in the brain and the body besides conscious thought, but to avoid taking on too much, we’ll set that aside for now). In answering the question, “What should we do?” we need to extend the idea of a self to a group of people — us. The extension is not perfectly clean, but I think it’s useful to think of the collective consciousness of a group of people as the communication they have between each other — the conscious mind is constantly involved in a stream of thoughts that evaluate ideas, stitch sensory data together with prior knowledge, and feed emotional impulses into outward responses; the collective consciousness of a group of social individuals is involved in a dialogue of the same sort. Considering this collective consciousness as an agent, it’s sensible to treat the expressed wants of the group as collective desires (acknowledging that just as a single person can have many, often contradictory, desires, the voices within a group will often be myriad and discordant), and then apply the generalized moral axiom,

A moral agent should act in a way that gets it what it wants.

Axiom “M+”

to think reasonably about morality on the group level. Many of my previous thoughts about the individualized Axiom M still apply here, although it is admittedly non-trivial to think about how the combined voices of the group should be assimilated and put into action. That is, of course, not a new problem, but the age-old question underlying all political theory. Under the framework I’m proposing here, any attempt to establish or manage government is an exercise in group morality. (As a side note, as an economist-in-training, I view the study of economics as an effort to quantify the morally relevant dynamics of organizations, communities, and nations.)

Going further: the universal consciousness

It makes sense to apply moral reasoning to individuals and groups of individuals, but it may also be worthwhile to extend the idea of morality even further upward. We can go to the level of all living beings (allowing us to make reasonable statements about, say, animal welfare), to the level of the earth (allowing us to discuss the ethics of environmentalism), and so on upward even to the level of everything that exists.

How can we think of the consciousness of all existing things? We are, of course, entering very fuzzy territory here. Unless the philosophy of panpsychism is correct in some concrete way, it’s not obvious how to think of the entire universe as a moral agent. One idea I find to be useful, or at least beautiful, is to consider the desires of the universe to be the physical laws that govern all energy and matter. In that case (assuming the physical world does in fact behave in a regular fashion), the universe’s actions are always perfectly in line with its desires, implying that the universe is itself a perfectly moral machine. Each time a raindrop falls, a plant grows, a star shines, or a neuron fires, a morally impeccable event is unfolding. What’s more, adopting this idea means that when we make discoveries that better our understanding of the universe, we are peering into the mind of reality.

Goodness, truth, and the eye of the beekeeper