A Portfolio Theory For Effective Altruism

It concerns me that many in the effective altruism world focus on optimizing for "the best" use of time and resources, often directing people to donate to a single non-profit. This strikes me as akin to trying to choose the best investment. Altruism directed toward one cause suffers from the same flaw as investing in a single asset (or even a single asset class): you might be wrong about the best use of your marginal dollar.

This essay lays out the underpinnings of my portfolio theory of altruism and a rough structure for how I will place bets. I may eventually develop a more precise model for portfolio construction, but that is not my intent here.

What Are The Right Causes?

This question can feel almost settled by places like GiveWell and Giving What We Can. However, there's a lot of variation in the effective altruism community in terms of what people actually work on and donate to. There are those who care deeply about the suffering of wild animals, and those who worry that suffering is a fundamental aspect of the physical universe and so want to destroy it (I don't fully understand this idea and could be misconstruing it). This variation is largely driven by:

  • The moral status of non-humans
  • Moral uncertainty
  • How much to value the future (human or otherwise)
  • Nonlinearity

What Is The Moral Status of Non-humans?

Animals likely have non-zero moral status. This is the overwhelming consensus among humans: we tend not to inflict pain on animals directly, and most people will go out of their way to avoid killing them, even inadvertently. The fight here is over how much moral status.

If you were to give them 10% as much moral status as a human, it strikes me as quite strange that you would be willing to support the murder of an animal for food. I am personally a vegetarian and fair-weather vegan, and 1/10th of a human is probably the amount of moral status I ascribe to animals. (Others seem to assign less status, but at what point does something go from non-murderable to murderable?)

In my model, 10% moral status means I would have to exhaust almost all opportunities to improve human utility before I began to shift resources toward animals. Vegetarianism is the exception, because the negligible effort it requires is non-transferable and meat-eating is itself actually detrimental to human welfare. I won't put any transferable effort or resources into animal causes, unless there are clear benefits to humans as a by-product.

The Role of Moral Uncertainty

Moral uncertainty asks how we should act when we are unsure which moral claims are true. Pascal's Wager is the classic example. Say you thought there was a 1% chance of Pascal's God existing, and that belief, if He exists, yields infinite utility. The expected value of believing is then infinite.

Moral uncertainty plays a more interesting role in finite situations. For instance, say you think there is a 0.001% chance that human fetuses have human-equivalent moral status. This is a very small chance, but the expected harm of 664,000 abortions per year is then 6.64 expected human deaths per year. In my personal model, the utility derived from those abortions far outweighs that expected disutility. Uncertainty about the correctness of your moral stances is rarely factored into decisions where you feel pretty sure, but given how frequently moral consensus shifts over time, being 100% certain of anything feels like a mistake.
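As a minimal sketch of that arithmetic (the probability and annual count come from the paragraph above; the function name is my own):

```python
# Expected disutility under moral uncertainty: the probability that the
# affected beings have human-equivalent moral status, times the number
# of events per year.
def expected_moral_cost(p_moral_status: float, events_per_year: int) -> float:
    return p_moral_status * events_per_year

print(expected_moral_cost(0.00001, 664_000))  # ~6.64 expected deaths/year
```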

In my model, I’ll attempt to avoid supporting or opposing issues where I feel morally uncertain (beyond the standard passage of time could prove me wrong uncertainty) on the issue itself.

What Is The Right Social Discount Rate

The social discount rate can be thought of as akin to a financial discount rate: the lower the rate, the more we should value the world and people of the future; the higher the rate, the more we should care about people alive today.
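To make the effect of the rate concrete, here is a small illustration of my own (not from the essay): the present value of one unit of utility delivered 100 years from now, at a few candidate rates.

```python
# Present value of one unit of utility 100 years out:
# pv = 1 / (1 + r) ** years, for several social discount rates.
years = 100
for r in (0.00, 0.01, 0.03, 0.05):
    pv = 1 / (1 + r) ** years
    print(f"discount rate {r:.0%}: future utility worth {pv:.4f} today")
```

At a 0% rate a life in 100 years counts the same as a life today; at 5% it counts for less than 1% of one, which is why the choice of rate dominates long-horizon comparisons.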

I tend to side with Parfit and Tyler Cowen that the appropriate discount rate is close to zero, if not actually zero. I'm not convinced that people alive today have much more moral status than people who could potentially exist. But to preempt the objection that we're valuing non-existent (and potentially never-existing) people, I'll frame my time horizon as close to infinite in order to create the best circumstances for human flourishing.

Of course, a discount rate of near zero does not imply a clear course of action.

Oh, and the Universe is Literally Unpredictable Because It’s A Nonlinear System

Not only do our moral choices have to contend with how to value the far future; they also have to at least acknowledge that, per chaos theory, small changes in initial conditions can cause large changes in future states.

One way to combat this is to create utility in the present, so that the future at least starts from a higher base. Another is to make choices that don't require you to be right about what happens in the future.
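To see how fast that sensitivity compounds, here is a toy illustration of my own (the essay names no specific system) using the logistic map, a standard chaotic system:

```python
# The logistic map x -> r * x * (1 - x) at r = 4 is chaotic: two
# starting points differing by one part in a million end up in
# completely different states after 50 iterations.
def iterate(x: float, r: float = 4.0, steps: int = 50) -> float:
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

print(iterate(0.200000))
print(iterate(0.200001))  # the tiny initial difference has fully compounded
```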

Where I’ll Place My Bets

In my model, I will face up to the discount rate and nonlinearity by trying to clearly maximize utility in the initial conditions (today) and, as the time horizon extends, making increasingly diverse bets with low probabilities but, given population and economic growth, potentially very large payoffs. Because of how these bets work, I can direct 70% of my resources to clear benefits for people and the world today, 20% toward organizations working to reduce reasonably foreseeable existential risks, and 10% toward speculative bets on the far future, and expect similar utility payoffs from each bucket.
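As a minimal sketch of that split (the 70/20/10 weights are from the paragraph above; the bucket names and budget figure are placeholders of mine):

```python
# Split a donation budget across the three time-horizon buckets.
ALLOCATION = {"today": 0.70, "100_years": 0.20, "1000_years": 0.10}

def allocate(budget: float) -> dict:
    return {bucket: budget * weight for bucket, weight in ALLOCATION.items()}

print(allocate(10_000))
# {'today': 7000.0, '100_years': 2000.0, '1000_years': 1000.0}
```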

A portfolio could be:

Today

  1. Carbon offsets 2X my current consumption: 150k kT
  2. GiveWell to do with as they see fit

100 Years

  1. International Campaign To Abolish Nuclear Weapons
  2. OpenAI
  3. Long Term Future Fund

1000 Years

  1. ???
