Top Reads of 2017

These are the texts that caused view quakes or inspired/haunted me to the point where I had to recommend them.

Fiction

The End of Eddy by Edouard Louis

Picked up in Paris on a recommendation as a book that took France by storm. Absolutely gut-wrenching, brutal and immiserating. Sort of like a French Hillbilly Elegy, if JD Vance were gay and had no extended family to intermittently rely on. Strangely, it gave me insight into a lot of rural French (and maybe Western) culture and how inescapable it is for many, and a clearer idea of the “double consciousness” that those who escape have. My favorite novel since the equally, viscerally dark A Little Life.

Forest Dark by Nicole Krauss

It’s brilliant, like all of her work. Deep and generationally intertwined, about impermanence and eternity, and about what all of her books are about: the power of narrative to tie people together across space and time. If you’re unfamiliar with Krauss, start with The History of Love, my favorite novel and one of the books I buy every time I see it, so I always have copies to give away.

The Chalk Artist by Allegra Goodman

Allegra Goodman is back to form, writing about tech and its discontents. I’ll always be a Cookbook Collector fanboy, but her latest book is more open about its loyalty to literature and education as a…shield, respite or solution to the modern world.

Nonfiction

Reasons and Persons by Derek Parfit

One-line interpretation: focus on easy ways to maximize utility regardless of time, space and personal identity, up to the point right before it makes you miserable. This interpretation inspired my Ethical Laffer Curve post (included below).

I hesitate to say that this is the most important book I’ve ever read. But I have the impulse to say it. It’s certainly the most important book I read this year. Parfit makes the case for a utilitarianism that isn’t bound by time, space or personal identity. His work is hard to summarize, but I’ll foolishly try: he argues that we are wrong to treat the difference between distinct people as fundamentally unlike the difference between the same person at different times. Essentially, our past and future selves are as different from our present selves as other people are. He then makes the case that if this is true, it affects utilitarian ethics substantially: in the same way that it is wrong to harm a different person, it becomes equally wrong to harm your future self (by smoking, for example). I’ll stop trying to summarize it because it’s a 400-page book of logical proofs and thought experiments.

We Were Eight Years in Power by Ta-Nehisi Coates

TC once praised another writer as having the “courage to look dead-eyed at ideology and all its limitations without lapsing into nostalgia or cynicism,” something he called “ice water vision,” and wished that to be the quality he cultivated most. Me too. TC is one of the few writers/intellectuals I’ve had the privilege to watch develop, because he was so open throughout his career at The Atlantic. If you haven’t followed him since the beginning, this is an absolute must. His most famous work, Between the World and Me, is undeniably important, but WWEYIP is a better place to start.

The Hard Thing About Hard Things by Ben Horowitz

This is the best business book I’ve read in a while—maybe ever. An admittedly low bar. Ben puts on display the cutting intelligence, compassion and ruthlessness that have made Andreessen Horowitz the greatest venture capital firm of all time.

Inadequate Equilibria by Eliezer Yudkowsky

This book is an interesting look at why certain markets or institutions fail, and at how to recognize the failures that YOU can see and fix or exploit (often the same thing). Unfortunately, half of the book is really inside-baseball stuff about modest epistemology, but that can for the most part be safely skimmed. Suffice it to say: taking the outside view is often appropriate, but sometimes an individual can do better. The book is about recognizing those times.

Eliezer is a strange dude. He founded one of the world’s foremost AI safety research organizations, but then decided that unless humanity became more rational, his work there wouldn’t matter. So he set about to “raise the sanity waterline.” What a fucking insane thing. Strangely, he seems to have succeeded somewhat—his ideas have permeated influential thinkers and think tanks, and MIRI continues to hum and help lead AI safety research.

Online Reads

Stubborn Attachments by Tyler Cowen

The polymath economist of Marginal Revolution fame turns his lens on…ethics, or something like it. He makes the case for economic growth as a moral good above almost all else, excepting certain inviolable rights, and argues for a social discount rate of zero. In the 8 months since I read it, I’ve concluded that the optimal discount rate is non-zero, but very close to zero. Based on this disagreement, but with a lot of respect for Cowen’s framework, I propose a Portfolio Theory for Effective Altruism (below) as a way to think about efforts with different levels of certainty and time scales.

I rank this as his best work, followed by MR, and finally his Great Stagnation series; I wonder whether he would agree.

Update: I asked Tyler and he agrees with my ranking.

Definite optimism as human capital by Dan Wang

This piece theorizes about how optimism might be a driving force of innovation. As in, people literally being optimistic that things can change for the better drives productivity growth. He attacks our current productivity slowdown from multiple weird, creative angles.

Neuralink and the Brain’s Magical Future by Tim Urban

Tim Urban is the writer Elon Musk calls when he wants the world to understand him. If you haven’t read his Elon Musk series, start at the beginning and end with this one.

The 2017 Stratechery Year in Review by Ben Thompson

I’m cheating a little bit because I’m letting him curate his top 5, but Ben Thompson is probably the smartest person writing about strategy and tech (fuck, business in any sense) today. Everything he writes is worth reading.

A Portfolio Theory For Effective Altruism

It concerns me that many in the effective altruism world focus on optimizing for “the best” use of time and resources, often directing people to donate to a single non-profit. This strikes me as akin to trying to choose the single best investment. Altruism directed toward one cause suffers from the same flaw as investing in a single asset (or even a single asset class): you might be wrong about the best use of your marginal dollar.

This essay lays out the underpinnings of my portfolio theory of altruism and a rough structure of how I will place bets. I may develop a clearer model for portfolio development, but that is not my intent here.

What Are The Right Causes?

This question can feel almost settled by places like GiveWell and Giving What We Can. However, there’s a lot of variation in the effective altruism community in terms of what people actually work on and donate to: there are those who care deeply about the suffering of wild animals, and those who worry that suffering is a fundamental aspect of the physical universe and so want to destroy it (I don’t fully understand this idea and could be misconstruing it). Largely this variation is created by:

  • The moral status of non-humans
  • Moral uncertainty
  • How much to value the future (human or otherwise)
  • Nonlinearity

What Is The Moral Status of Non-humans?

Animals likely have non-zero moral status. This is the overwhelming consensus of humans: we tend not to inflict pain on animals directly, and most will go out of their way to avoid killing them, even inadvertently. The fight here is over how much moral status.

If you were to give them 10% as much moral status as a human, it strikes me as quite strange that you would be willing to support the murder of an animal for food. I am personally a vegetarian and fair-weather vegan, and 1/10th of a human is probably the amount of moral status I ascribe to animals. (Others seem to assign less status, but at what point does something go from non-murderable to murderable?)

In my model, 10% moral status means that I would have to run out of almost all opportunities to improve human utility before I began to shift resources toward animals. Vegetarianism is the exception, because the negligible effort involved is non-transferable and meat-eating is itself actually detrimental to human welfare. I won’t put any transferable effort or resources into animal causes—unless there are clear benefits to humans as a by-product.

The Role of Moral Uncertainty

Moral uncertainty asks how we should act when we can’t be sure our moral beliefs are correct. Pascal’s Wager is the classic example. Say you thought there was a 1% chance of Pascal’s God existing, but believing led to infinite utility. Then the expected value of believing is infinite.

Moral uncertainty plays a more interesting role in finite situations. For instance, say you think there is a .001% chance that human fetuses have human-equivalent moral status. This is a very small chance, but the expected harm of 664,000 abortions per year is 6.64 human deaths. In my personal model, the utility derived from those abortions far outweighs that expected disutility. Uncertainty in the correctness of your moral stances is rarely factored into decisions where you feel pretty sure, but given how frequently the moral consensus shifts over time, ever being 100% certain of something feels like a mistake.
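Here is that arithmetic as a minimal sketch; the 0.001% probability and the 664,000 abortions per year are the figures from the paragraph above, and everything else is just multiplication:

```python
# Expected harm under moral uncertainty, using the numbers from the text:
# a 0.001% chance that fetuses have human-equivalent moral status,
# applied to 664,000 abortions per year.
p_full_moral_status = 0.001 / 100   # 0.001% expressed as a probability
events_per_year = 664_000

expected_deaths = p_full_moral_status * events_per_year
print(expected_deaths)  # ~6.64 expected human-equivalent deaths per year
```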

In my model, I’ll attempt to avoid supporting or opposing issues where I feel morally uncertain about the issue itself (beyond the standard “the passage of time could prove me wrong” uncertainty).

What Is The Right Social Discount Rate?

The social discount rate can be thought of as akin to a financial discount rate. The lower the discount rate, the more we should value the world and people in the future. The higher the discount rate, the more we should care about people alive today.

I tend to side with Parfit and Tyler Cowen that the appropriate discount rate is close to zero, if not actually zero. I’m not convinced that people alive today have much more moral status than people who could potentially exist. But to preempt the objection that we’re valuing non-existent (and potentially never-existing) people, I’ll frame my time horizon as close to infinite in order to create the best circumstances for human flourishing.

Of course, a discount rate of near zero does not imply a clear course of action.
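To make that concrete, here is a minimal sketch; the 3% and 0.1% rates and the 200-year horizon are my own illustrative numbers, not figures from Cowen or Parfit:

```python
# Present value of one util enjoyed t years from now at discount rate r.
def present_value(utility: float, r: float, t: int) -> float:
    return utility / (1 + r) ** t

# One util 200 years out barely registers at a "financial" 3% rate,
# but counts almost fully at a near-zero social rate of 0.1%.
print(present_value(1.0, r=0.03, t=200))   # ~0.0027
print(present_value(1.0, r=0.001, t=200))  # ~0.82
```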

Oh, and the Universe is Literally Unpredictable Because It’s A Nonlinear System

Not only do our moral choices have to contend with how to value the far future, they have to at least acknowledge that chaos theory means small changes in initial conditions can cause large changes in future states.
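The classic toy demonstration of that sensitivity (my illustration, not the essay’s) is the logistic map in its chaotic regime:

```python
# The logistic map x -> r*x*(1-x) with r=4 is chaotic: two starting points
# that differ by one part in a billion end up nowhere near each other.
def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 50) -> float:
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

print(logistic_trajectory(0.3))
print(logistic_trajectory(0.3 + 1e-9))  # wildly different after 50 steps
```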

A way to potentially combat this is to create utility in the present, so that at least the future starts from a higher base. Another is to make choices that don’t require you to be right about what happens in the future.

Where I’ll Place My Bets

In my model, I will face up to the discount rate and nonlinearity by trying to clearly maximize utility in the initial conditions (today), and, as the time horizon extends, by making increasingly diverse bets with low probabilities but, given population and economic growth, potentially very large payoffs. Because of how these bets work, I can direct 70% of my resources to clear benefits for people and the world today, 20% toward organizations working to reduce the risks of reasonably foreseeable existential threats, and 10% toward speculative bets on the far future, and expect similar utility payoffs from each (a toy version of that arithmetic follows the portfolio below).

A portfolio could be:

Today

  1. Carbon offsets 2X my current consumption: 150k kT
  2. GiveWell to do with as they see fit

100 Years

  1. International Campaign To Abolish Nuclear Weapons
  2. OpenAI
  3. Long Term Future Fund

1000 Years

  1. ???
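Here is the toy arithmetic promised above. Every probability and payoff is invented purely for illustration; the point is only the shape of the logic, not the numbers:

```python
# A toy version of the 70/20/10 split. All probabilities and payoffs are
# made up; the point is that a small allocation to a low-probability,
# high-payoff bet can carry roughly the same expected utility as a large
# allocation to a near-certain one.
portfolio = [
    # (bucket, share of resources, P(bet pays off), utility if it does)
    ("today",      0.70, 0.95,  1),
    ("100 years",  0.20, 0.05,  70),
    ("1000 years", 0.10, 0.001, 7_000),
]

for bucket, share, p, payoff in portfolio:
    print(bucket, share * p * payoff)  # each bucket lands near 0.7 expected utils
```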

On an Ethical Laffer Curve

Note: I’m not concerned here with whether or not the Laffer Curve as originally intended is a useful idea or what the optimal marginal tax rate is.

I’ve always broadly believed that act utilitarianism was correct in its most demanding form. However, I rarely actually maximize utility impartially because, well, it’s really fucking demanding. Outside of being a vegetarian, Peter Singer would probably not approve of my lifestyle—I travel carbon-intensively a lot, donate way below 10% of my income to effective non-profits, and constantly waste time and money on anything that isn’t literally saving human lives.

Convinced that maximizing utility is correct, I went to one of the main sources of 20th-century ethical thought, Derek Parfit, and his first work, Reasons and Persons. Having never really read analytic philosophy before, I found it slow going. I covered maybe 5-15 pages of dense argumentation per day, and many days I was too intimidated to open it at all. After a month, I’m only through the first section of the book, on whether or not ethical systems are self-defeating.

BUT I have already had one view quake. My prior view was that there should be an unlimited focus on impartially maximizing utility.

  1. All people, all the time should do what would generate the best outcome.
  2. For people in the developed world, this would mean focusing 75% or more of their time and resources on altruistic endeavors, with the percentage increasing as their time and resources increase.

One obvious problem with that is that even as someone who believes the above, I have never come close to that standard. The problem Parfit found with the above formulation is that if we could somehow convince (or more likely coerce) everyone to act to maximize utility, then a very large majority would be miserable and thus actually lower total utility.

Thus the Ethical Laffer Curve: like tax revenue plotted against tax rates, total utility rises as the fraction of effort devoted to impartial maximization increases, peaks, and then falls as the demandingness itself makes people miserable.

My new formulation:

  1. All people, all the time should do what would generate the best outcome.
  2. This means focusing as much of your time and resources as you can on maximizing utility, up to the point where that focus begins to decrease personal utility at a faster rate than it increases total utility (a toy model of this stopping rule follows).
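A sketch of that stopping rule, with utility and cost functions I made up for illustration (nothing here comes from Parfit):

```python
# A toy Ethical Laffer Curve. x is the fraction of your time and resources
# spent on impartial utility maximization. Good done for others grows with
# x, but the personal cost of that demandingness accelerates, so total
# utility peaks somewhere in the interior of [0, 1].
def total_utility(x: float) -> float:
    utility_for_others = 10 * x      # more effort, more good done
    personal_cost = 12 * x ** 3      # misery accelerates with demandingness
    return utility_for_others - personal_cost

# Stop adding effort at the peak, where extra focus destroys personal
# utility faster than it creates utility for others.
best_x = max((i / 100 for i in range(101)), key=total_utility)
print(best_x)  # ~0.53 with these made-up coefficients
```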

Some interesting points:

  • The X axis allows for personal variation: Person 1 may be able to focus 95% of their time on utility maximization, while Person 2 may hit negative returns at 30%.
  • The Y axis allows for a lot of debate over what utility really is, although the boundaries seem defined as a slow, painful death and literally being Bill Gates—infinitely wealthy and pursuing work you find meaningful. It’s not clear that we need to precisely define utility in order to pursue such a varied utilitarianism as this model proposes.
  • People who think about utility maximization often leave out economic growth as an avenue for increasing total utility, but it is likely the most important (if pretty abstract) lever. That being said, most private sector jobs don’t increase GDP all that much. Entrepreneurs, technologists, engineers, consultants scaling innovations and financiers efficiently allocating capital should likely focus their time on driving productivity growth and donating their financial resources.

As with the Laffer Curve, the debate is over where the average and marginal “ethical tax rates” should be. Given the level of suffering in the world, it seems very apparent that the average rate is still too low, but we’ve been steadily raising it—mostly through economic growth and the creation of international norms against war.

As for my ethical tax rate, I’m still certain mine is above my current contributions to utility maximization, but clearly not at Singer-esque rates.