Let’s begin at the beginning, shall we? Before EA was a multi-billion-dollar crusade against artificial intelligence armageddon and asteroid collision risks, before it received the approval of Peter Thiel and Elon Musk, before MacAskill's curiously named What We Owe the Future received gushing adoration from the mainstream media, before all of that.
How distant those times must now seem. In the aftermath of Sam Bankman-Fried's bankruptcy, the EA community has come in for a lot of backlash. Several media publications are now scrambling to explain what EA is and where it has gone wrong. If the media are outdoing themselves to explain what you are about, rest assured it is not something good.
Many of these criticisms, while rather late, are valid. Effective Altruism has that winning combination of hubris and oddity which alarms ordinary people. But many of the criticisms are also lazy and careless. The good EA has done in the world is nothing to snigger at: the Against Malaria Foundation (an EA-endorsed organization) has distributed 200 million nets and saved at least a hundred thousand lives in the process.
Effective Altruism might be radical, but it was, for a moment in time, a shining example of moral goodness. This is what invests its recent paradigm shift with so much disappointment: the most influential ethical movement of our era was hijacked by amateur philosophy and thinly disguised sociopathy.
Zoom in closely on EA and, like a Mandelbrot set, you will find the same recursive pattern by which all religious and social movements eventually betray everything they stood for. The rest of this essay is interested in piecing together why.
We Like To Think Of Drowning Children For Some Reason
Let’s begin at the beginning, shall we? Before Effective Altruism was hardcore virtue signaling and long, tiresome debates about the axiology of future people, it was a rather radical idea which began with this guy:
Peter Singer is famous for many things. Okay, fine. He’s only famous for them to other philosophers. But he is genuinely famous for something in particular: the most popular thought experiment in the world, the parable of the drowning child.
There are already many variants of the drowning child thought experiment, as befits any meme of such renown. I shall reproduce the most popular version here, from Singer’s classic book, The Life You Can Save:
Imagine you’re walking to work. You see a child drowning in a lake. You’re about to jump in and save her when you realize you’re wearing your best suit, and the rescue will end up costing hundreds in dry cleaning bills. Should you still save the child?
Well, yes, of course. Every reasonable person would answer in the affirmative. That’s all Singer needs to hit us with a sledgehammer of a conclusion: we are all as guilty as a bystander in that scenario who chooses to walk away and save his expensive suit.
Every single time we purchase something unnecessary like an extra pair of shoes rather than donating the same amount of money to rescue a child dying in some remote country, we have also chosen to keep walking. That’s what we all are, Singer cleverly insinuates: bystanders to immense suffering who are clinging on to their suits.
I remember the first time I came across Singer’s argument. It felt invulnerable, as though there were no possible way to refute it. Here lay all of our moral guilt, beyond appeal to any authority, as stark as daylight.
Many in the EA community clearly felt the same way. Singer's drowning child parable is the moment of conversion, the 'Damascus moment', for many effective altruists.
Animated by its implications, William MacAskill, the co-founder of Effective Altruism, began donating a substantial share of his income to charity. Many others did the same.
In recent years, the influence of Singer’s parable has waned almost to extinction. As a judge of EA's recent red-teaming and criticism contest smugly put it:
Academics are stuck in 2015. It's great that academics are writing full-blown papers about EA, and on average I expect this to help us fight groupthink and to bring new ideas in. But almost all of the papers submitted here are addressing a seriously outdated version of EA, before the longtermist shift, before the shift away from public calculation, before the systemic stuff.
Some of them even criticize Singer 2009 and assume that is equivalent to criticising EA.
The implication was that EA had outgrown such puny problems as donating antimalarial nets and saving the poor and was now tackling ‘the biggest issues of the universe’. BIG MISTAKE.
In fact, EA never really escaped the shadow of Singer at all. Longtermism is still grounded in the same naive utilitarianism, only now applied not to actual problems but to theoretical fantasies in the far-off future. EA went from saving ‘children drowning right now’ to estimating how many drowning children it could save thousands of years in the future.
Magic Tricks Don’t Always Come From Magicians
The amateur magician, I am told, believes magic is about distracting people and diverting their attention. Thus, his act is all smoke and clatter and absurd props.
The professional magician is different. He already knows that ‘attention’ is a bit of a misnomer: the mind is in the business of distracting itself. All the magician has to do is play along and exploit it.
Peter Singer's parable of the drowning child is a magic trick. It’s difficult to notice that at first because it is so well written, so tightly constructed. Save children from their deaths. Well, who is going to argue with that?
But hidden within his simple, cautionary thought experiment are at least sixteen implicit assumptions:
1.) The cost of the moral action is small: dry cleaning costs for a suit.
2.) The cost of the moral action is clear: nothing more than temporary damage to the bystander's suit is necessary to save the child's life.
3.) The benefits of the moral action are huge: the saving of a little child's life.
4.) The benefits of the moral action are clear and can be estimated immediately: the bystander knows exactly what's at stake if he decides to dive into the lake. Feedback is also direct and immediate.
5.) The bystander has the capacity to solve the problem: it is assumed he can swim and is strong enough to carry her to safety.
6.) The action is a one-off: the bystander does not have to repeat this action over and over and over again.
7.) The bystander has no equally pressing problems at the moment.
8.) There are no relatives of the bystander, to whom he will owe a moral duty, who are also in dire need of help.
9.) The action results in absolute gain: the benefit of the moral action is not diluted by what anyone else does or doesn't do.
10.) There is no other person around with a specific moral, legal or political responsibility to help the child.
11.) The intent is strongly correlated with the possible result: By electing to help her, the bystander will not be unwittingly helping someone else who is in no need of any assistance.
12.) It is absolutely the best way to help her, and the bystander knows this: the problem is not complex with many possible solutions and there is no large degree of uncertainty around the best solution.
13.) There is nothing standing in the way of the bystander helping her.
14.) The bystander can solve the problem on his own: solving the problem doesn't require any complex, large-scale response from many people acting in coordination.
15.) Her input is unnecessary: the rescue does not depend on the child's own cooperation or effort.
16.) His actions, if he chooses to save her, are very unlikely to make the situation worse.
In the real world, it is often the case that many or all of these assumptions do not hold.
Take, for instance, assumption 1. The average dry cleaning cost for a suit is pegged at 12 to 15 dollars. GiveWell, on the other hand, estimates that it takes about 4,500 dollars to save a life. That is a 300x difference. Regardless of one's views on the matter, this substantially dilutes the force of the thought experiment. In normal circumstances, it simply costs a lot more to save a single life.
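The gap is easy to check with back-of-the-envelope arithmetic. The figures below are the rough estimates quoted above, not exact prices:

```python
# Back-of-the-envelope check of the cost gap described above.
# Both figures are the rough estimates quoted in the text.
dry_cleaning_cost = 15        # dollars, upper end of the quoted range
cost_to_save_a_life = 4_500   # dollars, GiveWell's rough estimate

ratio = cost_to_save_a_life / dry_cleaning_cost
print(f"Saving a life costs about {ratio:.0f}x a dry cleaning bill")  # about 300x
```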
Or take assumption 4, that the benefits are clear. This assumption hardly holds in the real world either. During the 1980s, the International Monetary Fund embarked on a program of massive loans to African countries in return for deregulating their local economies and adopting neoliberal free trade policies. These projects were known as structural adjustment programs, and their overall impact has been paltry at best.
Research by the economist Ha-Joon Chang shows that since the inception of structural adjustment programs, the economy of Sub-Saharan Africa grew at only 0.2 percent per year in per capita terms (between 1980 and 2009). In contrast, in the two decades prior to those policies, Sub-Saharan African economies grew at 1.6 percent per year in per capita terms.
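Small differences in annual growth compound into large gaps. A quick sketch, compounding the two per-capita rates quoted above over a 29-year span like 1980 to 2009 (illustrative arithmetic only, not a reconstruction of Chang's data):

```python
# Compound the two quoted per-capita growth rates over 29 years (1980-2009)
# to see how much the difference between 0.2% and 1.6% actually matters.
years = 29

slow = 1.002 ** years   # 0.2% per year
fast = 1.016 ** years   # 1.6% per year

print(f"0.2%/yr over {years} years: {(slow - 1) * 100:.0f}% total growth")  # ~6%
print(f"1.6%/yr over {years} years: {(fast - 1) * 100:.0f}% total growth")  # ~58%
```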
Growing an economy is hard. It's not merely hard but complex. The long-term effects of macroeconomic actions are often impossible to predict until substantial periods of time have passed. In contrast, the benefit of saving the little girl's life is clear and can be estimated immediately.
Assumption 16 is also without guarantee. In his fascinating book False Alarm, Bjorn Lomborg writes:
'In Fiji, the government teamed up with a Japanese technology company to deliver off-grid solar power to remote communities. They provided a centralized solar power unit to the village of Rukua. Prime Minister Frank Bainimarama proudly declared he had “no doubt that a number of development opportunities will be unlocked” by the provision of “a reliable source of energy.” Understandably, all of Rukua was thrilled to get access to energy and wanted to take full advantage. So more than thirty households purchased refrigerators. Unfortunately, the off-grid solar energy system was incapable of powering more than three fridges at a time, so every night the power would be completely drained. That led to six households buying diesel generators. According to researchers who studied this project: “Rukua is now using about three times the amount of fossil fuel for electricity that was used prior to installation of the renewable energy system.” In rather understated language, the researchers conclude that the project did not “meet the resilience building needs” of the community.'
Neither is assumption 11 likely to be true. A report released by the World Bank in 2020 estimated that about 7.5 percent of foreign aid is diverted into the offshore accounts of tax havens like Luxembourg.
Assumption 9, the assumption of absolute gain, presents similar difficulties. By absolute gain, I mean that certain goods are intrinsically useful. Survival, for instance, is an absolute gain. People's desire to survive is generally not contingent on whether the people around them have survived: almost everyone alive today has lost someone close to them and kept on living. Certain other desires, like the need for basic shelter, clothing, and food, can also be said to be absolute. Without absolute gains, people are very unlikely to be happy. Indeed, they are very unlikely to exist at all.
But not all gains are absolute. Some goods are only valued because other people don't have them. Whether they make people happier depends strongly on what other people around them already have or don't have. Beyond a basic cutoff, almost everything is of this nature. Although it does not bear admitting, status is an important component of happiness, and gains in status can only be relative.
Despite being thousands of times richer than the average man, billionaires are not thousands of times happier than the rest of us. The most recent analysis of the diminishing utility of money, by Matthew Killingsworth, suggests the relationship between money and happiness plateaus at around 200,000 dollars a year.
It seems that, beyond some cutoff, doing all the good you possibly can yields negative returns once you weigh the cost of your efforts against their diminishing utility.
The returns in happiness from making everyone much better off are far smaller than they initially appear, once you account for the human tendency to get used to things and for the fact that much of happiness derives from relative rather than absolute improvements in welfare.
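One common way to picture this diminishing return, used here purely as an illustration (a standard economists' toy model, not the analysis cited above), is logarithmic utility: each doubling of income adds the same fixed amount of well-being.

```python
import math

# Toy log-utility model of diminishing returns to income.
def utility(income: float) -> float:
    return math.log2(income)

# Under log utility, each doubling of income adds one unit of well-being,
# so the same absolute raise matters less the richer you already are.
gain_poor = utility(40_000) - utility(20_000)    # doubling from $20k
gain_rich = utility(220_000) - utility(200_000)  # the same $20k raise at $200k

print(f"$20k raise at $20k income:  {gain_poor:.2f} units")   # ~1.00
print(f"$20k raise at $200k income: {gain_rich:.2f} units")   # ~0.14
```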
Assumption 6 is also suspect. Poverty is not simply a function of having less money and thereby needing more donations; it is best understood as a collective problem. To date, no country in history has gotten wealthy through charity. As long as the root problems remain unfixed, a one-off donation is unlikely to make much of a difference. Such charity might even backfire if it creates perverse incentives of dependence, or if it is suddenly withdrawn, returning its recipients to a less satisfactory life than they had before.
Effective Altruism was and is at its most effective when dealing with problems that resemble the drowning child parable as closely as possible. The remarkable success of EA in charity donations, organ donations, and vaccines is clear proof. These problems are all one-off, absolute gain problems where many of the other assumptions clearly hold.
Each time the EA community shifted a little further from problems where these assumptions hold, it morphed into something less effective and less altruistic. Each time, it lost a little more of its soul.
Effective Altruism is, at bottom, an attempt at moral optimization. Optimization is easy when you have a clear, well-defined, linear objective. And so EA did extremely well with charity donations and the like.
But when the function is unclear and complex, straightforward optimization is likely to backfire. In other words, if you were looking to cause the maximum amount of harm with effective altruism, the easiest way to do it is to find the fuzziest problem possible, one with little to no resemblance to the drowning child parable, and then try to EA your way out of it.
And the EA community went ahead to do just that. They even gave it a name: longtermism.
Why Did This Pivot Happen?
Longtermism is a fairly uninteresting philosophy with extremely interesting (and dangerous) consequences. I won’t address it today, save to say that most of its criticism is heavily misguided.
What is more relevant here is why Effective Altruists were so eager to jump on its bandwagon. The reasons seem to me to be twofold.
The first is that traditional EA is, frankly, a little boring. It is no coincidence that the community is dominated by nerdish young people. Youth is the period of life when you are primarily driven by three desires: to socialize, to change the world, and to have fun. EAs merely want to calculate their way through all three.
For a time, traditional effective altruism projects satisfied all three. But it simply could not compete with longtermism.
What earns more social capital? Saving the world from Artificial General Intelligence or running deworming experiments? What raises your profile better? Getting endorsed by techno-utopian billionaires or doing more effective charity? What’s more fun? Calculating the total number of people that may exist in the future or doing thankless community work? All of those questions were rhetorical.
The second was a matter of funding. The people who sponsor the EA community have little interest in EA's original commitments. Thiel, Musk, and Bankman-Fried, to name a few, do not give a damn about antimalarial nets or charitable donations to the poor. They do give a damn about leaving behind a technological legacy. Subtly, slowly, but surely, Effective Altruism changed itself to accommodate those preferences.
There was no coercion; it was a mutually beneficial partnership. And that is what makes EA's efforts to distance itself from Bankman-Fried so hypocritical.
By facing the truth and accepting him as one of their own, they could begin to acknowledge the deep rot in their community. Dismissing him instead as an opportunistic fraud ensures they do not have to confront some painful lessons. But for how long?
An Effective Altruism movement that sidelined longtermism would have a much smaller profile, much smaller funding, and far less public attention. It would also do far more good. Would the EA community have the courage to say to itself, ‘let’s begin again at the beginning’? Would it, more importantly, have the courage to act on it? The odds don’t look good.