Preamble
Like many of my fellow Apollo fellowship attendees, I was first introduced to the idea of EA last summer. Initially, it seemed intuitive why I should support EA. Effective altruism is at once a philosophy, a research field, a movement, and a community that tries to use methodical approaches to help others as much as possible with limited resources. As Peter Singer says, EA is both “a philosophy and a movement”: a set of principles and the efforts to enact them. Specifically, the EA community has focused on alleviating global issues such as risks from artificial intelligence and animal welfare, and on improving decision-making. Why wouldn’t you want to do the most good?
But as I’ve started doing more reading and looking into how EA manifests in the real world, my doubts have compounded. Honestly, I’m not entirely sure where I stand on these issues, so bear with me as I try to sort through my own thoughts… On that note, this piece is more academic than most of my other pieces, but I promise I’ll be back with more personal writing soon <3
What are the core beliefs of effective altruism?
Effective altruism uses three core principles to determine which issues we ought to prioritise funding towards:
Importance
Neglectedness
Tractability
Firstly, importance (utility gained / % of problem solved). This principle aims to help the largest number of people possible by supporting the most pressing and urgent causes. The issue is that people often take overly short-termist views, helping those in the near future because the impact is easier to imagine, or helping the people around them because their suffering feels more tangible. However, if we presume that everyone’s well-being and moral worth matters equally, regardless of whether they live down the street or halfway across the world, then we ought to direct limited resources in a way that maximises the good done. For example, “a donation of $50,000 can be used to train and provide one guide dog for one blind person in the United States, or it can be used to fund surgery to prevent blindness (from trachoma) for five hundred people in a poor country” (MacAskill, p. 61). Since some ways of doing good can help a dramatically larger number of people, it’s important to try to find the best way to help.
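MacAskill’s comparison can be made concrete with a quick cost-per-person calculation. The dollar figures come from the quote above; treating “people helped” as the unit of good is, of course, a big simplification:

```python
# MacAskill's $50,000 comparison (p. 61) as cost per person helped.
budget = 50_000  # dollars

guide_dogs_provided = 1   # one guide dog for one blind person in the US
surgeries_funded = 500    # sight-saving trachoma surgeries in a poor country

cost_per_guide_dog = budget / guide_dogs_provided  # $50,000 per person helped
cost_per_surgery = budget / surgeries_funded       # $100 per person helped

# Same budget, 500x as many people helped per dollar.
print(cost_per_guide_dog / cost_per_surgery)
```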
Secondly, neglectedness (% increase in resources / extra $). The Open Philanthropy Project puts it as “all else being equal, we prefer causes that receive less attention from others”. In short, this principle directs attention to the groups who have the least power to protect their own interests, or to causes that currently receive little support. If a lot of other organisations are already working on the same cause, additional resources bring diminishing returns. This is why effective altruism takes a large interest in reducing factory farming: farmed animals, by their nature, have no capacity to advocate for themselves, and few organisations work to help them.
Lastly, tractability (% of problem solved / % increase in resources). This refers to how far the social good a cause could bring can be calculated and measured, and how much progress additional resources can actually buy. While these calculations can’t be perfect, effective altruists believe it is better to have some estimate of where we can do the most good than none at all. Applying the scientific method of testing one’s beliefs to find the best way of doing good may lead to unconventional or unintuitive results, but such randomised controlled trials and estimates matter: they have even been used by the EU to redirect its funding.
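One way to see how the three principles fit together: the parenthetical ratios above are constructed so that, multiplied together, the intermediate terms cancel and what remains is the good done per extra dollar (this framing mirrors 80,000 Hours’ importance–tractability–neglectedness decomposition):

```latex
\underbrace{\frac{\text{utility gained}}{\%\text{ of problem solved}}}_{\text{importance}}
\times
\underbrace{\frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}}_{\text{tractability}}
\times
\underbrace{\frac{\%\text{ increase in resources}}{\text{extra \$}}}_{\text{neglectedness}}
=
\frac{\text{utility gained}}{\text{extra \$}}
```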
What’s crucial about these principles, in the way effective altruism practises them, is that they are not absolute and are always being debated. Open truth-seeking encourages the community to hold strong beliefs weakly and to be ready to change views radically. This allows for continuous revision of these views and their application, letting the movement do the most good depending on the circumstances.
How does this neglect vulnerable actors?
In the book “The Good It Promises, the Harm It Does”, critics argued that “funding metrics of EA ignored hostile work environments, and how many have been hurt by known serial sexual exploiters who lead groups assessed as ‘effective’ by EA-tied groups”. #MeToo advocates have described a pervasive “toxic culture of sexual harassment and abuse” within the EA community. Many who have felt alienated, or whose struggles have been delegitimised as less important than others’, have accused EA of discrimination, which not only sets a bad image for the movement but also loses it support from one of the largest social movements in the world.
Effective altruists would counter that the feminist movement already has massive funding and support, making it less neglected and vulnerable. The issue, however, is that this concept of declining marginal utility of money doesn’t actually help the most vulnerable either.
Envision two possible allocations of funding:
A program that maximises the number of people helped by targeting poor but literate men in urban areas who have had access to basic education. A small amount of funding for each of these people has a high likelihood of lifting them out of poverty, so the program uses the least money to help the largest number of people.
A program focusing on those who are most vulnerable, who suffer from “ultra-poverty”. This would attempt to help disabled, illiterate, or elderly people in rural areas. It would have a lower likelihood of success at reducing poverty and would cost more to help the same number of people as the first.
According to Paul Mosley and David Hulme, most people would agonise over this decision between growth and poverty alleviation; for EAs, however, the decision is intuitive. “They favour projects that focus on fewer people when doing so delivers a greater gain in overall welfare. But in Ultra-poverty this is not the case.” This is because these individuals suffer from “a composite of afflictions”, including a tendency to “lack important capabilities and skills, to be victims of social marginalisation and geographical remoteness, and to suffer from chronic illness or disability”. This makes it harder to achieve change for them effectively, leading to the “systematic neglect of those at the very bottom”. Hence, EA neglects the people who need help the most and only further entrenches their suffering by ranking them as less important for funding.
Why do these issues arise?
Effective altruism relies on data and tractability as key metrics in prioritising issues. It assesses importance by relying on scientific reports and publications in certain areas. With this information, organisations like GiveWell or Giving What We Can direct the money people donate toward the most important cause areas. This allows for donor confidence, efficiency in processing large amounts of data, and clear justification for donations. However, this reliance on data to determine which causes fulfil the metrics leaves effective altruism vulnerable to what critics call “methodological blindness”.
This manifests as two large forms of bias:
Observational bias
Effective altruism seeks to focus on fields with high-quality information and data. However, this “tendency to focus disproportionately on what is known, or readily verifiable, can lead to certain forms of bias and error”. This is because the burden of proof that establishes one cause area as important often cannot be met in other cause areas. As people grow accustomed to one well-measured area, they tend to push other areas aside, treating similarly important issues that are unmeasurable by these same metrics as less pressing.
Specifically, effective altruists largely support randomised controlled trials as a fair and rigorous way of collecting data. RCTs are “experiments where people are given, at random, either an intervention (such as a drug) or a control and then followed up to see how they fare on various outcomes.” They have even been considered the “gold-standard method of testing ideas in other sciences”, since they can establish cause and effect rather than mere association. This works well for questions like proving smoking damages people’s lungs, where you can compare a random sample of smokers against a control group. Or, to understand the effect of antidepressants, you can compare people who do and do not receive them. The control group allows you to “control the total effects of all of the other factors that could affect people’s mood, not just control for the biases and errors in study” (Dattani 2022).
However, while RCTs are often effective for small-scale interventions, Peter Singer himself admits: “They can be used only for certain kinds of interventions, in particular, those that can be done on a small scale with hundreds or thousands of individuals or villages, from which samples large enough to yield statistically significant results can be drawn.”
This is self-defeating: if the evidence and data EA uses to determine global priorities are inherently limited in this way, funding directed at the supposedly most pressing problems is unlikely to actually help the most vulnerable.
Quantification bias
The issue with evaluating importance in numbers is that many important issues simply cannot be measured on a quantifiable scale. Think of this example: there are two patients. One is suffering from terminal cancer and the fear of death. The other is suicidal and wants to die. Both are clearly in pain, but it is impossible to quantify how much one is suffering compared to the other. Perhaps you could find people who are both terminally ill and suicidal and ask them which they think is worse. But even then, the pool of people who suffer from both conditions is so small that the evidence would be unreliable.
This becomes especially pressing when the framework is applied to longtermist funding. According to 80,000 Hours, out of roughly $46 billion in committed EA capital, longtermist funding accounts for “around $420 million, which is just over 1% of committed capital, and has grown maybe ~21% per year since 2015”; EA currently invests a lot of its money in the long term. However, it is logistically impossible to compare the suffering of someone dealing with climate change in 100 years with that of someone suffering from poverty right now. Effective altruists would counter that there will be more people in the future, so at scale their suffering matters more. But the future holds many ambiguities. Will plant-based diets be more popular? Has climate change truly reached a point of no return? Which animals will still be alive? All of these factors mean it is really difficult to assess the amount of suffering someone experiences in the future versus the present, especially given the issue of epistemic access (i.e. we cannot know the experiences of other beings, because we do not live their experiences). The problem worsens when comparing across future generations, where there are even more uncertainties. Hence, the predictions people make to compare suffering are unlikely to be accurate.
However, even if the amount of suffering were quantifiable, the knock-on effects of policies are not. Regarding solutions to HIV/AIDS, “according to recent estimates, condom distribution is a far more effective way of minimising the harm caused by HIV/AIDS than the provision of anti-retrovirals.” In short, it is more cost-effective to prevent HIV/AIDS than to treat those already affected. Yet even if this evidence were reliable, the reason “most governments and populations affected by the pandemic have rejected strategies that leave people with HIV/AIDS untreated” is that there are unquantifiable impacts and benefits to treating those who are struggling. Funding care for the sick builds trust in the government; it gives people hope that their lives can get better, and that hope encourages them to tackle the hardships in their lives. But it is impossible to measure the impact of hope or trust. This is why “we cannot move directly from information about the most cost-effective intervention, on a DALY per capita basis, to reliable conclusions about the best overall policy or program.” Policies and their real-world manifestations cannot be captured by numbers alone, and effective altruism is unable to account for the full impacts that arise from policy changes or funding allocation, undermining its own effectiveness.
Moral implications
The core premise of EA is its commitment to impartiality: the idea that every human and sentient being holds equal moral weight. Immanuel Kant argued that “we ought to treat all human beings as ‘ends in themselves’—as free, rational beings equally worthy of dignity and respect.” This is often encoded in ideas such as human rights and in legal systems where states hold a social contract towards all of their citizens.
Many criticise EA for contradicting the very principles it seeks to uphold, claiming that instead of uplifting the most vulnerable, it dehumanises people and takes away their most fundamental rights. A right is defined as a “justified claim to some form of treatment by a person or institution that resists simple aggregation in moral reasoning”. Rights are powerful because they are non-utilitarian. Regardless of the benefits to the economy, we would not force people into slavery, because it violates the most fundamental right to freedom. Regardless of the benefits for the poor, we would not morally justify taking all the money from the rich without their consent, because they have the right to their property. In this way, “rights build upon the notion that it is wrong to use an individual to achieve some outcome in a way that ‘does not sufficiently respect and take account of the fact that he is a separate person, that his is the only life he has’”.
The issue with EA is that its often consequentialist approach to achieving good can take away people’s most fundamental rights. Think back to the ultra-poor, who already lack safety and the ability to protect themselves: EA neglects people whose most basic rights go unmet and redirects resources to animals or to more ambiguous, long-term cause areas.
For example, imagine this situation regarding sweatshops. The sweatshops have terrible working conditions, are poorly regulated, underpay workers, and abuse underprivileged communities. However, they have comparatively brought the nation more money, and the economy has begun to grow thanks to the multiplier effect of that spending. An NGO is created to push the government for stricter rules that would improve working conditions in exchange for fewer job opportunities. To many of us, supporting this NGO would seem moral, as it protects people’s most fundamental rights. However, many EAs, following MacAskill’s lead, would not. They would argue “that poor people are better off in a world where sweatshops exist than one in which they do not, and that a policy of non-interference is to be preferred for that reason”.
But for those who believe, as above, that people should not be used as a means to an end and that rights are absolute in the face of utilitarian outcomes, this trade-off cannot be supported, regardless of the economic benefits it brings.
EA ≠ Utilitarianism?
However, it would be wrong to claim EA is purely consequentialist, although it is certainly similar. According to 80,000 Hours, “it is maximising, it is primarily focused on improving wellbeing, many members of the community make significant sacrifices in order to do more good, and many members of the community self-describe as utilitarians”. The difference, however, is that EA “does not claim that one must always sacrifice one’s own interests if one can benefit others to a greater extent”, nor that “one ought always to do the good, no matter what the means”. Rather, it holds that there are “pro tanto reasons to promote the good, and that improving wellbeing is of moral value”. This is why EA often discourages people from opting into harmful careers in order to make more money to donate (the “earning to give” approach): we can be wrong in our calculus about the extent of the harm we do, and once you opt into a career, it is often hard to reverse.
Hence, while EA and utilitarianism share similarities, they are not identical. The issue, however, is that this does not address the charge that EA still often dehumanises people’s struggles and is willing to justify taking away the fundamental rights of the most vulnerable.
My final take:
I think there are various ways in which EA is defined and perceived in the status quo. Its broad principles, that we should do good, and that it is better to estimate the good we can do and be wrong than never to try, are convincing. However, there are various changes the community could make to be more inclusive, more ethically convincing, and even more effective than it is now.
My personal evaluation of these various and often conflicting views is that there is a middle ground between the two extremes. Perhaps it is true that some causes need to be deprioritised. It is worth trading off causes like animal welfare or guide dogs for the blind in order to funnel that money toward the ultra-poor. Not only because, pragmatically, this would make EA a lot more popular, and a larger pool of funding would give EA the flexibility and research capacity to find what is truly most effective, but also because it would allow EA to help alleviate the worst forms of human suffering.