After more than four years of working for an InsurTech company as a data scientist (a notoriously uninformative job title), I have recently undergone a rather dramatic career switch. Returning to the nonprofit world was far from an obvious choice, considering how unpleasantly my last NGO stint came to an end. Whether this will turn out to be a smart move career-wise we’ll have to see, but this wouldn’t be a particularly interesting topic to write about — LinkedIn influencers be damned. Instead, I wanted to spell out, on a fundamental level, why I’m so excited about the work this foundation is carrying out. I don’t intend this to be representative of how other staffers think about it; it is simply an elaborated response to the question “Why GiveWell?”.
Since not everyone will be familiar with what GiveWell is doing, let me start by outlining what problem they are trying to solve. You can skip this section if this doesn’t apply to you.
Doing the most good per dollar
If you belong to the richest 10% globally (and chances are very high that you do if you’re reading this), the idea of donating some of your wealth to help the less fortunate has probably already occurred to you, and maybe you’re even regularly donating to one of the myriad organizations that promise to do just that. Numerous as they are, you will need a way to decide whom to give to. Are you likely to prioritize a charity that supports a cause particularly dear to your heart? One that helps people in your neighborhood, one that is well-known, or simply one that caught you off guard while walking through the city center? Other things being equal, most of us would rather support an organization that is effective at achieving its goal, but do we have a good sense of which ones those are?
It turns out that identifying charities that actually have a big impact is not straightforward. The reason why GiveWell exists is that its founders identified this niche and set out to address it. The idea that you could sit down and rank charities quantitatively is not entirely new, but such rankings were usually based on very crude metrics, such as: What percentage of their budget was spent on operational costs and marketing? Was the CEO “overpaid”? Did the donations reach the people they were intended to reach? Whatever else these may tell you about an organization, they don’t get anywhere near the heart of the matter. A charity could pay its staff very little, have almost no advertising budget, and be very good at ensuring that the things they promise to implement are in fact getting implemented, and yet be a terrible choice if their program is built on a misunderstanding about how their interventions work.
Effective altruism — a movement that was, in many ways, spearheaded by GiveWell — is often described as a philosophy of “doing the most good possible”. (It is perhaps no surprise that one of its major intellectual forebears, Peter Singer, is also the most famous proponent of utilitarianism.) Taken literally, this seems both methodologically naïve and practically impossible, but I think there is a less strong form of it that I very much endorse. I think of it as two premises:
1. We who were fortunate enough to grow up in a wealthy country, with a very developed health care system and numerous other amenities, have a certain moral obligation to help people in need. Not because it is our fault that others have less — the world is not a zero-sum game — but because luck is an essential factor shaping the distribution. Most of us could be giving more without being noticeably worse off, although exactly how much is a question that has no meaningful answer. (This is the “altruism” part.)
2. On top of the first imperative, there is, in my mind, also a duty to perform due diligence and ensure the donations are put to good use. It’s not enough to merely feel good about giving away money — we also have an obligation to make sure it is not being wasted, be it via corruption, disastrous unintended consequences, or projects that sound good in theory but can offer little to no evidence in their favor. Simply speaking, you can double your impact by donating twice as much, but you can do an order of magnitude better by focusing on the right programs. (This is the “effective” part.)
Although (1) is the claim that would be harder to derive from first principles (and I won’t attempt to do so here), (2) is the one that has been most controversial. How so? Simply speaking, almost all charities are incredibly inefficient, and that unsurprisingly rubs both the people behind the organization and its donors the wrong way. For donors, it’s not exactly flattering to hear that they care too much about their own reputation, and display a pronounced bias for helping people in their vicinity (who are already quite rich, by global standards). For charities, the verdict is that they’re often not transparent, inflate their successes, and stick to their own preconceived ideas of what should work rather than trying to figure out what actually does work. And finally, there are the outside observers who either think the calculus is severely misguided, or that the very idea of trying to quantify aid is naive and hubristic.
It would be utterly pointless if everyone tried to investigate, on their own, the effectiveness of the thousands of charities that might possibly qualify, and it’s great that it’s possible to outsource such a task to an organization that specializes in it. Having said that, I think it is still helpful to understand what exactly one is signing up for if one chooses to donate to GiveWell.
GiveWell has established one metric as its principal guiding light: Whatever interventions they investigate, they are single-mindedly devoted to identifying those that are maximally cost-effective, even though they understand very well that these estimates are extremely uncertain and shouldn’t be taken literally. Drawing up a balance sheet of costs and benefits is a very powerful mental tool to ensure you’re not leaving anything out of the equation, even if the final result is often best understood in terms of orders of magnitude. Some people have claimed that this analytical framework is so narrow that it leaves out the most important questions by definition, but I think that, given funding constraints, this is a sensible niche to focus on.
It’s also worth noting that this doesn’t mean the entire methodology behind it is just an exercise in number-crunching, where you would take the results of an RCT (or, better yet, a large meta-analysis) and ruthlessly extrapolate the findings, ignoring everything else that could get in your way. And that feeds into a larger question: In some sense, it’s hard to explain what sets GiveWell apart from other players in the space, who also have very qualified staff, larger budgets, and at least some incentives to find out what’s true. What makes them unique? This isn’t a binary classification, but I think they stand out on several fronts:
- Transparency: Of course, every organization claims to be transparent, and some have put a lot of horsepower behind it, but here are some examples that are rather uncommon: publicly available recordings of every board meeting, a section dedicated to past mistakes, and an unusual willingness to admit what they don’t know or couldn’t be bothered to check.
- Explicitness: Rather than hiding behind verbiage, GiveWell tries to make it as clear as possible which factors drive their assessment, and how their claims and assumptions are supported. You would think this is how science generally operates, but it isn’t always: Academic footnotes, for example, rarely tell you what specifically gives force to an argument, usually just linking another paper (or a series of papers) without further comment.
- Scope of analysis: It’s great to know that something works well somewhere, but this isn’t enough. You also want to know whether it scales, whether it works in a different context, and whether it’s not already well funded. The latter point in particular often gets neglected, and it’s still rare to hear organizations talk about the risk of crowding out other funders.
My impression after working here for a few months — and yes, absolutely, take this with a grain of salt — is that those aren’t just empty corporate values that the CEO pontificates about during annual reviews, but deeply ingrained in the organizational culture. This sometimes results in quirks that would raise eyebrows in almost any other work setting, but it also makes you question why the rest of the professional world has settled on the norms it has.
Perspectives: Effectiveness and the very long run
As I mentioned earlier, GiveWell used to be more or less synonymous with EA, but this clearly isn’t the case anymore. Partly that’s because the movement has grown enormously and spawned many new organizations along the way, but partly because there’s been an important focus shift within EA — away from areas where it’s possible to collect fairly robust evidence of what works and towards the more speculative domain of existential risk (variously referred to as “x-risk”, “global catastrophic risks” and “longtermism”), especially around AI. I don’t think this is totally unwarranted, and there are good reasons to believe that people are at least somewhat myopic when it comes to problems that are temporally far removed from them. But I also believe that much of it is indeed just that — idle speculation. (See also my reviews of MacAskill’s and Ord’s recent books on the subject.)
This isn’t the place to present a fully fleshed-out critique of longtermism, but in short, my major reservations with it are: Because the evidence base is so thin, most of the arguments depend on long causal chains, each link of which could bring the whole project down. My impression is that EAs who think that, for example, a malicious AI takeover is very likely to happen, are usually content to refute specific counter-arguments brought forth against their position. And oftentimes they succeed in doing so, because they have defined the terms of the argument and thought about possible weak links for longer than almost anybody else. What they usually don’t consider is the outside view — by which I mean: How many examples have there been of really clever people who tried to construct intellectual edifices from first principles (Descartes, Kant, Russell…), and how often did they succeed? I think there’s been a lot of unfair criticism of EA after the FTX disaster in late 2022, but some of the things Sam Bankman-Fried epitomized (an aggressive reliance on expected-value calculations, and a certain smugness — EA attracts a lot of highly intelligent, analytic people, which reinforces the impression that people who don’t identify as EA simply “don’t get it”) are also reflected in the movement at large.
There’s a different angle from which one can criticize the programs GiveWell supports: It typically takes the form of “no country has ever gotten rich on (foreign) aid”. This may either be because giving (unconditional) aid to people makes them dependent on it, or else because it keeps autocrats and dictators in power, who can continue their ruinous policies as long as aid keeps flowing in. It takes away the pressure to reform kleptocratic regimes, corrupt bureaucracies and whatever other homegrown obstacles to prosperity there are. So, in the long run, people would be better off without it.
I don’t deny that certain forms of aid are harmful, and that many charities have produced nothing but questionable results. However, this doesn’t imply that there aren’t any interventions that do an enormous amount of good, and that makes the challenge of identifying the very best ones even more important. I also agree that sustainable economic development trumps aid, but there are two issues worth pondering: First, even though there may be some agreement on what kind of policies produce sustained growth, this is far from a strong consensus. And even if policymakers and economists agreed that, say, privatization and the rule of law are necessary to get there, do we really know how to implement them in practice? Beyond saying “country X should adopt practice Y”, do development economists who believe this to be the case really have a lot of suggestions on how to make it happen, besides appealing to politicians? (Note that this applies even more strongly to individual donors who consider giving away part of their earnings — what levers can they realistically move?) And second, sure, over time the effect of economic growth might well outdo anything that charity can ever achieve. But even a 5% annual growth rate won’t do much for the misery of a malnourished child today, a mother who loses her infant to malaria, or the person crippled by an easily preventable disease. I believe that if there is a way to accelerate the good stuff, we should press hard on the gas pedal, rather than being satisfied with “this won’t be a problem anymore in the distant future”.
There’ll no doubt be a lot for me to learn going forward, and intellectually it’s very rewarding. If you want to know more about our research or consider giving to one of the outstanding charities we’ve identified, there’s a lot of material to engage with on our website. We also do quarterly open threads where you can ask just about anything you want!