This year I actually attended talks at EA Global; a departure from my strategy last year of ‘ignore all programming; talk to as many people as possible until my voice runs hoarse’. Towards the end of the ‘Women and Nonbinary people in EA’ meetup when they were shuffling us out of the meeting space, I caught the eye of a young person from Israel.
They mentioned that they were new to EA, and very excited by the ideas and the goals of the movement. But then amid the fragmented conversation it came out:
‘I don’t feel very welcome here.’
This was less than 12 hours into the conference so I was surprised they had picked up that vibe so quickly. I asked what made them feel unwelcome. We both agreed it wasn’t people being overtly rude. Nor was it just the overwhelmingly technical focus of many of the attendees.
It was somewhat, but not really, the overall demographics – the heavily young, white, male, atheist, well-educated skew. We agreed though that it was more than that, and that there was something subtle going on that made it hard for them to get into everything and feel like they could be a part of the community.
I wondered aloud with them how they would find the rest of the conference, and made them promise to report back.
I have a habit of wandering in and out of sessions at conferences that probably vaguely irritates the facilitators when it’s in small groups. In this case I had wandered into Julia Wise’s ‘Mental Health and Wellbeing’ discussion workshop late after cutting short an interesting conversation in the hallway. It took me a moment to even understand what the conversation was about, but Julia made the group rapidly feel at ease and the 30+ people were soon talking fruitfully about their fears and anxieties and how EA values clashed with other parts of their lives.
One woman spoke about feeling like she wasn’t good enough to do direct work; Julia responded by asking the group how many people had experienced impostor syndrome within EA.
Nearly every hand went up.
A few people mentioned strategies for keeping themselves on the straight and narrow, like remembering that they were ‘still doing more good than their non-EA friends’. I gave a reply that accidentally became a minor manifesto.
I said that I believed there was a certain type of person who was drawn to EA because they didn’t feel good enough, or like they had to earn their place in life somehow. I said I believed the principles behind EA and the psychology of the movement fuelled that. I also mentioned that too many people believed they weren’t allowed to be happy if they weren’t the perfect EA, or if they weren’t currently in the process of doing the most good they could. I argued that EA was beginning to have the same function as a religion in terms of providing purpose in (some) peoples’ lives, but that for a pseudo-religion it was doing a crappy job of providing people with the necessary social support to take on the difficult challenges it presented. At times it felt like the movement just attracted people and consumed all their excitement and enthusiasm without regenerating it, leaving people feeling burnt out and alienated.
I then wandered out in search of a different presentation and instantly regretted not staying.
From that point, throughout the conference I had people come up to me and talk about how much what I said in the workshop had resonated with them. That they felt like EA had a guilt problem, that they too had experienced it, and that they agreed that if EA were to thrive with the current demands it placed on people it needed to become a community that gave people (and not just people in the right social networks) adequate social support. All of these people came out of the woodwork, as if by magic, to earnestly ask me how we could solve the very real problem that they had previously thought only they were struggling with.
I felt like a minor fairy godmother as I wandered through the venue, collecting whispered stories of people who felt lost, or who felt like they weren’t useful to a movement that they felt only wanted technical supergeniuses who could write AI papers or do research. The rumbles had started.
Later on, as I was speaking to a handful of friends and new acquaintances who had each been in or around the EA movement for a few years by this point, it occurred to us that not a single person in the group actually identified as an EA. EA-affiliated, maybe, but either something had stopped us from fully embracing it, or we had gotten disillusioned with the movement after being hardcore EAs for a while. We joked that so many people were having doubts that the only people at the conference who identified as EAs were the people who had just heard about it a few months ago and were in the honeymoon period. We agreed that there was something in the absolutism, in the black-and-whiteness of the dominant sales pitch, that made us uneasy to half-identify, or identify as ‘part of the EA movement’ in some way. I knew that many leading EAs and EA orgs had tried to do something about this; to emphasise that not everybody needed to be hardcore, but it seemed in that moment that we were collectively some evidence that those efforts hadn’t worked, or worked enough.
Effective Altruism is a psychologically demanding belief system. At its core are a few fundamental assumptions (maximising utility, egalitarianism, a moral duty to do the greatest good) inherited from utilitarianism, Peter Singer, and the elite-educated men and women who founded the movement. These assumptions, if you are under a lot of stress or predisposed to anxiety, depression or neuroticism, can feel oppressive. I remember years ago when I had decided that yes, I wanted EA principles to be the guiding principles in my life, feeling an overwhelming sense of dread when I realised that there was no way to resolve the tension between maximising my utility function and that of everyone else without being a martyr or a giant asshole. This ended up making me way less productive, and I only became able to engage with altruism again once I dropped the demanding belief system of EA.
It’s my thesis that the psychological issues that crop up within EA, while they are not the fault necessarily of the founders or leaders of the movement, a) do stem from really fundamental parts of the ideology, and b) are exacerbated by the public narratives around the leaders who are most highly visible. So my suggestions for improvement, if they are actually valuable, would require pretty extensive refactoring of the basics of EA, and ideally a shift in the leadership by those who represent EA both outwardly and to those within the movement. I’m aware this is a big undertaking, so I am currently at the level of sketching out ideas in the hope of starting a discussion, not proposing a panacea. I also want to note that I am explicitly not comparing EA to any other movement (which I’m aware also have their problems) – I am only comparing EA to itself.
Firstly, uh, the ‘child drowning in a pond’ argument. This has been an iconic and central part of EA for years now, and it has definitely contributed to the association between ‘we want to find the way to do the most good’ and ‘it would be immoral not to’. While in belief systems like veganism it is reasonably possible to meaningfully live up to the central ask, in EA, for most people, it is not. That is partially because ‘do the most good’ is an unbounded optimisation problem, and partially because peoples’ monkey brains cannot meaningfully distinguish ‘the best I can do is not the best anyone can do’ from ‘I am failing to do the best I can do’. The original goal, simple as it is, can sometimes feel like a call only to those who already have a shot at doing ‘the most good’ in a competitive human sense. This leaves the poor, the disabled, the marginalised and the non-technical feeling like they have nothing to contribute (even when the community is crying out for empathy, great ops people and community builders). And it leaves everyone feeling like they aren’t doing enough, because ‘enough’ in this case is literally impossible.
There are two sub-components to this problem – a) everyone feels guilty for not doing enough, even when they are, and b) people feel like their lives aren’t purposeful or worthwhile if they don’t do the most good. I have a sneaking suspicion (that I mentioned in Julia’s workshop) that even though EA was started by people who wanted to be altruistic out of recognition of their extreme privilege, EA attracts the sort of people who tend to take on the world’s problems in order to feel less bad about themselves.
Whatever the solution to this part is, it has to involve a recognition that taking the weight of the world on your shoulders is dangerous, difficult, and absolutely not mandatory. Responsibility disproportionately larger than your sphere of control is a recipe for a bundle of stress and unhappiness, of the kind that makes you weaker, not stronger. Most people (particularly young idealistic people who are yet to finish the necessary personal growth work to fix whatever ickiness developed from their childhood) are generally not ready. There is a reason cultures have serious initiation ceremonies for adolescents, and that’s just for taking on the responsibility for your own adult life (and maybe that of your family or close community). A thriving EA culture would help those people develop the strength to take on whatever moral responsibility is appropriate for them to manage, along with wise mentors who can advise caution when they want to take on too much too fast.
Current EA culture lowkey says ‘Well, the world is suffering, and it’s your responsibility to fix it,’ and then the newbie closes the browser tab and has to endure their next five existential crises on their own.
One small part of this is the identity ‘Effective Altruist’. Think about it for a second – when you are identifying as an EA you are saying (with your words if not your intentions) that you are already highly proficient at the skill of doing good in the world, and you are already doing it.
Looking only at the psychological effects of identifying as an ‘Effective Altruist’, there is a small lie inherent in it the minute you take on the moniker, because most people who become EAs are not yet proficient at the skills of doing good. EA should be a term like ‘knight’ – only awarded sparingly, and then only to outstanding individuals whose contribution over a long period of time is unparalleled. Not something that people label themselves as soon as they’ve read ‘Doing Good Better’ and made their first donation to AMF.
This is not for PR purposes. Calling yourself an EA when you feel neither effective nor wholly altruistic feels, to the intellectually honest believer, hollow and insincere. It’s like giving everyone participation trophies. With the shift in focus to collective do-gooding spearheaded by Will at last year’s conference, maybe the way out is to deemphasise the value of individual effectiveness (something that I think unfortunately happens as a byproduct of the fact that 80K’s career advice gets so much media coverage in comparison to other organisations’ work). We might instead emphasise the fact that we are building several machines, or a garden, that itself is improving at doing good.
This is difficult because the term altruism is very human-shaped and inherently has a subject who is altruistic – it doesn’t make sense in English to say that a machine or an institution is altruistic in the same way you would discuss a human. But it’s a direction that might be useful to aim at. Ideally new converts wouldn’t have to make any commitments or take on any moral responsibilities at all – at least until they were partway through a gradual process of strengthening themselves and understanding the landscape.
In my discussion of EA being a process of building a garden I hinted at what I think is another significant psychological hazard within EA – utilitarian utility-maximisation. This framework has the unfortunate property of being both incredibly obvious and fundamental to those who believe in it, and completely ridiculous and mystifying to those who don’t. When I rounded up my new Israeli friend again to check in on how their relationship to EA was evolving throughout the conference, this was one of the things that came up. It came up later in a group conversation too, about how the shift in 80K’s top careers (based on new research and knowledge) made people feel crappy when their thoughtfully chosen earning-to-give job wasn’t as effective a career anymore.
Formal utility maximisation is an algorithm, and it should be only one of several tools used to achieve a long-term and unstructured goal. One of the main things that alienated me from EA in the last couple of years has been feeling like I can’t pursue diverse strategies – like making myself healthy and powerful as a number one and not secondary priority, or focusing on paradigmatic and ecosystems-level changes at the expense of ‘optimal’ ones – without feeling like a ‘bad EA’ within the community.
Maximising utility is great when the problem is well-defined, the terms are clear and agreed-upon by everyone, and the metric we’re optimising for is clear and unlikely to cause any complex secondary effects. With things like ‘minimising suffering’ literally none of these are the case. While yes, it is an important tool in the pursuit of doing good (as long as you understand that it isn’t a requirement for being a ‘good enough’ human, which is not an easy thing to remember within EA at times), it shouldn’t be up front and centre as the predominant strategy of the EA movement.
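To make the abstract point concrete: here is a minimal sketch, with entirely hypothetical interventions and made-up numbers, of how optimising a single proxy metric can mislead when the real goal (‘minimising suffering’) has secondary effects the metric doesn’t capture.

```python
# Hypothetical example: two made-up interventions scored on a proxy metric
# ('suffering averted') that ignores a side effect ('community capacity').

interventions = [
    {"name": "A", "suffering_averted": 10, "community_capacity": 2},
    {"name": "B", "suffering_averted": 12, "community_capacity": -8},
]

# Naive maximisation considers only the proxy:
best_by_proxy = max(interventions, key=lambda i: i["suffering_averted"])

# A broader (still crude) score that also weighs the side effect:
best_overall = max(
    interventions,
    key=lambda i: i["suffering_averted"] + i["community_capacity"],
)

print(best_by_proxy["name"])  # B: highest on the proxy alone
print(best_overall["name"])   # A: better once the side effect counts
```

The toy numbers are invented, but the shape of the failure is real: whenever the metric is easier to measure than the goal, maximising the metric and maximising the goal can quietly come apart.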
But ‘removing maximising utility as the movement’s frontman’ isn’t easy even if we’d like to change it (which the community may object to). We would need something even more psychologically, logically and emotionally powerful to sit higher in the belief system’s value hierarchy. This is maybe why religions have been so good at getting people to do good (as defined by the religion). They have a pretty damn powerful idea (God) that the whole altruistic motivation system tops out at. EA mostly tops out at atheist privilege, Enlightenment-era moral axioms, and abstract maths. I’m not sure how we solve it but it’s interesting to consider what we might put in this highest position.
Maybe a social movement is not the right structure for a collection of humans who want to do the most good in the long run? Social movements are unstable; they rely on the constant frustration of their members, and ideally the good ones pick a well-defined problem and dissolve when their problem is solved. EA’s problem is not well-defined or clearly scoped (at least as it is now; when the goal was just ‘make philanthropy more effective’ a few years ago it might have been). Perhaps there are other institutional structures that would better serve the movement’s abilities to meet its goals and allow its participants to flourish? I don’t know what they are (here I am throwing out so many problems without answers) but it would be useful to think about. The default, an informal social movement, is prone to all the same diseases as other social movements and doesn’t offer a lot of support to its members in their challenging quests.
EA as a movement is currently attracting people like they are rocketships, giving them a minimal amount of fuel and then kind of launching them into the sky. Some people inside EA orgs and living in EA hubs may not feel like this, but they are probably the exception to the rule. If we want this movement to succeed and avoid burnout we need to work out how to turn it into an ecosystem that sustains the members who are taking on difficult tasks, and takes advantage of the fact that it exists within a planetary civilisation of abundant resources, and not on an isolated craft out in space.
Towards the end of the conference I ended up at a table with a few of the people who had mentioned their concerns to me after the wellbeing workshop. The atmosphere was vulnerable but caring; we all felt relieved that the problems we were experiencing were not isolated, frustrated at the way the movement was going up until this point, but also optimistic that a way for change was possible.
Quietly a few very established community members joined our table; they were, to put it bluntly, people who did not have similar faces or skin colours to the ones who had initiated the conversation. I half-feared that when they listened to our stories and our feelings of alienation they would dismiss our concerns. I had seen that happen within EA a few times before.
Instead, to my delight the three men listened compassionately and openly, and made the others feel heard without dominating or changing the conversation. It slipped out that they each worked for major EA orgs within the Bay Area, and as the conversation continued my respect for all those at the table grew.
The EA org men, for their openness, and their willingness to be led in solutions by the people experiencing the problem. And the people who had come to me originally; the new, the marginalised, the insecure – for seeing a movement that sometimes made them feel guilty and unwelcome and choosing to stay and create change themselves rather than giving up and running away.
4 thoughts on “Gardens versus rocketships: towards sustainable wellbeing in Effective Altruism”
Thank you for the write-up 🙂 Lovely piece and I’m glad this is being discussed.
It seems weird to me to read this part: “I remember years ago when I had decided that yes, I wanted EA principles to be the guiding principles in my life, feeling an overwhelming sense of dread when I realised that there was no way to resolve the tension between maximising my utility function and that of everyone else without being a martyr or a giant asshole. This ended up making me way less productive, and I only became able to engage with altruism again once I dropped the demanding belief system of EA.”
First of all, I don’t understand how you can be Effective as an altruist if you’re an asshole. Surely we can take that off the table as any kind of solution right away. Aside from that… I guess, I know I’m not “theoretically” doing the best I can… but I don’t feel any dread or very much guilt, despite the fact that I haven’t had a job for over two years and therefore haven’t been able to give effectively the way I wanted.
Aspiring EAs need to realize that in order to be Effective, you must ALWAYS take care of yourself first. If guilt is something that drives you to action rather than sapping your energy, perhaps it has a useful place in your life. But for me, guilt doesn’t allow me to cure malaria or get a better job where I can Earn To Give the big bucks. The proper EA thing to do therefore is cast out guilt and not feel it. If you signed the GWWC pledge, you’re fantastic! Feel good about that!
Am I doing the best I could theoretically do? I still don’t have a job, so certainly I am not. But I also know that the reason I’m not living up to my potential is simply that I do not know how. Is it my fault that I don’t know how? No. Well, maybe, but it doesn’t matter. What difference does it make if it’s my fault or not? I simply don’t know how to be a better person, therefore I don’t expect better of myself. I do have access to some sources of knowledge, and I am making an effort to use those sources. What more can I ask of myself than to do what I know how to do and make an effort to learn more?
So there you have it – I’m giving almost nothing, I’ve been ineffective as an EA for years now, and I don’t feel guilty about it. Why? Because I know that in the future, when I finally get that job and I figure out how to be successful, I will be giving heavily to effective charities and it will be glorious. Because I have learned to live a comfortable but affordable and sustainable life. And because beating myself up will only delay my path to success, and therefore beating myself up is exactly the wrong thing for me to do.
> “Whatever the solution to this part is, it has to involve a recognition that taking the weight of the world on your shoulders is dangerous, difficult, and absolutely not mandatory.”
It’s not just “not mandatory”. It’s counterproductive. Avoid it. It’s good to feel empathy for those in need – for a time. But only for a time. Doing it too much will sap your energy. Once you’ve internalized the lesson that there are people out there who need your help, STOP. I think for many aspiring EAs, the best thing they can do is to forget about all the suffering in the world and focus on improving their own lives for a time. I don’t mean to forget about them literally – what I mean is, avoid thinking too much about them. You can probably help others better with sympathy rather than empathy – feeling bad about their suffering does not help them; a sense of duty to help them, on the other hand, absolutely will.
I remember there was this episode of Friends or Seinfeld where they argued whether anyone was really an altruist. One character opined that altruism didn’t really exist, anyone who is doing good is really doing it for themselves. While I do believe real altruism exists, I think Effective Altruism needs to be based on an understanding of human nature – that we live our whole lives inside one single body, with one single brain that is wired in a certain way we cannot change. Our human minds are not satisfied by altruism per se, not unless it is coupled with feeling good about ourselves. To be Effective, we must account for this.
For example, before I learned about EA I signed up for a World Vision sponsored child. World Vision doesn’t allow you to talk to your child directly or find out their location, but it does send out things like birthday cards and other things that you are supposed to sign and return, and it offers ways to send messages to your child. This is good – I think World Vision understands that giving money is not enough, that people need to feel good about giving, and I think the birthday card is as much (or more) about making the donor feel good as it is about the child.
I’ve noticed 80000 hours has a problem with dryness. It’s kinda boring. I love it for its objectivity and analysis – I would hate for that to be lost – yet humans are wired to need more than that. This is definitely something EA orgs need to work on.
> maybe the way out is to deemphasise the value of individual effectiveness [….]
> or focusing on paradigmatic and ecosystems-level changes at the expense of ‘optimal’ ones, without feeling like a ‘bad EA’ within the community.
+1. I have lots of ideas myself that are not known to be highly Effective, but I also know EA orgs don’t have everything figured out yet. As long as no one has researched my ideas and found evidence that they are not effective, I’m going to keep believing in them and splitting my resources between those and EA charities.
Maybe in the scheme of things it will turn out that the ideas I supported weren’t effective (IMO it’s more likely that my ideas could have been very effective, but that their effectiveness was highly nonlinear and not enough resources were allocated to reach good ROI, but I digress). In that case, individually I did not maximize the utility function. However, collectively as a movement, I think if we all branch out and try a lot of ideas, a few of those ideas – ideas that EA orgs weren’t sure about or did not have the resources to investigate – will turn out to be extraordinarily important and Effective. The average effectiveness of all of us together can thus be higher, and that’s what really matters. (As justification for my belief, I would refer you to The Black Swan and its concept of Extremistan – a single tremendous success can compensate for many failures, and vice versa.)
It is important, of course, to have humility (not to be confused with humiliation) and realize that what you think will be effective may not turn out to be effective at all. You have to be open to evidence that you’re on the wrong path, but please try not to feel *bad* that you took the wrong path post-hoc. I don’t know about you, but I get annoyed by every movie with that old plot point where a character is overcome by guilt because “if I hadn’t stayed late at work for that meeting, she would still be alive today!” Feeling guilty because you did something perfectly reasonable (given the information available to you at the time) is nonsensical.
> This is maybe why religions have been so good at getting people to do good (as defined by the religion).
I’m rather skeptical that religions have been good at that. They have been fairly effective at getting a subset of their followers to follow certain well-defined rules “religiously”, so to speak. But as a former churchgoer I don’t think religious people are better than others at general principles – nuanced or ambiguous stuff like educating yourself (as my religion taught), doing good in your community, giving to charity (when your church hasn’t said how much or to whom to give), or more serious things like knowing when and how to lie when the Nazis ask if you know any Jews around here.
The very principles of EA demand that we figure out what to do about nuanced, ambiguous, difficult and serious issues. But I don’t think we have to do this on an *individual* level – not at all! The great thing about EA, if you ask me, is that we have these supergeniuses at the top figuring out what the most effective charities are. That means the rest of us don’t have to do squat if we don’t want to!
I actually applied for a charity-evaluator job at one of the EA orgs and they turned me down without an interview. I felt a little bad about that, but if they turned me down because they found someone more qualified, I really should be happy about that. It’s important that they hire the best people, and if that’s not me, I’m kinda glad they didn’t pick me. The reality is that I can probably make more money in my original career, and that means I can give more to EA charities.
The thing about it is, it’s easy. I don’t have to think about it at all. I simply look at the GiveWell charities and pick one that looks good to me. I know professional charity evaluators have examined the charity in great detail as part of their full-time job so I can have confidence I’m being Effective by giving to it. I even know how much to give – I signed the GWWC Pledge, and that settles it. In my prior religion I was taught since I was a child to give 10% tithing, so the GWWC Pledge (I chose the 10% level) was a cinch. And it’s a habit. I have a little spreadsheet to keep track of how much money I make and how much I owe to charity, and that’s that. (I use the word “owe” – I used to believe I had a debt to God, so it’s easy to just continue thinking of it as some sort of debt to humanity I am “paying forward” for giving me a warm home with fantastic food, access to an entire World Wide Web, access to incredible modern medical miracles – the list goes on.)
I’ve also developed a habit of frugality and minimalism – not buying things I want until the price is right. This doesn’t mean torturing myself – I absolutely spend money on certain comforts I don’t really need, because I have to be comfortable if I am to be happy, and I have to be happy if I am to be Effective.
As an example, I had a big mole on my face which I was kind of ashamed of. Morally I didn’t like the idea of spending money on pure vanity to make myself look better, and the price of removing it was $120 (for a procedure that took barely one minute, I might add). But my best friend encouraged me to spend that money, I did it, and you know what? I feel better about myself. Does this make sense from an EA perspective? Well, it might make me more confident in my day-to-day life, which gives me a higher chance of getting a good job, which would translate to higher earnings that I can give to effective charities – money that would not have otherwise gone to those charities. It’s not a certainty, but I think there’s a very good chance I made the right decision.
Meanwhile my frugality ensures that if I can afford to give more than 10% to charity, eventually my money will pile up until it becomes obvious I should give more of it away. When that day comes, I know I will give more – that’s just the sort of person I’ve decided to be. My financial means may turn out to be high or low, but whatever I earn, I will deploy effectively. Having made that decision, I don’t have cause to worry about my effectiveness. So I don’t.
So the way I see it, being an EA isn’t difficult just because it’s guided by complex, difficult, nuanced ideals that can never be perfectly fulfilled. For most EAs it is not your responsibility to figure out how to be effective; leave that to those geniuses at the top and be thankful you don’t have to figure it all out yourself.
It doesn’t seem that much different from my religion, actually. I was taught in Sunday school that we’re all flawed and must strive for perfection but will never achieve it. Similarly in EA we have complex and nuanced ideals like consequentialism that can never be done perfectly, but it remains a worthy goal.
But perhaps more importantly, my religion also had a lot of relatively simple rules and values that any churchgoer could follow insofar as they cared about those rules and values. Likewise GWWC has the “minimum 10%” recommendation and GiveWell has a list of recommended charities, and if you give that 10% you are Effective as an Altruist, period, if you ask me. Maybe there’s more you could do theoretically, but probably the best thing you could do is improve your own life, health and career enough that you can afford to give more.
I think it’s necessary and important to have an EA social movement, but it is perhaps premature to grow it quickly right now.
I think we need to provide more … something … to aspiring EAs. Now that I’ve left my religion, morality is as important to me as ever, and now that I’m starting a family I wish the EA and rationality communities could fill that void left by the absence of church. How will I teach my future children to overcome bias and what-not? How will I instill a desire to improve the world and help the drowning children in ponds around the world? It can be challenging to teach these principles to children. When my stepson heard the drowning-child scenario, he said he wouldn’t help the child. I was taken aback until I learned the reason why: he can’t swim.
Very nice article. Needed.