Envisioning the future of effective altruism

Being an EA is a weighty undertaking; I imagine the title of Effective Altruist as something conferred on someone after years of fruitful service, like a knighthood, not something someone calls themselves by the time they’ve ordered Doing Good Better off the Internet. I do not call myself an Effective Altruist; effective altruism is one goal I strive for, but not the only one.

This prepubescent movement has, like all good social phenomena, experienced its fair share of growing pains. There are lots of ways the movement could easily go astray and only a few ways it could truly succeed.

What might the world look like, with Effective Altruism in it? Let me sketch out three visions of the future.

…

In the first, EA has grown big, with membership in the seven digits or more. EA leaders broadcast new thoughts in blog posts and conference talks as if through a megaphone, desperately trying to transfer the intricate and complex knowledge they’ve developed about their respective problem-spaces into the heads of people vastly different from them in experience, culture and geography. It’s still largely an internet phenomenon, and many hardcore EAs have never met another one in real life.

The pockets of physical EA community collaborate wildly, but there’s always a lag between the ideology of the capital and the ideology of the colonies.

In remote outposts like new university groups, it sometimes takes years for people to hear that a previously promising career track has been devalued, or that a new and promising cause area has sprung up. There are lots of shouting matches about Earning to Give.

In many of the more remote places, the groups recycle the same ten intro ideas over and over again, with a revolving door of fresh acolytes learning the mantras (‘how can we use reason and evidence to do the most good?’ and something less polite along the lines of ‘shut up and multiply’) and instantly becoming insufferable in their local communities. Local leaders find the rhetoric very hit or miss amongst newcomers; precise Facebook ad targeting reveals a tight demographic – aged 20-32, mostly Caucasian, mostly male, educated at elite universities, living in countries with high trust in government; atheist, low in neuroticism, high in openness, with high grades in maths.

The drone of the mantras drowns out the questioning, and in the colonies people don’t question, they just leave. People wring their hands about the too-small movement numbers, but the churn is just as bad. Once the honeymoon period is over and the utilitarianism starts biting newcomers in the ass, they coalesce into depressive social circles where no one feels effective enough to be worth anything; there’s an exit trail littered with the broken souls of those who couldn’t stand the self-flagellation.

In the big cities of the empire, the mood is different. EAs are highly specialised, and fly frequently to efficiently-run conferences to continually explain their niche to other niches. They’ve caught the tech bros’ reputation for simplifying complex societal issues into highly distilled models that tend to break things when implemented; that one test where they got rid of all the mosquitoes in Sierra Leone still makes Effective Altruism PR people shiver. All the while, the mostly white, mostly male, mostly utilitarian and tech-fluent population in the capital spends half its time publicly boasting of its effectiveness and privately dying inside from the same feeling of worthlessness that traps the EAs in the outer colonial network.

They’re relentlessly attacked by the diversity brigade on social media. They really can’t, for the life of them, work out why people from poor and marginalised communities don’t want to adopt the EA mindset. Who wouldn’t want to save the world effectively with them?

In the second future, the EA movement has become influential in certain kinds of public policy positions. EA doctrine has become much more concerned with the need for surveillance to prevent unfriendly AGI from being developed. They haven’t yet made or prevented a real AGI, but they’ve harnessed the less general AIs that do exist to implement policy for the British and Canadian governments: optimising the happiness potential of people’s newsfeeds and doing some minor law enforcement.

More and more EAs are standing for office on a platform that some sardonically summarise as ‘Well, we’re effective, and we’re altruistic, so we’re obviously better than those ineffective slobs you’ve currently got’. The Elitism platform. They’ve got some good policies, and they want to get rid of factory farming. They’re probably a good influence?

Ten years later, there’s been a surprisingly brutal famine in poorer parts of Canada after factory farming was banned in one go and a chunk of tax revenue was diverted to AI safety research by firing a boatload of community service workers and healthcare administrators. Nobody’s really sure whether it was the EAs’ fault, though, because in the meantime they’ve become really good at PR, and because clearing the name of Effective Altruism in these incidents is obviously better for the world overall…

I mean, just think how much worse off we’d be if the EAs weren’t able to continue their good work because regular people thought some accidents were their fault.

A few years later, a deadly influenza strain in an EA biosecurity lab jumps the fence and kills 20 million people in the US and Southeast Asia. Several arrests are made, and some EA figureheads are hauled in front of the UN to explain themselves. They patiently explain existential risk mitigation frameworks to the simple-minded, irrational UN Assembly and assure the legislators that the chance of such an event occurring was less than 0.01% in their risk mitigation models.

Google hires a taskforce staffed entirely by EAs and throws millions at them on the condition that they build a good AGI in 18 months. A handful of retired senior FBI people are hired to construct incontrovertible evidence, based on the Effective Altruist literature, that this is the most effective cause in existence, to wave in front of the hopelessly anxious EA AI researchers. The EAs pore over the final memo but can’t find the flaws in the model, so they throw themselves into the work in earnest.

A few public arguments are hashed out online, but no utilitarian objection to the memo can be found.

No one in the inner circle of EA questions the project after that.

In the third future, the EA movement doesn’t exist. The idea of ‘doing the most good with the resources we have available’ is pretty much common knowledge now – it’s not just a cultural meme, but embedded in the dominant political and social institutions. GiveWell-like evaluators are a fundamental part of every major national policymaking institute and development agency.

There are a handful of small but well-funded international taskforces that monitor and protect against every kind of existential risk imaginable; somehow, someone has made the news that ‘we didn’t get hit by an asteroid this year’ both palatable and entertaining, and it’s a regular and welcome part of the media conversation. Major progress in slowing the development of dangerous technologies means the national x-risk institutions are able to continually react and adapt, and the best researchers are optimistic about humanity’s chances of survival.

We’ve stopped asking kids what they want to be when they grow up, and instead ask gentle questions about their moral values; many people pick causes to spend their entire lives on, regardless of income, and institutional programs pick up the slack to fund their work whenever the market doesn’t.

Nobody calls themselves an effective altruist, but there are historical records of the movement existing. Most of it just became the way we collectively think about the world, and seamlessly melted into the fabric of the biggest and most widespread institutions. There’s even EA art now, even though there’s no real movement anymore; people feel genuinely grateful for the singular focus these kinds of questions have given to humanity and to their own lives, and feel compelled to express that in a way that allows many others to share in and experience their gratitude.

Altruism, the concern and compassion for all life, present and future, is a key part of what stabilised society after the rocky 2020s, and the ‘effectiveness’ part of EA grew a lot more nuanced and absorbed a lot more counterintuitive insights before it blossomed into a core feature of business, government and civic life. The nations look slightly different for geopolitical reasons now, but the one-world, one-mission viewpoint espoused and developed by the EA movement has made a lasting impact on how humans think of themselves over the longer term.

This EA-flavoured world has learned to see suffering as its enemy, its only enemy; and with work and compassionate collaboration one that can be muted, if not destroyed.

Effective Altruism, as a movement, has its roots in positivist science, orthodox economics, and institutions glowing with privilege, like Oxford, and it tends to espouse beliefs that aim at universalism. But as a social movement it is also subject to pressures: to be politicised, to grow as large as possible, and to accommodate diverse and often incompatible aims. It has also become a haven for weird, radical ideas, like eradicating the malaria mosquito using gene drives and using artificial intelligence to maximise utility. The movement has a tendency towards self-fulfilling insularity, too; there’s a tautological assumption that the Effective Altruism organisations working on cause X must be the most effective players in the field of cause X by virtue of being EA organisations, and that no other organisations or actors are making important progress or have valuable knowledge. I think this is myopic and likely to make EA as a movement much less effective in the long run.

Defining what is ‘altruistic’ is a group activity, and the EA movement needs to continually re-engage with this process and encourage engagement with it at every level and in every location, not only in the inner circles of top EA organisations. Similarly, understanding how the needs, desires, biases and worldviews of the movement’s initial founders have grown into the limitations, focus areas and ideological frameworks that now define it is necessary if we want people who don’t look or think like us to be a part of it.

I think there’s an important project to be pursued in spreading the principles of effective altruism into the non-EA institutions where they can help, and in inspiring people already in do-gooder roles to adopt this lens in their work. That’s a vastly different project from either growing or intensifying the Effective Altruism movement, and, I think, a more sustainable one – more of a percolation than an expansion. But it requires separating the EA identity and the movement from the values and principles that people within the movement uphold, and being willing to adapt and remix those values for different contexts, in collaboration with the people who are already there.
