Tag Archives: Effective Altruism

Do Effective Altruists Appreciate Nature?

An effective altruist recently made this post in a Facebook group:

How do you personally reconcile your views about nature with enjoying natural spaces?

It seems to me many (most?) EAs think nature, broadly defined as any living space including stuff like gardens and planted trees, contains a lot of suffering. Some might see it as the single biggest, ongoing moral catastrophe yet observed, much worse than the very worst humans have done to each other.

But I also see lots of us posting pictures of ourselves in parks rather than parking lots. We go outdoors to forests, rivers, mountains, gardens, avenues and seem to enjoy it as much as anyone else.

Not just philosophically, but personally, how do you go through life believing nature to contain misery, while also enjoying it?

What do you think when you’re looking at a beautiful view, or old tree, or somewhere like Stonehenge or the Grand Canyon or Central Park or anywhere with living things in it?

A significant minority of effective altruists believe non-human animal well-being is the most important focus area. Within that group, a minority believe the most important focus is the welfare of wild animals, including suffering from naturally occurring causes (as opposed to human-made ones). As far as I know, there is little to no good quantified or empirical data on the proportion of EAs primarily focused on wild animal welfare who believe the suffering of wild animals is so great that natural environments are net negative in their very existence. It’s my impression that those among them who believe nature itself is net negative in expectation are in the minority. However, they’re more vocal, presumably because of how urgent they consider the problem relative to others, and they also appear better coordinated. This is also true at the level of the whole movement, such that their voices are disproportionately magnified relative to those who disagree with them. I expect this can create the impression they’re much greater in number in EA than is actually the case.

So the majority of effective altruists simply go through life enjoying nature because: they don’t believe it’s full of misery; they don’t believe the misery in nature justifies destroying it; they don’t believe misery matters so much relative to other values that destroying nature would be justified; or they don’t believe the misery in nature outweighs the positive well-being in it.

I personally don’t believe nature is so full of misery that it’s worth destroying, because that view isn’t borne out by the empirical evidence collected so far. I don’t believe destroying nature is either justified or desirable, and even if it were on consequentialist grounds, it wouldn’t dominate expected value estimates. I don’t have sufficient reason to believe misery, or simple dimensions of well-being, are all that is of value. I also don’t believe there is sufficient evidence to conclude that the animals whose suffering is thought to dominate nature so much that its destruction would be justified (i.e., invertebrates) have morally relevant experiences, or that we know enough about what those experiences are like to conclude they consist primarily of suffering.

I believe it’s possible nature is net negative because of all the potential suffering in it, to the extent that suffering might dominate everything else in or about nature. I cope with that possibility by setting it aside, because I can holistically enjoy nature to a degree beyond what I’d even describe as aesthetics, and refusing to let myself enjoy nature, even on the assumption it’s so full of suffering as to be the worst thing ever, does nothing to reduce wild animal suffering. Meanwhile, if enjoying nature is part of what makes someone’s life meaningful enough to motivate them to reduce wild animal suffering, or to do the most good as they best judge it by their own values, then enjoying nature can itself serve that end. Experiencing nature with a sense of commiseration may even drive motivation for those who sincerely believe reducing wild animal suffering is the most important cause. However, I know many effective altruists who I expect experience scrupulosity about nature because they’re sensitive to language in these arguments that is melodramatic relative not to the potential scale of suffering in nature, but to the evidence effective altruists currently have to justify those sentiments.


Crucial Considerations for Terraforming as an S-Risk

Summary: In this essay I go over concerns about how terraforming may pose an s-risk for wild animals. From a suffering-reduction perspective, I explore the political and technological factors influencing the scope, intensity, and probability of s-risks from terraforming or other space colonization efforts spreading biological life throughout the solar system. I also explore a possible strategy for mitigating s-risks from terraforming. Finally, I evaluate how much of a priority s-risks from terraforming should be relative to other s-risks, and make recommendations for how to prioritize them further.

Painting a Picture: A Potential Future of Accelerated Terraforming

A lot of suffering reducers are concerned about the possibility of space colonization spreading wild animals whose lives consist primarily of suffering. From ‘Will Space Colonization Multiply Wild Animal Suffering?’ by Brian Tomasik:

Scientists and futurists often encourage humanity to spread out into the galaxy, in order to reduce risks to human survival, build new civilizations, and bring life to desolate planets. One fairly realistic proposal is the terraforming of Mars, in which the Red Planet would be made increasingly hospitable to life, until plants and animals could be sent to live there. More speculatively, some have advocated “directed panspermia” — disbursing genetic and microbial seeds of life into space in the hopes that a few will land on and populate other planets. Finally, some futurists imagine virtual worlds in which computational agents would interact; these would sometimes include animals in the wild.

In these scenarios, animals are proposed to be used as a means to help humans, as a vehicle for spreading life to the stars, or as part of realistic virtual environments. What is almost always ignored is that these animals would have feelings, and many would suffer enormously as a result of being created into Darwinian ecosystems in which predation, disease, and premature death are endemic. Theologians ask why a good God would create so much cruelty in nature. Future humans should likewise ponder whether it’s morally acceptable to create new ecosystems containing vast amounts of suffering, or whether they should pause and seek more compassionate alternatives if they embark upon space colonization.

One case of space colonization potentially spreading wild animal suffering, one that could also begin within the span of decades, is the terraforming and subsequent human colonization of Mars by individuals like Elon Musk and his company SpaceX. Many people may be skeptical that a single company or individual, no matter how many billions of dollars they have, will alone be able to terraform or colonize space. However, visionary technologists like Elon Musk can advance the state of technological development for terraforming and colonizing other planets, which in turn could inspire nation-states or future supra-national political entities to colonize space. Nation-states or a future world government will have many more resources to marshal to terraform and colonize other planets. While state actors may be perceived as not scientifically and technologically innovative, as long as non-state actors make theoretical and technological progress on the human potential to terraform and colonize other planets, state actors merely need to copy their work and implement it at a scale that makes terraforming genuinely possible. Indeed, state and non-state organizations have a long history in the United States of cooperating to advance humanity’s reach into space. With companies like SpaceX working with multiple countries to get satellites into space, and the possibility of governments and private companies working together to mine asteroids in the near- or medium-term future, space colonization between state and non-state actors takes on a global dimension.

Efforts in earnest by national governments to colonize Mars have begun with the United Arab Emirates’ recent announcement of their visionary plan to colonize Mars with 600,000 people in 100 years. One might naively expect countries leading the world economically and scientifically, like China or the United States, to announce plans to colonize Mars sooner than the United Arab Emirates (UAE). However, the UAE is an absolute monarchy with the seventh-largest oil reserves in the world, making its government exactly the kind with not only vast economic resources but also the political will and ability to channel them into massive, long-term missions like colonizing Mars. The UAE’s early entrance into the game of interplanetary colonization could be an omen of things to come. As technology advances, national economies grow, and some resources on Earth become scarcer, economic warfare to colonize Mars and other planets could break out. This international competition could of course take on a more thoroughly political dimension, like the Space Race. While a zero-sum competition to colonize Mars and the rest of the solar system might seem absurd and irrational, a glance at history shows that with transformative technology which may pose an existential risk, humanity and our governments haven’t been as careful as we could or should have been about minimizing the chances of extinction. We should not by default expect better from humanity in rationally pursuing the goal of preventing risks of astronomical suffering either. What’s more, conditions in international politics could set in whereby great powers are incentivized to invest huge amounts of resources in colonizing space as part of economic and political competition. Under these conditions even rational state actors may have difficulty opposing their incentive gradient, despite their incentives being unaligned with their values. From the Wikipedia article on the Prisoner’s Dilemma:

In international political theory, the Prisoner’s Dilemma is often used to demonstrate the coherence of strategic realism, which holds that in international relations, all states (regardless of their internal policies or professed ideology), will act in their rational self-interest given international anarchy. A classic example is an arms race like the Cold War and similar conflicts. During the Cold War the opposing alliances of NATO and the Warsaw Pact both had the choice to arm or disarm. From each side’s point of view, disarming whilst their opponent continued to arm would have led to military inferiority and possible annihilation. Conversely, arming whilst their opponent disarmed would have led to superiority. If both sides chose to arm, neither could afford to attack the other, but both incurred the high cost of developing and maintaining a nuclear arsenal. If both sides chose to disarm, war would be avoided and there would be no costs.

Although the ‘best’ overall outcome is for both sides to disarm, the rational course for both sides is to arm, and this is indeed what happened. Both sides poured enormous resources into military research and armament in a war of attrition for the next thirty years until the Soviet Union could not withstand the economic cost. The same logic could be applied in any similar scenario, be it economic or technological competition between sovereign states.
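
To make the structure of this dilemma concrete, here is a minimal sketch of the arms-race payoff matrix described above. The payoff numbers are purely illustrative (my own, not taken from the Wikipedia article); what matters is their ordering, which makes arming the dominant strategy for both sides even though mutual disarmament is the better joint outcome:

```python
# Minimal sketch of the arms-race Prisoner's Dilemma described above.
# The payoff numbers are illustrative only; their ordering is what matters.

PAYOFFS = {
    # (side_A_choice, side_B_choice): (payoff_A, payoff_B)
    ("disarm", "disarm"): (3, 3),  # war avoided, no armament costs
    ("disarm", "arm"):    (0, 4),  # A risks annihilation, B gains superiority
    ("arm",    "disarm"): (4, 0),  # A gains superiority, B risks annihilation
    ("arm",    "arm"):    (1, 1),  # stalemate, both pay the cost of an arsenal
}

def best_response(opponent_choice):
    """Return the choice that maximizes side A's payoff given B's choice."""
    return max(("arm", "disarm"),
               key=lambda my_choice: PAYOFFS[(my_choice, opponent_choice)][0])

# 'arm' is the best response whatever the other side does, so both sides arm,
# ending up at (1, 1) instead of the jointly better (3, 3).
for opponents_choice in ("arm", "disarm"):
    print(f"If the opponent chooses {opponents_choice!r}, "
          f"the best response is {best_response(opponents_choice)!r}")
```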

Especially in a future where nuclear weapons and other extremely powerful technologies, if developed internationally for military purposes, would logically end in mutually assured destruction, space colonization as a form of economic warfare may become an attractive non-violent alternative for international competition. All of this could create strong incentives worldwide to advance the development of the technology to not only make space colonization and terraforming viable, but to make them happen on shorter timescales. Even in a future where a world government forms as a singleton to coordinate an end to international space races, in the interim the global advancement of terraforming and space-colonization technology may create a culture in which terraforming and space colonization are both seen as highly desirable. Thus even a singleton representative of human interests may be incentivized to develop technologies that accelerate the civilizational potential to terraform and colonize other planets.

Differential Technological Development and the Hedonistic Imperative

An idea from the fields of global coordination and existential risk studies is that of differential technological development, first put forward by philosopher Nick Bostrom. A proposal for reducing wild animal suffering put forward by Bostrom’s fellow pioneer of transhumanism, David Pearce, is that of the Hedonistic Imperative and the Abolitionist Project. The emerging technologies, like new bio- and nanotechnologies, that would accelerate the feasibility of terraforming other planets are in the same fields that would accelerate the feasibility of the Hedonistic Imperative and the Abolitionist Project. In a world where suffering reducers face daunting and steep odds of spreading values focused on prioritizing moral circle expansion and s-risk reduction, and thus fail to cause decision-makers in charge of space colonization to care about the impact of terraforming on wild animal welfare, a redundancy measure would be to:

  1. Identify the likely paths by which biotechnology, nanotechnology and other fields would be advanced to accelerate the timeline for the feasibility of the Hedonistic Imperative and the Abolitionist Project vs. to accelerate the timeline for the feasibility of terraforming Mars and other planets.
  2. Advance a campaign of differential technological development in biotechnology, nanotechnology and other fields, advocating to invest resources in projects aimed at accelerating the feasibility of the Hedonistic Imperative and the Abolitionist Project at a rate at least commensurate with the rate at which the feasibility of terraforming is advanced in the same technological fields.

This strategy could be implemented as a redundancy measure under the assumption that terraforming and other space-faring projects potentially creating astronomical amounts of suffering aren’t prevented, but that additional technologies which prevent otherwise astronomical amounts of suffering are developed in lockstep and incorporated into those projects. Ideally, this strategy would allow s-risk reducers to keep track of technological advancements around the world accelerating the feasibility of terraforming, and respond accordingly by advancing the technologies accelerating the feasibility of the Hedonistic Imperative and the Abolitionist Project as well.

Conclusion: Advanced AI as a Tempering Consideration

All of the above considerations might also be rendered moot by advanced AI, such as Seed AI undergoing recursive self-improvement to become superintelligent, being created before the technologies which would make either terraforming or the Hedonistic Imperative feasible. Indeed, even granting the strong possibility of worldwide political will for humanity to colonize and terraform bodies throughout our solar system and beyond unaided by advanced AI, the consensus in effective altruism is that superintelligent AI could be created within a century, while without the benefit of advanced AI it’s commonly thought the processes of both terraforming Mars (let alone other planets) and the Hedonistic Imperative could take several centuries to bring about. In conversation with Brian Tomasik on this subject, he concurred that on those timescales he also expects biological life and intelligence will be replaced by digital/machine alternatives. Ultimately, none of the considerations in this post should be enough to sway s-risk reducers away from prioritizing values spreading, moral circle expansion, global coordination and AI alignment, all predicated on the assumption machine intelligence as a technology will transform life on Earth before the spreading of biological life to other celestial bodies becomes a practical consideration. Nonetheless, under the assumption that timelines for advanced AI development are long, the considerations in this essay would deserve relatively more attention from s-risk reducers. While a fully blown project focused on mitigating s-risks from terraforming isn’t justified at this stage, I’d like to see a medium-depth dive into strategies for differential technological development to prevent s-risks from the spread of biological life beyond Earth in the scenario where timelines for superintelligent AI are long (i.e., well over a century).

Appendix: Lessons Learned in Writing This Essay

In writing this essay, I brought together concepts developed in several separate intellectual communities associated with the effective altruism movement, such as AI alignment, prioritization research, existential risk reduction, transhumanism, rationality, and anti-speciesism/anti-substratism. One thing I realized is that ideas such as differential technological development, the importance of global coordination, and the applicability of game theory, more often applied to mitigating risks of human extinction, are also critical for mitigating risks of astronomical suffering. S-risk reducers benefit in multiple ways from cooperating with others on common intellectual projects, and in doing so they can also influence differential intellectual progress toward consideration of s-risk reduction throughout communities focused on the scientific advancement of emerging technologies. In this vein, in addition to following closely the output of organizations like the Foundational Research Institute, I recommend s-risk reducers follow the output of the Future of Humanity Institute and the rationality community on sites like LessWrong for new crucial considerations and key insights for dealing with astronomical stakes, and contribute to the common intellectual goals shared by all far-future-focused effective altruists.

Extreme Risks Due to Uncertainty

Summary: I describe ‘Extreme Risks Due to Uncertainty’, explain how various types of uncertainty contribute to the risk of an ongoing moral catastrophe, discuss their ramifications for effective altruism, and provide some examples and features of these risks.

Two types of existential risks are extinction risks and risks of astronomical future suffering (s-risks). AI alignment as a field and the rationality community emphasize astronomical stakes. Indeed, there’s a logically valid argument that a misaligned machine superintelligence poses the likeliest worst-case scenario that humanity or its descendants would have no hope of bouncing back from. However, the astronomical stakes argument applies, to a lesser extent, to existential risks from other types of technology as well. Yet effective altruists sometimes seem to apply a rhetorical force from the ethos of astronomical stakes to AI alignment that they don’t apply to other x-risks. Nonetheless, misaligned AI is thought to pose everything from the immediate extinction of humanity, to s-risks, to, if poorly aligned without threatening extinction, preventing humanity’s will from being carried out throughout the light cone.

It seems the categories used by effective altruists to identify the nature of risk reduction projects are:

  • Global catastrophic risk (GCR) reduction or existential risk (x-risk) reduction, the more generic terms.
  • AI alignment, indicating a more exclusive focus on the impact transformative AI will have for/on humanity, life and civilization.
  • S-risk reduction, a focus on mitigating cases or risks of astronomical suffering. Arguably factory farming and wild animal suffering are not risks but ongoing cases of catastrophic suffering.

A paper I’ve seen shared by effective altruists before is this one on the possibility of an ongoing moral catastrophe.

Given how neglected they are by everyone else, I imagine many effective altruists would argue their causes are cases of ongoing moral catastrophes going unacknowledged. While in a sense that’s fair, the sense in which we should think about the possibility of an ongoing moral catastrophe is in terms of unknown solutions to open problems beyond the boundaries of present-day science and philosophy. These are risks of potential catastrophe due to uncertainty.

An example of this is not having enough knowledge to be confident in the sentience or moral relevance of non-human animals under some worldviews. There is the philosophical uncertainty of what to value at all, and how much that should take non-human well-being into account. However, most value systems seem able to give weight to non-human animal well-being. Even then, there’s the question of how much animals’ experiences or preferences matter relative to those of humans. That’s a matter of moral uncertainty. Then, assuming a system which assigns proportionate moral weight to all species appropriately given some data, we need to find out what that data is. This is a matter of questions like which species have morally relevant experiences, which many effective altruists believe can be answered empirically. (Many philosophers would argue questions about sentience are also a matter of philosophical uncertainty, but given shared assumptions about what’s at all possible with empirical science, questions about sentience seem tractable.)
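
One common way effective altruists fold this layered uncertainty into cause prioritization is an expected value calculation over it. The sketch below is purely illustrative (the probabilities, weights, and categories are made up by me, not estimates from any source); it only shows how empirical uncertainty about sentience and moral uncertainty about relative weight combine into a single expected moral weight:

```python
# Illustrative sketch only: every number here is made up to show the structure
# of the calculation, not to estimate anything about real species.

def expected_moral_weight(p_sentient, weight_if_sentient):
    """Expected moral weight of one individual, given uncertainty about its sentience."""
    # p_sentient: empirical uncertainty about whether the being is sentient at all.
    # weight_if_sentient: moral uncertainty about how much its experiences matter,
    # relative to a human's experiences (set to 1.0).
    return p_sentient * weight_if_sentient

examples = {
    "human":        expected_moral_weight(p_sentient=1.0, weight_if_sentient=1.0),
    "chicken":      expected_moral_weight(p_sentient=0.9, weight_if_sentient=0.3),
    "invertebrate": expected_moral_weight(p_sentient=0.2, weight_if_sentient=0.05),
}

for being, weight in examples.items():
    print(f"{being}: expected moral weight {weight}")
```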

There are a few theoretical clusters of questions/problems which, depending on how they’re answered, will dramatically alter the moral weight assigned to possible trajectories of the long-run future. Some examples:

  • How much moral weight to assign to experiences like suffering vs. happiness, what else contributes to well-being, and whether things other than well-being (e.g., justice, art, culture) have intrinsic moral value are examples of moral uncertainty.
  • Whether sentience can be defined and measured as close to objectively as possible, and which species have a real subjective experience of pain/pleasure, are partly questions of empirical uncertainty.
  • Questions of population ethics, philosophy of identity, philosophy of consciousness, cause prioritization models and meta-ethics have practical ramifications. However, these questions are so abstract compared to what effective altruists mostly focus on that they may be more appropriately termed matters of “philosophical uncertainty”.
  • The ease and validity of how to accept resolutions to intellectual problems in general, whether they’re in our own normative ethics or they have implications for x-risk and AI alignment, are matters of disagreement over the best epistemology. Uncertainty about which epistemology is best for resolving a problem is called “model uncertainty”. Doing morally relevant epistemology is a new enough phenomenon in effective altruism that there doesn’t seem to be a consistent term for working on epistemological problems directly, whether that be some kind of cause prioritization, decision theory, or other theoretical research. Another term could be “epistemic uncertainty”.

It seems the answers to these questions will have huge implications for cause prioritization. So, in addition to the existing buckets like “x-risk”, “AI alignment” and “s-risk”, I’d posit another: catastrophic risk due to uncertainty. Here are some things the ‘risks due to uncertainty’ have in common:

  • Unlike x-risks, which mostly have a technological or political cause, these risks due to uncertainty are all based on knowledge. That means the evaluation of the moral value of an outcome can hinge entirely on changing one’s beliefs.
  • What risks due to uncertainty have in common is that, unlike philosophical problems throughout most of history, resolving the philosophical problems effective altruists care about is a race against the clock. Whether to colonize space, or how to align transformative AI, are questions whose answers have practical importance to effective altruism. Breaking these big questions down into factors which can be more easily addressed is the purpose of much research in effective altruism.
  • The professionalized organization of neglected theoretical research to look more like typical science is different from most philosophy. Unlike philosophical questions from Ancient Greece that remain unanswered today, effective altruism has to find satisfactory resolutions to the issues of uncertainty it faces on timescales as short as several decades, as they’re necessary to make irreversible decisions which will determine how valuable outcomes in the long-run future are.

I have thought of some terms which could describe this concept: “uncertainty risks”, “moral risks” (m-risks), “epistemic risks”, or “knowledge-based risks”. None of these are great. “Catastrophic risk due to (moral/empirical/model/epistemic) uncertainty” is a definition which broadly captures all instances of the idea I’ve given above. “Knowledge-based risks” is the short term that sounds most accurate to me while also sounding the least silly. Suggestions for a better descriptive term are welcome.

Ideas for Making Anti-Speciesist Appeals to Conservatives

Summary: Based on Jon Haidt’s Moral Foundations theory, I brainstorm how and why the vegan, animal rights and anti-speciesism movements might alter their messaging targeted at conservative audiences by appealing to values conservatives hold more closely. I suggest effective animal advocates may have a bias blindspot as to why conservatives might perceive veganism and animal rights negatively, and I suggest further research in an effort to create more effective anti-speciesist/vegan messaging for conservative audiences.

If we take Jon Haidt’s Moral Foundations theory, we can tie the values more highly prioritized by conservatives to vegan or vegetarian diets throughout history. From there, we can translate them into terms which carry the same themes but for a different target audience (i.e., whichever modern conservatives you’re seeking to influence the diets of).

Traditional societies in every civilization of East Asia (e.g., Japan, China, Korea), South Asia (i.e., the Indian subcontinent) and Southeast Asia (Indochina and the surrounding islands) have had state religions like Buddhism or Hinduism which encouraged or prescribed vegetarian or vegan diets at higher per capita rates than any other agricultural civilization in history. Indigenous peoples of North America are commonly known for living in harmony with nature for thousands of years. Stereotypes aside, this entailed a diet of not much meat and a lot of fish, but only when it was plentiful. So that’s something like a modern reducetarian/flexitarian diet. A lot of this could be respect for religious authority, but in the Western world the dominant religions don’t encourage vegan/vegetarian diets. The ‘sanctity/purity’ value could easily be a factor in religiously motivated veganism/vegetarianism, along the lines of all life being in harmony with itself, and humanity the species chosen to be nature’s stewards by divine/cosmic decree. A lot of the Ancient Greek philosophers were vegans or vegetarians, so that ties into the idea of humanity being a virtuous exemplar for all life, which precludes eating in ways that produce suffering. If we build a conceptual bridge from Buddhism to the Ancient Greeks in that manner, that gives vegan memes a foothold in Western civilization as a conservative memeplex. I don’t know if you can tie veganism to the Greco-Roman ethics popular in the modern Judeo-Christian worldview, as it’d be hard to find a quality source for that kind of thing.

Of course the ‘sanctity/purity’ value is a double-edged sword. The conservative foundation for eating meat seems to be a clear, i.e., “clean” or “pure”, delineation between man and animal, and the reassertion that man eats whatever animal he wants. Conservatives face social influence pushing them not to even think about becoming vegetarian/vegan, since the opposites of the other Haidtian values conservatives prioritize are ‘betrayal’ and ‘subversion’. By engaging in impure behaviour which blurs formerly clear lines, you undermine your own ability to contribute to and defend your ingroup. Loyally respecting the will of the ingroup comes first, especially when the taboo behaviour undermines the integrity of one’s moral foundation, i.e., undermines purity. So from a typical conservative viewpoint, being vegan blurs the line in the hierarchy between man on top and all other animals beneath. This is similar to one reason conservatives are more averse to LGBT folks: being LGBT undermines the purity of the traditional patriarchy and the structure of the institutions upholding it.

So to start off with, I’d say: emphasize loyalty to humankind as a form of human universalism, and connect that to doing one’s part to reduce meat intake as an individual responsibility to prevent runaway climate change or an antibiotic-resistant outbreak; appeal to respect for the sanctity of nature with lots of pictures of how factory farming is destroying jungles; and hammer on the angle that factory-farmed animals look and are gross to eat because they’re exposed to awful conditions and antibiotics which undermine the strength of antibiotics for humans. This shows how factory farming is impure, which I think could be used as a meme to move conservatives to eat more free-range, free-run, local or hormone-free meat and animal byproducts. I know the conditions for those aren’t as good as we’d like, and it might be better to convince people to go vegan. But if on these grounds we can convince conservatives to sympathize with systemic changes to factory farming to improve conditions, conservative political parties could be moved to adopt advocacy for systemic farm animal welfare reforms into their platforms. Between that and liberal parties advocating for the same, it’d increase the chances the sort of corporate farm animal welfare reform campaigns which have found great success in EA will work even better in the future. Tie all that into a message about how a conservative going vegan is somehow technically not a betrayal or subversion of human supremacy (tolerance for continued speciesism depends on how much you want to spread anti-speciesist values vs. just end factory farming by itself).

I know many effective animal advocates want to spread values primarily along the lines of intrinsically valuing the well-being and experiences of non-human animals. Unfortunately, at first glance it doesn’t appear the intrinsic value of non-human well-being will resonate with conservatives as much as it does with liberals. Some conservatives conceive of their omnivorous diet as tied into religious and/or cultural tradition, and while vegetarianism/veganism may not be bad in itself, a conservative might see it as wrong for themselves to move away from their culture’s traditional diet in the process of going vegan/vegetarian. Of course, as effective altruism and animal advocacy put more proportional focus on systemic/institutional change over individual behavioural change, a new opportunity for appealing to conservatives presents itself. If going fully vegan/vegetarian is unappealing to conservatives because it is seen as betraying traditional diets, effective animal advocates could argue on anti-speciesist grounds for going reducetarian/flexitarian, or for eating meat more conscientiously, i.e., opting for clean meat or the least harmful alternative as often as possible. While most effective animal advocates are liberals, and so it may seem cognitively dissonant to us for a large number of conservatives to eat animals “conscientiously” without intending or aspiring to a fully vegetarian/vegan diet, technically this could be a road to anti-speciesist values-spreading, moral circle expansion, and increased public support for reforming/abolishing industrial farming across the political divide. Admittedly, reliably measuring attitudinal shifts like this in the absence of clearer behavioural markers, like a complete shift in diet, would be difficult.

Ultimately, in writing this post I realized it seems difficult to think of positive ways to make anti-speciesist/vegan appeals to conservatives, since the messages which work for liberals apparently don’t work on conservatives. What I’ve also realized is that I haven’t seen animal advocates putting themselves in the shoes of conservatives and thinking about how, from their perspective, veganism/vegetarianism is, intrinsically or extrinsically, a betrayal of their current values. For all we know this could be tied to perceptions of liberalism which aren’t technically wedded to the anti-speciesist project. For example, much like how liberals are caught in filter bubbles giving them exaggerated and atypical examples of conservative close-mindedness, conservatives may only be encountering examples of veganism and animal advocacy done in a poor and hypocritical manner most animal advocates would also oppose. I haven’t seen effective animal advocates consider the possibility that current positive messages about veganism/anti-speciesism might be adequate if only they weren’t accompanied by negative perceptions of veganism and animal rights, or of vegans and animal rights activists themselves. This suggests a next step forward might be to survey conservatives to discover their reasons for not decreasing their animal (by)product intake, and to identify the kind and degree of negative associations they link to veganism/vegetarianism and animal rights.

What Effective Altruists Mean When They Say ‘Utilitarianism’: An Overview

Effective altruists use terms and notions from utilitarianism very inconsistently. Lots of effective altruists come from a classical liberal arts education at universities like Oxford. A lot of them are philosophically uneducated folks from the internet. That’s fine, as much of academia has stagnated as it is, and so intellectual development is as likely to come from the blogosphere as anywhere else. The rationality community is notorious for using idiosyncratic terminology to describe concepts which have existed in “real”, i.e., academic, philosophy for a long time. But there are all kinds of effective altruists who do the same thing. And the inconsistent connotations of different kinds of utilitarianism are an example that makes conversations about utilitarianism become confused quickly.

“Utilitarianism” seems to be used by effective altruists to mean any of the following:

  1. Optimizing for those conditions which maximize the satisfaction of one’s full moral intuitions.

  2. Optimizing for one’s full moral intuitions transformed into an expected value function, in the sense of the hedonic calculus of classical utilitarianism.

  3. Optimizing for one’s full moral intuitions transformed into a utility function, in the sense of the Von Neumann-Morgenstern utility theorem from decision theory.

  4. Maximizing the happiness and/or minimizing the suffering of moral patients in a generic sense, typically amenable to moral patients’ preferences.

  5. Maximizing the pleasure and/or minimizing the pain of moral patients, including down to a neurobiological level of utility assessment (e.g., wireheading), with virtually any creature capable of experiencing pleasure and pain counting as a moral patient.

  6. Consequentialist morals which lead to classical utilitarianism, as opposed to negative or “positive” utilitarianism.

  7. Any form of consequentialism excluding egoism.

I put “positive” in scare quotes because, in the history of utilitarianism over the last 200 years, “positive utilitarianism” doesn’t appear to have been used as a term to describe any common variant of utilitarianism. But I keep seeing it used among effective altruists. “Negative utilitarianism”, as a term and a variant of utilitarianism, is an intellectual development of the late 20th century, and appears still to be a niche position outside of effective altruism.

“Negative utilitarianism” seems to be used in context to mean any of the following by effective altruists:

1. Optimizing for those conditions which minimize the probability of maximally and intuitively bad outcomes, as opposed to those conditions which maximize the probability of maximally and intuitively good outcomes.

2. Utilitarianism with utility defined as giving more proportional weight to reducing suffering than increasing happiness.

3. Utilitarianism with utility defined as giving moral weight only to reducing suffering and zero to increasing happiness.

4, 5. Respectively the same as (2) and (3) but with “happiness” switched out with “pleasure” and “suffering” switched out with “pain”.

6. Utility defined as eliminating the possibility, or minimizing the expected probability of, any suffering, up to and including eliminating or minimizing the expected probability of any life-forms or experiencing beings.

I’m not as familiar with the term “positive utilitarianism”, so I haven’t observed all of these different meanings for it in use. However, they can be derived by inverting the corresponding definitions of “negative utilitarianism”:

1. Optimizing for those conditions which maximize the probability of maximally and intuitively good outcomes, as opposed to those conditions which minimize the probability of maximally and intuitively bad outcomes.

2. Utilitarianism with utility defined as giving proportionally more moral weight to increasing happiness relative to decreasing suffering.

3. Utilitarianism with utility defined as giving moral weight only to increasing happiness and zero to decreasing suffering.

4, 5. The same definitions as (2) and (3) respectively, but with “happiness” switched out for “pleasure”, and “suffering” switched out for “pain”.

6. Classical utilitarianism.
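
To make the weighted definitions above concrete, here is a minimal sketch (the weights, numbers, and function are purely illustrative inventions of mine, not a standard formalization) showing how classical utilitarianism and definitions (2) and (3) of both negative and positive utilitarianism can all be read as different weightings on the same happiness/suffering aggregation:

```python
# Illustrative sketch only: the weighting scheme and numbers are made up to
# show how the above definitions relate, not a standard formalization.

def aggregate_utility(patients, w_happiness, w_suffering):
    """Sum weighted happiness minus weighted suffering over all moral patients."""
    return sum(w_happiness * happiness - w_suffering * suffering
               for happiness, suffering in patients)

# Each moral patient is an (amount of happiness, amount of suffering) pair.
outcome = [(10, 2), (4, 7), (0, 1)]

# Classical utilitarianism: happiness and suffering weighted equally.
classical = aggregate_utility(outcome, w_happiness=1.0, w_suffering=1.0)

# Negative utilitarianism, definition (2): suffering weighted more heavily.
weak_negative = aggregate_utility(outcome, w_happiness=1.0, w_suffering=3.0)

# Negative utilitarianism, definition (3): happiness gets zero weight.
strict_negative = aggregate_utility(outcome, w_happiness=0.0, w_suffering=1.0)

# "Positive" utilitarianism, definitions (2) and (3), invert the weighting.
weak_positive = aggregate_utility(outcome, w_happiness=3.0, w_suffering=1.0)
strict_positive = aggregate_utility(outcome, w_happiness=1.0, w_suffering=0.0)

print(classical, weak_negative, strict_negative, weak_positive, strict_positive)
```

Framed this way, these variants differ only in the weights they assign to happiness and suffering.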

The biggest problem for effective altruists is that thinking through these definitions can lead one to conclude that self-identified utilitarians of the opposing kind have goals which seem intuitively catastrophic. There is a definition of “negative utilitarianism” under which one can model an agent as having the primary objective of eliminating life. There is a definition of “positive utilitarianism” under which one can model an agent as willing to go to any extreme to maximize happiness without regard to suffering, including permitting arbitrary amounts of torture-level suffering for indefinite periods of time.

If some effective altruists suspect each other of such motives because each describes themselves as a certain kind of utilitarian, that’s a fast way for things to become political and for everyone to get mindkilled. My estimation is that most effective altruists aren’t so formally utilitarian in any sense that they would fix upon a definition of utility/utilitarianism that would permit an outcome of either the total elimination of all life and experience, or arbitrary amounts of torture-level suffering. This appears to be a minor problem in effective altruism which could in theory, but doesn’t in practice, lead to internal movement conflict rather than cooperation and coordination.

This is only one example of how effective altruists using different definitions of utilitarianism can lead to confusion. I think a lot of disagreements among effective altruists over utilitarianism, including with those effective altruists who aren’t utilitarians, stem from everyone using different definitions of utilitarianism without being aware of it. Laying out the structure of the problem, as I’ve done above, is the first step to solving it. I don’t know what one would do next, though.