Do Effective Altruists Appreciate Nature?

An effective altruist recently made this post in a Facebook group:

How do you personally reconcile your views about nature with enjoying natural spaces?

It seems to me many (most?) EAs think nature, broadly defined as any living space including stuff like gardens and planted trees, contains a lot of suffering. Some might see it as the single biggest, ongoing moral catastrophe yet observed, much worse than the very worst humans have done to each other.

But I also see lots of us posting pictures of ourselves in parks rather than parking lots. We go outdoors to forests, rivers, mountains, gardens, avenues and seem to enjoy it as much as anyone else.

Not just philosophically, but personally, how do you go through life believing nature to contain misery, while also enjoying it?

What do you think when you’re looking at a beautiful view, or old tree, or somewhere like Stonehenge or the Grand Canyon or Central Park or anywhere with living things in it?

A significant minority of effective altruists believe non-human animal well-being is the most important focus area. Within that group, a minority believe the most important focus is the welfare of wild animals, including suffering from naturally occurring causes (as opposed to human-made causes). As far as I know, there is little to no good quantified or empirical data on what proportion of EAs primarily focused on wild animal welfare believe the suffering of wild animals is so great that natural environments are net negative in their very existence. It’s my impression that those among them who believe nature itself is net negative in expectation are in the minority. However, they’re more vocal, presumably because of how imperative they see the problem as being relative to other causes. They also appear better coordinated. This is also true at the level of the whole movement, such that their voices are disproportionately magnified relative to those who disagree with them. I expect this can create the impression they’re much greater in number in EA than is actually the case.

So it’s simply the case that the majority of effective altruists go through life enjoying nature because they don’t believe it’s full of misery; or they don’t believe the misery in nature justifies destroying it; or they don’t believe misery matters so much relative to other values that destroying nature is justified; or they don’t believe the misery in nature outweighs the positive components of well-being in nature.

I personally don’t believe nature is so full of misery that it’s worth destroying, because that conclusion isn’t borne out by the empirical evidence collected so far. I don’t personally believe destroying nature is either justified or desirable, and even if it were on consequentialist grounds, it wouldn’t dominate expected value estimates. I don’t have sufficient reason to believe misery, or simple dimensions of well-being, are all that is of value. I also don’t believe there is sufficient evidence to conclude that the animals whose suffering is thought to dominate nature so much that its destruction would be justified, i.e., invertebrates, either have morally relevant experiences, or that we know enough about what those experiences are like to conclude they consist primarily of suffering.

I believe it’s possible nature is net negative, because of all the potential suffering in it, to the extent that suffering might dominate everything else in or part of nature. I cope with that possibility by setting it aside, because I can holistically enjoy nature to a degree beyond what I’d even describe as aesthetics. Refusing to let myself enjoy nature, even on the assumption it’s so full of suffering as to be the worst thing ever, does nothing to reduce wild animal suffering. In the meantime, if enjoying nature is what makes someone’s life meaningful enough to motivate them to reduce wild animal suffering, or to do the most good as they best judge it by their own values, then enjoying nature serves that end. Experiencing nature in commiseration may even drive motivation for those who sincerely believe reducing wild animal suffering is the most important cause. However, there are many effective altruists I know who I expect experience scrupulosity regarding nature, because they’re sensitive to language in arguments that is melodramatic relative not to the potential scale of suffering in nature, but to the evidence effective altruists currently have to justify those sentiments.

My Thoughts On the Gender Wage Gap

Summary: A friend asked me my opinion on the gender wage gap. Below is my response.

Literally everybody on all sides who cites statistics not from social scientists tends to use crap numbers that miss the bigger picture. Lots of social scientists from all kinds of fields also have crappy numbers that mischaracterize reality. Bundling all types of jobs together, when different careers and occupations develop on the basis of all kinds of different local or workplace cultures, is nonsense. Basically, there will obviously be industries where, controlling for everything else, some systemic type of sexism is the remaining factor behind wage gaps or comparable metrics of equity, like authority in the workplace. Unfortunately, this level of nuance is lost in the broader culture wars by virtually everyone who isn’t discussing all of the above in the context of research.

What anyone ought to do in practice is, on a piecemeal basis, identify those industries wherein sexism is legitimately having the worst consequences, where nobody has tried anything, where it actually matters[1], and where something can be done about it without systemically changing the values of everyone in society, and then do that something. This starts with reaching out to the people in and around those industries most likely in practice to be willing and able to do something about it. Lots of people these days say action isn’t worth it if it doesn’t result in systemic change. I’ve talked to all manner of people of every perspective on this issue, and from no angle is that position seen as sound by anyone else, so I reject it outright.

There are people who do urge systemic change, and put their money where their mouth is. This touches on differences between liberal, Marxian and radical variants of feminism I don’t know enough about to follow the state of the discourse at this time, but I encourage its continuation. Of course, by this point I’m entailing commentary on the state of organization and discourse on the political left in general, which is a separate topic.

[1] I am not going to become upset about the inequality between men and women in the field of midwifery, and I’m not going to become upset that more men than women are firefighters when any firefighter, regardless of self-identified sex or gender, must constantly meet some minimum threshold of fitness to reliably and sufficiently perform their job in emergency situations, and it just so happens men are more likely to be able to run while carrying 300-pound people on their shoulders than women.

What Kind of Man Lets Sexlessness Destroy His Ego?

Summary: I see so many discussions of sexless men framed in terms of how men perceive the sexual dynamics between themselves and women. This quickly gets political, which never appears to solve any of the problems resulting from this social phenomenon that anyone cares about. It also bugs me that I never see this discussion framed in public discourse in terms of what men can do to change their perceptions of themselves. So after a friend posted a comment, I (unfairly) pounced at the opportunity to explain some of my thoughts on the subject.

In a discussion about an article in GQ magazine covering men’s reaction to the #MeToo movement, a friend made this comment:
Honestly, I think men are in a bit of a catch-22 situation. Women often want a particular thing (e.g. a certain kind of manliness or assertiveness), but those same things might be viewed by some as problematic.
Here’s my response:
I could probably find some men who would tell you you’ll only be caught in a Catch-22 if you’re drawn to communities, subcultures or social circles where the women find that certain something women often want problematic. Like, there are entire communities where all the women might find a certain kind of manliness or assertiveness problematic. We don’t need to talk about those spaces, because they’re the kind where the miserable people in them stick around for a long time only if they’re sorry bastards. And that often doesn’t have anything to do with them not being able to get laid.
The standard advice is something like: move to a rural area, go to a church, etc., where apparently all the women are so traditionalist you’re guaranteed to find yourself among more established and less ambiguous sex/gender roles, and you’re set. Never mind that that’s an overly simplistic depiction of what those spaces are really like; that advice is impractical for anyone who doesn’t want to have to change everything about themselves to find a partner.
But to be fair I think there is something to changing one’s perspective. I’ve met men in person, or come across them online, who by all appearances refuse to let themselves be attracted to the kind of women who don’t find normal, decent behaviour from men problematic. They complain not even about a lack of sex, but a lack of romance, affection or acknowledgement. Yet they refuse to let themselves spend time away from their community/subculture/scene so as to find a partner who would be attracted to them. Alternatively, often these men acknowledge being more assertive or manly is something they could do, and that would make them attractive. But that would be wrong, because being assertive like that is unvirtuous, and so only men willing to deign to be assertive, thus becoming @$$holes, get laid.
I’d call this internalized misandry, but in a lot of cases I think passive-aggressive men are just rationalizing their own anxieties or fears. This leads to a lot of Nice Guy syndrome, which I think deserves more public sympathy than it gets. Nice Guy syndrome is a societal problem distinct from Inceldom, which has also grown, and which has eaten more male-coded spaces as the toxic Manosphere has expanded. I think Inceldom is gross, and I don’t get why one would choose that as some kind of political self-identity of all things. Crudely put, a lot of these men are cucking themselves.

Maybe I’m rationalizing or underrating the miserable suffering of loneliness or sexlessness, as I’m on an SSRI right now and I’m in a long-distance polyamorous relationship. I think that might get a lot of guys calling me a super-cuck. But there are plenty of men who just happen to be involuntarily celibate, yet don’t identify with that, and so don’t let it destroy their own psychology. And all of us are doing fine. It doesn’t swallow our identities and every waking thought, and a lot of men nearing thirty doing alright in this state have had sex only a handful of times in their lives.

 

A pattern I notice with the kind of men who end up constantly woeful and wistful over a romanticized version of romantic/sexual relationships, and with Nice Guys lacking self-awareness, is that they’re constantly orbiting political or politicized social scenes. They often say they do so because it’s part of fighting the good fight. Some of them are open about the fact that getting laid is a significant bonus attracting them to the community. They say this in private, exclusively in the company of other men. This happens in all gender-balanced communities more than most men would care to admit, though if the group norms in practice entail more gender equity, it likely doesn’t happen as often. Of course, there are endless horror stories of that One Social Justice Scene That Was Full Of Sexual Harassment/Assault, so your mileage may vary.

In moments of frustration, you can get some of these men to admit they’re fighting the good fight to get laid. Or at least a guy will admit he’s been trying to get laid, and that he’s been fighting the good fight for so long and so hard he feels like he deserves getting laid as a break. And then there are the guys who are tragically unaware of how much their activity in the community is clearly driven by an urge to get laid, one way or another.

 

What I’ve noticed is that, through a combo of internalized misandry or anti-male feminism, and rationalization of their own character flaws, involuntarily celibate men become so resentful because they don’t realize that so strong a desire to pursue so narrow and base an end isn’t how to achieve it. Sexual and affective fulfillment are part of a whole package that comes with romantic relationships. Romantic relationships are only one kind of relationship, alongside bonds with friends, family and colleagues. I wouldn’t be surprised if some of the men whose reality is so warped by their lack of access to sex are completely isolated, and don’t have good, strong or loving relationships with anyone close to them. Oftentimes when a man desperately goes into a social scene seeking only sex, affection, attention and status, not just women but everyone can smell it on him. Most often, I think, if a man is so focused on that one thing, he’ll clearly ignore anything and everything else which might make him more attractive.

I notice the men in all kinds of social scenes who don’t necessarily get laid the most, but have the best relationships, are the ones who would be just as involved or embedded in the community even if they weren’t, or when they weren’t, receiving attention from women (or, presumably, men/non-binary people if they’re in a gay/queer scene; I wouldn’t know). I think the reason they have the best loving relationships with their partners is that their authenticity and the realization of their own values are what make them attractive.

Post-script: I have more thoughts on the subject of Inceldom I may not get a chance to write up. For one example of a good man who for a long time was involuntarily celibate and suffered for it but didn’t let that destroy himself, see this post by quantum computing scientist and blogger Scott Aaronson.

The Case for Respecting Courts as Institutions

This might sound like I’m a coward selling out my moral integrity, but I for one am relieved we have courts to settle disputes through arbitration, even in cases where I personally disagree with a court’s ruling. When I see friends decry a court ruling, I can’t tell if they go on respecting it because they believe in the principles behind the rule of law, or because there’s nothing practical they can do to change the outcome. People should by all means express their opposition to a court ruling through any and all mediums, because it’s only through the exercise of free speech that amendments ultimately get made to past rulings to undo misjudgements.

It seems to me, though, people denigrate the current standing of courts, e.g., as being too liberal or too conservative, in a way that undermines faith in the institutional process to begin with. Most people I see act as though a Supreme Court being sympathetic to one’s own ideology is synonymous with the court reaching objectively good conclusions. Contemporary societies are so complicated that decisions scaled to the whole of society quickly go beyond good and evil. Having an independent court system as a buttress against tyranny of the mob, for when even the representative branches of government fail that standard, is important. People are failing to consider the counterfactual outcome: a society without the sorts of checks and balances we have now, however superficially democratic, will slowly accrete gross injustices against a minority, until it accelerates to dole out gross injustice to everyone. This is the story of every totalitarian state in history. To not beware this possibility seems to me to put too much faith in ideology over the value of longer-lasting institutions, a democratic system of checks and balances, and the rule of law. It seems worthy to stick to this standard, because even decades of disappointment in institutions delivering social mediocrity is less harmful than risking the violence and oppression that would pour forth from incorrigible zealots of any stripe.

How I Think About Animal Welfare

How I think about anti-speciesism is that if some creature is at or above some threshold of moral considerability, its welfare is an end in itself, and abusing it is unconscionable. Where I set the bar for “worth caring about” may shift over time, but that there is a bar or threshold seems central to how I feel about this. There isn’t a ceiling, though. Two different species above the threshold may differ in how much I weight their relative well-being. So, I could value one human as equal to an arbitrary number of chickens, but there are still duties not to commit certain kinds of harm to chickens, like raising and killing them under torturous conditions so we can eat them. I don’t know if this technically makes me speciesist or not. My intuitions seem more deontologist than consequentialist, or are at least negative-leaning.
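To make the structure of this view concrete, here is a minimal toy sketch in Python. Every number, species entry and function name is invented purely for illustration, not a claim about actual moral weights:

    # Toy model of the threshold-plus-weights view described above.
    # Every number here is invented purely for illustration.

    SENTIENCE_THRESHOLD = 0.5  # hypothetical bar for "worth caring about"

    # Hypothetical per-species sentience scores and relative weights.
    SPECIES = {
        "human": {"sentience": 1.0, "weight": 1.0},
        "chicken": {"sentience": 0.7, "weight": 0.01},
        "roundworm": {"sentience": 0.1, "weight": 0.001},
    }

    def counts_morally(species: str) -> bool:
        """A creature's welfare counts at all only above the threshold."""
        return SPECIES[species]["sentience"] >= SENTIENCE_THRESHOLD

    def weighted_welfare(species: str, welfare: float) -> float:
        """Above the threshold, welfare is weighted; below it, ignored."""
        if not counts_morally(species):
            return 0.0
        return SPECIES[species]["weight"] * welfare

    def permissible(inflicts_torturous_harm: bool, species: str) -> bool:
        """Deontological side-constraint: torturous harm to any creature
        above the threshold is impermissible, whatever the weights say."""
        return not (inflicts_torturous_harm and counts_morally(species))

Note that permissible() ignores the weights entirely; a side-constraint operating independently of any welfare aggregation is what makes the view read as deontological or negative-leaning rather than purely consequentialist.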

Crucial Considerations for Terraforming as an S-Risk

Summary: In this essay, I go over concerns about how terraforming may pose an s-risk for wild animals. I explore, from a perspective of suffering reduction, both the political and technological factors influencing the scope, intensity and probability of s-risks from terraforming or other space colonization efforts spreading biological life throughout the solar system. I also explore a possible strategy for mitigating s-risks from terraforming. Finally, I evaluate how much of a priority s-risks from terraforming should be relative to other s-risks, and make recommendations for prioritizing s-risks further.

Painting a Picture: A Potential Future of Accelerated Terraforming

A lot of suffering reducers are concerned about the possibility of space colonization spreading wild animals who live lives consisting primarily of suffering. From ‘Will Space Colonization Multiply Wild Animal Suffering?’ by Brian Tomasik:

Scientists and futurists often encourage humanity to spread out into the galaxy, in order to reduce risks to human survival, build new civilizations, and bring life to desolate planets. One fairly realistic proposal is the terraforming of Mars, in which the Red Planet would be made increasingly hospitable to life, until plants and animals could be sent to live there. More speculatively, some have advocated “directed panspermia” — disbursing genetic and microbial seeds of life into space in the hopes that a few will land on and populate other planets. Finally, some futurists imagine virtual worlds in which computational agents would interact; these would sometimes include animals in the wild.

In these scenarios, animals are proposed to be used as a means to help humans, as a vehicle for spreading life to the stars, or as part of realistic virtual environments. What is almost always ignored is that these animals would have feelings, and many would suffer enormously as a result of being created into Darwinian ecosystems in which predation, disease, and premature death are endemic. Theologians ask why a good God would create so much cruelty in nature. Future humans should likewise ponder whether it’s morally acceptable to create new ecosystems containing vast amounts of suffering, or whether they should pause and seek more compassionate alternatives if they embark upon space colonization.

One case of space colonization potentially spreading wild animal suffering, and one that could begin within decades, is the terraforming and subsequent human colonization of Mars by individuals like Elon Musk and his company SpaceX. Many people may be skeptical that a single company or individual, no matter how many billions of dollars they have, will alone be able to terraform or colonize space. However, visionary technologists like Elon Musk can advance the state of technological development for terraforming and colonizing other planets, which in turn could inspire nation-states or future supra-national political entities to colonize space. Nation-states or a future world government would have many more resources to marshal for terraforming and colonizing other planets. While state actors may be perceived as less scientifically and technologically innovative, as long as non-state actors make theoretical and technological progress on the human potential to terraform and colonize other planets, state actors merely need to copy their work and implement it at a scale making terraforming genuinely possible. Indeed, state and non-state organizations have a long history in the United States of cooperating to advance humanity’s reach into space. With companies like SpaceX working with multiple countries to get satellites into space, and the possibility of governments and private companies working together to mine asteroids in the near- or medium-term future, space colonization between state and non-state actors takes on a global dimension.

Efforts in earnest by national governments to colonize Mars have begun with the United Arab Emirates’ recent announcement of its visionary plan to colonize Mars with 600,000 people in 100 years. One might naively expect that countries leading the world economically and scientifically, like China or the United States, would sooner announce plans to colonize Mars than the United Arab Emirates (UAE). However, the UAE is an absolute monarchy with the seventh-largest oil reserves in the world, making its government exactly the kind with not only vast economic resources but also the political will and ability to channel them into massive, long-term missions like colonizing Mars. The UAE’s early entrance into the game of interplanetary colonization could be an omen of things to come. As technology advances, national economies grow, and some resources on Earth become scarcer, economic warfare to colonize Mars and other planets could break out. This international competition could of course take on a more thoroughly political dimension, like the Space Race. While a zero-sum competition to colonize Mars and the rest of the solar system might seem absurd and irrational, a glance at history shows that with transformative technologies which may pose existential risks, humanity and our governments haven’t been as careful as we could or should have been about minimizing the chances of extinction. We should not by default expect better from humanity in rationally pursuing the goal of preventing risks of astronomical suffering either. What’s more, conditions in international politics could set in whereby great powers are incentivized to invest huge amounts of resources in colonizing space as part of economic and political competition. Under these conditions, even rational state actors may have difficulty opposing their incentive gradient, despite their incentives being unaligned with their values. From the Wikipedia article on the Prisoner’s Dilemma:

In international political theory, the Prisoner’s Dilemma is often used to demonstrate the coherence of strategic realism, which holds that in international relations, all states (regardless of their internal policies or professed ideology), will act in their rational self-interest given international anarchy. A classic example is an arms race like the Cold War and similar conflicts. During the Cold War the opposing alliances of NATO and the Warsaw Pact both had the choice to arm or disarm. From each side’s point of view, disarming whilst their opponent continued to arm would have led to military inferiority and possible annihilation. Conversely, arming whilst their opponent disarmed would have led to superiority. If both sides chose to arm, neither could afford to attack the other, but both incurred the high cost of developing and maintaining a nuclear arsenal. If both sides chose to disarm, war would be avoided and there would be no costs.

Although the ‘best’ overall outcome is for both sides to disarm, the rational course for both sides is to arm, and this is indeed what happened. Both sides poured enormous resources into military research and armament in a war of attrition for the next thirty years until the Soviet Union could not withstand the economic cost. The same logic could be applied in any similar scenario, be it economic or technological competition between sovereign states.
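To make the quoted logic concrete, here is a minimal sketch in Python of the arms-race payoff matrix. The payoff numbers are invented; only their ordering matters:

    # Toy arms-race Prisoner's Dilemma. Payoffs are (row, column)
    # player utilities; the numbers are invented, and only their
    # ordering (temptation > reward > punishment > sucker) matters.
    PAYOFFS = {
        ("disarm", "disarm"): (3, 3),  # peace, no arms spending
        ("disarm", "arm"): (0, 4),     # inferiority vs. superiority
        ("arm", "disarm"): (4, 0),
        ("arm", "arm"): (1, 1),        # standoff, both bear the cost
    }

    def best_response(opponent_move: str) -> str:
        """The move maximizing the row player's payoff against a
        fixed opponent move."""
        return max(("disarm", "arm"),
                   key=lambda move: PAYOFFS[(move, opponent_move)][0])

    # "arm" is the best response whatever the other side does, so
    # (arm, arm) is the unique equilibrium, even though (disarm, disarm)
    # would leave both sides better off.
    for opponent in ("disarm", "arm"):
        assert best_response(opponent) == "arm"

The same dominance structure is what would make it hard for rational state actors to oppose their incentive gradient in a space race: unilaterally opting out of colonization is the ‘disarm whilst the opponent arms’ outcome.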

Especially in a future where nuclear weapons and other extremely powerful technologies, if developed internationally for military purposes, would logically end in mutually assured destruction, space colonization as a form of economic warfare may become an attractive non-violent alternative for international competition. All of this could create strong incentives worldwide not only to develop the technology to make space colonization and terraforming viable, but to find ways to make them happen on shorter timescales. Even if a world government eventually forms as a singleton to coordinate an end to international space races, in the interim the global advancement of terraforming and space-colonization technology may create a culture in which terraforming and space colonization are both seen as highly desirable. Thus even a singleton representative of human interests may be incentivized to develop technologies accelerating the civilizational potential to terraform and colonize other planets.

Differential Technological Development and the Hedonistic Imperative

An idea in the fields of global coordination and existential risk studies is that of differential technological development, first put forward by philosopher Nick Bostrom. A proposal for reducing wild animal suffering put forward by Bostrom’s fellow pioneer of transhumanism, David Pearce, is that of the Hedonistic Imperative and the Abolitionist Project. The emerging technologies, like new bio- and nano-technologies, which would accelerate the feasibility of terraforming other planets are in the same fields as those which would accelerate the feasibility of the Hedonistic Imperative and the Abolitionist Project. In a world where suffering reducers face daunting and steep odds of spreading values focused on prioritizing moral circle expansion and s-risk reduction, and thus fail to get the decision-makers in charge of space colonization to care about the impact of terraforming on wild animal welfare, a redundancy measure would be to:

  1. Identify the likely paths by which biotechnology, nanotechnology and other fields would be advanced to accelerate the timeline for the feasibility of the Hedonistic Imperative and the Abolitionist Project, versus to accelerate the timeline for the feasibility of terraforming Mars and other planets.
  2. Advance a campaign of differential technological development in biotechnology, nanotechnology and other fields, advocating investment of resources in projects aimed at accelerating the feasibility of the Hedonistic Imperative and the Abolitionist Project at a rate at least commensurate with the rate at which the feasibility of terraforming is advanced in the same fields.

This strategy could be implemented as a redundancy measure under the assumption that terraforming and other space-faring projects potentially creating astronomical amounts of suffering aren’t prevented, but that additional technologies which prevent otherwise astronomical amounts of suffering are developed in lockstep and incorporated into said projects. Ideally, this strategy would allow s-risk reducers to track technological advancements around the world accelerating the feasibility of terraforming, and respond accordingly by advancing the technologies accelerating the feasibility of the Hedonistic Imperative and the Abolitionist Project as well.
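As a minimal sketch of what the monitoring step of this strategy might look like, in Python, assuming some invented ‘feasibility’ metric that real s-risk reducers would have to operationalize:

    # Toy monitor for the differential technological development
    # strategy above. The "feasibility" scores are invented
    # placeholders for whatever real metrics s-risk reducers track.

    from typing import List

    def growth_rate(scores: List[float]) -> float:
        """Average year-over-year change in a feasibility score."""
        deltas = [b - a for a, b in zip(scores, scores[1:])]
        return sum(deltas) / len(deltas)

    def rebalance_needed(hi: List[float], terraforming: List[float]) -> bool:
        """True if terraforming feasibility is advancing faster than
        Hedonistic Imperative feasibility, i.e., the redundancy
        measure calls for redirecting resources."""
        return growth_rate(terraforming) > growth_rate(hi)

    # Hypothetical yearly feasibility scores (0 to 1).
    terraforming_scores = [0.10, 0.14, 0.19]
    hi_scores = [0.10, 0.11, 0.12]
    print(rebalance_needed(hi_scores, terraforming_scores))  # True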

Conclusion: Advanced AI as a Tempering Consideration

All of the above considerations might also be rendered moot by advanced AI, such as a Seed AI undergoing recursive self-improvement to become superintelligent, being created before the technologies which would make either terraforming or the Hedonistic Imperative feasible. Indeed, even granting the strong possibility of worldwide political will for humanity to colonize and terraform bodies throughout our solar system and beyond unaided by advanced AI, the consensus in effective altruism is that superintelligent AI could be created within a century, while without the benefit of advanced AI it’s commonly thought the processes of both terraforming Mars (let alone other planets) and the Hedonistic Imperative could take several centuries to bring about. In conversation with Brian Tomasik on this subject, he concurred that on those timescales he also expects biological life and intelligence will be replaced by digital/machine alternatives. Ultimately, none of the considerations in this post should be enough to sway s-risk reducers away from prioritizing values spreading, moral circle expansion, global coordination and AI alignment, predicated on the assumption that machine intelligence as a technology will transform life on Earth before the spreading of biological life to other celestial bodies becomes a practical consideration. Nonetheless, under the assumption that timelines for advanced AI development are long, the considerations in this essay would deserve relatively more attention from s-risk reducers. While a fully blown project focused on mitigating s-risks from terraforming isn’t justified at this stage, I’d like to see a medium-depth dive into strategies for differential technological development to prevent s-risks from the spread of biological life beyond Earth in the scenario where timelines for superintelligent AI are long (i.e., well over a century).

Appendix: Lessons Learned in Writing This Essay

In writing this essay, I brought together concepts developed in several separate intellectual communities associated with the effective altruism movement, such as AI alignment, prioritization research, existential risk reduction, transhumanism, rationality, and anti-speciesism/anti-substratism. One thing I realized is that ideas such as differential technological development, the importance of global coordination, and the applicability of game theory, more often applied to mitigating risks of human extinction, are also critical for the mitigation of risks of astronomical suffering. S-risk reducers benefit in multiple ways from cooperating with others on common intellectual projects, and in doing so s-risk reducers can also influence differential intellectual progress toward consideration of s-risk reduction throughout communities focused on the scientific advancement of emerging technologies. In this vein, in addition to following closely the output of organizations like the Foundational Research Institute, I recommend s-risk reducers follow the output of the Future of Humanity Institute and the rationality community on sites like LessWrong for new crucial considerations and key insights for dealing with astronomical stakes, and contribute to the common intellectual goals shared by all far-future-focused effective altruists.

Extreme Risks Due to Uncertainty

Summary: I describe ‘extreme risks due to uncertainty’ and how various types of uncertainty contribute to the risk of an ongoing moral catastrophe, discuss their ramifications for effective altruism, and provide some examples and features of these risks.

Two types of existential risks are extinction risks and risks of astronomical future suffering (s-risks). AI alignment as a field and the rationality community emphasize astronomical stakes. Indeed, there’s a logically valid argument that a misaligned machine superintelligence poses the likeliest case of a worst-case scenario humanity or its descendants would have no hope of bouncing back from. However, to a lesser extent the astronomical-stakes argument applies to existential risks from other types of technology as well. Yet effective altruists sometimes apply a rhetorical force from the ethos of astronomical stakes to AI alignment that isn’t applied to any other x-risks. Nonetheless, misaligned AI is thought to pose everything from the immediate extinction of humanity, to s-risks, to, if poorly aligned but not threatening extinction, stultifying humanity’s will from being carried out throughout the light cone.

It seems the categories used by effective altruists to identify the nature of risk reduction projects are:

  • Global catastrophic risk (GCR) reduction, or existential risk (x-risk) reduction, the more generic terms.
  • AI alignment, indicating a more exclusive focus on the impact transformative AI will have for/on humanity, life and civilization.
  • S-risk reduction, a focus on mitigating cases or risks of astronomical suffering. Arguably factory farming and wild animal suffering are not risks but ongoing cases of catastrophic suffering.

A paper I’ve seen shared by effective altruists before is this one on the possibility of an ongoing moral catastrophe.

Given how neglected they are by everyone else, I imagine many effective altruists would argue their causes are cases of ongoing moral catastrophes going unacknowledged. While in a sense that’s fair, the sense in which we should think about the possibility of an ongoing moral catastrophe is in terms of unknown solutions to open problems past the boundaries of present-day science and philosophy. These are risks of potential catastrophe due to uncertainty.

An example of this is not having enough knowledge to be confident in the sentience or moral relevance of non-human animals according to some worldviews. One can think of the philosophical uncertainty of what to value at all, and how much that should take non-human well-being into account. However, most value systems seem able to give weight to non-human animal well-being. Even then, there’s the question of how much animals’ experiences or preferences matter relative to those of humans. That’s a matter of moral uncertainty. Then, assuming a system which assigns proportionate moral weight to all species appropriately given some data, we need to find out what that data is. This is a matter of questions like which species have morally relevant experiences, which many effective altruists believe can be answered empirically. (Many philosophers would argue questions about sentience are also a matter of philosophical uncertainty, but given shared assumptions about what’s at all possible with empirical science, questions about sentience seem tractable.)
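As a toy illustration of how these layers of uncertainty combine, here is a minimal sketch in Python; the credences and weights are invented for illustration, not estimates anyone endorses:

    # Toy expected moral weight of a chicken under two layers of
    # uncertainty. All numbers are invented for illustration.

    # Moral uncertainty: credence in each view of a chicken's moral
    # weight relative to a human, conditional on chickens being sentient.
    moral_views = {0.0: 0.2, 0.01: 0.5, 0.3: 0.3}  # weight -> credence

    # Empirical uncertainty: credence that chickens have morally
    # relevant experiences at all.
    p_sentient = 0.8

    expected_weight = p_sentient * sum(
        weight * credence for weight, credence in moral_views.items()
    )
    print(f"Expected moral weight of a chicken: {expected_weight:.3f}")
    # -> 0.076: even a modest credence in a high-weight view dominates.

Resolving either layer, the credences over moral views or the credence in sentience, could shift the result by orders of magnitude, which is what makes these uncertainties decision-relevant.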

There are a few theoretical clusters of questions/problems which, depending on how they’re answered, will dramatically alter the moral weight assigned to possible trajectories of the long-run future. Some examples:

  • How much moral weight to assign to experiences like suffering vs. happiness, what else contributes to well-being, or whether things in addition to well-being have intrinsic moral value (e.g., justice, art, culture) are examples of moral uncertainty.
  • How sentience can be defined and measured as close to objectively as possible, and which species have a real subjective experience of pain/pleasure, are partly questions of empirical uncertainty.
  • Questions of population ethics, philosophy of identity, philosophy of consciousness, cause prioritization models and meta-ethics have practical ramifications. However, these questions are so abstract compared to what effective altruists mostly focus on that they may be more appropriately termed matters of “philosophical uncertainty”.
  • The ease and validity of accepting resolutions to intellectual problems in general, whether they’re in our own normative ethics or have implications for x-risk and AI alignment, are matters of disagreement about the best epistemology. Which epistemology is best to use to resolve a problem is a matter of “model uncertainty”. Doing morally relevant epistemology is a new enough phenomenon in effective altruism that there doesn’t seem to be a consistent term for working on epistemological problems directly, whether that be some kind of cause prioritization, decision theory, or other theoretical research. Another term could be “epistemic uncertainty”.

It seems the answers to these questions will have huge implications for cause prioritization. So, in addition to the existing buckets like “x-risk”, “AI alignment” and “s-risk”, I’d posit another: catastrophic risk due to uncertainty. Here are some things the ‘risks due to uncertainty’ have in common:

  • Unlike x-risks, which mostly have a technological or political cause, these risks due to uncertainty are all based on knowledge. That means the evaluation of the moral value of an outcome can hinge entirely on changing one’s beliefs.
  • What risks due to uncertainty have in common is that, unlike philosophical problems throughout most of history, resolving the philosophical problems effective altruists care about is a race against the clock. Whether to colonize space, or how to align transformative AI, are questions with answers of practical importance to effective altruism. Breaking these big questions down into factors which can be more easily addressed is the purpose of much research in effective altruism.
  • The professionalized organization of neglected theoretical research to look more like what’s typical of science is different from most philosophy. Unlike philosophical questions from Ancient Greece that remain unanswered today, effective altruism has to find satisfactory resolutions to the issues of uncertainty it faces on timescales as short as several decades, as they’re necessary for making irreversible decisions which will determine how valuable outcomes in the long-run future are.

I have thought of some terms which could describe this concept: “uncertainty risks”; “moral risks” (m-risks); “epistemic risks”; or “knowledge-based risks”. None of these are great. “Catastrophic risk due to (moral/empirical/model/epistemic) uncertainty” is a definition which broadly captures all instances of the idea I’ve given above. “Knowledge-based risks” is the short term that sounds most accurate to me, and also the least silly. Suggestions for a better descriptive term are welcome.