Extreme Risks Due to Uncertainty

Summary: I describe ‘extreme risks due to uncertainty’, explain how various types of uncertainty contribute to the risk of an ongoing moral catastrophe, discuss their ramifications for effective altruism, and provide some examples and features of these risks.

Two types of existential risk are extinction risks and risks of astronomical future suffering (s-risks). AI alignment as a field, and the rationality community, emphasize astronomical stakes. There is indeed a coherent argument that a misaligned machine superintelligence poses the likeliest worst-case scenario from which humanity or its descendants would have no hope of bouncing back. However, the astronomical-stakes argument also applies, to a lesser extent, to existential risks from other types of technology. Yet effective altruists sometimes apply the rhetorical force of astronomical stakes to AI alignment in a way they don’t for other x-risks. Misaligned AI is thought to pose everything from the immediate extinction of humanity, to s-risks, to (if poorly aligned but not threatening extinction) preventing humanity’s will from being carried out throughout the light cone.

It seems the categories used by effective altruists to identify the nature of risk reduction projects are:

  • Global catastrophic risk (GCR) reduction or existential risk (x-risk) reduction, the more generic terms.
  • AI alignment, indicating a more exclusive focus on the impact transformative AI will have on humanity, life and civilization.
  • S-risk reduction, a focus on mitigating cases or risks of astronomical suffering. Arguably factory farming and wild animal suffering are not risks but ongoing cases of catastrophic suffering.

A paper I’ve seen shared by effective altruists before is this one on the possibility of an ongoing moral catastrophe.

Given how neglected they are by everyone else, I imagine many effective altruists would argue their causes are cases of ongoing moral catastrophes going unacknowledged. While in a sense that’s fair, the way we should think about the possibility of an ongoing moral catastrophe is as one whose solutions are unknown: open problems past the boundaries of present-day science and philosophy. These are risks of potential catastrophe due to uncertainty.

An example of this is not having enough knowledge to be confident in the sentience or moral relevance of non-human animals under some worldviews. There is philosophical uncertainty about what to value at all, and how much that should include non-human well-being. Most value systems, however, seem able to give some weight to non-human animal well-being. Even then, there’s the question of how much animals’ experiences or preferences matter relative to those of humans; that’s a matter of moral uncertainty. Then, assuming a system which assigns proportionate moral weight to all species given some data, we need to find out what that data is. This is a matter of questions like which species have morally relevant experiences, which many effective altruists believe can be answered empirically. (Many philosophers would argue questions about sentience are also a matter of philosophical uncertainty, but given shared assumptions about what’s possible with empirical science, questions about sentience seem tractable.)

There are a few theoretical clusters of questions/problems which, depending on how they’re answered, will dramatically alter the moral weight assigned to possible trajectories of the long-run future. Some examples:

  • How much moral weight to assign to experiences like suffering vs. happiness, what else contributes to well-being, and whether things besides well-being (e.g., justice, art, culture) have intrinsic moral value are examples of moral uncertainty.
  • Whether sentience can be defined and measured as close to objectively as possible, and which species have a real subjective experience of pain/pleasure, are partly questions of empirical uncertainty.
  • Questions of population ethics, philosophy of identity, philosophy of consciousness, cause prioritization models and meta-ethics have practical ramifications. However, these questions are so abstract compared to what effective altruists mostly focus on that they may be more appropriately termed matters of “philosophical uncertainty”.
  • How readily and validly to accept resolutions to intellectual problems in general, whether in our own normative ethics or with implications for x-risk and AI alignment, is a matter of disagreement over the best epistemology. Uncertainty about which epistemology to use to resolve a problem is called “model uncertainty”. Doing morally relevant epistemology is a new enough phenomenon in effective altruism that there doesn’t seem to be a consistent term for working on epistemological problems directly, whether that be cause prioritization, decision theory, or other theoretical research. Another term could be “epistemic uncertainty”.

It seems the answers to these questions will have huge implications for cause prioritization. So, in addition to the existing buckets like “x-risk”, “AI alignment” and “s-risk reduction”, I’d posit another: catastrophic risk due to uncertainty. Here are some things the ‘risks due to uncertainty’ have in common:

  • Unlike x-risks, which mostly have a technological or political cause, these risks due to uncertainty are all based on knowledge. That means the evaluation of the moral value of an outcome can hinge entirely on changing one’s beliefs.
  • Unlike philosophical problems throughout most of history, resolving the philosophical problems effective altruists care about is a race against the clock. Whether to colonize space, or how to align transformative AI, are questions whose answers are of practical importance to effective altruism. Breaking these big questions down into factors which can be more easily addressed is the purpose of much research in effective altruism.
  • Organizing neglected theoretical research professionally, to look more like what’s typical of science, is different from most philosophy. Unlike philosophical questions from Ancient Greece that remain unanswered today, effective altruism has to find satisfactory resolutions to the issues of uncertainty it faces on timescales as short as several decades, since they’re necessary for making irreversible decisions which will determine how valuable outcomes in the long-run future are.

I have thought of some terms which could describe this concept: “uncertainty risks”; “moral risks” (m-risks); “epistemic risks”; or “knowledge-based risks”. None of these are great. “Catastrophic risk due to (moral/empirical/model/epistemic) uncertainty” is a definition which broadly captures all instances of the idea I’ve given above. “Knowledge-based risks” is the short term that sounds most accurate to me while also sounding the least silly. Suggestions for a better descriptive term are welcome.
