Summary: in this essay, I examine how terraforming may pose an s-risk for wild animals. From a perspective of suffering reduction, I explore the political and technological factors influencing the scope, intensity, and probability of s-risks from terraforming or other space colonization efforts that would spread biological life throughout the solar system. I also explore a possible strategy for mitigating s-risks from terraforming. Finally, I evaluate how much of a priority s-risks from terraforming should be relative to other s-risks, and recommend how s-risk reducers should prioritize them going forward.
Painting a Picture: A Potential Future of Accelerated Terraforming
Many suffering reducers are concerned about the possibility of space colonization spreading wild animals who live lives dominated by suffering. From ‘Will Space Colonization Multiply Wild Animal Suffering?’ by Brian Tomasik:
Scientists and futurists often encourage humanity to spread out into the galaxy, in order to reduce risks to human survival, build new civilizations, and bring life to desolate planets. One fairly realistic proposal is the terraforming of Mars, in which the Red Planet would be made increasingly hospitable to life, until plants and animals could be sent to live there. More speculatively, some have advocated “directed panspermia” — disbursing genetic and microbial seeds of life into space in the hopes that a few will land on and populate other planets. Finally, some futurists imagine virtual worlds in which computational agents would interact; these would sometimes include animals in the wild.
In these scenarios, animals are proposed to be used as a means to help humans, as a vehicle for spreading life to the stars, or as part of realistic virtual environments. What is almost always ignored is that these animals would have feelings, and many would suffer enormously as a result of being created into Darwinian ecosystems in which predation, disease, and premature death are endemic. Theologians ask why a good God would create so much cruelty in nature. Future humans should likewise ponder whether it’s morally acceptable to create new ecosystems containing vast amounts of suffering, or whether they should pause and seek more compassionate alternatives if they embark upon space colonization.
One case of space colonization potentially spreading wild animal suffering, and one that could begin within decades, is the terraforming and subsequent human colonization of Mars by individuals like Elon Musk and his company SpaceX. Many people may be skeptical that a single company or individual, no matter how many billions of dollars they have, could alone terraform or colonize space. However, visionary technologists like Musk can advance the state of technological development for terraforming and colonizing other planets, which in turn could inspire nation-states or future supra-national political entities to colonize space. Nation-states or a future world government would have many more resources to marshal toward terraforming and colonizing other planets. While state actors may be perceived as less scientifically and technologically innovative, as long as non-state actors make theoretical and technological progress toward terraforming and colonizing other planets, state actors need only copy their work and implement it at a scale that makes terraforming genuinely possible. Indeed, state and non-state organizations in the United States have a long history of cooperating to advance humanity's reach into space. With companies like SpaceX working with multiple countries to launch satellites, and the possibility of governments and private companies cooperating to mine asteroids in the near or medium term, space colonization by state and non-state actors takes on a global dimension.
Efforts in earnest by national governments to colonize Mars have begun with the United Arab Emirates' recent announcement of its plan to settle Mars with 600,000 people within 100 years. One might naively expect that countries leading the world economically and scientifically, like China or the United States, would announce plans to colonize Mars before the United Arab Emirates (UAE). However, the UAE is an absolute monarchy with the seventh-largest oil reserves in the world, making its government exactly the kind with not only vast economic resources but also the political will and ability to channel them into massive, long-term missions like colonizing Mars. The UAE's early entrance into the game of interplanetary colonization could be an omen of things to come. As technology advances, national economies grow, and some resources on Earth become scarcer, economic competition to colonize Mars and other planets could break out. This international competition could take on a more thoroughly political dimension, like the Space Race. While a zero-sum competition to colonize Mars and the rest of the solar system might seem absurd and irrational, a glance at history shows that with transformative technologies posing existential risks, humanity and our governments have not been as careful as we could or should have been about minimizing the chances of extinction. We should not by default expect better from humanity in rationally pursuing the goal of preventing risks of astronomical suffering either. What's more, conditions in international politics could arise in which great powers are incentivized to invest huge resources in colonizing space as part of economic and political competition. Under these conditions, even rational state actors may have difficulty opposing their incentive gradient, despite their incentives being misaligned with their values.
From the Wikipedia article on the Prisoner’s Dilemma:
In international political theory, the Prisoner’s Dilemma is often used to demonstrate the coherence of strategic realism, which holds that in international relations, all states (regardless of their internal policies or professed ideology), will act in their rational self-interest given international anarchy. A classic example is an arms race like the Cold War and similar conflicts. During the Cold War the opposing alliances of NATO and the Warsaw Pact both had the choice to arm or disarm. From each side’s point of view, disarming whilst their opponent continued to arm would have led to military inferiority and possible annihilation. Conversely, arming whilst their opponent disarmed would have led to superiority. If both sides chose to arm, neither could afford to attack the other, but both incurred the high cost of developing and maintaining a nuclear arsenal. If both sides chose to disarm, war would be avoided and there would be no costs.
Although the ‘best’ overall outcome is for both sides to disarm, the rational course for both sides is to arm, and this is indeed what happened. Both sides poured enormous resources into military research and armament in a war of attrition for the next thirty years until the Soviet Union could not withstand the economic cost. The same logic could be applied in any similar scenario, be it economic or technological competition between sovereign states.
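The arms-race logic in the quoted passage can be sketched as a simple payoff matrix. The numeric payoffs below are illustrative assumptions, chosen only to match the ordering the text describes (superiority is best, mutual disarmament next, a costly standoff next, annihilation worst), not empirical values:

```python
ARM, DISARM = "arm", "disarm"

# payoffs[(my_move, rival_move)] -> my payoff; values are assumed for
# illustration and only their ordering matters.
payoffs = {
    (ARM, DISARM): 3,     # superiority over a disarmed rival
    (DISARM, DISARM): 2,  # mutual disarmament: war avoided, no costs
    (ARM, ARM): 1,        # costly standoff, but no annihilation
    (DISARM, ARM): 0,     # military inferiority, possible annihilation
}

def best_response(rival_move):
    """Return the move that maximizes my payoff, given the rival's move."""
    return max((ARM, DISARM), key=lambda my_move: payoffs[(my_move, rival_move)])

# Arming strictly dominates: it is the best response to either rival move,
# so (arm, arm) is the unique equilibrium...
assert best_response(ARM) == ARM
assert best_response(DISARM) == ARM

# ...even though mutual armament leaves both sides worse off than
# mutual disarmament, which is the dilemma.
assert payoffs[(ARM, ARM)] < payoffs[(DISARM, DISARM)]
```

The same matrix applies if "arm" is replaced with "race to colonize space": whenever unilateral restraint is the worst outcome, each rational actor's dominant strategy is to compete.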
In a future where nuclear weapons and other extremely powerful technologies, if developed internationally for military purposes, would logically end in mutually assured destruction, space colonization as a form of economic competition may become an attractive non-violent alternative for international rivalry. All of this could create strong incentives worldwide to advance the technology not only to make space colonization and terraforming viable, but to make them happen on shorter timescales. Even if a world government eventually forms as a singleton to coordinate an end to international space races, in the interim the global advancement of terraforming and space-colonization technology may create a culture in which both are seen as highly desirable. Thus even a singleton representative of human interests may be incentivized to develop technologies that accelerate civilization's potential to terraform and colonize other planets.
Differential Technological Development and the Hedonistic Imperative
An idea in the fields of global coordination and existential risk studies is that of differential technological development, first put forward by philosopher Nick Bostrom. A proposal for reducing wild animal suffering put forward by Bostrom's fellow pioneer of transhumanism, David Pearce, is that of the Hedonistic Imperative and the Abolitionist Project. The emerging technologies, such as new bio- and nanotechnologies, that would accelerate the feasibility of terraforming other planets lie in the same fields as those that would accelerate the feasibility of the Hedonistic Imperative and the Abolitionist Project. In a world where suffering reducers face daunting odds of spreading values focused on moral circle expansion and s-risk reduction, and thus fail to make the decision-makers in charge of space colonization care about the impact of terraforming on wild animal welfare, a redundancy measure would be to:
- Identify the likely paths by which biotechnology, nanotechnology, and other fields would be advanced to accelerate the timeline for the feasibility of the Hedonistic Imperative and the Abolitionist Project, versus the timeline for the feasibility of terraforming Mars and other planets.
- Advance a campaign of differential technological development in biotechnology, nanotechnology, and other fields, advocating investment in projects that accelerate the feasibility of the Hedonistic Imperative and the Abolitionist Project at a rate at least commensurate with the rate at which the feasibility of terraforming advances in the same technological fields.
This strategy would act as a redundancy measure under the assumption that terraforming and other space-faring projects potentially creating astronomical amounts of suffering aren't prevented, but that technologies preventing otherwise astronomical amounts of suffering are developed in lockstep and incorporated into those projects. Ideally, it would allow s-risk reducers to track technological advancements around the world that accelerate the feasibility of terraforming, and to respond by correspondingly advancing technologies that accelerate the feasibility of the Hedonistic Imperative and the Abolitionist Project.
Conclusion: Advanced AI as a Tempering Consideration
All of the above considerations might also be rendered moot by advanced AI, such as a Seed AI undergoing recursive self-improvement to become superintelligent, being created before the technologies that would make either terraforming or the Hedonistic Imperative feasible. Indeed, even granting the strong possibility of worldwide political will for humanity to colonize and terraform bodies throughout our solar system and beyond unaided by advanced AI, the consensus in effective altruism is that superintelligent AI could be created within a century, while without the benefit of advanced AI it's commonly thought that both terraforming Mars (let alone other planets) and the Hedonistic Imperative could take several centuries to bring about. In conversation with Brian Tomasik on this subject, he concurred that on those timescales he also expects biological life and intelligence to be replaced by digital/machine alternatives. Ultimately, none of the considerations in this post should be enough to sway s-risk reducers away from prioritizing values spreading, moral circle expansion, global coordination, and AI alignment, all predicated on the assumption that machine intelligence will transform life on Earth before the spreading of biological life to other celestial bodies becomes a practical consideration. Nonetheless, under the assumption that timelines for advanced AI development are long, the considerations in this essay would deserve relatively more attention from s-risk reducers. While a full-blown project focused on mitigating s-risks from terraforming isn't justified at this stage, I'd like to see a medium-depth dive into strategies for differential technological development to prevent s-risks from the spread of biological life beyond Earth in the scenario where timelines for superintelligent AI are long (i.e., well over a century).
Appendix: Lessons Learned in Writing This Essay
In writing this essay, I brought together concepts from several intellectual communities associated with the effective altruism movement: AI alignment, prioritization research, existential risk reduction, transhumanism, rationality, and anti-speciesism/anti-substratism. One thing I realized is that ideas more often applied to mitigating risks of human extinction, such as differential technological development, the importance of global coordination, and the applicability of game theory, are also critical for mitigating risks of astronomical suffering. S-risk reducers benefit in multiple ways from cooperating with others on common intellectual projects, and in doing so they can also drive differential intellectual progress toward consideration of s-risk reduction throughout communities focused on the scientific advancement of emerging technologies. In this vein, in addition to closely following the output of organizations like the Foundational Research Institute, I recommend s-risk reducers follow the output of the Future of Humanity Institute and the rationality community on sites like LessWrong for new crucial considerations and key insights for dealing with astronomical stakes, and contribute to the common intellectual goals shared by all far-future-focused effective altruists.