Post-Mortem: 2023 Stanford Existential Risks Conference
Introduction
The Stanford Existential Risks Initiative (SERI) hosts an annual conference focused on, you guessed it, existential risks. I believe this conference was previously exclusively online, with this year being the first to offer a hybrid in-person/virtual format. The conference spanned two days, 4/21/23 and 4/22/23, with the first day consisting primarily of in-person presentations streamed to virtual participants, and the second day consisting primarily of virtual presentations. I had a conflict and was only able to attend the first day, but in hindsight I regret this, and in the future I’ll definitely be clearing two full days to pay attention to this conference. Maybe I’ll even travel for it.
This was my first academic conference. During my time as an undergraduate I authored and published a few robotics papers but soon after decided the life of an academic researcher was not for me. I really didn’t know what to expect from this event. This post is a summary of my takeaways from that first conference experience, and will probably be most useful for others who have either never attended an academic conference or at least not attended this one.
Conference Content
This blog is focused on Global Catastrophic Risks (GCRs), which, as most commonly defined, are events that cause a lot of people to die rather suddenly, while existential risks most commonly refer to events that cause everyone to die. While existential risks are a narrow subset of GCRs, the fields are so small, and there is so much overlap in content, confusion about definitions, and uncertainty about the future, that the vast majority of the work in either “field” ends up being directly relevant to the other.
I only recognized a couple of the speakers on the agenda from my previous expeditions into GCR-relevant academic research, but I thought highly of them, and attending the conference virtually was low cost (free! and I could bail at any point if I felt it wasn’t worth my time). I was pleasantly surprised to find that every presentation was both high quality and extremely interesting to me. The day was very information dense, with presentations giving way to fast-paced Q&As that were insightful more often than not. There were short breaks throughout the day that helped me maintain my sanity, but it was an extremely valuable 8 hours.
While I would have preferred reading the underlying paper to get the deepest understanding of each presentation, the value of the conference for me was in the curation. These presentations were selected both for quality and relevance of content. Just by the nature of being presented in a room full of associated scholars who could ask questions or point things out, I can also have confidence that they represent something like the frontier of the field. If a presenter were walking back down a commonly trodden path or bumping into existing research, it seems very likely this would have been acknowledged in that context. This rapid-fire format of content curated within these constraints gave me an overview of the current state of the field that I’m not sure how I could have gotten otherwise. Maybe more importantly, it gave me a list of researcher names and research topics associated with the current frontier of the field that acts as fuel for my own independent research moving forward.
SERI has a YouTube channel, so I’m hopeful they will post recordings of the presentations and I can catch up on the second day that I missed.
If I had attended in person, you could have added networking to this value proposition. There were meals and coffee breaks built into the conference agenda, as well as the Swapcard app for scheduling one-on-ones, which seemed like they would have made it easy to dig deeper into any particular topic I was interested in. This, again, is not something I can get by plowing through academic papers. There were also some online mechanisms for interactions like that, but I didn’t have time to try them out. That being said, I’m excited about the Gather platform that they used for this, having investigated it myself previously, and I wouldn’t be surprised if many people had valuable one-on-one virtual interactions as well.
Risk Cascades
The central theme of the conference was risk cascades: the idea that the manifestation of one particular risk (as we typically categorize them) can lead to another, and perhaps another, with the downstream effects being greater than the initial risk. An example would be the supply chain and government response impacts of COVID perhaps exacting a greater toll on the population than the virus itself.
This is of special interest to the field because these interactions are harder to study than well-categorized individual risks, like a supervolcanic eruption, while also seemingly being neglected in the research. I share the intuition that these interactions within our complex civilizational system will lead to the majority of impact from future catastrophes, and it’s one of the reasons I focus on GCRs. They would be unprecedented events in the modern world, and I’m really not sure at all what knock-on effects they would have, so let’s figure out how to prevent them in the first place! This appreciation for how risks manifest in a complex system also undergirds the Metacrisis thesis of The Consilience Project, which they should be publishing in written form soon.
The conference included examples of early attempts to map out specific risk cascades, but mostly seemed to act as an invitation to the field to think more about these. What cascades are possible or likely? How can we not only identify them, but also easily refer to them and study them? I will be paying close attention to this direction of research, and agree with the implied stance of the conference organizers that it represents perhaps the most fruitful current direction for academics in the field.
Questioning Strong Longtermism
Within the world of Effective Altruism (EA), most discussion of existential risk I see comes associated with a philosophical worldview called longtermism. The hyper-abbreviated summary is that because humanity could survive for a long time, and/or spread among the stars, and/or create digital consciousnesses, there will likely be many more lives in the future than there are today, and they will likely be capable of experiencing much more than we can today. This puts essentially all of the “expected value” (utilitarian perspective) into the future, and means we should be willing to do a lot to preserve this value. Even small chances of extinction events would be worth spending a lot to prevent because extinction would preclude this value.
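The arithmetic behind this argument can be sketched in a few lines. All of the numbers below are hypothetical placeholders chosen only to show the shape of the reasoning; they are not estimates from the conference or the longtermist literature:

```python
# Sketch of the longtermist expected-value argument.
# Every number here is an illustrative assumption, not a real estimate.

future_lives = 1e16        # hypothetical count of potential future lives
extinction_risk = 1e-4     # hypothetical probability of an extinction event
mitigation_effect = 0.1    # hypothetical fraction of that risk a project removes

# Expected number of future lives preserved by the mitigation project:
expected_lives_saved = future_lives * extinction_risk * mitigation_effect
print(f"{expected_lives_saved:.0f}")  # 100 billion expected lives
```

The point of the sketch is that the enormous future term dominates: even a tiny probability multiplied by an astronomically large number of future lives yields a huge expected value, which is why small extinction risks can justify large expenditures under this view.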
While this perspective seemingly dominates conversations about existential risk within the EA world, I’m pretty confident that this is an artifact of the movement’s philosophical roots, and the particular utilitarians who founded it.
The conference included some critiques of this worldview and its usefulness. I agree with most of them, and had encountered some before. I’m going to use the remainder of this post as an excuse to articulate those that resonate with me, but first I’d like to call attention to the fact that this critique is itself part of the conference’s value proposition. By existing mostly outside of the EA sphere of influence, it can offer a separate perspective in a way that something like an EA Global event cannot. I’m hopeful that I’ll discover even more events with different perspectives; being aware of as many as possible seems fantastic.
Most people aren’t hardcore utilitarians
Collapsing human value into one or a few metrics is not how most people think about the world. In what I consider to be a subjective domain, this makes the tight coupling between this particular moral worldview and efforts to mitigate x-risks/GCRs off-putting to many. This reduces the number of people who work on or support these efforts.
We’re extremely uncertain about the odds of extinction, but GCRs seem highly probable
The argument uses “infinite” expected value to justify expensive action to prevent low-probability risks… but as described in the section on risk cascades, we don’t have a good grasp on the upper bound of that risk.
My best-effort forecast to date had a ~20% risk of an event killing >10% of the population within a span of 5 years before 2050, and a ~1% chance of an event reducing the human population below 5,000 before 2050. These aren’t low probabilities, and the second carries much, much more uncertainty! I’ve seen people get hung up on their disagreement with the longtermist justification for extreme actions, and never get to the part where they realize there’s something like a 1-in-5 chance of >1 billion people dying from some event in their lifetime.
Preventing GCRs is intuitively appealing to most people
Most people don’t want other people to die! Pandemics, asteroid impacts, and nuclear wars don’t need complex moral arguments to convince people it’s important to prevent or mitigate them. I’m afraid longtermism has mostly muddied the waters on otherwise clear priorities for humanity. Yes, you can definitely make the case that these areas have been historically underfunded and poorly prioritized… but my worldview doesn’t really allow for a missing complex utilitarian argument to be the reason for that. As people who believe these risks are worth studying and acting against, let’s show our work. How likely are they? How bad are they likely to be? What can we do about them, and what would it cost to do so? How do we best communicate these ideas? How do we drive those costs down?
I care deeply about answering all of those questions, and am happy to work with anyone else interested in them. It doesn’t really matter to me what their moral justification is, or even if they have one that they can articulate. “I don’t want lots of people to needlessly die” is plenty good enough for me. This leads me to want to unbundle GCR research and mitigation efforts from moral arguments. Whatever humanity decides it wants to do for itself out of its panoply of worldviews and priorities, it needs to be informed with accurate information and useful options. In my opinion, both of these categories are grossly deficient for GCRs right now, so let’s get to work and invite anyone else interested to work with us.
The SERI conference better enabled me to do this, and seemed to represent the efforts of a large group to do the same. That’s a win for humanity in my book, so thank you to those who made it possible, and cheers to many more events like it.