We believe that progress in AI could lead to a radical transformation of our society within the next century. We want to contribute positively to that development. In particular, we are interested in:
- Cooperative AI. How can increasingly autonomous systems cooperate peacefully with one another?
Global priorities research
We think that our grants will have the highest impact if we focus on shaping the long-term course of civilization. However, it’s unclear what this implies in practice. We want to fund work that informs our prioritization. In particular, we are interested in:
- Drivers of catastrophe. What sources of large-scale harm in the future should we focus on?
- Crucial considerations. What are we missing when thinking about the long-term future? (e.g., considerations related to infinite ethics or extraterrestrial civilizations)