Below are some of the areas in which we are currently exploring grantmaking opportunities, though we are open to any grants focused on reducing long-term suffering and would like to further diversify our grant portfolio.


Digital sentience welfare

We’re interested in grants for advocacy related to artificial sentience. The NYU Mind, Ethics, and Policy Program is one of our grantees working in this area. (More)


Cooperative AI

We’re interested in promoting cooperation and avoiding conflict between AI systems, and in improving the cooperative intelligence of AIs.

The Center on Long-Term Risk, the Cooperative AI Foundation, and the Foundations of Cooperative AI Lab at CMU are grantees of ours in this area.


Reducing long-term risks from malevolent actors

Malevolent actors—characterized by elevated narcissistic, psychopathic, sadistic, spiteful, or other dark traits—in positions of power could negatively affect humanity’s long-term trajectory. Malevolent actors with access to advanced technology, in particular transformative AI, could substantially increase existential risks and long-term suffering. (More)


Reducing risks from fanatical and extremist ideologies

Fanatical and extremist ideologies—such as Nazism, communism, or certain strands of religious fundamentalism—have contributed to many of history’s most catastrophic conflicts. We’re interested in reducing the influence of such damaging ideologies and in promoting a more peaceful, democratic, enlightened, and compassionate world.


Responsible technology companies

We make investments in socially beneficial technology companies. Our focus is mostly on AI, but we are exploring adding other companies to our portfolio. Anthropic is one of our investments in this area.


Less developed areas that we’re also interested in funding

  • AI governance / security work, with particular relevance to s-risk.
  • Work on reducing incidental future suffering.
  • Mapping out strategic disagreements between people working on reducing s-risks, exploring worldview diversification, and locating potential points of compromise.
  • Soliciting external input on how best to reduce long-term suffering, including critiques of our current work and priorities. This could take the form of essays, surveys, or interviews, for instance. We are open to compensating external people for such work.
  • Research on the optimal spending rate in light of potentially increased future strategic clarity, AI timelines, and other considerations. 
  • Whatever else we’re missing, including developing entirely new projects and cause areas: Improving the long-term future is fraught with uncertainty, though there’s a tendency to habituate to this fact over time. It’s entirely possible that our current efforts are misguided.  


Non-grant opportunities / hiring

  • We’re interested in hiring for positions in grantmaking, operations, outreach, career advice, and exploratory AI venture investing. Please get in touch with your CV if you are interested.
  • Several of our grantees and portfolio companies are hiring.


Concluding notes

Most of our grants so far have been to research organizations inside and outside of academia. However, we are interested in funding a wide variety of efforts, including organizations or individuals pursuing more practically oriented projects.

Note that the most effective opportunities to reduce s-risks may not be in any of the above areas. Depending on your unique circumstances, skills, and talents, other paths, such as working in AI or politics or becoming a public intellectual, may be much more promising than anything listed above.

See here for an introduction to s-risks.

You can contact us at info@polaris-ventures.org.