This fellowship supports postdoctoral researchers conducting technical research to reduce existential risks from advanced AI, emphasizing interpretability, alignment, formal analysis, and cybersecurity.
Funder: Future of Life Institute
Due Dates: January 2027 (anticipated full application deadline)
Funding Amounts: $80,000 annual stipend plus $10,000 research fund; typically one year of postdoctoral support, renewable.
Key Information: Applicants must secure a mentor at their host institution and commit to AI existential safety research.
This fellowship, run by the Future of Life Institute in partnership with the Beneficial AI Foundation, aims to support promising postdoctoral researchers focused on AI existential safety. The program seeks to foster a collaborative research community dedicated to analyzing and mitigating the most probable ways advanced AI could cause existential catastrophes, such as human extinction or irreversible harm to humanity's potential. Fellows are expected to participate in annual workshops and events to facilitate networking and knowledge exchange.
The fellowship emphasizes technical research, including but not limited to interpretability, alignment, formal analysis, and cybersecurity.
Research that addresses only non-existential risks (e.g., fairness in recidivism prediction, general AI competence) falls outside the scope unless directly tied to existential risk reduction.