This fellowship funds PhD research on technical solutions to minimize existential risks from advanced AI, supporting work in interpretability, alignment, verification, and AI safety.
Funder: Future of Life Institute
Due Dates (Anticipated): November 2026 (Full application deadline, projected)
Funding Amounts: Tuition and fees for up to 5 years; $40,000 annual stipend (US/UK/Canada); $10,000 research fund per fellow
Summary: Supports PhD students conducting research to minimize existential risks from advanced AI, with comprehensive funding and community engagement.
Key Information: Forecasted opportunity; dates and details may shift—check the program page for updates.
The Vitalik Buterin PhD Fellowship in AI Existential Safety, administered by the Future of Life Institute in partnership with the Beneficial AI Foundation, provides comprehensive support to PhD students pursuing research focused on reducing existential risks posed by advanced artificial intelligence. The fellowship covers tuition and fees for up to five years, a competitive annual stipend, and a dedicated fund for research-related expenses, including travel and computing. Fellows also gain access to a vibrant research community, networking events, and workshops centered on AI existential safety.
The program specifically supports technical research that analyzes how advanced AI could lead to existential catastrophe and develops solutions to mitigate such risks. Areas of interest include interpretability, verification, safe objective alignment, and cybersecurity of AI systems. The fellowship aims to foster a research community free from financial conflicts of interest and expects fellows to uphold ethical standards in their subsequent career choices.