The AI Safety Fund supports global research projects advancing the responsible and safe development of advanced AI systems, focusing on urgent challenges such as evaluation, biosecurity, cybersecurity, and governance.
Funder: Frontier Model Forum
Due Dates (Anticipated): January 2027
Funding Amounts: Grants typically range from $150,000 to $400,000; overall fund exceeds $10 million; project durations generally 1 year.
Summary: Supports independent research and projects that advance responsible AI development and address urgent AI safety and security challenges.
Key Information: Next round of funding is anticipated for January 2027; topics may include biosecurity, cybersecurity, and evaluation of advanced AI capabilities.
The AI Safety Fund (AISF) is a global initiative managed by the Frontier Model Forum, established to accelerate and expand research in AI safety and security. The fund supports independent research on urgent challenges in the safe development and deployment of frontier AI models: minimizing risks to public safety, advancing responsible AI practices, and creating standardized, third-party evaluations of advanced AI systems. The AISF is backed by leading AI companies (Anthropic, Google, Microsoft, OpenAI) and major philanthropic organizations, with a fund size exceeding $10 million.
AISF grants focus on narrowly scoped, high-impact projects that address critical bottlenecks in AI safety, such as methodologies for evaluating dangerous capabilities, biosecurity and cybersecurity risks, transparency, model alignment, and governance infrastructure. The fund welcomes proposals from researchers worldwide, especially those affiliated with academic institutions, research organizations, NGOs, and social enterprises.