Summer Research Fellowship

A 3-month interdisciplinary program connecting researchers with AI safety mentors.

Work on projects at the intersection of your field and AI safety. The program focuses on selecting excellent researchers and matching them with mentors suited to their experience and goals. It is aimed at researchers from diverse fields with prior research experience who are looking to transition into AI safety.

Duration: 3 months (June–September)
Cohort: ~20 fellows
Stipend: $4,000/month + housing support
Location: US or UK (in-person; remote for exceptional candidates)

Applications for 2025 are closed

Program Structure & Benefits

The fellowship includes:

  • 8-week pre-program reading group on AI risks (April-May, remote)
  • Multi-day opening and closing retreat with cohort and AI safety researchers
  • 3 months full-time research with dedicated mentor
  • Shared office space with regular speaker talks and community events
  • Final symposium presentation in September

Financial support:

  • $4,000/month stipend (full-time commitment)
  • $1,000/month housing allowance
  • All meals provided at office (lunch, dinner, snacks)
  • Fellowship-related travel costs covered
  • Visa support letters available

Past fellows have gone on to positions at AI safety labs, UK AISI, academia, and independent research.
View talks from previous cohorts on our YouTube page.

Who Should Apply

The fellowship is for researchers motivated to contribute to AI safety with expertise in fields studying complex and intelligent systems.

Relevant fields include but are not limited to:

  • Mathematics
  • Neuroscience and cognitive science
  • Dynamical systems theory
  • Physics and physics of information
  • Philosophy (particularly philosophy of science, mind, or ethics)
  • Political and economic theory
  • Social and legal theory
  • Ecology and evolutionary biology
  • Linguistics
  • Media studies

While aimed at PhD and postdoctoral researchers, we welcome applicants with substantial research experience regardless of credentials. We accept applicants from all countries.

You do not need a specific project in mind when applying. We help match fellows with mentors and develop projects during the interview process.

Cooperative AI Track

Since 2025, the fellowship has offered a dedicated cooperative AI track supported by the Cooperative AI Foundation. Up to 6 fellows focus on research that reduces multi-agent risks and improves the cooperative intelligence of advanced AI systems.

Research areas include:

  • Understanding and evaluating cooperation-relevant capabilities
  • Multi-agent interactions and emergent behavior
  • Information asymmetries and transparency
  • Fundamental research on cooperation in complex systems

We especially welcome researchers from game theory, multi-agent systems, behavioral economics, organizational psychology, network science, political science, anthropology, and biology studying collective behavior.

Application Process

Applications open in winter for the following summer. The process includes:

Stage 1: Written application

  • CV/résumé
  • Personal statement (600-800 words on research background and motivation)
  • Past work samples (optional but recommended)

Stages 2–4: Interviews and project development

Multiple interview rounds to discuss research interests, develop project proposals, and match with mentors. Cooperative AI track applicants have an additional review stage with CAIF.

For me, PrincInt is a fertile space where ideas have slack, creating potential to bring truly new perspectives to the field of AI safety.

Alexandre Variengien

Independent Researcher, ex-Technology Specialist at the EU AI Office

The PIBBSS fellowship was a great environment to do research. I found the research strategy coaching provided by the PrincInt team very beneficial, and I particularly enjoyed bouncing ideas around with the other fellows.

Magdalena Wache

Researcher at Fraunhofer Institute for Secure Information Technology

PrincInt provided me an incredibly open and supportive environment to start thinking about AI safety and related topics, and fostered an environment where a broad range of ideas were welcomed and encouraged. My work and research direction have been strongly shaped by my experience with PrincInt, and I still enjoy the community formed during my time with them.

Nischal Mainali

PhD candidate in computational neuroscience in Burak Lab

PrincInt is one of the few communities (carefully designed and cultivated) to be truly interdisciplinary in all the ways that are relevant to long-term AI safety. If you’re wondering how the brain compares to modern foundation models, you will be in good company. If you want to understand how law and policy can adapt to mitigate near and long-term harms from AI, you will find instructive collaboration here. If you want to investigate how AI’s behavior compares to human cognitive tendencies and fallacies, or even other forms of biological and social intelligence, you will find expertise in each of these domains at PrincInt. The exciting interdisciplinary ideas and community sparked by the PrincInt environment is uncommon and so necessary for truly impactful AI safety research. The long-term answers for AI safety will come from a coming together of these fields, and PrincInt is one of the few environments that is designed to help us get there.

2025 PIBBSS Fellow

Timeline

Applications typically close in late January. Final decisions by end of March.