AI Safety · Technical + Governance

AI safety is one of the most urgent problems right now.

Not enough people are working on it. We help researchers, engineers, and policymakers pivot into AI safety — through training, research programs, and a community built for the long term.

30+
AI Safety practitioners trained
3
University AI Safety clubs seeded
2
Active global partnerships
2
Tracks — Technical and Governance
Cohort 2 — Applications open now
Technical AI Safety
AI Governance
Research Programs
Field Building
India → Global
Cohort 2 Open
Interpretability
Policy Memos
Multi-Agent Safety
Why it matters

AI is advancing faster than the field studying its risks.

The number of people working seriously on AI safety is a fraction of what the problem demands. The talent exists. The pipeline doesn't.

01
The field is undersized for the stakes.

Roughly 1,100 people globally work on AI safety full-time. The capabilities field is growing 30–40% per year. The gap is not closing — it's widening. More researchers, more policymakers, more governance professionals are needed now.

02
Most of the world's talent is outside the pipeline.

The existing AI safety pipeline is concentrated in the US and UK. India has 1.4 billion people, world-class engineering talent, and significant policy weight — and almost no structured entry point into AI safety work.

03
We close the gap. Technically and at the policy level.

AI safety requires both technical researchers who understand how models fail and policymakers who can translate that into governance. Building one without the other produces incomplete solutions. We run both tracks, together.

What we do

One pipeline. Three stages. Two tracks.

We don't run isolated courses. We run a structured pipeline — from first exposure to original contribution. Everyone starts at reading groups. The path forward depends on how deep you want to go.

01
Entry point
Reading Groups

Weekly sessions open to anyone curious about AI safety. Paper walkthroughs, discussions, guest speakers. No prerequisites. This is where people discover the field, understand the open problems, and decide if they want to go deeper.

Those who want to go deeper join the fundamentals course
02
Core program
Fundamentals Course

A structured cohort covering the foundations of AI safety — what it is, why it matters, and where the open problems are. Participants choose a track based on their background and goals.

Technical track
For engineers and researchers. Alignment, interpretability, multi-agent systems, evals. Output: writeups, EA Forum posts, LessWrong articles, early tools.
Policy track
For lawyers, policymakers, social scientists. AI governance, regulation, institutional design. Output: policy memos, governance briefs, research notes.
Top graduates from the fundamentals course are invited to apply
03
Advanced
Research Fellowship

A deeper program for those ready to produce original work. Fellows work on specific research questions with mentor support, building toward a contribution to the field — a paper, a tool, a policy brief, or a role at a safety organisation.

Our work

AI Safety India Community — our first project.

India is where we started. Not because it's the only place that matters, but because the gap here is among the largest in the world — and we're from here.

Active project — India
AI Safety India Community

India produces one of the world's largest concentrations of AI builders. We run the only structured program routing that talent into safety-focused research and governance. Cohort 1 is complete. Cohort 2 is open.

Our alumni are working in AI safety organisations globally. We've seeded three university AI Safety clubs. We're building the SPAR research pipeline from India and collaborating with ENAIS and AI Safety Atlas internationally.

Apply to Cohort 2 →
30+
AI Safety practitioners trained
3
University AI Safety clubs seeded across India
2
Active international partnerships — ENAIS, AI Safety Atlas
1
Proposal submitted to Cooperative AI Foundation for India research node
About

Built by people working on the problem, not observing it.

Aditya Raj
Founder · AI Safety Researcher

Building the pipeline the global AI safety field is missing — starting with India. Active researcher and field-builder, currently a SPAR Fellow working on technical AI safety.

SPAR Research Fellow — technical AI safety researcher
BlueDot Impact Alumnus — AI safety fundamentals
Jailbreak Hackathon — Top 30 globally (Gray Swan)
Ran Cohort 1 — 30 researchers trained, 3 university clubs launched
"AI safety is urgent. The people who will solve it don't all live in San Francisco. We're building the infrastructure to find them, train them, and place them where it matters."

AI Safety Collective is a global organisation. The AI Safety India Community is our current project. As the model is validated, we will expand to other underserved geographies where technical talent exists and the pipeline doesn't.

We are actively building partnerships with global AI safety organisations, funders, and researchers. If you're working on the same problem, we want to talk.

India — Active · Southeast Asia — Next
Get involved

Three ways to work with us.

Whether you want to learn, collaborate, or support — there's a place for you in this work.

For researchers
Apply to Cohort 2
10 weeks. Technical or governance track. Open to students, engineers, and policy professionals. Applications close soon.
For organisations
Partner with us
Co-develop curriculum, host fellows, connect your research agenda to the India pipeline. We're open to serious collaborations.
For funders
Support the work
We are building the top-of-funnel the global AI safety field is missing. Read our proposal or get in touch to discuss.