AGI Safety Fundamentals

IMPORTANT: Since the founding of AI Alignment McGill (AIAM), the AGI Safety Fundamentals reading groups run by Effective Altruism McGill have been discontinued. If you want to hear about future AI safety reading groups or events at McGill, fill out this form.

Apply for our AGI Safety Fundamentals reading groups (DISCONTINUED)

AI Alignment

A seven-week program (with an optional project) starting in February. For updates, please subscribe to our mailing list and join our Discord.

The AI Alignment reading group is a student-led, semester-long program examining approaches to AI alignment. It focuses in particular on the value alignment problem: how to robustly align an AI to understand and follow human values. Topics include AI interpretability and explainability, inverse reinforcement learning, recursive reward modeling, and how to pursue a career in AI safety.

Participants are divided into groups of 4-6 people. Each week, each group meets with its discussion facilitator for 1.5 hours to discuss the readings and exercises, which take ~2 hours to complete.

You can view a list of topics in this program at https://www.agisafetyfundamentals.com/ai-alignment-curriculum.

Applications close January 27th, 11:59 pm ET.

Alignment 201

A nine-week program (seven weeks of readings, then two weeks focused on developing a literature review and/or research proposal) starting in February. For updates, please subscribe to our mailing list and join our Discord.

The Alignment 201 reading group examines approaches to AI alignment in greater depth than its 101 counterpart. Weeks 6 and 7 branch into three parallel tracks: participants can choose to focus on Eliciting Latent Knowledge (ELK), Agent Foundations, or the Science of Deep Learning.

Participants are divided into groups of 4-6 people, matched by prior knowledge of machine learning. Each week, each group meets with its discussion facilitator for 1.5 hours to discuss the readings and exercises, which take ~2 hours to complete.

You can view a list of topics in this program at https://www.agisafetyfundamentals.com/alignment-201-curriculum.

Applications close January 27th, 11:59 pm ET.


Recommended AI Safety Content