Dylan Hadfield-Menell, Research Advisor

Assistant Professor of Artificial Intelligence at MIT

Co-Founder and Chief Scientist of Preamble
Expert in Cooperative Inverse Reinforcement Learning

Adam Gleave

Adam Gleave is a PhD candidate in artificial intelligence at UC Berkeley, working with the Center for Human-Compatible AI. His research focuses on adversarial robustness and reward learning, and his work on adversarial policies was featured in the MIT Technology Review and other media outlets.

Justin Shovelain, Ethics and Safety Advisor

Co-founder of Convergence

AI safety advisor to Causal Labs and Lionheart Ventures

Has worked with MIRI, CFAR, EA Global, and Founders Fund

Dr Anders Sandberg, Information Hazards Policy Advisor

Fellow, Ethics and Values at Reuben College, Oxford

Senior Researcher, Future of Humanity Institute, Oxford

Romesh Ranawana, Commercialisation Advisor

Serial entrepreneur, AI technologist, programmer and software architect with more than 20 years of deep-tech development experience, and a highly experienced technology chief executive.

Member of the Board of Management of the University of Colombo School of Computing and founding chairman of the SLASSCOM AI Center of Excellence (AICx).

Co-founder of SimCentric Technologies and Co-Founder and CTO of Tengri UAV

Charles Pattison, Finance Advisor

Charles has 15 years' experience working in capital markets, from pricing derivatives to investing in listed and unlisted equities. He currently works at a large Asia-based equity-focused fund.


Rebecca Gorman, Co-Founder and CEO

Rebecca grew up as a tech hobbyist in Silicon Valley and as a technologist has pursued a lifelong dedication to finding ways of making technology serve users’ true values. While working as a real estate agent in the Valley, she continued to engage with start-ups and pursue AI research. She spent the pandemic developing her alignment research ideas into papers with Dr Stuart Armstrong (then of the Future of Humanity Institute at the University of Oxford) and other researchers and getting Aligned AI ready for launch.

Dr Stuart Armstrong, Co-Founder and Chief Research Officer

Previously a Researcher at the University of Oxford’s Future of Humanity Institute, Stuart is a mathematician and philosopher and the originator of the value extrapolation approach to artificial intelligence alignment. He has extensive expertise in AI alignment research, having pioneered such ideas as interruptibility, low-impact AIs, counterfactual Oracle AIs, the difficulty/impossibility of AIs learning human preferences without assumptions, and how to nevertheless learn these preferences. Along with journal and conference publications, he posts his research extensively on the Alignment Forum. 

Jessica Cooper, Technical Alignment Research Scientist

Jessica Cooper runs the San Francisco branch of Aligned AI. Previously she worked as a research scientist at the University of St Andrews, building deep neural networks for digital pathology. She holds a BA in Fine Art, an MSc in Advanced Computer Science, and a PhD in AI Interpretability.

Dr Adam Bell, IP Counsel

Adam Bell holds a D.Phil. in Biochemistry from Oxford University and a J.D. from the University of California. He worked at the Viral and Rickettsial Disease Laboratory (VRDL) in California; was patent counsel and interim board secretary for AcelRx, Inc.; patent counsel for Durect, Inc.; and patent attorney for Incyte Genomics. Presently Adam serves on the boards of Scottish Bioenergy Ltd., IPLEGALED, Inc., and WABESO Enhanced Enzymatics, Inc. When not working, you will find Adam attempting to ski, climb and fly helicopters.

Oliver Daniels-Koch, Technical Alignment Researcher

ML practitioner and former intern at Charles River Analytics. At Charles River, his research focused on using Pearlian causal models for explainable, competency-aware reinforcement learning agents.
