Team

Rebecca Gorman, Co-Founder and CEO 

Rebecca grew up as a tech hobbyist in Silicon Valley and, as a technologist, has pursued a lifelong dedication to making technology serve users’ true values. While working as a real estate agent in the Valley, she continued to engage with start-ups and pursue AI research. She spent the pandemic developing her alignment research ideas into papers with Dr Stuart Armstrong (then of the Future of Humanity Institute at the University of Oxford) and other researchers, and preparing Aligned AI for launch.

Dr Stuart Armstrong, Co-Founder and Chief Research Officer

Previously a Researcher at the University of Oxford’s Future of Humanity Institute, Stuart is a mathematician and philosopher and the originator of the value extrapolation approach to artificial intelligence alignment. He has extensive expertise in AI alignment research, having pioneered such ideas as interruptibility, low-impact AIs, counterfactual Oracle AIs, the difficulty/impossibility of AIs learning human preferences without assumptions, and how to nevertheless learn these preferences. Along with journal and conference publications, he posts his research extensively on the Alignment Forum. 

Jessica Cooper, Research Scientist & Northern California Branch Lead

Jessica Cooper runs the San Francisco branch of Aligned AI. Previously she worked as a research scientist at the University of St Andrews, building deep neural networks for digital pathology. She holds a BA in Fine Art, an MSc in Advanced Computer Science, and a PhD in AI Interpretability.

Fazl Barez, Research Scientist

Fazl previously worked on interpretable machine learning at Amazon and Huawei. He is the founder of the Edinburgh Alignment & Safety Hub and the co-founder of Edinburgh Effective Altruism.

Oliver Daniels-Koch, Technical Alignment Research Intern

An ML practitioner and former intern at Charles River Analytics, where his research focused on using Pearlian causal models to build explainable, competency-aware reinforcement learning agents.


Advisors

Dr Anders Sandberg, Information Hazards Policy Advisor

Fellow, Ethics and Values at Reuben College, Oxford

Senior Researcher, Future of Humanity Institute, Oxford

Dylan Hadfield-Menell, Research Advisor

Assistant Professor of Artificial Intelligence at MIT

Co-Founder and Chief Scientist of Preamble
Expert in Cooperative Inverse Reinforcement Learning

Romesh Ranawana

Serial entrepreneur, AI technologist, programmer, and software architect with more than 20 years of deep-tech development experience, and a highly experienced technology chief executive.

Member of the Board of Management of the University of Colombo School of Computing and founding chairman of the SLASSCOM AI Center of Excellence (AICx).

Co-founder of SimCentric Technologies and Co-Founder and CTO of Tengri UAV

Adam Gleave

Adam Gleave is an artificial intelligence PhD candidate at UC Berkeley working with the Center for Human-Compatible AI. His research focuses on adversarial robustness and reward learning, and his work on adversarial policies was featured in the MIT Technology Review and other media outlets.

Justin Shovelain, Ethics Advisor

Co-founder of Convergence

AI safety advisor to Causal Labs and Lionheart Ventures

Has worked with MIRI, CFAR, EA Global, and Founders Fund

Charles Pattison

Charles has 15 years’ experience in capital markets, from pricing derivatives to investing in listed and unlisted equities. He currently works at a large Asia-based equity-focused fund.
