The world's most advanced alignment platform for AI.
Stay informed about the level of bias in your foundation models
Measure bias in foundation models you use
Don't get surprised by biased model behaviour
Foundation model APIs update their models regularly. Stay informed about changes in model bias.
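As an illustration of the kind of monitoring described above, here is a minimal sketch (the function names, the metric choice, and the threshold are our own illustrative assumptions, not a description of any specific product): a simple demographic-parity gap computed over binary model decisions for two groups, re-run whenever the upstream model updates, with a check that flags a large shift between versions.

```python
def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def bias_gap(group_a, group_b):
    """Absolute difference in positive-outcome rates between two groups
    (a simple demographic-parity gap; 0.0 means no measured disparity)."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

def bias_drift(old_gap, new_gap, threshold=0.05):
    """Flag when the bias gap moves by more than `threshold` between
    model versions (the threshold here is an arbitrary illustrative value)."""
    return abs(new_gap - old_gap) > threshold

# Example: decisions for two groups under two model versions.
v1_gap = bias_gap([1, 1, 1, 0], [1, 0, 0, 0])  # 0.75 vs 0.25 -> gap 0.5
v2_gap = bias_gap([1, 1, 0, 0], [1, 1, 0, 0])  # equal rates -> gap 0.0
print(bias_drift(v1_gap, v2_gap))              # large shift between versions
```

In practice one would compute such gaps over held-out evaluation prompts after every upstream model update; the point is only that bias is something you can measure and track, not that this particular metric is the right one for every setting.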
Safe, precise, and controllable AI
Safer AI is more usable AI. If you can't trust an AI to do what you intend it to do, then it is not suitable for business-critical situations. That's why we put safety and alignment at the heart of everything we do.
Frontier AI
We are using novel mathematical and theoretical techniques to fundamentally reinvent and improve AI.
Our focus is to create the next step-change in machine learning: teaching AIs to hold human-like concepts, and thereby helping to overcome fundamental issues across the industry.
Featured media
Research
Below are our latest research papers and articles.
28 Sep 2023
Solving goal misgeneralisation
Goal misgeneralisation is a key challenge in AI alignment: the task of getting powerful AIs to align their goals with human intentions and human morality.
19 Jun 2023
Concept extrapolation: A conceptual primer
This article is a primer on concept extrapolation: the ability to take a concept, feature, or goal defined in one context and extrapolate it safely to a more general context.
20 Mar 2022
Recognising the importance of preference change
As AI becomes more powerful and a ubiquitous presence in daily life, it is imperative to understand and manage the impact of AI systems on our lives and decisions.
4 May 2022
Missing mechanisms of manipulation in the EU AI Act
The European Union (EU) Artificial Intelligence (AI) Act proposes to ban AI systems that "manipulate persons through subliminal techniques or exploit the fragility of vulnerable individuals".