What is Alignment?

Algorithms are shaping the present and will shape the future ever more strongly. It is crucially important that these powerful algorithms be aligned – that they act in the interests of their designers, their users, and humanity as a whole. Failure to align them could lead to catastrophic results.

Our long experience in the field of AI safety has identified the key bottleneck for solving alignment: concept extrapolation.

What is Concept Extrapolation?

Algorithms typically fail when confronted with new situations – when they go out of distribution. No training data can cover every unexpected situation, so an AI will need to safely extend its key concepts and goals as well as – or better than – humans do.

This is concept extrapolation, explained in more detail in this sequence. Solving the concept extrapolation problem is both necessary and almost sufficient for solving the whole AI alignment problem.
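To make the failure mode concrete, here is a minimal sketch of going out of distribution, assuming numpy and scikit-learn are available. The dataset is entirely synthetic and illustrative: during training, a spurious feature happens to track the label almost perfectly, so the model can lean on it; at test time that correlation breaks, and accuracy collapses. Extrapolating the intended concept, rather than the spurious proxy, is exactly what concept extrapolation asks for.

```python
# Illustrative sketch (not Aligned AI's code): a model that latches onto
# a spurious feature fails once that feature stops tracking the label.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, spurious_tracks_label):
    """Feature 0 weakly predicts the label; feature 1 is a spurious proxy."""
    y = rng.integers(0, 2, size=n)
    signal = y + rng.normal(0.0, 1.5, size=n)        # noisy true signal
    if spurious_tracks_label:
        spurious = y + rng.normal(0.0, 0.1, size=n)  # near-perfect proxy
    else:
        spurious = rng.integers(0, 2, size=n) + rng.normal(0.0, 0.1, size=n)
    return np.column_stack([signal, spurious]), y

X_train, y_train = make_data(2000, spurious_tracks_label=True)
X_test, y_test = make_data(2000, spurious_tracks_label=False)

model = LogisticRegression().fit(X_train, y_train)
print("in-distribution accuracy:     ", model.score(X_train, y_train))
print("out-of-distribution accuracy: ", model.score(X_test, y_test))
```

Run as written, the first score is high while the second falls toward chance: the model learned the proxy, not the concept.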

Aligned AI

Aligned AI is a benefit corporation dedicated to solving the alignment problem – for all types of algorithms and AIs, from simple recommender systems to hypothetical superintelligences. The fruits of this research will then be available to companies building AI, to ensure that their algorithms serve the best interests of both their users and the companies themselves, and do not cause legal, reputational, or ethical problems.

We are hiring!

