We must understand and control AI.

At the Practical AI Alignment and Interpretability Research Group, we develop theoretical frameworks and applied methodologies for uncovering and inducing interpretable algorithms in deep learning models.

Black-box AI is bad AI.

Interpretability research is a diversified investment: regardless of how AI timelines unfold, ethical AI hinges on understanding and control.

Let’s work together to make this revolutionary technology more transparent.

Causality is key.

Interpretability is a nascent experimental field; we want to turn it into a rigorous science grounded in precise mathematical concepts.