We’re currently hiring for a lead product engineer.

Humans need better AI interpretability.

Modern AI systems are not inherently interpretable. Humans struggle to understand or predict how they will behave in response to unseen data.

AI interpretability is a nascent research area, still mostly confined to academia, and experts are rare. Some interpretability methods exist, but they are limited in scope, subject to human bias, and sometimes flat-out unreliable. In any case, most ML teams do not understand them well.

We are scientists and engineers who design novel interpretability algorithms for tomorrow’s AI systems and integrate them into today’s AI pipelines.

We give AI developers access to state-of-the-art interpretability with no engineering overhead, and enable MLOps companies to better serve high-stakes domains where objective interpretability is crucial.

What we care about:

  • Interpretability algorithms should produce consistent outputs regardless of any random initialisation. Interpretability must be reproducible (see the sketch after this list).
  • Dataset-based evaluation can’t tell us everything about how models will generalise to unseen data. We aim to extract what the model has learned directly.
  • Hyperparameters should always be theoretically motivated. It’s not enough that some configuration works well in practice, or, worse, that it’s tweaked until the result looks sensible to humans. Heuristics aren’t enough: we find out why.
  • Future-proof methods make minimal assumptions about model architectures and data types. We’re building interpretability for next year’s models.
  • Robust interpretability, failure mode identification and knowledge discovery should be a default part of all AI development. We will put an interpretability system in the pipeline of every leading AI lab.
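
The first point above is concrete enough to test. The sketch below is a toy illustration, not our product or a real library API: the names `toy_saliency` and `seed_invariance_gap` are hypothetical. It runs an attribution method under several random seeds and measures the largest disagreement between the resulting explanations; a reproducible method should keep that gap at the level of numerical noise.

```python
# Minimal sketch of a seed-invariance check for an interpretability method.
# `toy_saliency` stands in for an attribution method with internal randomness;
# `seed_invariance_gap` measures how much its output changes across seeds.
import numpy as np


def toy_saliency(weights: np.ndarray, x: np.ndarray, seed: int = 0) -> np.ndarray:
    """Attribution for a toy linear model f(x) = w . x.

    The true gradient with respect to x is just `weights`, so a well-behaved
    method should return (almost) the same thing for every seed; the seed
    only controls internal sampling noise here.
    """
    rng = np.random.default_rng(seed)
    # Average many tiny noisy copies of the gradient; the randomness
    # should wash out, leaving a seed-independent answer.
    noisy_grads = weights + rng.normal(scale=1e-8, size=(1000, x.size))
    return noisy_grads.mean(axis=0)


def seed_invariance_gap(explain, seeds, *args) -> float:
    """Largest pairwise max-difference between explanations across seeds."""
    outputs = [explain(*args, seed=s) for s in seeds]
    return max(
        float(np.abs(a - b).max())
        for i, a in enumerate(outputs)
        for b in outputs[i + 1:]
    )


if __name__ == "__main__":
    w = np.array([0.5, -1.2, 3.0])
    x = np.array([1.0, 0.0, 2.0])
    gap = seed_invariance_gap(toy_saliency, [0, 1, 2], w, x)
    print(f"max disagreement across seeds: {gap:.2e}")
    assert gap < 1e-6, "explanations are not reproducible across seeds"
```

The same check pattern applies to any attribution or knowledge-discovery method: fix the model and the input, vary only the method's own randomness, and require the explanation to stay put.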

Contact us: hello at leap-labs.com