ABOUT
Built by Zero One Research
ModelLens is an AI-powered model explainability platform. We help data scientists, compliance teams, and business stakeholders understand what machine learning models have learned — without writing a single line of code.
Our mission
Machine learning models are making decisions that affect people every day — loan approvals, medical diagnoses, hiring recommendations, fraud detection. Yet most of these models are black boxes, even to the teams that build them.
We believe that every model should be explainable. ModelLens bridges the gap between model weights and human understanding, using an AI reasoning engine called Perun to translate neural network behaviour into plain-language insights that anyone can act on.
How ModelLens works
Upload
Drop in a trained PyTorch, scikit-learn, Keras, or ONNX model. Add feature names and class labels.
Explore
Perun automatically probes your model with thousands of synthetic queries — mapping decision boundaries, sensitivity patterns, and feature importance.
Understand
Chat with your model in natural language. Ask why it makes specific predictions, what features matter, and how to change outcomes. Every answer is grounded in real model behaviour.
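The Explore step above — probing a model with synthetic queries to estimate which features drive its output — can be sketched in miniature. The toy model and probing loop below are illustrative stand-ins, not ModelLens internals:

```python
import random

# Toy stand-in for an uploaded model: a linear credit-scoring function.
def model(income, debt, age):
    return 0.5 * income - 0.8 * debt + 0.1 * age

FEATURES = ["income", "debt", "age"]

def sensitivity(model, n_probes=1000, delta=0.1, seed=0):
    """Estimate each feature's influence by nudging it across many
    random synthetic inputs and averaging the output change."""
    rng = random.Random(seed)
    totals = {f: 0.0 for f in FEATURES}
    for _ in range(n_probes):
        point = {f: rng.uniform(0, 1) for f in FEATURES}
        base = model(**point)
        for f in FEATURES:
            bumped = dict(point, **{f: point[f] + delta})
            totals[f] += abs(model(**bumped) - base)
    return {f: totals[f] / n_probes for f in FEATURES}

scores = sensitivity(model)
# For this linear model, debt (|-0.8|) outranks income (0.5) and age (0.1).
print(sorted(scores, key=scores.get, reverse=True))  # ['debt', 'income', 'age']
```

A real explainability engine probes far richer models, but the principle is the same: the ranking comes from observed model behaviour, not from reading the weights.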
Powered by Perun
Perun is the reasoning engine behind ModelLens. It combines automatic model probing with an AI agent that can call analysis tools on demand — computing feature attributions, counterfactuals, partial dependence plots, and more. Every answer is backed by actual model computations, not LLM guesses.
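One of the analysis tools mentioned above is counterfactual search: finding a small input change that flips a prediction. A minimal sketch of the idea — the loan model, threshold, and greedy search are illustrative assumptions, not Perun's actual implementation:

```python
# Toy stand-in model: approve a loan when the score clears a threshold.
def approves(income, debt):
    return 0.5 * income - 0.25 * debt >= 10.0

def counterfactual(income, debt, step=1, max_steps=200):
    """Greedily reduce debt until the decision flips — the kind of
    'what would change the outcome?' answer a reviewer can act on."""
    d = debt
    for _ in range(max_steps):
        if approves(income, d):
            return {"income": income, "debt": d}
        d -= step
    return None  # no counterfactual found within the search budget

# Applicant rejected at debt=45; the search finds the flip point.
print(counterfactual(income=40, debt=45))  # {'income': 40, 'debt': 40}
```

Because the answer is produced by repeatedly calling the model itself, it is grounded in actual model computations — the property the paragraph above describes.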
The name comes from Perun, the Slavic god of thunder and lightning — symbolising illumination and the revelation of hidden truths. Just as lightning reveals what darkness conceals, Perun reveals what your model has learned.
EU AI Act readiness
The EU AI Act becomes fully enforceable on August 2, 2026. High-risk AI systems must provide transparent, documented explanations of their decisions. ModelLens helps you meet these requirements:
- ✓ Article 13 (Transparency) — Plain-language explanations of model behaviour
- ✓ Article 11 (Technical documentation) — Exportable compliance reports with architecture details, feature importance, and decision rules
- ✓ Article 14 (Human oversight) — Interactive counterfactual analysis for human reviewers
- ✓ Article 10 (Data governance) — Structured fairness assessments with bias metrics
- ✓ Article 12 (Record-keeping) — Immutable audit trail of all explanation requests
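Article 12-style record-keeping is commonly implemented as a hash-chained, append-only log, where each entry commits to its predecessor so any later tampering is detectable. A minimal illustration of that technique (not ModelLens's actual storage format):

```python
import hashlib
import json

def append_entry(log, request):
    """Append an explanation request to a hash-chained audit log.
    Each entry embeds the hash of the previous one, so editing any
    past record invalidates every hash after it."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"request": request, "prev": prev_hash}, sort_keys=True)
    log.append({"request": request, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log):
    """Recompute the whole chain; False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"request": entry["request"], "prev": prev_hash},
                          sort_keys=True)
        if (entry["prev"] != prev_hash or
                entry["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "why was applicant 1041 denied?")
append_entry(log, "feature importance for model v2")
print(verify(log))   # True
log[0]["request"] = "edited"
print(verify(log))   # False — tampering detected
```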
Who uses ModelLens
Data Scientists
Understand colleagues' models quickly, debug unexpected predictions, and document model behaviour.
Compliance Teams
Generate EU AI Act documentation, run fairness audits, maintain audit trails without writing code.
Business Stakeholders
Ask plain-English questions about model decisions, understand risk factors, build trust in AI.
MLOps Teams
Automate model documentation via SDK, compare model versions, integrate into CI/CD pipelines.
Get in touch
Questions? Want a demo? Interested in the Enterprise plan?