Talk to your ML model.
In plain English.
Upload a PyTorch, Keras, scikit-learn, or ONNX model. Ask questions about its behaviour. Get explanations, feature importance, and edge cases.
Free tier — 2 models, 30 chats/month. No credit card required.
How it works
Upload your model
Drop a .pt, .keras, .h5, .pkl, .joblib, or .onnx file. Add feature names, class names, and an optional scaler.
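The upload step above can be prepared in a few lines. A minimal sketch for a scikit-learn model, assuming illustrative file names (`model.joblib`, `scaler.joblib`, `metadata.json` are not a Perun requirement, just one way to package the artifacts):

```python
# Sketch: packaging a scikit-learn model plus the metadata Perun asks for
# (feature names, class names, optional scaler). File names are illustrative.
import json
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler

data = load_iris()
scaler = StandardScaler().fit(data.data)
model = RandomForestClassifier(random_state=0).fit(
    scaler.transform(data.data), data.target
)

joblib.dump(model, "model.joblib")    # the .joblib artifact to upload
joblib.dump(scaler, "scaler.joblib")  # the optional scaler
with open("metadata.json", "w") as f:
    json.dump({"feature_names": list(data.feature_names),
               "class_names": list(data.target_names)}, f)
```

PyTorch and Keras models follow the same pattern with `torch.save` or `model.save`, producing the `.pt` and `.keras` files listed above.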
Automatic analysis
Perun maps decision boundaries, finds edge cases, and measures per-feature importance — no code needed.
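For a sense of what "per-feature importance" means here, permutation importance is one standard technique for this kind of analysis: shuffle each feature and measure how much accuracy drops. This is an illustration of the concept, not Perun's actual method:

```python
# Sketch: per-feature importance via permutation (scikit-learn's built-in).
# Illustrative only; Perun's internal analysis is not shown here.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)

# Shuffle each feature column in turn; the accuracy drop is its importance.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
top = np.argsort(result.importances_mean)[::-1][:5]
for i in top:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```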
Chat with Perun
Ask "what features matter most?" or "why did it predict X here?" and get plain-English answers.
Supported formats
PyTorch
.pt .pth
Keras / TensorFlow
.keras .h5
scikit-learn
.pkl .joblib
ONNX
.onnx
100+ teacher models
included
THE TECHNOLOGY
Learning from the experts to understand any model
Learns from teacher models
Perun is trained on hundreds of expert teacher models — learning how model architecture, weights, and activations relate to real-world behaviour and decision patterns.
Understands prediction decisions
By studying how teacher models reason, Perun can explain why your model predicts what it does — tracing each decision back to specific features and learned representations.
Analyses any model's behaviour
Upload your own PyTorch, Keras, scikit-learn, or ONNX model and Perun applies its learned understanding to map decision boundaries, surface edge cases, and quantify feature importance — no retraining required.
EU AI ACT COMPLIANCE
Be ready before the deadline
The EU AI Act becomes fully enforceable on August 2, 2026. High-risk AI systems must provide transparent, documented explanations of their decisions. Non-compliance carries fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.
Article 13 — Transparency
Perun auto-generates plain-language explanations of model behaviour, satisfying the requirement that AI systems are 'sufficiently transparent to enable deployers to interpret output.'
Article 11 — Technical Documentation
Export structured compliance reports with model architecture details, feature importance, decision rules, and faithfulness audit results — ready for regulatory review.
Article 14 — Human Oversight
Interactive counterfactual analysis and what-if scenarios enable human reviewers to challenge and verify model decisions before deployment.
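A what-if check of the kind described above can be as simple as nudging one input feature and watching whether the prediction flips. A minimal sketch (the feature index and step sizes are illustrative; Perun's counterfactual search is more involved):

```python
# Sketch: a minimal what-if probe a human reviewer might run.
# Illustrative only, not Perun's counterfactual engine.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0].copy()                     # one concrete input to interrogate
base = model.predict([x])[0]        # its current predicted class
for delta in np.linspace(0.0, 3.0, 31):
    x_cf = x.copy()
    x_cf[2] += delta                # feature 2 = petal length (cm)
    if model.predict([x_cf])[0] != base:
        print(f"prediction flips when petal length grows by {delta:.1f} cm")
        break
```

Scanning a single feature like this shows how far an input sits from the decision boundary, which is exactly the evidence a reviewer needs to challenge a model's decision before deployment.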
POWERED BY PERUN
Perun is the reasoning engine behind ModelLens, using the weight2text approach to convert neural network behaviour into knowledge that can be queried in natural language.