AI GOVERNANCE COMMAND CENTRE · GUARDLENS

Trustworthy AI isn't an accident. It's a practice. Bias. Risk. Compliance. Explained.

GuardLens is the governance layer your AI stack is missing — register models, detect bias, explain decisions, score compliance, and stay ahead of EU AI Act enforcement. All in one platform.

6 governance modules · 4 cloud connectors · €35M max AI Act fine · Aug '26 enforcement deadline · €0 to get started
AI Registry · Risk Scoring · Bias & Fairness · Explainability · Cloud Connectors · Audit Log · EU AI Act Art. 5 · Annex III High Risk · Art. 9 Risk Mgmt · Art. 12 Record Keeping · Art. 13 Transparency · Art. 14 Human Oversight · HuggingFace (Live) · AWS SageMaker (Live) · Azure ML (Soon) · Vertex AI (Soon)

THE COST OF DOING NOTHING

Your AI is making decisions.
Who's checking the work?

From August 2026, the EU AI Act's core obligations for high-risk AI systems become enforceable. Regulators can fine companies up to €35M for non-compliance — and most AI teams have no idea where they stand. GuardLens changes that in minutes, not months.

€35M or 7% of global turnover · for prohibited AI practices
€15M or 3% of global turnover · for high-risk AI violations
€7.5M · for incorrect information given to regulators
€0 · to start with GuardLens and know where you stand
HOW IT WORKS
Up and running in three steps.

No lengthy onboarding. No professional services engagement. Register your first AI system and see your governance posture in under 5 minutes.

01 · REGISTER · Register your AI systems
Connect your cloud ML platforms or add models manually. GuardLens auto-classifies each one against Annex III and Article 52 of the EU AI Act the moment it's registered.
02 · ASSESS · Score, explain & detect bias
Run a Smart Score assessment to get your weighted compliance score. Check for proxy features and bias across demographic groups. See exactly what's driving your model's decisions.
03 · GOVERN · Govern, audit & export
Every action is logged in an immutable audit trail. Export a full PDF compliance report in one click — ready for regulators, investors, or internal governance reviews.
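Prefer code to clicks? The same three steps can be driven through the GuardLens REST API described further down this page. A minimal Python sketch using requests; the endpoints and payload fields mirror the Developer API section below, while the exact response shapes shown here are assumptions.

PYTHON · Register, assess, govern (sketch)
import requests

BASE = "https://api.guardlens.io/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY", "Content-Type": "application/json"}

# 01 · REGISTER: add a model; GuardLens auto-classifies it on entry
model = requests.post(f"{BASE}/models", headers=HEADERS, json={
    "name": "credit-scoring-v3",
    "use_case": "loan_applications",
    "version": "3.1.0",
    "team": "data-science",
}).json()
print(model["risk_tier"])  # e.g. "HIGH"

# 02 · ASSESS: trigger a fresh compliance assessment for the new model
score = requests.post(f"{BASE}/models/{model['id']}/score", headers=HEADERS).json()
print(score)

# 03 · GOVERN: both calls above are already captured in the audit trail;
# entries and PDF reports can be pulled via /v1/audit-log and /v1/reports/:id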
EU AI ACT RISK TIERS
Where does your AI actually sit?

The EU AI Act classifies every AI system into one of four risk tiers. Your obligations — and the fines for non-compliance — depend entirely on which tier you're in. Click each to explore.

BANNED
Unacceptable Risk
These AI systems are prohibited outright under Article 5 of the EU AI Act
🚫
Social scoring
Government or private systems that score citizens based on social behaviour or personal characteristics
🎭
Subliminal manipulation
AI that exploits psychological vulnerabilities to manipulate behaviour without conscious awareness
👁️
Real-time biometric surveillance
Live facial recognition in public spaces by law enforcement (with narrow exceptions)
WHAT THIS MEANS FOR YOU
If your AI falls here, it must be discontinued. There is no compliance path — only removal.
Penalty for violation: up to €35M or 7% of global turnover
HIGH RISK
High Risk — Annex III
Significant compliance obligations under Articles 9–15. Most enterprise AI falls here.
👔
HR & Recruitment AI
CV screening, candidate scoring, interview analysis tools used in hiring decisions
💳
Credit Scoring
AI systems that determine creditworthiness, loan eligibility, or insurance pricing
🏥
Medical Diagnosis
AI assisting clinical decisions, diagnosis, or treatment recommendations
YOUR OBLIGATIONS
Risk management system documented and maintained throughout the lifecycle
Data governance — training data documented, biases assessed (Art. 10)
Technical documentation and audit logs kept (Art. 11, 12)
Human oversight mechanisms in place (Art. 14)
Accuracy, robustness and cybersecurity measures (Art. 15)
LIMITED RISK
Limited Risk — Article 52
Transparency obligations only. Users must know they're interacting with AI.
💬
Chatbots
Customer service bots, virtual assistants, and conversational AI that interact with people
🎨
Deepfake generators
AI that generates synthetic images, video or audio of real people — must be labelled
🤖
Emotion recognition
Systems that infer emotional states — users must be informed they are being analysed
YOUR OBLIGATIONS
Disclose to users that they are interacting with an AI system (Art. 52)
Label AI-generated content clearly
No registration or conformity assessment required
MINIMAL RISK
Minimal Risk
No specific obligations under the EU AI Act. Good governance still recommended.
🎮
AI in games
Spam filters, AI-powered game NPCs, and entertainment recommendation systems
📦
Product recommenders
E-commerce recommendation engines and personalisation tools with no significant impact on rights
🔍
Search & SEO tools
Content ranking, search optimisation, and non-sensitive classification systems
BEST PRACTICE (VOLUNTARY)
Register systems in GuardLens for full governance visibility even if not legally required
Document model decisions — regulators may still request this during investigations
Risk tier can change if use case evolves — monitor regularly
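To make the four tiers concrete, here is a deliberately crude sketch of how a declared use case might map to a tier. The keyword lists are illustrative examples taken from the cards above, not GuardLens's actual Annex III / Article 5 / Article 52 classifier.

PYTHON · Risk tier mapping (illustrative)
# Toy illustration of the four EU AI Act tiers; not GuardLens's classifier.
PROHIBITED = {"social scoring", "subliminal manipulation", "real-time biometric surveillance"}
HIGH_RISK = {"recruitment screening", "credit scoring", "medical diagnosis"}   # Annex III examples
LIMITED = {"chatbot", "deepfake generation", "emotion recognition"}            # transparency-only examples

def risk_tier(use_case: str) -> str:
    u = use_case.lower()
    if any(k in u for k in PROHIBITED):
        return "UNACCEPTABLE"   # must be discontinued (Art. 5)
    if any(k in u for k in HIGH_RISK):
        return "HIGH"           # Articles 9-15 obligations apply
    if any(k in u for k in LIMITED):
        return "LIMITED"        # disclosure and labelling only
    return "MINIMAL"            # no specific obligations

print(risk_tier("recruitment screening"))  # HIGH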
WHY GUARDLENS
How does it stack up against the alternatives?

There are three ways to approach AI governance. Here's an honest comparison.

Capability | Manual / Spreadsheets | Hire a Consultant | GuardLens
Time to first assessment | Weeks | Weeks–months | Under 5 minutes
Auto risk classification | ✗ Manual | ~ Sometimes | ✓ Automatic
Bias & fairness detection | ✗ Not included | ~ Extra cost | ✓ Built in
Explainability (SHAP) | ✗ Not included | ~ Extra cost | ✓ Built in
Immutable audit trail | ✗ Version control only | ✗ Not provided | ✓ Automatic
PDF compliance reports | ~ Manual effort | ✓ Provided | ✓ One click
Cloud connector sync | ✗ Manual entry | ✗ Not provided | ✓ Live sync
Stays up to date with regulations | ✗ Your responsibility | ~ Periodic reviews | ✓ Continuous
Starting cost | Staff time only | €5,000–€50,000+ | €0 — free tier
THE PLATFORM
Six modules. One command centre.

Everything AI governance needs — from first registration to regulator-ready audit reports — fully integrated and live from day one.

AI Registry

Register every AI system. Auto-classify against Annex III and Article 52 on entry. Full versioning, team ownership, bulk CSV import.

Annex III · Art. 52

Risk Scores

Weighted scoring: compliance 50% + fairness 30% + documentation 20%. Smart Score questionnaire with manual override and score history.

Art. 9 · Art. 10

Bias & Fairness

Test models across demographic groups. Detect protected attribute leakage — gender, race, age. Surface disparate impact before it becomes a liability.

Art. 10 · Art. 15

Explainability

SHAP-powered feature importance, proxy feature detection, what-if analysis, individual prediction breakdowns, and AI-written compliance narratives.

Art. 13 · Art. 14

Cloud Connectors

Live sync from HuggingFace and AWS SageMaker today. Azure ML and Vertex AI coming. 130+ models already queued for import.

Live: HF · SageMaker

Audit Log

Immutable timestamped record of every action. Filter by Create, Update, Delete, Score. Export PDF compliance reports in one click.

Art. 12 · Art. 17
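The Risk Scores weighting is simple enough to check by hand. Here is a minimal Python sketch of the published formula (compliance 50% + fairness 30% + docs 20%); the function name and rounding are illustrative, not GuardLens's exact implementation.

PYTHON · Weighted score (sketch)
def weighted_score(compliance: float, fairness: float, docs: float) -> float:
    # Published weighting: compliance 50% + fairness 30% + documentation 20%
    return round(0.5 * compliance + 0.3 * fairness + 0.2 * docs, 1)

# The Credit Scoring figures from the demo below (80 / 74 / 90) give 80.2,
# in line with the 80% shown for that model.
print(weighted_score(80, 74, 90))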
LIVE DEMO
Click through the platform. See it in action.
app.guardlens.io/dashboard · LIVE
Dashboard · AI Registry · Action Board · Compare Models · Risk Scores · Bias & Fairness · Explainability · Connectors · Compliance Quiz · Audit Log · Developer API · Settings
Dashboard
AI systems: 22 · +12% this quarter
Avg health score: +4% vs last month
Compliance rate: +8% vs last month
Open risks: 10 · needs attention
Compliance posture · multi-dimension assessment
Dimension scores: Compliance 57 · Fairness 69 · Docs 46 · Transparency 60 · Risk Control 54
Risk distribution (24 models): High risk 10 · Limited 3 · Minimal 9
AI Registry · 24 models · + Add model
Model name | Use case | Risk tier | Status
HR CV Screener v1.3 | recruitment screening | High | Active
Credit Scoring Model v2.1 | loan applications | High | Active
Medical Diagnosis Asst v1.0 | clinical decision support | High | Active
Customer Chatbot v4.0 | conversational support | Limited | Active
Fraud Detection Engine v3.2 | financial fraud detection | Minimal | Active
Product Recommender v2.0 | personalisation | Minimal | Active
Risk Scores
Weighted: compliance 50% + fairness 30% + docs 20%
HR CV Screener: Compliance 60 · Fairness 52 · Docs 78 · Overall 62%
Credit Scoring: Compliance 80 · Fairness 74 · Docs 90 · Overall 80%
Fraud Detection: Compliance 92 · Fairness 88 · Docs 95 · Overall 91%
Bias & Fairness
Protected attribute testing across demographic groups
HR CV Screener · HIGH BIAS RISK
Gender parity: ⚠ Below threshold
Racial equity: ⚠ Review needed
Age fairness: 0.82 · ✓ Pass
Fraud Detection · LOW BIAS RISK
Gender parity: 0.91 · ✓ Pass
Racial equity: 0.88 · ✓ Pass
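The parity figures above follow the common four-fifths convention: the favourable-outcome rate of the worst-off group divided by that of the best-off group, flagged when the ratio falls below 0.8. Below is a minimal sketch of that calculation; it is illustrative only, since the exact metrics GuardLens computes are not specified here.

PYTHON · Parity ratio (sketch)
from collections import defaultdict

def parity_ratio(outcomes, groups):
    # Ratio of the lowest to the highest favourable-outcome rate across groups
    totals, favourable = defaultdict(int), defaultdict(int)
    for y, g in zip(outcomes, groups):
        totals[g] += 1
        favourable[g] += int(y)
    rates = [favourable[g] / totals[g] for g in totals]
    return min(rates) / max(rates)

# A ratio of 0.91 passes the 0.8 threshold; 0.74 would be flagged
ratio = parity_ratio([1, 0, 1, 1, 0, 1], ["f", "f", "f", "m", "m", "m"])
print(ratio, "PASS" if ratio >= 0.8 else "BELOW THRESHOLD")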
Explainability
EU AI Act Art. 13 Transparency & Art. 14 Human Oversight
HIGH PROXY RISK · openai/shap-e
2 proxy features detected · postcode proxies race · salary_history proxies gender
Top 5 feature drivers: feature_1 30% · feature_2 25% · postcode (proxy) 18% · feature_3 15% · salary_history (proxy) 12%
Cloud Connectors
Connect ML platforms · sync models · auto-classify on import
🤗 HuggingFace · ● CONNECTED · 130+ models pending import
AWS SageMaker · ● CONNECTED · Cross-account IAM sync active
Azure ML · ● CONNECTED · Azure ML workspace sync active
Vertex AI · ● CONNECTED · Google Cloud Vertex AI registry connected
Audit Log
Immutable record of all actions · 23 entries
Create · Model · mistralai/Leanstral-2603 · Apr 3, 12:31 PM
Score · Model · Score: 70 · Apr 3, 12:32 PM
Create · Model · openai/shap-e · Apr 3, 1:25 PM
Score · Model · Score: 53.6 · Apr 3, 3:27 PM
Delete · Model · Apr 3, 12:31 PM
Create · Staged · guardlens-demo-model · Apr 3, 8:50 PM
Action Board
Priority actions to improve your compliance posture
HIGH · Document HR CV Screener training data · High risk model missing Art. 10 documentation
MED · Add human oversight to Credit Scoring Model · Art. 14 human review process not documented
MED · Run bias test on Medical Diagnosis Assistant · Demographic fairness not yet assessed
LOW · Sync Azure ML connector · 3 models pending import from Azure workspace
Compare Models
Side-by-side compliance comparison
HR CV Screener: Compliance 60 · Fairness 52 · Docs 78 · Overall 62%
Credit Scoring: Compliance 80 · Fairness 74 · Docs 90 · Overall 80%
Fraud Detection: Compliance 92 · Fairness 88 · Docs 95 · Overall 91%
Compliance Quiz
EU AI Act readiness assessment
Is your model's training data documented and reviewed? Yes / Partial / No
Is there a human review process for high-stakes decisions? Yes / Partial / No
Has the model been tested for bias across demographic groups? Yes / Partial / No
Developer API
REST API · Bearer token auth · 1000 req/min
Authorization: Bearer gl_live_xxxxxxxxxxxx
Base URL: https://api.guardlens.io/v1
GET /v1/models · POST /v1/models · GET /v1/models/:id/score · GET /v1/audit-log
Settings
Organisation preferences
Organisation: Guard Compass AI · Edit →
Notification emails: alerts for high-risk model changes
Auto-classify on import: automatically assign EU AI Act risk tier
Audit log retention: keep records for 7 years (EU AI Act) · 7 years ✓
EXPLAINABILITY ENGINE
Why is your model making that call?

Most AI teams can't answer that question. GuardLens uses SHAP to surface exactly what drives your model's outputs — and flags hidden proxy features that could expose you to discrimination claims before a regulator finds them.

Proxy feature detection — flags inputs like postcode or salary history that silently encode protected attributes
What-if analysis — change any input and instantly see how the prediction shifts
AI compliance narratives — Claude-powered plain-English model explanations ready to attach to audit reports
Notebook export — download runnable SHAP analysis code for your data science team
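For teams who want to reproduce this kind of analysis themselves, the open-source shap package gets most of the way there. A minimal sketch, assuming a fitted model with a predict method and a pandas DataFrame X; flagging proxies by column name is a deliberate simplification, not how GuardLens detects them.

PYTHON · SHAP feature drivers (sketch)
import numpy as np
import shap

KNOWN_PROXIES = {"postcode", "salary_history"}  # illustrative watch-list only

def top_drivers(model, X, k=5):
    # Model-agnostic explainer over the prediction function, masked with X
    explainer = shap.Explainer(model.predict, X)
    shap_values = explainer(X)
    # Mean absolute SHAP value per feature = global importance
    importance = np.abs(shap_values.values).mean(axis=0)
    ranked = sorted(zip(X.columns, importance), key=lambda t: -t[1])[:k]
    for name, score in ranked:
        flag = " (proxy?)" if name in KNOWN_PROXIES else ""
        print(f"{name}{flag}: {score:.3f}")
    return ranked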
EXPLAINABILITY · openai/shap-e
HIGH PROXY RISK
2 proxy features · Top driver: feature_1 (30%) · 18 analyses
Art. 13 · Art. 14
PROXY FEATURES DETECTED
postcode · #3 · 18% · proxies race, socioeconomic status
Postcode is a well-documented proxy for race and socioeconomic status due to residential segregation
salary_history · #5 · 12% · proxies gender
Salary history perpetuates the gender pay gap by anchoring offers to past discrimination
TOP 5 DRIVERS: feature_1 30% · feature_2 25% · postcode 18% · feature_3 15% · salary_history 12%
CLOUD CONNECTORS
Your models, wherever they live.

Connect once. GuardLens syncs automatically, imports into your registry, and starts compliance checks — no manual work required.

HuggingFace
● CONNECTED
Import from any user or org namespace. 130+ models queued.
AWS SageMaker
● CONNECTED
Secure cross-account IAM architecture. Zero config after setup.
Azure ML
○ COMING SOON
Import models from Azure Machine Learning workspaces.
Vertex AI
○ COMING SOON
Import models from Google Cloud Vertex AI registry.
DEVELOPER API
Governance baked into your pipeline.

The GuardLens API lets you automate compliance checks, register models programmatically, and gate deployments on governance scores — directly from your CI/CD pipeline.

SHELL · Register a model
# Register a new AI model
curl -X POST https://api.guardlens.io/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "credit-scoring-v3",
        "use_case": "loan_applications",
        "version": "3.1.0",
        "team": "data-science"
      }'

# Response
{
  "id": "mdl_8x92kp",
  "risk_tier": "HIGH",
  "compliance_score": 0,
  "articles": ["Art.9", "Art.10", "Art.12"],
  "status": "registered"
}
PYTHON · Gate deployment on compliance
import guardlens

# Check before deploying
score = guardlens.get_score("mdl_8x92kp")
if score["compliance"] < 80:
    raise ComplianceError(
        f"Score {score['compliance']} below threshold"
    )

# Deploy with confidence
deploy.push(model, environment="production")
REST API endpoints

Full CRUD access to your registry, scores, audit log, and connectors. JSON responses, Bearer token auth, rate-limited at 1000 req/min.

GET /v1/models · List all registered AI systems
POST /v1/models · Register a new AI system
GET /v1/models/:id/score · Get compliance score for a model
POST /v1/models/:id/score · Run a new compliance assessment
GET /v1/audit-log · Retrieve immutable audit entries
POST /v1/connectors/sync · Trigger a cloud connector sync
GET /v1/reports/:id · Download PDF compliance report
DELETE /v1/models/:id · Decommission an AI system
WHY USE THE API
CI/CD integration · Gate deployments on compliance scores automatically
Auto-registration · Register models the moment they're trained
Real-time scores · Pull live compliance scores into your dashboards
Audit on demand · Pull audit logs into your SIEM or data warehouse
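The audit-on-demand pattern, for example, is a single GET against the /v1/audit-log endpoint listed above. A short sketch; the response is assumed here to be a JSON array of entries.

PYTHON · Pull the audit log into your SIEM (sketch)
import json
import requests

resp = requests.get(
    "https://api.guardlens.io/v1/audit-log",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=30,
)
resp.raise_for_status()

# Forward each entry to your SIEM or warehouse as JSON lines
with open("guardlens_audit.jsonl", "a") as f:
    for entry in resp.json():  # assumes a JSON array of audit entries
        f.write(json.dumps(entry) + "\n")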
Get your API key
Available on Professional and Enterprise plans
Request API Access
WHAT TEAMS SAY
Built with teams who ship AI for real.
★★★★★
"Finally a compliance tool that doesn't need a law degree. We went from zero governance to full AI Act coverage in a week."
Head of AI, FinTech startup
Series A · 40 employees
★★★★★
"The bias detection caught a proxy feature in our HR model we'd completely missed. That alone justified the whole platform."
Lead Data Scientist
Healthcare AI company
★★★★★
"We passed our EU AI Act audit on the first attempt. The audit trail and compliance narratives made the whole process straightforward."
CTO, Enterprise SaaS
Series B · 200 employees
GET ACCESS
Start free.
Scale when ready.

Free to start, no credit card. Register your first AI system, run a full compliance assessment, and see your governance posture in under 5 minutes.

NO CREDIT CARD · FREE FOREVER FOR 1 SYSTEM
Free — Explore & Demo
1 AI system, full compliance assessment, audit log included. No time limit.
€0 · forever
Early Access — Teams
Full platform access during beta. All 6 modules, cloud connectors, PDF reports, priority onboarding.
Discounted early-access pricing — contact us
Enterprise & Consulting
Custom deployment, AI Act gap analysis, responsible AI workshops, and bespoke model audits.
Custom — Contact
WORKSHOPS
Learn responsible AI. By doing it.

Hands-on workshops for every team — from executives to engineers. We help organisations move beyond buzzwords and build AI that is fair, transparent, and aligned with human values.

01
Introduction to Responsible & Ethical AI
Leaders & cross-functional teams
A foundational workshop covering the core principles of responsible AI — fairness, transparency, accountability, and privacy. Participants explore real-world examples of ethical failures and understand why responsible practices are critical for building trust and long-term value.
Book this workshop →
02
AI Risk Assessment & Impact Analysis
Product, compliance & risk teams
Helps organisations proactively identify, assess, and mitigate risks associated with AI systems. Teams gain practical frameworks for conducting ethical impact assessments and integrating risk-awareness into development and deployment workflows.
Book this workshop →
03
Bias & Fairness in Machine Learning
Data scientists & ML engineers
A hands-on technical workshop exploring how bias enters datasets and algorithms, and how to detect, measure, and reduce it. Participants use practical tools and real-world examples to build fairer, more inclusive AI models.
Book this workshop →
04
Explainability & Transparency in AI
Technical teams & product managers
Focuses on making AI systems more transparent and interpretable. Participants learn how to apply explainability tools to communicate model behaviour clearly and responsibly, enabling better user trust and regulatory compliance.
Book this workshop →
05
AI Policy, Regulation & Compliance Readiness
Legal, compliance & executive teams
An overview of the evolving global regulatory landscape, including the EU AI Act. Helps organisations assess where they stand, understand how regulations apply to their AI use cases, and build internal policies aligned with ethical and legal standards.
Book this workshop →
06
Ethical AI by Design
Product managers, UX & developers
A practical workshop integrating ethical thinking into the product design and development process. Through real scenarios and inclusive design principles, teams learn to build AI systems that prioritise user well-being, safety, and fairness from day one.
Book this workshop →

All workshops are available in-person or remote. Custom formats available for enterprise teams.

Enquire About Workshops
FOLLOW ALONG
Stay in the loop on AI governance.
LinkedIn X / Twitter YouTube
AI EDGE
Fresh reads & podcasts.
23 MAY 2025 · Is AI Booming Exponentially? · Read →
15 SEP 2025 · Navigating Trustworthy AI for Secure Systems · Read →
4 JUN 2025 · What is Responsible and Ethical AI? · Read →