Security is how we build, not something we add afterwards.
Every architectural decision starts with data protection. We build private AI platforms for businesses — your data, your context, your people's intelligence. The security model has to earn that trust at every layer.
Four things that are always true
Your data is never used to train AI models. Anthropic's commercial API terms prohibit using your data for model training. Your conversations and context remain yours.
You own your inputs, outputs, and the intelligence your people build. We provide the architecture. The data and the intelligence it generates belong to you and your people.
Conversation data is stored in EU-region infrastructure. Your data resides in European data centres. International transfers (AI processing) are covered by appropriate legal safeguards.
Explicit retention periods — nothing kept indefinitely. Active data is retained for 12 months. Abandoned sessions are deleted after 30 days. Immediate deletion is available on request.
How AI processing works
Every platform we build uses Anthropic's Claude API for AI processing. Here is how we govern that relationship:
- No training on your data. Anthropic's commercial terms explicitly exclude API data from model training. This is contractual, not optional.
- System prompt hardening. Every platform deployment includes structured system prompts with input sanitisation boundaries, designed to mitigate prompt injection and ensure the AI operates within its defined scope.
- AI interaction is always disclosed. In accordance with EU AI Act Article 50, every user is informed that they are interacting with an AI system before any conversation begins. No exceptions.
- No automated decisions with legal or significant effects. AI outputs inform human decisions — they do not replace them. Platform qualification, challenge mapping, and framework recommendations are advisory. All substantive outcomes involve human review.
Technical security measures
The platform is built on a defence-in-depth model. Security is enforced at the network, application, and data layers.
Encryption
- In transit: TLS 1.2+ enforced on all connections. No fallback to weaker protocols.
- At rest: AES-256 encryption across all database storage (Supabase infrastructure).
Data isolation
- Row-Level Security (RLS) enforced at the database level for multi-tenant data isolation. Each client's data is cryptographically separated — not just logically partitioned.
- Service-role key isolation. No client-side code has direct database access. All queries route through authenticated server-side functions with scoped permissions.
Session and transport security
- OWASP-compliant session management: HttpOnly, Secure, and SameSite=Strict cookie attributes enforced.
- Content Security Policy (CSP) headers restrict resource loading to trusted origins.
- Security headers: X-Content-Type-Options (nosniff), X-Frame-Options (DENY), Referrer-Policy (strict-origin-when-cross-origin), and Permissions-Policy enforced across all responses.
Data ownership by design
Every platform we build separates data into three distinct layers, each with its own ownership model. This is not a feature — it is the foundation of the architecture.
Data layer
Your raw data — documents, conversations, context. This is yours. Always exportable in standard formats: PostgreSQL, JSON, CSV, markdown. Always deletable on request.
Architecture layer
The platform structure, agent configuration, and framework logic. This is leased — it powers your platform while the subscription is active. If you leave, the architecture is withdrawn. Your data stays.
Intelligence layer
The knowledge your people build through using the platform — how they work, what they contribute, the patterns they develop. This is portable. It belongs to your people and travels with them.
There is no lock-in beyond the value of the trained intelligence. If the platform stops being useful, you leave with everything you brought and everything your people built. The only thing that stays with us is the architecture — and that is exactly how it should work.
What we do, specifically
Security practices are only meaningful if they are specific. Here is what we actually do:
- Three-layer security audit trail. Every deployment passes through design-time review, build-time verification, and post-build independent check. No single person can deploy unchecked infrastructure.
- Independent dual verification on all infrastructure deployments. Changes to production require review from a second pair of eyes — human or automated — before they reach live systems.
- API key lifecycle management with quarterly rotation schedules and documented incident response procedures. Keys are rotated proactively, not reactively.
- Credential compartmentalisation. Separate keys per service, per environment tier (development, staging, production). A compromised key in one context cannot access another.
- Pre-commit scanning for secret exposure. Every code commit is automatically scanned for credentials, API keys, and sensitive patterns before it reaches the repository.
On incident maturity: In February 2026, our pre-commit scanning detected a partial API key prefix in a configuration file. It was contained within minutes, rotated within the hour, and post-incident analysis confirmed zero unauthorised usage. We mention this because mature security is not about never having incidents — it is about detecting them immediately, containing them decisively, and learning from them systematically.
Our partners and their certifications
We build on infrastructure from providers who meet enterprise-grade security standards.
Supabase
Database hosting and authentication. EU data centres.
- ISO 27001
- SOC 2 Type II
- EU data residency
Netlify
Website hosting, serverless functions, and CDN.
- SOC 2 Type II
- GDPR compliant
Anthropic
AI processing via Claude API. Commercial terms prohibit training on API data.
- SOC 2 Type II
- ISO 27001
- EU-US Data Privacy Framework
What we are working towards
We are a young company building to enterprise standards from day one, not retrofitting security after the fact. Here is where we are on the road to formal certification:
- Cyber Essentials certification — in progress. Aligning operational practices with the NCSC framework.
- ISO 42001 alignment — our existing AI readiness methodology, originally developed for the Innovate UK programme, maps directly to the ISO 42001 AI management system standard. We are formalising this alignment.
- Third-party penetration testing — planned for production scale. We will engage independent security researchers to test our infrastructure before scaling beyond early clients.
We will update this page as each milestone is reached. We would rather be honest about where we are than claim certifications we have not yet earned.
Security inquiries
For security questions, vulnerability disclosure, or to request our detailed security documentation, contact us at security@horizonintelligence.ai.
We respond to all security inquiries within 48 hours.
Last updated: 25 April 2026