Your compliance team will thank you
Last updated: March 2026
We build software with AI assistance — and we govern it the way regulated industries expect. Every change requires human approval before it reaches production. We define clear data boundaries, enforce least-privilege permissions, and add AI-specific controls: prompt injection defence, hallucination prevention, and full observability. All aligned with SOC 2, GDPR, HIPAA, and the EU AI Act.
Governance
Human review at every gate
No code reaches production without human approval. Every high-impact action — code commits, infrastructure changes, data migrations — requires explicit sign-off from a senior engineer. We maintain full audit trails for every decision.
This applies to all AI-assisted output: code generated by language models, configurations suggested by agents, documentation produced from structured prompts. The tool doesn't matter — the gate is the same.
Principle
Least privilege across every layer
Every permission in our delivery process is scoped to the minimum required for the task — and revoked when the task is done. This applies uniformly across the stack.
Infrastructure
Deployment roles can write to a specific storage bucket and invalidate a specific CDN distribution. Nothing else. We use OIDC federation for CI/CD — no long-lived access keys.
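As a sketch of what that scoping looks like, here is the shape of such a deployment policy expressed as a policy document (bucket name, account ID, and distribution ID are placeholders, not real resources):

```python
import json

# Hypothetical least-privilege deployment policy: write to one storage
# bucket, invalidate one CDN distribution, and nothing else.
DEPLOY_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::example-site-bucket/*",
        },
        {
            "Effect": "Allow",
            "Action": ["cloudfront:CreateInvalidation"],
            "Resource": "arn:aws:cloudfront::123456789012:distribution/EXAMPLEID",
        },
    ],
}

def allowed_actions(policy: dict) -> set:
    """Collect every action the policy grants, for review."""
    return {a for s in policy["Statement"] for a in s["Action"]}

print(json.dumps(sorted(allowed_actions(DEPLOY_POLICY))))
```

Note what is absent: no IAM actions, no database access, no wildcard resources. Reviewing the granted-action set is part of the same human gate described above.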
CI/CD pipelines
Build pipelines have read access to source code and write access to deployment targets. They cannot modify IAM policies, access production databases, or read unrelated secrets.
APIs and endpoints
Each endpoint accepts only the HTTP methods and input shapes it needs. Strict server-side validation enforces field types, lengths, and allowed characters — required by PCI DSS and reinforced by SOC 2 Type II controls.
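A minimal sketch of that validation, assuming a hypothetical contact endpoint (field names, limits, and patterns are illustrative, not our production schema):

```python
import re

# Illustrative per-field rules: type, maximum length, allowed characters.
FIELDS = {
    "name":  {"max_len": 100, "pattern": re.compile(r"^[\w .,'\-]+$")},
    "email": {"max_len": 254, "pattern": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")},
}

def validate(payload: dict) -> list:
    """Return a list of validation errors; an empty list means the payload passes."""
    errors = []
    for field, rules in FIELDS.items():
        value = payload.get(field)
        if not isinstance(value, str):
            errors.append(f"{field}: missing or wrong type")
        elif len(value) > rules["max_len"]:
            errors.append(f"{field}: too long")
        elif not rules["pattern"].match(value):
            errors.append(f"{field}: invalid characters")
    # Reject fields the endpoint does not expect.
    errors += [f"{k}: unexpected field" for k in payload if k not in FIELDS]
    return errors
```

Unexpected fields are rejected rather than ignored, which is what keeps an endpoint's input shape as narrow as its documented contract.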
AI tooling
When AI tools connect to your systems, each connection is individually authorised and restricted to specific resources. A tool analysing your database schema cannot access your CI pipeline. Permissions are task-scoped and time-limited.
Data access
Engineers and tools access client data on a need-to-know basis. PII is classified and handled according to the engagement's compliance context — GDPR, HIPAA, or PCI DSS scope as applicable.
Foundation
DevSecOps from build to deploy
AI-specific controls don't replace the fundamentals — they build on them. Our standard delivery pipeline includes security at every stage.
Dependency vulnerability scanning
Every build runs an automated audit against known CVE databases. Critical or high-severity vulnerabilities block deployment. We pin major versions, keep runtime dependencies minimal, and diff the lock file on every change.
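The deployment gate itself is simple. A sketch, assuming an audit report shaped loosely like `npm audit --json`-style output (field names are illustrative):

```python
# Severities that block deployment outright.
BLOCKING = {"critical", "high"}

def should_block(findings: list) -> bool:
    """Block deployment if any finding is critical or high severity."""
    return any(f.get("severity") in BLOCKING for f in findings)

report = [
    {"id": "CVE-2025-0001", "severity": "moderate"},
    {"id": "CVE-2025-0002", "severity": "high"},
]
print("BLOCK" if should_block(report) else "PASS")
```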
Security headers
Every HTTP response includes HSTS, Content Security Policy, X-Frame-Options DENY, strict Referrer-Policy, and a Permissions-Policy that disables unused browser APIs. We validate against external scanners before launch.
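That baseline can be expressed as one framework-agnostic header set (the CSP value here is a simplified example, not a full production policy):

```python
# Security headers merged into every outgoing response.
SECURITY_HEADERS = {
    "Strict-Transport-Security": "max-age=63072000; includeSubDomains; preload",
    "Content-Security-Policy": "default-src 'self'",
    "X-Frame-Options": "DENY",
    "Referrer-Policy": "strict-origin-when-cross-origin",
    "Permissions-Policy": "camera=(), microphone=(), geolocation=()",
}

def apply_headers(response_headers: dict) -> dict:
    """Merge the security baseline into an outgoing response's headers."""
    return {**response_headers, **SECURITY_HEADERS}
```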
Server-side input validation
All user input is validated server-side. We strip HTML and control characters, enforce field-specific length limits, reject malformed data, and use honeypot fields for bot detection. Client-side validation is never trusted alone.
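In sketch form, the sanitisation and honeypot checks look like this (the honeypot field name `website` is an example; real forms vary it):

```python
import re

def sanitise(value: str, max_len: int) -> str:
    """Strip HTML tags and control characters, then enforce a length cap."""
    value = re.sub(r"<[^>]*>", "", value)                        # drop HTML tags
    value = re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", value)   # drop control chars
    return value[:max_len]

def is_bot(payload: dict) -> bool:
    """The hidden honeypot field should stay empty; bots that
    auto-fill every input reveal themselves by populating it."""
    return bool(payload.get("website"))
```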
Secrets management
API keys, credentials, and tokens are stored in encrypted configuration or dedicated secrets management services. Nothing is committed to source control. Infrastructure secrets use encrypted-at-rest configuration with provider-level key management.
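The application side of this is deliberately boring: secrets arrive through the runtime environment (injected by the secrets manager at deploy time) and the app fails fast if one is missing, rather than falling back to a hard-coded default. A sketch:

```python
import os

def require_secret(name: str) -> str:
    """Read a secret injected into the runtime environment;
    refuse to start if it is absent."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required secret: {name}")
    return value
```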
Static analysis and linting
Code is checked for quality and security issues before merge. We enforce consistent patterns that reduce the surface area for common vulnerabilities — injection, type confusion, unsafe data handling.
Confidentiality
Your data stays within defined boundaries
When we use AI in delivery, we define clear data boundaries before the engagement starts.
Controlled data flows
Contact information — names, emails, phone numbers — is processed within our own services and never reaches external AI providers. Client source code and business logic are not sent to third-party AI services without explicit contractual agreement.
PII classification and handling
We classify personally identifiable information according to the engagement's regulatory context: GDPR data subject categories for EU engagements, HIPAA PHI definitions for healthcare, PCI DSS cardholder data scope for payment systems.
No PII in logs
Application logs never contain full user content, prompts, or PII. We log metadata only — message length, session identifier, response time, error type. This supports GDPR's data minimisation principle and SOC 2 logging controls.
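A minimal sketch of a metadata-only log record (field names are illustrative):

```python
import json
import time

def log_interaction(session_id: str, message: str, error=None) -> str:
    """Emit a metadata-only log line: lengths, identifiers, and error
    type — never the message content itself."""
    record = {
        "ts": int(time.time()),
        "session": session_id,
        "message_length": len(message),   # the message body is never logged
        "error_type": error,
    }
    return json.dumps(record)
```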
EU data residency
All infrastructure runs within the EU. Observability and monitoring data stays in EU-hosted services. When a third-party service's data residency doesn't meet your requirements, we flag it during scoping — not after the work is done.
AI tooling models
How we handle AI tools in your project
Every engagement is different. We adapt our AI tooling to your data governance requirements, offering three models that keep your proprietary data under your control.
Model A
Your corporate AI licenses
We work within your existing subscriptions — GitHub Copilot, Claude, OpenAI Codex, or others. We help fine-tune configurations, manage seat allocation, and control costs so your team gets maximum value from the tools you already pay for.
Model B
On-premise LLM in your datacentre
We help you run LLMs on your own infrastructure. We configure the model, train it on a curated mix of your existing codebase and industry best practices, and ensure nothing leaves your network. Full air-gap capability when required.
Model C
Our managed AI licenses
We use our own LLM subscriptions with contractual guarantees: your data is not shared with other clients, is not used to train models, and is not retained by the provider after processing. We enforce zero-retention API agreements and provide full audit trails on request.
AI layer
AI-specific controls
On top of standard DevSecOps, we apply controls designed for the risks AI tooling introduces.
Prompt injection defence
We validate all inputs against injection pattern blocklists in English and Portuguese. For non-Latin scripts, we translate to English first and then validate — so attacks that exploit language-skewed safety filters don't bypass our perimeter.
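A simplified sketch of that perimeter check. The patterns below are illustrative (production blocklists are larger and maintained per engagement), and the translation call is stubbed out:

```python
import re

# Example injection patterns in English and Portuguese.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"ignora (todas as )?instru\u00e7\u00f5es anteriores", re.I),
    re.compile(r"you are now", re.I),
]

def translate_to_english(text: str) -> str:
    """Placeholder for the real translation service used for non-Latin scripts."""
    return text

def is_injection(user_input: str) -> bool:
    candidate = user_input
    if not candidate.isascii():          # crude non-Latin proxy for this sketch
        candidate = translate_to_english(candidate)
    return any(p.search(candidate) for p in INJECTION_PATTERNS)
```

Translating before matching is the key move: it normalises inputs into the language where both the blocklist and the model's safety training are strongest.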
Hallucination prevention
Models answer only from approved fact sheets. When the model doesn't have the information, it says so and defers to a human. We don't allow models to use their training data for company-specific or client-specific claims.
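The control flow behind this is a retrieval gate. The sketch below uses naive keyword matching (real systems match on embeddings) and invented fact-sheet content, but the shape is the point: no match, no answer:

```python
# Approved fact sheets: the only source the model may answer from.
FACT_SHEETS = {
    "pricing": "Plans start at the tier agreed in your engagement contract.",
    "support": "Support hours are defined in the service-level agreement.",
}

def answer(question: str) -> str:
    """Answer only from an approved fact sheet; otherwise defer to a human."""
    q = question.lower()
    for topic, fact in FACT_SHEETS.items():
        if topic in q:
            return fact
    return "I don't have that information; a team member will follow up."
```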
LLM observability
Every interaction is traced through an observability platform: prompts, completions, token usage, latency, and hallucination risk scores. We review traces for anomalies and provide full audit logs on request — supporting SOC 2 Type II audit trail requirements.
Budget and consumption controls
We cap tokens per response, limit conversation sessions, and enforce monthly spending ceilings at the provider level. If the AI budget is exhausted or the provider API fails, we fall back to static responses — no silent failures, no uncontrolled costs.
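A sketch of that wrapper, assuming a generic `llm_call(prompt, max_tokens)` that returns the text and the tokens consumed (limits and the fallback message are illustrative):

```python
class BudgetedClient:
    """Wraps an LLM call with a per-response token cap and a monthly
    budget; on exhaustion or provider failure it returns a static
    fallback instead of failing silently."""

    FALLBACK = "Our assistant is unavailable right now; please email us."

    def __init__(self, llm_call, max_response_tokens=500, monthly_budget=1_000_000):
        self.llm_call = llm_call
        self.max_response_tokens = max_response_tokens
        self.remaining = monthly_budget

    def complete(self, prompt: str) -> str:
        if self.remaining <= 0:
            return self.FALLBACK          # budget exhausted: static response
        try:
            text, tokens_used = self.llm_call(prompt, self.max_response_tokens)
        except Exception:
            return self.FALLBACK          # provider failure: no silent errors
        self.remaining -= tokens_used
        return text
```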
Operations
Monitoring, logging, and incident response
Security doesn't stop at deployment. We monitor in production and respond to what the monitoring reveals.
Web Application Firewall
We deploy WAF rules as a perimeter layer in front of APIs and application endpoints. Rate-based rules throttle automated abuse. Managed rule sets block known attack patterns. WAF complements application-level validation — it stops volumetric traffic before it reaches your code.
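To illustrate what a rate-based rule does at the edge, here is the same logic as an in-process sliding-window limiter (the limits are illustrative; in production this runs in the managed WAF, not in application code):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window request limiter keyed by client IP, mirroring
    a rate-based WAF rule."""

    def __init__(self, max_requests=100, window_seconds=300):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)

    def allow(self, client_ip: str, now=None) -> bool:
        now = time.time() if now is None else now
        q = self.hits[client_ip]
        while q and q[0] <= now - self.window:
            q.popleft()                   # expire hits outside the window
        if len(q) >= self.max_requests:
            return False                  # throttle: over the limit
        q.append(now)
        return True
```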
Structured logging and log enrichment
Application logs are structured, timestamped, and enriched with contextual metadata: session identifiers, request types, response times, and error categories. Enriched logs feed into alerting rules and make incident investigation faster.
Active and passive monitoring
Active monitoring checks availability and response correctness on a schedule. Passive monitoring watches production logs for anomalies: invocation spikes, error rate increases, unusual token consumption, or unexpected geographic patterns.
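One passive check in sketch form: flag an error-rate spike against a trailing baseline (thresholds are illustrative and tuned per service):

```python
def error_rate(entries: list) -> float:
    """Fraction of log entries at error level."""
    if not entries:
        return 0.0
    return sum(1 for e in entries if e["level"] == "error") / len(entries)

def is_anomalous(recent: list, baseline_rate: float,
                 factor: float = 3.0, floor: float = 0.05) -> bool:
    """Alert when the recent error rate exceeds both an absolute floor
    and `factor` times the trailing baseline."""
    rate = error_rate(recent)
    return rate >= floor and rate > factor * baseline_rate
```

The absolute floor keeps a quiet service from alerting on one stray error; the baseline multiplier catches genuine regressions on busy ones.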
Incident response
When monitoring catches an anomaly, we have defined escalation paths. Rate-limited endpoints return clear error responses. Exhausted AI budgets trigger fallback modes. Every incident produces a log trail for post-mortem analysis and regulatory reporting.
Frameworks we work with daily
We don't just list frameworks — we apply specific controls from each one. Here are the standards we work with across engagements.
GDPR
Data protection and privacy for EU engagements. Data subject rights, lawful basis, data minimisation, and breach notification.
ISO 27001
Information security management. Risk assessment, access controls, incident management, and continuous improvement.
EU AI Act
AI governance and risk classification. High-risk obligations enforceable August 2, 2026. Transparency, documentation, and human oversight.
SOC 2 Type II
Security, availability, and confidentiality controls. Audit trails, access management, change control, and continuous monitoring.
PCI DSS
Payment card data protection. Input validation, encryption, access control, and network segmentation for cardholder data environments.
HIPAA
Protected health information safeguards. Administrative, physical, and technical controls for healthcare data handling.
Also: HITECH, FDA 21 CFR Part 11, FCA, ePrivacy Directive, CCPA/CPRA, WCAG 2.2 AA, UK GDPR, Data Protection Act 2018, OWASP Top 10 for LLM Applications, OWASP Top 10 for Agentic Applications.
Your code, your IP
Human review of all AI-assisted output establishes clear authorship and copyright eligibility. We maintain code provenance — who generated it, who reviewed it, when it shipped — from the first line to production deployment.
We built this website ~85% with AI assistance under these same controls. The dependency scans, the security headers, the input validation, the human review gates, the log enrichment, the least-privilege IAM — every control on this page was applied to build this page.