Healthcare
From Discovery to Production:
A Medical Device Monitoring Platform
Pacemakers and oxygen devices send continuous readings from patients in the field. We built the platform that ingests that data, detects anomalies, and alerts physicians — PHI-compliant, from scratch, in 9 months.
What the platform does
Connected medical devices — pacemakers, oxygen bottles, and other life-critical equipment — send continuous readings from patients in the field. The platform ingests that telemetry in real time, validates it, and processes it against clinical rules.
When a reading is anomalous — a pacemaker rhythm outside expected parameters, an oxygen level dropping below threshold — the system generates an alert. Physicians review those alerts on a dashboard, analyse the patient's data, and decide whether to escalate.
Every step in this chain must be auditable, encrypted, and available around the clock. A missed alert, a corrupted reading, or a gap in the audit trail could directly affect patient care. That's the weight of what this platform carries.
Medical Devices
Pacemakers, oxygen bottles
Ingestion
Validate + encrypt
Processing
Clinical rules engine
Alert Engine
Anomaly detection
Physician Dashboard
Review + escalate
The challenge
The client was building a new connected medical device and needed a cloud-native platform, built from scratch, to support it. Thousands of devices would stream telemetry continuously. The platform had to ingest, process, and store that data with full PHI (Protected Health Information) compliance and 24/7 availability.
This wasn't inventory data or analytics. It was pacemaker readings and oxygen levels. A delayed alert or a gap in the audit trail could directly affect patient safety. Every architectural decision had to account for that.
They needed a team that could own the entire delivery — architecture, backend engineering, infrastructure, DevOps, and compliance — all under one roof. A self-contained unit that could move fast without depending on external teams or handoffs.
The constraints were tight. Nine months from first line of code to production. A distributed team across multiple time zones. And regulatory requirements that left no room for shortcuts: every piece of patient data had to be encrypted, auditable, and access-controlled from day one.
Discovery
We started with a focused discovery phase before writing any production code. The goal was to understand the clinical domain deeply enough to make architectural decisions that would hold up under regulatory scrutiny.
We mapped the full domain model for device telemetry: what data each device type would emit, how often, and in what format. A pacemaker sends different readings than an oxygen bottle, at different intervals, with different alert thresholds. We defined the PHI classification boundaries — which data elements counted as PHI and which did not — because this distinction drives encryption, access control, and audit requirements across the entire system.
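A PHI boundary like the one above can be made concrete in code. The field names and their classifications below are invented for illustration — the real boundary came out of the discovery work with the client's regulatory team. Note the fail-safe default: anything unclassified is treated as PHI.

```python
from enum import Enum

class Sensitivity(Enum):
    PHI = "phi"          # encrypted at rest, every access audited
    OPERATIONAL = "ops"  # device health data with no patient linkage

# Illustrative classification only; the real one is regulatory-reviewed.
FIELD_CLASSIFICATION = {
    "patient_id":  Sensitivity.PHI,
    "heart_rate":  Sensitivity.PHI,
    "device_id":   Sensitivity.OPERATIONAL,
    "battery_pct": Sensitivity.OPERATIONAL,
}

def split_reading(reading: dict) -> tuple[dict, dict]:
    """Split a raw reading into PHI and non-PHI parts so each can
    follow its own encryption and audit path. Unknown fields default
    to PHI — the conservative choice in a healthcare system."""
    phi, ops = {}, {}
    for key, value in reading.items():
        bucket = ops if FIELD_CLASSIFICATION.get(key) is Sensitivity.OPERATIONAL else phi
        bucket[key] = value
    return phi, ops

phi, ops = split_reading(
    {"patient_id": "p-42", "heart_rate": 72, "device_id": "pm-001"})
```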
We worked directly with the client's regulatory team to validate the compliance model. The output was a shared specification that every subsequent architectural decision would reference.
Why every data point must be accounted for
When a pacemaker sends a reading, that reading must be stored immutably. You need to know exactly what was received, when it arrived, and who accessed it afterwards. Healthcare compliance requires a definitive, tamper-proof audit trail — not just the current state of a patient record, but the full history of every change.
We solved this with Event Sourcing. Every state change is stored as an immutable event. Nothing is overwritten. Nothing is deleted. The complete history is always available for audit, and the system can reconstruct the state of any record at any point in time.
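The append-only store at the heart of Event Sourcing can be sketched in a few lines. This is a minimal in-memory stand-in — event names and payloads are hypothetical — but it shows the two properties that matter for audit: nothing is ever overwritten, and the full history of a stream can always be replayed.

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    stream_id: str     # e.g. one stream per device or patient record
    kind: str          # "ReadingReceived", "AlertRaised", ...
    payload: dict
    recorded_at: float

class EventStore:
    """Append-only store: events are never updated or deleted."""
    def __init__(self):
        self._log: list[Event] = []

    def append(self, stream_id: str, kind: str, payload: dict) -> Event:
        event = Event(stream_id, kind, payload, time.time())
        self._log.append(event)
        return event

    def replay(self, stream_id: str) -> list[Event]:
        """Full history for audit, or to rebuild state at any point."""
        return [e for e in self._log if e.stream_id == stream_id]

store = EventStore()
store.append("pm-001", "ReadingReceived", {"heart_rate": 72})
store.append("pm-001", "ReadingReceived", {"heart_rate": 151})
store.append("pm-001", "AlertRaised", {"reason": "heart_rate high"})
history = store.replay("pm-001")
```

Because events are immutable (`frozen=True`) and only ever appended, the audit trail is complete by construction rather than by discipline.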
The platform also needed to handle two very different workloads simultaneously: heavy, continuous writes from thousands of devices streaming telemetry, and fast reads from physician dashboards that need to display patient data and alerts in real time. We used CQRS (Command Query Responsibility Segregation) to separate these paths — high-throughput append-only writes on one side, pre-computed read projections optimised for the dashboard on the other.
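On the read side of CQRS, a projection folds the write-side event stream into exactly the shape the dashboard queries — so reads never touch the write path. The event kinds and view fields below are invented for illustration:

```python
def project_dashboard(events: list[dict]) -> dict:
    """Fold an event stream into a per-patient dashboard view.
    The view is pre-computed, so dashboard reads are simple lookups."""
    view = {"latest_reading": None, "open_alerts": []}
    for event in events:
        if event["kind"] == "ReadingReceived":
            view["latest_reading"] = event["payload"]
        elif event["kind"] == "AlertRaised":
            view["open_alerts"].append(event["payload"]["reason"])
        elif event["kind"] == "AlertResolved":
            view["open_alerts"].remove(event["payload"]["reason"])
    return view

view = project_dashboard([
    {"kind": "ReadingReceived", "payload": {"heart_rate": 72}},
    {"kind": "AlertRaised", "payload": {"reason": "heart_rate high"}},
    {"kind": "ReadingReceived", "payload": {"heart_rate": 90}},
])
```

In the real platform the projection is updated incrementally as events arrive, rather than recomputed from scratch per query; the fold above shows the logical relationship between the two sides.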
Access control was equally critical. In a PHI environment, every user's access must be granular and auditable — a nurse sees different data than a cardiologist, and every access is logged. We used Keycloak for identity management, giving us role-based access control and single sign-on integrated with the client's existing authentication infrastructure.
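The role-based filtering plus audit logging can be sketched as follows. In production the roles come from Keycloak-issued tokens; here a plain dict stands in for the decoded claims, and the role-to-field mapping is invented for the example:

```python
# Illustrative role model — the real mapping is clinically defined.
ROLE_VIEWS = {
    "nurse":        {"spo2", "heart_rate"},
    "cardiologist": {"spo2", "heart_rate", "pacing_history", "ecg"},
}

audit_log: list[tuple[str, str, str]] = []  # (user, patient, role)

def read_patient_data(user: dict, patient_id: str, record: dict) -> dict:
    """Return only the fields the user's role permits, and log the
    access — in a PHI environment every read is recorded."""
    allowed = ROLE_VIEWS.get(user["role"], set())
    audit_log.append((user["name"], patient_id, user["role"]))
    return {k: v for k, v in record.items() if k in allowed}

record = {"spo2": 96, "heart_rate": 72, "ecg": "..."}
nurse_view = read_patient_data(
    {"name": "n1", "role": "nurse"}, "p-42", record)
```

An unrecognised role gets an empty set of permitted fields, so the default is to show nothing rather than everything.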
Building for reliability
The platform runs 24/7 because physicians depend on it. If the system goes down, alerts don't get generated. If alerts don't get generated, a physician might miss an anomalous pacemaker reading. Every infrastructure decision was made with that in mind.
We built the platform as containerised microservices on Kubernetes (AKS), so each service could scale independently. When device load spiked — thousands of devices streaming data simultaneously, firmware updates triggering batch uploads — the ingestion services scaled up without affecting the alerting or dashboard layers.
Every service-to-service communication was encrypted by default using mutual TLS. We deployed a service mesh (Istio) that also gave us circuit breakers to isolate failures before they cascaded, and canary deployments so we could roll out changes to a fraction of traffic before committing.
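In the platform, circuit breaking was Istio's job at the mesh layer; the sketch below shows the underlying idea in application code. After a run of consecutive failures the circuit opens and calls fail fast instead of piling load onto a struggling downstream service.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after `max_failures` consecutive errors
    the circuit opens, and calls fail fast until `reset_after` seconds
    have passed (then one trial call is allowed through)."""
    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at: float | None = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()
            raise
        self.failures = 0  # success resets the failure count
        return result

cb = CircuitBreaker(max_failures=2, reset_after=60.0)

def flaky():
    raise ConnectionError("downstream unavailable")

for _ in range(2):
    try:
        cb.call(flaky)
    except ConnectionError:
        pass  # real failures, counted against the breaker

try:
    cb.call(flaky)
    outcome = "called"
except RuntimeError:
    outcome = "failed fast"  # circuit is now open
```

Failing fast is what stops a slow dependency from cascading: callers get an immediate error they can handle, instead of queueing up behind timeouts.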
We treated infrastructure as code. All deployment configuration lived in Git and was automatically reconciled using GitOps (FluxCD). Every environment — dev, staging, production — was reproducible from a single commit. Cloud resources were provisioned with Terraform, application packaging with Helm. No manual steps. No drift between environments. Infrastructure changes went through the same pull request and review process as application code.
From device to dashboard
A pacemaker reading arrives at the platform. It needs to be stored, processed against clinical rules, and — if anomalous — surfaced as an alert on a physician's dashboard within seconds. The data journey crosses multiple storage layers, each chosen for a specific reason.
Device telemetry is diverse — a pacemaker sends different data than an oxygen bottle, and the format can change between firmware versions. We stored raw telemetry in a flexible document database (CosmosDB) that could ingest varied payloads without rigid schema migrations. Structured data — patient records, device registrations, alert configurations — lived in a relational database (SQL Server). Frequently accessed data was cached (Hazelcast) so physician dashboards loaded fast even under heavy read traffic.
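The caching layer follows the cache-aside pattern: check the cache first, fall through to the database on a miss, and populate the cache for subsequent reads. In production the cache was Hazelcast; a dict with expiry timestamps stands in here, and `load_from_db` is a hypothetical stand-in for the SQL Server query behind the dashboard.

```python
import time

class Cache:
    """Toy TTL cache standing in for Hazelcast in this sketch."""
    def __init__(self, ttl: float = 30.0):
        self.ttl = ttl
        self._items: dict[str, tuple[float, object]] = {}

    def get(self, key):
        entry = self._items.get(key)
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]
        return None  # missing or expired

    def put(self, key, value):
        self._items[key] = (time.time(), value)

db_hits = 0

def load_from_db(patient_id: str) -> dict:
    """Hypothetical stand-in for the relational query."""
    global db_hits
    db_hits += 1
    return {"patient_id": patient_id, "name": "..."}

cache = Cache()

def get_patient(patient_id: str) -> dict:
    record = cache.get(patient_id)
    if record is None:               # cache miss: read through to the DB
        record = load_from_db(patient_id)
        cache.put(patient_id, record)
    return record

get_patient("p-42")
get_patient("p-42")  # second call is served from the cache
```

Under heavy dashboard traffic, repeated reads of the same patient record hit the cache instead of the database, which is what keeps dashboard load times flat.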
We instrumented the full data journey with monitoring and tracing. If a reading took too long to process, if an alert failed to generate, or if a dashboard query slowed down, the operations team saw it immediately. Prometheus collected real-time metrics across every service. Jaeger traced each request from the moment device data arrived through ingestion, processing, alert generation, and storage — essential for understanding where latency lived in a system with dozens of services.
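Tracing boils down to recording named, timed spans for each stage a request passes through. Jaeger does this across service boundaries; the minimal single-process sketch below (stage names are illustrative) shows the shape of the data that makes latency visible.

```python
import time
from contextlib import contextmanager

# Collected spans: (stage name, duration in seconds).
spans: list[tuple[str, float]] = []

@contextmanager
def span(name: str):
    """Record how long the wrapped block took, under a stage name."""
    start = time.perf_counter()
    try:
        yield
    finally:
        spans.append((name, time.perf_counter() - start))

def handle_reading(reading: dict):
    with span("ingest"):
        pass  # validate + encrypt
    with span("rules"):
        pass  # evaluate clinical rules
    with span("store"):
        pass  # append to the event log

handle_reading({"heart_rate": 72})
stage_names = [name for name, _ in spans]
```

With real tracing, each span also carries a shared trace ID, so the stages of one reading's journey can be stitched together across dozens of services — which is exactly what made latency diagnosable here.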
Distributed team coordination
The team was distributed across multiple time zones, which added a coordination challenge on top of the technical complexity. We addressed this through the self-contained team model: architecture, backend engineering, infrastructure, DevOps, and compliance all sat within the same unit. No external dependencies. No handoff queues. When a decision needed to be made, the people who could make it were already in the room.
We established clear async communication patterns and overlapping working hours for cross-timezone collaboration. Architectural decisions were documented in decision records. Sprint ceremonies were time-zone-aware. The GitOps workflow meant that infrastructure changes were visible and reviewable by anyone on the team regardless of location — the Git history was the single source of truth.
Iterative delivery through MVPs
We delivered the platform through a series of progressive MVPs rather than a single big-bang release. Each MVP targeted a specific clinical and technical risk before we moved to the next phase.
Ingestion + PHI
Can we receive and encrypt pacemaker data?
Queries + Dashboards
Can physicians query fast enough to act?
Alerts + Identity
Can we detect anomalies and control access?
Production
Go live — no surprises
The first MVP answered the most fundamental question: can we ingest pacemaker readings at scale and store them with the required PHI encryption and access controls? The second validated whether physicians could query patient data fast enough to act on alerts — testing the CQRS read projections under realistic load. Later MVPs layered in anomaly detection and alert generation, resilience testing under failure scenarios, and the identity management integration that controlled who could see which patient data.
By the time we reached production, every critical path — from device ingestion to physician alert — had been tested and refined across multiple iterations. Each MVP built confidence with the client's regulatory and product teams. When we went live, there were no surprises.
9
Months to production
1000s
Connected devices supported
100%
PHI compliance from day one
24/7
Platform availability
Tech stack
Building a connected health platform?
We'll help you get from architecture to production — PHI-compliant and built for scale.