Clinical AI

Thymos Health — AI Medical Chatbot for Conservative Care

PHI-safe RAG-powered AI chatbot integrated with Oyster EHR — built to handle real patient data in production.

0 PHI sent to third-party models
1st Attempt — passed security review
EHR Oyster integration live
12 wks Architecture to clinical launch
Overview

Thymos Health had a clear AI use case: an intelligent medical chatbot to support their conservative care platform. But every time the team tried to move from concept to production, the same wall appeared — how do you let an AI model interact with real patient data without creating a PHI liability?

Their sandbox prototype worked well in isolation. But the moment real patient records entered the picture, legal and compliance concerns shut it down. They needed someone who could own the compliance architecture, not just the AI build.


The Challenge

One wall blocked every production attempt

Thymos Health needed a PHI-safe AI chatbot that could answer patient questions using their actual EHR data — without sending protected health information to third-party AI models or failing enterprise security reviews.

Every production attempt hit the same compliance wall
  • The sandbox prototype worked — but real patient records triggered legal concerns every time
  • PHI isolation and tokenization had to be built before any AI feature could connect to live data
  • They needed someone to own the compliance architecture, not just the AI build
EHR integration with emergency detection built into the chatbot
  • Oyster EHR integration required scoped, context-aware data retrieval — not raw record access
  • Emergency detection had to be embedded in the response pipeline, not added as an afterthought
  • Every patient interaction required an immutable audit trail from the first API call
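The immutable-audit-trail requirement above is often met with an append-only, hash-chained log, where each entry embeds the hash of its predecessor so any later tampering breaks the chain. A minimal sketch of that idea follows; the field names and `AuditLog` interface are illustrative assumptions, not Thymos's production schema:

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log; each entry is chained to the previous entry's hash."""

    def __init__(self):
        self.entries = []

    def record(self, actor: str, action: str, patient_id: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"ts": time.time(), "actor": actor, "action": action,
                "patient": patient_id, "prev": prev_hash}
        # Hash the canonical JSON of the entry body, then store it alongside.
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every hash and link; any edit to a past entry fails."""
        prev = "genesis"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True
```

In practice the chain head would also be anchored externally (e.g. to write-once storage) so the whole log cannot be silently regenerated.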

Our Solution

PHI-safe AI chatbot — architecture first, features second

We built a RAG-powered AI chatbot with scoped PHI retrieval integrated directly with Oyster EHR, emergency detection with auto-escalation, and fully HIPAA-compliant AWS infrastructure. The system passed enterprise security review on the first attempt.

Safety layer built before any AI touched live data
Before a single AI feature was connected to live patient data, we built the safety layer: PHI isolation, tokenization, and encryption in transit and at rest, with strict access controls scoped to patient context. Only then did we connect the AI layer.
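The tokenization step above can be sketched as follows: identifiers are swapped for opaque tokens before any text leaves the trusted boundary, and mapped back only after the model response returns. The patterns, token format, and vault structure here are illustrative assumptions, not the production rule set:

```python
import re
import uuid

# Hypothetical PHI patterns for illustration only; a real rule set covers
# all HIPAA identifier categories (names, dates, addresses, etc.).
PHI_PATTERNS = {
    "mrn": re.compile(r"\bMRN[:\s-]*\d{6,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def tokenize_phi(text: str, vault: dict) -> str:
    """Replace PHI matches with opaque tokens; keep the mapping in `vault`."""
    for kind, pattern in PHI_PATTERNS.items():
        for match in set(pattern.findall(text)):
            token = f"[{kind.upper()}_{uuid.uuid4().hex[:8]}]"
            vault[token] = match
            text = text.replace(match, token)
    return text

def detokenize(text: str, vault: dict) -> str:
    """Restore original values inside the trusted boundary."""
    for token, value in vault.items():
        text = text.replace(token, value)
    return text
```

Only the tokenized text ever reaches the third-party model; the vault stays server-side, encrypted at rest.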
Gemini Flash for fast clinical responses
The chatbot was built on Google Gemini Flash for fast, cost-efficient clinical responses, deployed on AWS with HIPAA-compliant infrastructure. It integrates directly with Oyster EHR, pulling the right patient context for each query without exposing raw PHI to the model. Emergency detection is built into the chatbot logic, automatically flagging responses that indicate urgent care needs, with a clinical escalation workflow for critical patient inputs.

Outcomes

Results

0 PHI sent to third-party models
1st Attempt — passed security review
EHR Oyster integration live
12 wks Architecture to clinical launch

PHI-safe AI chatbot live in production — real patient data, zero compliance gaps

RAG pipeline connected to Oyster EHR with scoped, context-aware retrieval

Emergency detection layer built in — auto-escalation on critical patient inputs

Full HIPAA-compliant AWS infrastructure with audit trails on every interaction

Passed enterprise security review — BAA-ready from day one

Related Solution
8-Week Healthcare AI Development
Get Started

Ready to build AI the right way?

Tell us your AI use case — we'll map a compliant path to production in 30 minutes.
