Narrative: Health Coach
The AI Companion That Knows Your File
The Story
María sees that her glucosa is “en revisión.” The blue indicator on the results screen tells her it is not in the optimal zone, but it is not alarming either. Still, the number nags at her. She does not want to call her doctor for something that might be nothing. She does not want to search online and end up on a forum post from 2014 about diabetes complications.
She opens the Health Coach via the floating button in the bottom-right corner of every screen. The input field reads: “Pregúntale a tu Health Coach…”
She types: “Mi glucosa subió, debo preocuparme?”
Three animated dots appear. Four seconds later:
“Tu glucosa está en 105 mg/dL, un poco por encima de tu rango óptimo (70-100). Comparada con tu última medición de 98, subió un 7%. Este cambio está dentro de la variabilidad normal de tu cuerpo, así que no es motivo de preocupación inmediata. Tu plan actual incluye actividad física 5 veces por semana y dieta baja en azúcar – seguir con eso es lo más importante. Si quieres, podemos revisar tu tendencia completa.”
Inside the response, three elements are highlighted:
- Glucosa – blue chip, tappable, navigates to her glucosa detail screen
- actividad física – green chip, tappable, shows her current exercise plan
- revisar tu tendencia completa – outlined chip, tappable, opens the trend comparison view
María taps “Glucosa.” She sees the 3-zone bar, the trend chart, the harm anchor at 126. She is at 105. The harm anchor is far away. She closes the detail view and returns to the chat. She understands her result. She did not panic. She did not call the clinic. She spent 45 seconds.
Principle: Contextual Intelligence, Not Generic AI
Any general-purpose AI can answer health questions. So can a search engine. The answer to “should I worry about glucose of 105?” from any general-purpose AI will be: “A glucose of 105 mg/dL is considered slightly above normal. You may want to consult your doctor. Here are some dietary recommendations…” Generic. Cautious. Useless for María specifically.
ADEN’s Health Coach answers YOUR health questions. It knows:
- María’s 30 biomarker values and their trend lines
- Her 6 pharmacogenomic gene statuses (CYP2C19, CYP2D6, CYP2C9, VKORC1, SLCO1B1, and SLC16A11)
- Her 3 polygenic risk scores
- Her current adherence rate to her health plan
- Her medication and supplement list
- Her last consultation summary and active orders
- Her previous Health Coach conversations
When María asks about her glucosa, the Health Coach does not give a textbook answer. It gives HER answer: her specific value, her specific trend, her specific RCV assessment, her specific plan. The response references data that only exists inside ADEN’s engine.
This is the fundamental difference between a chatbot and a copilot. A chatbot retrieves information from a knowledge base. A copilot has read your entire medical file and synthesizes it into a response that applies to you and only you.
The JTBD captures it precisely: “Quiero hablar con alguien que entienda mis resultados y me explique sin tecnicismos” (“I want to talk with someone who understands my results and explains them without jargon”). Not “someone who knows about health.” Someone who knows about MY health.
Keyword Highlighting: Making AI Actionable
Research in AI health apps (2024) demonstrated that highlighting keywords in chip format inside AI responses converts passive text into interactive elements. Users who received chip-formatted responses took 3.2x more follow-up actions than users who received plain text.
ADEN’s Health Coach implements three chip types:
Blue chips (#0f2fc7) – Biomarkers. Any biomarker mentioned in the response becomes a tappable chip that navigates to the detail screen. “Tu [Glucosa] está en 105 mg/dL.” The chip is not decorative. It is a navigation shortcut. The patient who wants more context on any mentioned biomarker gets it in one tap.
Green chips (#10b981) – Supplements and actions. Supplements, medications, and lifestyle actions mentioned in the response become green chips. “Tu plan incluye [Omega 3] y [actividad física].” Tapping a supplement shows its purpose, dosage, and interactions. Tapping an action shows the relevant section of the patient’s plan.
Outlined chips – CTAs. Actions the patient can take become outlined chips: “[Agendar consulta],” “[Ver tu plan],” “[Revisar tu tendencia].” These are not suggestions buried in prose. They are tappable buttons embedded in the conversation flow.
The design rule: every Health Coach response must contain at least one actionable chip. If the AI generates a response with no actionable element, it fails the output filter. The patient should never reach the end of a response and think “ok, but what do I do now?”
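The chip rule above can be enforced as a post-processing step on the raw model output. A minimal Python sketch, assuming the model is instructed to emit chips as `[[type:label]]` markers; the marker syntax, the `Chip` type, and the function names are illustrative, not ADEN's actual implementation:

```python
import re
from dataclasses import dataclass

# Hypothetical chip markup: the model wraps interactive spans as
# [[type:label]], e.g. [[biomarker:Glucosa]] or [[cta:Agendar consulta]].
CHIP_PATTERN = re.compile(r"\[\[(biomarker|action|cta):([^\]]+)\]\]")

@dataclass
class Chip:
    kind: str   # "biomarker" (blue), "action" (green), "cta" (outlined)
    label: str

def extract_chips(response: str) -> list[Chip]:
    """Pull every chip marker out of a raw model response."""
    return [Chip(kind, label) for kind, label in CHIP_PATTERN.findall(response)]

def passes_output_filter(response: str) -> bool:
    """Design rule: a response with zero actionable chips is rejected."""
    return len(extract_chips(response)) >= 1

raw = "Tu [[biomarker:Glucosa]] está en 105 mg/dL. [[cta:Revisar tu tendencia]]"
assert passes_output_filter(raw)
assert not passes_output_filter("Tu glucosa está bien.")  # fails the filter
```

A rejected response would be regenerated or fall back to the deterministic layers, so the patient never sees a dead-end answer.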
The Safety Net
The Health Coach runs on Layer 3 of the engine architecture: AI-powered synthesis using a large language model, augmented by the full patient context. Layer 3 is powerful. It can generate nuanced, personalized health narratives that no rule-based system could produce.
Layer 3 can also fail.
AI can hallucinate. It can generate a recommendation that sounds plausible but has no clinical basis. It can reference a study that does not exist. It can suggest a supplement interaction that is biologically incorrect. Five different models were tested against ADEN’s engine during validation, and all five fabricated supplementation formulas. Out of 26 AI-proposed rules, only 1 was validated. Zero numerical multipliers were correct.
This is why the Health Coach does not operate alone.
Every response generated by Layer 3 passes through Layers 1 and 2 before reaching the patient:
Layer 3 (AI) generates response
--> Layer 1 (76 deterministic rules) checks clinical accuracy
--> Layer 2 (longitudinal data) validates trend claims
--> Anti-nocebo filter checks language
--> Keyword chip generator creates interactive elements
--> Response delivered to patient
If Layer 3 fails or produces an unfounded claim, the patient does not see an error screen. They see a graceful degradation: “Tus datos básicos están disponibles. El análisis detallado llegará pronto.” Layers 1 and 2 continue to work – the patient still receives their alerts, their collision warnings, their trend data. The AI enriches. The math protects.
The stat: 0% unfounded hypotheses in 612 evaluated interpretations. Not because the AI is perfect, but because the verification layer is.
Escalation: Knowing When to Step Back
The Health Coach is not a doctor. It must never pretend to be one. The disclaimer “No reemplaza la consulta médica” is permanently visible – not buried in settings, not dismissed after first view, but present in every session as a subtle, persistent reminder.
More importantly, the system detects when a question exceeds its scope and escalates proactively:
Critical values. If María’s latest lab results include a value that triggers a clinical alert (e.g., potassium > 6.0), the Health Coach does not attempt to reassure. It says: “Este resultado necesita atención médica. Te recomiendo contactar a tu médico hoy.” The CTA is not a chip – it is a prominent button: [Agendar consulta].
Diagnostic requests. If María asks “tengo diabetes?”, the Health Coach does not diagnose. It responds: “No puedo hacer diagnósticos. Lo que puedo decirte es que tu glucosa está en 105 mg/dL, por encima de tu rango óptimo. Tu médico puede evaluarte con más detalle. ¿Quieres agendar una consulta?” Honest about limits. Helpful within them.
Emotional distress. If the input contains language patterns suggesting anxiety or distress (“estoy asustada,” “me voy a morir,” “no puedo dormir pensando en esto”), the Health Coach shifts to an empathetic, grounding tone: “Entiendo que te preocupa. Tus resultados no indican una emergencia. Si la preocupación persiste, hablar con tu médico puede darte la tranquilidad que necesitas.” It does not minimize the emotion. It does not over-reassure. It acknowledges and offers a path forward.
Out-of-scope topics. If the patient asks about a condition, medication, or symptom not covered by their ADEN data, the fallback is clear: “No puedo responder eso. Para medicamentos o diagnósticos, habla con tu médico.” Two CTAs: [Agendar consulta] and [Reformular pregunta]. The patient is never left without a next step.
The AI is honest about its limits. This builds more trust than pretending to know everything. A system that says “I don’t know, but here’s who does” earns more credibility than one that guesses.
Decisions
| # | Decision | Chosen | Rejected | Rationale |
|---|---|---|---|---|
| 1 | AI response format | Natural language with keyword chips | Structured cards, bullet lists, medical report style | Conversational format feels like talking to a person; chips add interactivity without sacrificing readability |
| 2 | Chip types | 3 types (biomarker/blue, action/green, CTA/outlined) | Single chip type, no chips, inline links | 3 types create visual hierarchy; color coding maps to existing ADEN design system |
| 3 | Response time target | 4 seconds including context retrieval | 1 second (too fast for quality), 10+ seconds (too slow for conversation) | 4 seconds feels like a thoughtful pause, not a delay; includes full pipeline (context + prompt + filter + chips) |
| 4 | Disclaimer placement | Persistent in every session, subtle but visible | First-use only, buried in settings, dismissible | Medical-legal requirement; persistent placement prevents normalization bias (“I forgot it said that”) |
| 5 | Escalation mechanism | Proactive AI detection + prominent CTA | User-initiated only, no escalation, generic “see a doctor” | Proactive detection catches cases the patient would not escalate themselves; prominent CTA reduces friction to action |
| 6 | Fallback when AI fails | Graceful degradation to L1+L2 data with “analysis coming soon” | Error screen, retry button, blank response | Patient always gets something useful; the system never appears broken |
| 7 | Voice input | Dictation button (speech-to-text, then sends as text) | Real-time voice conversation, no voice | Dictation is accessible and familiar; real-time voice is a different product with different expectations |
| 8 | Conversation history | Persistent, searchable, read-only replay | Ephemeral (deleted after session), editable | History lets patients revisit explanations; read-only prevents confusion about what the AI actually said |
Engine Connection
The Health Coach is Layer 3’s human interface. It is the surface through which the engine’s most sophisticated capability – AI-powered clinical synthesis – reaches the patient.
The 4-second response pipeline:
t=0.0s Patient sends message
t=0.2s Context retrieval: 30 biomarkers, 6 genes, 3 PRS,
adherence, meds, supplements, last consultation, active orders
t=0.5s Prompt construction: patient question + full context +
anti-nocebo guardrails + output format spec
t=3.2s AI generation complete (LLM)
t=3.4s Layer 1 verification: clinical accuracy check against
76 deterministic rules
t=3.6s Layer 2 validation: trend claims cross-referenced with
longitudinal data
t=3.7s Anti-nocebo filter: language scan for prohibited terms,
replacement with approved alternatives
t=3.9s Chip generation: biomarkers, actions, and CTAs extracted
and formatted as interactive elements
t=4.0s Response delivered to patient
The cost: approximately $0.01 per interaction. This makes the Health Coach economically viable for every patient, every day, without usage caps or premium tiers. A patient who asks 3 questions per day costs $0.03 in AI inference. A patient who asks 10 questions during an anxious week costs $0.10. At these economics, the Health Coach is not a luxury feature – it is a default companion.
The cost structure also means the system can be generous with context. Instead of truncating patient data to save tokens, the full clinical profile is included in every prompt. The AI has the complete picture every time it responds. This is why the responses feel personalized rather than generic – the model has access to everything that makes María’s health profile unique.
Compare this to a general-purpose AI: the patient would need to type “I’m a 38-year-old woman from Medellín, my glucose is 105, it was 98 last time, I’m on simvastatin, I have a CYP2C19 variant, my doctor told me to exercise 5 times a week…” every single time they want a contextualized answer. The Health Coach knows all of this already. The patient types five words. The system fills in the rest.
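The “system fills in the rest” step is essentially prompt construction with the full profile inlined, as described at t=0.5s of the pipeline. A minimal sketch, assuming the clinical context is serialized as JSON; the field names, guardrail wording, and function name are illustrative, not ADEN's actual prompt:

```python
import json

def build_prompt(question: str, patient: dict) -> str:
    """Inline the complete clinical profile — no truncation —
    so every answer is grounded in this patient's own data."""
    return (
        "You are a health companion. Answer using ONLY the patient context.\n"
        f"PATIENT CONTEXT:\n{json.dumps(patient, ensure_ascii=False)}\n"
        f"QUESTION: {question}\n"
        "Rules: no diagnoses, no alarmist language, "
        "include at least one actionable element."
    )

# Hypothetical slice of a patient profile:
patient = {
    "biomarkers": {"glucosa": {"value": 105, "unit": "mg/dL", "previous": 98}},
    "genes": {"CYP2C19": "intermediate metabolizer"},
    "plan": "actividad física 5x/semana, dieta baja en azúcar",
}
prompt = build_prompt("Mi glucosa subió, debo preocuparme?", patient)
assert "105" in prompt and "CYP2C19" in prompt
```

Because the whole profile rides along on every call, the model never has to ask the patient for context it should already have; at roughly $0.01 per interaction, the extra tokens are a cost worth paying.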
Conversational Quality
The Health Coach is neither a general assistant nor a text processor. It is a domain-specific companion with deep contextual knowledge of one person’s health. The correct architecture for a health AI: narrow scope, deep context, deterministic safety net.
The conversational quality is essential. The assistant should feel like a calm, competent person – never robotic, never condescending, never uncertain in a way that makes the user uncertain. The Health Coach follows this standard:
- Calm. The response to “debo preocuparme?” never starts with “ALERTA” or “Es importante que sepas.” It starts with the fact: “Tu glucosa está en 105 mg/dL.”
- Competent. The response references specific values, specific trends, specific plans. Not “you might want to consider dietary changes” but “tu plan incluye actividad física 5 veces por semana y dieta baja en azúcar.”
- Honest. When it does not know, it says so. When the question exceeds its scope, it escalates. “No puedo responder eso” is not a failure – it is a signal of integrity.
The chip system follows the same principle. Modern design language increasingly uses tappable, contextual elements embedded in content. The chips are the Health Coach’s version of this pattern: structured interactivity embedded in natural conversation, not bolted on as separate UI.
The Health Coach is not trying to replace the doctor. It is trying to be the knowledgeable friend who sits with you at the kitchen table, looks at your lab results, and says: “Here is what this means. Here is what you can do. And here is when you should call your doctor.” Every patient deserves that friend. Most do not have one.
ADEN makes one for $0.01.