Recently, Oura introduced its first custom AI model for women’s health—a new step toward more personalized, evidence-based insights for members. Behind the scenes, Tanvi Jayaraman, MD, and Chris Curry, MD, PhD, were two of the lead clinicians responsible for shaping and training the model, grounding every decision in clinical rigor, safety, and real-world relevance.

Dr. Tanvi is the Clinical Lead of Health AI at Oura, where she bridges medicine, artificial intelligence, and product strategy to advance responsible, trustworthy tools for everyday health. Dr. Chris is Oura’s Clinical Director of Women’s Health, helping to guide the vision for Oura’s global healthcare programs and partnerships.

In this Q&A, they unpack how the model was built, the safeguards and validation work that went into it, and what “trustworthy health AI” means when you’re working at the intersection of research, technology, and health data. 


Q: Why does women’s health need a different approach to AI than other areas?

Dr. Tanvi Jayaraman, Clinical Lead of Health AI at Oura

Dr. Tanvi: Across big tech, we’re seeing a shift from a research-first, safety-focused approach to more of a “move fast and ship products” mindset. In many industries, that might feel uncomfortable but ultimately manageable. In health, and especially in women’s health, that mindset can be dangerous.

Women’s health has long been built on an incomplete evidence base: underfunded research, exclusion from clinical trials, biased diagnostic criteria, and a long history of concerns being minimized or dismissed. Building AI on top of that already-fragile foundation doesn’t just risk being unhelpful; it risks hard-coding those failures into the next generation of tools.

For me, “trustworthy health AI” in women’s health means slowing down enough to interrogate the evidence, fill gaps where we can, and be honest about uncertainty where we can’t. It means holding ourselves to a higher bar than “does it work on average?” and asking, “Who could this miss or harm, given the history of how women’s health has been treated?”

Dr. Chris: When an AI system gives health‑related guidance, it’s stepping into territory that has historically failed women. That’s why we believe any model in this space has to be held to a higher bar: clinically grounded, transparent about its limitations, and designed to support—not override—human care.

Q: What is Oura building and how is it different?

Dr. Chris Curry, Clinical Director of Women’s Health at Oura

Dr. Chris: With this model, we’re introducing our first proprietary, fine-tuned model for women’s health within the Oura Advisor experience. It’s grounded in clinically validated women’s health research that’s been hand-selected by real humans (ourselves)! I asked myself what information I would feel comfortable using to train the residents I work with: the research currently shaping care, guidelines that are well built and reputable, and trusted writing about health aimed at a patient audience. We’ve carefully curated the knowledge behind every answer, so responses are anchored in trusted sources rather than an untraceable mix of blogs and random websites.

Dr. Tanvi: Releasing it in Oura Labs first is also intentional. Labs is where we can learn alongside our members before we scale—so we can test, listen, and iterate in a transparent way, rather than pushing a new model into everyone’s daily health decisions from day one. I think of Labs as part of our clinical evaluation system. We combine offline testing, expert review, and safety guardrails with real‑world usage signals: where people get stuck, where they feel reassured, where they’re still confused. That loop is how we make sure this isn’t just a clever model in a sandbox, but something that actually supports women in the messy, real contexts they’re using it in.

Q: What makes the foundation of this women’s health model different from general‑purpose AI tools?

Dr. Tanvi: Knowledge curation is only part of the story when it comes to developing an AI model. AI is shaped not just by what it reads, but by the data from which it learns. Within Oura Advisor, a proprietary model like this works best when combined with high-quality sensor data, so the model is interpreting questions through the lens of what’s actually happening in that person’s body, not just a generic symptom search. The result is guidance that is more context-aware, aligned with how women actually experience their health, and delivered within a privacy-first environment on Oura-controlled infrastructure.

Dr. Chris: This is especially important for women’s health. There are many areas where women have been understudied—pregnancy, perimenopause, and other hormonally dynamic life stages, for example. By combining vetted women’s‑health research with each member’s individual health experiences, we can start to fill some of those gaps.

The model can surface patterns in sleep, activity, stress, and more, and place symptoms or changes into the context of a broader evidence base. That context does more than inform; it helps people feel seen in their data—especially in parts of health where their experiences have often been minimized or misunderstood.

Q: How do you evaluate whether the model is “good enough” for real people?

Dr. Chris: For us, accuracy is a necessary benchmark, but it’s not all we measure against. Oura Labs is where we treat the real world as an evaluation partner, and we ask a broader set of questions than “Did the model get the facts right?”

We evaluate, for example:

  • Is this answer understandable to someone who is worried, tired, or overwhelmed?
  • Does it acknowledge uncertainty honestly, especially where the science is still evolving?
  • Could it unintentionally lead someone to ignore concerning symptoms or delay care?

Dr. Tanvi: We know we may not get everything right on the first try. What we commit to is being transparent about limitations, learning visibly, and keeping humans firmly in the loop—both in how we review the model and in how we encourage members to use it alongside clinical care, not instead of it.

Q: How does this women’s health model fit into Oura’s broader health AI vision?

Dr. Tanvi: We’ve been clear about our approach to AI, privacy, and architecture at ŌURA, and this model is very much that strategy in action. We’ve been investing for years in the infrastructure, governance, and clinical partnerships needed to support safe health AI, and this women’s health model is the first of many places you’ll see that work show up for members.

Dr. Chris: From my side as a clinician, what’s exciting is that this isn’t “AI for AI’s sake.” It’s a deliberately narrow, clinically grounded model that shows what’s possible when you bring together longitudinal biometrics, vetted women’s health research, and a privacy‑first architecture. We see this as a blueprint: today it’s focused on women’s health, but the same principles—specialized models, strong guardrails, and member control over their data—will guide how we expand into other areas of health over time.

Dr. Tanvi: And that throughline matters for trust. The commitments we made last year—about hosting models on ŌURA‑controlled infrastructure, not using conversations to train public or third‑party systems, and designing for member choice—aren’t marketing lines we dust off for launches. They’re the backbone of how we build. From my perspective, this model is one visible chapter in a longer story we’re writing about what responsible health AI should look like: clinically evaluated, transparent about its limitations, and built on an infrastructure that treats health data as something to be protected. 

Q: How will members discover this model in Oura Advisor, and what kinds of questions should they try first?

Dr. Tanvi: Discoverability is a real challenge in health AI. If something helpful is buried three taps deep, it might as well not exist. That’s why we’ve built this directly into the existing Oura Advisor and Oura Labs experience for eligible members. Instead of asking people to learn a new tool from scratch, we meet them where they already are—looking at their Readiness, their cycle patterns, their sleep—and invite them to ask a question when it naturally arises.

Dr. Chris: When someone enrolls in Women’s Health features and is eligible for testing the new model, they’ll see clear entry points into Oura Labs. From there, we guide them with example prompts, because we know “Ask me anything” can feel overwhelming. Some of my favorite starter questions are ones I hear in clinic all the time, like:

  • “My cycle has become irregular—what could be going on, and what should I talk to my provider about?”
  • “I’m in my second trimester and my sleep has tanked. Based on my patterns, what small changes might help?”
  • “I think I might be entering perimenopause. What should I be watching for in my data, and when is it worth bringing up with my clinician?”
  • “My luteal phase is when I feel most wiped. Can I adjust my movement and recovery expectations during that window?”

During a visit, there’s rarely enough time to unpack all of that. What this model can do is help women translate what they’re feeling and seeing in their data into a clearer story they can bring into the exam room. If we can make it easy to discover, easy to try once, and genuinely useful in preparing for real‑world care, we think people will keep coming back—not just because it’s novel, but because it makes their actual healthcare a little less exhausting.

Q: Many people are understandably concerned about AI and privacy. How are you approaching data protection and member trust?

Dr. Tanvi: When we talk about “responsible health AI” at Oura, privacy is at the center. That means being deliberate about what data is used, where it lives, and what the model is and isn’t allowed to do. This experimental women’s health model runs on Oura-controlled, secure infrastructure; conversations are not sold or shared and are not used to train third-party AI models. Within Oura’s systems, they may only be used to improve your personalized health guidance in line with our Privacy Policy, and the system is built to comply with global privacy standards and data-protection regulations.

This privacy-first, owned-and-operated model represents a meaningful step forward in Oura’s vision for private AI, first articulated last year. Built by our clinical and technical teams using webAI’s knowledge-graph technology, the women’s health model demonstrates how deep domain expertise and advanced AI can come together within a privacy-first architecture. Through this partnership, ŌURA is pushing the leading edge of AI innovation—delivering high performance and a seamless user experience without compromising user privacy.

Dr. Chris: Just as importantly, you stay in control. Participation in Oura Labs is entirely optional, and members can choose not to join—or to opt out—at any time. In Advisor’s settings, you can see the Memories saved from your conversations and manage or delete them whenever you’d like, and you can also opt out of Oura’s women’s health features. Our aim is that any advances we make in AI are built on a privacy‑first foundation and preserve your control over your own health information.

Q: How do you see this integrating into clinical care in practice?

Dr. Chris: Oura Advisor is not a doctor. It doesn’t diagnose conditions or prescribe treatments. It’s designed to give clear, evidence-based context so that when a person does see their clinician, they’re walking in with better questions, a clearer sense of their patterns, and a bit more confidence. Body literacy is a big deal for us at Oura; we want our members to be willing to listen to and learn from their bodies. As a practicing OB/GYN, I see it as an education tool that can inform and help people prepare for a doctor’s visit, not a replacement for one.

Dr. Tanvi: Exactly. When I think about the future of women’s health AI, I don’t imagine it replacing a clinician; I imagine it sitting next to a woman at home the night before her appointment. It can help her connect what she’s feeling to what she’s seeing in her data, organize her concerns, and decide what to bring up in a 15‑minute visit. In that sense, the goal isn’t to replace the conversation in the exam room—it’s to bridge the gaps between appointments and change the quality of conversations during them, so women feel more informed, more prepared, and more in control of their own care.

For me, this model is one tool to do that. It can help someone make sense of what they’re feeling and seeing in their data, translate that into clearer questions, and surface what might be most important to raise in a short visit. The aim is to empower the member with the context and language she needs so that conversation can be more collaborative, more informed, and more aligned with what matters most to her.

Q: With health AI evolving so quickly, why is a more deliberate, careful approach essential?

Dr. Tanvi: The pace of innovation is exciting—it can expand access to guidance and help people understand and advocate for their own health in ways we couldn’t imagine a few years ago. But in health, if we move fast without clear guardrails, we risk hard‑coding old biases and creating new kinds of harm, especially for women who have already been underserved.

Dr. Chris: With this model, we’re trying to offer a different example. It’s built on clinically grounded data, reviewed by humans, and tested in the real world before we scale. Our goal is for people to trust not just the answers they see on screen, but to understand the way those answers were created, and how we approach this work philosophically. We think the ‘why’ of what we are building is just as important as the ‘what’. That’s the standard we’re holding ourselves to: AI that women can trust in both its voice and its foundations.


About the Oura Experts

Chris Curry, MD, PhD, is Oura’s Clinical Director of Women’s Health, helping shape the company’s global vision for women’s wellness across science, research, product, and healthcare partnerships. Her work ensures that women’s health isn’t treated as a niche topic but as an essential part of every conversation about health at Oura.

Before joining Oura, Dr. Curry helped pioneer women’s health innovation at Apple, where she led the development of Cycle Tracking and Pregnancy Mode, and played a key role in the Apple Women’s Health Study. Her background bridges medicine, research, and digital health—connecting the dots between clinical expertise and cutting-edge technology.

An educator at heart, Dr. Curry has trained future OB/GYN providers as Assistant Clerkship Director at Boston University and Associate Program Director at Jackson Memorial Hospital, and she continues to teach medical students and residents through her weekly clinical practice at Kaiser.

Tanvi Jayaraman, MD, bridges medicine, artificial intelligence, and product strategy to advance responsible AI in consumer health. Her work ensures that every Oura feature—whether member- or clinician-facing—meets the highest standards of scientific rigor, usability, and trust.

Before joining Oura, Dr. Jayaraman helped lead AI strategy projects at Bain & Company for global diagnostics and pharmaceutical companies, turning early AI pilots into real-world health products. She also advised digital health startups on product, regulatory, and clinical strategy, and previously worked on Apple’s clinical team, shaping early concepts for next-generation digital health tools.

Dr. Jayaraman earned her MD from Stanford University School of Medicine, where she served as Women’s Clinic Lead at Arbor Free Clinic and completed a healthcare leadership fellowship in the Office of the CEO at Stanford Health Care.