How I Used AI to Question My Girlfriend’s Brain Tumor Treatment (Without Losing My Mind)
By An Informed Patient-Partner
When my girlfriend’s prolactin-secreting brain tumor kept coming back despite “doing everything right,” I felt betrayed by a medical system I’d always trusted. We watched her body change—fatigue, missed periods, bone loss—while lab results, scans, and rushed appointments seemed to tell an incomplete story. That’s when I started using AI tools—not as a miracle cure, but as a way to understand her condition, challenge assumptions, and advocate for safer, smarter care.
This is not a story about AI replacing doctors. It’s about how carefully used AI can help you:
- Decode complex terms like “prolactinoma,” “macroadenoma,” and “recurrence risk.”
- Prepare better questions before a neurology or endocrinology visit.
- Spot conflicting recommendations and ask for clarification.
- Feel less alone in a maze of tests, medications, and side effects.
When “Standard Treatment” Isn’t Enough: Living With a Recurrent Prolactinoma
My girlfriend, Amy, was 25 when her body started sending distress signals: crushing fatigue, months without a period, new anxiety, and test results showing bone density loss more typical of someone decades older. After several misdiagnoses, an MRI finally revealed the culprit—a prolactinoma, a benign pituitary tumor that overproduces prolactin.
Prolactinomas are one of the most common pituitary adenomas. According to the National Center for Biotechnology Information (NCBI), most are treated first with dopamine agonist medications like cabergoline or bromocriptine, which can shrink the tumor and normalize hormone levels in many patients.
At first, everything looked textbook:
- Start medication to lower prolactin.
- Watch the MRI show a smaller tumor.
- Plan to taper the drug once things stabilized.
But when Amy’s tumor markers and symptoms started creeping back after dose adjustments, the confidence in “standard of care” began to crumble. Different doctors had different interpretations:
- “The labs look fine; give it time.”
- “Maybe the MRI changes aren’t clinically significant.”
- “Surgery is an option, but let’s wait and see.”
“I felt like a walking lab result,” Amy told me. “Everyone had an opinion about my numbers, but few seemed to be looking at me—how I felt day to day.”
That’s the moment many people lose trust in the system—not because doctors are careless, but because complex conditions, short appointment times, and fragmented records can make individualized care extremely hard.
Why I Turned to AI: Not for Miracles, But for Clarity
AI tools became part of Amy’s journey almost by accident. I was already following rapid advances in large language models—systems trained on vast amounts of medical literature, textbooks, and clinical guidelines. Watching her bounce between specialists, I wondered: could AI help us understand the landscape better, even if it couldn’t give us a definitive answer?
I set a few non-negotiable rules from day one:
- AI would never decide treatment. It could only generate questions and summaries.
- Anything AI suggested had to be cross-checked against trusted sources (guidelines, PubMed, institutional websites).
- Amy’s medical team would always have the final say.
Within those guardrails, AI helped in surprisingly practical ways:
- Explaining medical jargon from MRI reports in plain language.
- Summarizing long review articles on dopamine agonist therapy.
- Generating checklists of questions to bring to our endocrinology visits.
- Highlighting when recommendations differed between professional guidelines.
Research on clinical AI is evolving quickly. A 2023 review in npj Digital Medicine highlighted both the promise and limits of large language models in healthcare: they can summarize and explain information well, but they can also “hallucinate” facts and lack access to your real-time medical record. That’s why human oversight isn’t optional—it’s essential.
How We Actually Used AI in a Brain Tumor Journey
Over time, I developed a simple workflow that made AI genuinely helpful without crossing into unsafe territory. Think of it as building a smarter notebook—not an online doctor.
1. Turning Raw Reports into Understandable Language
MRI reports and lab results are full of phrases like “hypoenhancing lesion within the sella” or “borderline elevated prolactin with macroprolactin fraction.” I would paste de-identified text into an AI tool and ask:
“Explain this MRI report in plain English for a non-medical reader. List what is stable, what is better, and what might be concerning, and highlight 5 questions we should ask our endocrinologist.”
The outputs weren’t perfect, but they made it easier for us to walk into the next appointment with a clear, prioritized list of questions.
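Because we repeated this step after every scan, I eventually kept the prompt as a small reusable template so that only the de-identified report text changed between visits. Here's a minimal sketch of that idea (the wording mirrors the prompt above; the function and parameter names are mine, and nothing here calls an actual AI service—pasting the prompt in, and verifying the answer, stayed manual):

```python
# Reusable prompt template for turning a de-identified report into a
# plain-language explanation plus questions for the next appointment.
# This only builds the prompt text; sending it to an AI tool and
# cross-checking the response are still human steps.
PROMPT_TEMPLATE = (
    "Explain this {kind} report in plain English for a non-medical reader. "
    "List what is stable, what is better, and what might be concerning, "
    "and highlight {n} questions we should ask our {specialist}.\n\n"
    "Report text:\n{report}"
)

def build_prompt(report: str, kind: str = "MRI", n: int = 5,
                 specialist: str = "endocrinologist") -> str:
    """Fill the template with one de-identified report."""
    return PROMPT_TEMPLATE.format(kind=kind, n=n,
                                  specialist=specialist, report=report)

print(build_prompt("Stable 8 mm hypoenhancing lesion within the sella."))
```

Keeping the wording fixed had a side benefit: answers from different visits were easier to compare, because only the report changed, not the question.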
2. Comparing Treatment Options and Guidelines
We used AI to summarize credible resources rather than rely on random search results. For example:
- Summaries of Endocrine Society clinical practice guidelines on pituitary tumors.
- Explanations of when surgery is typically considered for prolactinomas, based on sources like Mayo Clinic and major academic centers.
I’d ask the AI to:
“Summarize the typical reasons for switching from medication to surgery in a prolactinoma, using information from at least two major academic centers and one guideline. Present them as questions I can ask my endocrinologist.”
This turned abstract possibilities into concrete talking points we could discuss with Amy’s care team.
3. Tracking Symptoms and Patterns
Medication side effects can blur together. We used a simple shared document where Amy logged:
- Daily energy levels and sleep quality.
- Headaches, vision changes, and mood shifts.
- Medication doses and timing.
Every few weeks, I’d feed anonymized, high-level summaries (not raw data with names or dates of birth) into an AI tool and ask it to:
“Identify any visible patterns between dose changes and symptom flares, and suggest 5 neutral, non-leading questions we can ask the doctor about dose timing and side effect management.”
Sometimes it spotted correlations we’d already noticed. Sometimes it reinforced there wasn’t a clear pattern yet—which was still useful to know.
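The "high-level summary" we fed in was essentially a grouping of symptom scores by medication dose. A toy sketch of that aggregation step, assuming a simple CSV log (the column names, doses, and scores here are illustrative, not Amy's actual records):

```python
import csv
from statistics import mean

# Minimal stand-in for the shared symptom log described above.
# Columns and values are illustrative examples, not real medical data.
LOG = """date,dose_mg,energy_1to5,headache_1to5
2024-01-01,0.5,4,1
2024-01-08,0.5,4,2
2024-01-15,1.0,2,4
2024-01-22,1.0,3,3
"""

rows = list(csv.DictReader(LOG.splitlines()))

# Group (energy, headache) scores by dose so a dose change is easy to eyeball.
by_dose = {}
for r in rows:
    by_dose.setdefault(r["dose_mg"], []).append(
        (int(r["energy_1to5"]), int(r["headache_1to5"]))
    )

for dose, scores in sorted(by_dose.items()):
    print(f"dose {dose} mg: "
          f"avg energy {mean(s[0] for s in scores):.1f}, "
          f"avg headache {mean(s[1] for s in scores):.1f}")
```

Sharing averages like these, rather than the raw dated log, kept identifying details out of the AI tool while still giving it enough structure to comment on patterns.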
The Hard Parts: Mistrust, Conflicting Advice, and Emotional Overload
Using AI didn’t magically make the journey smoother. In some ways, it made things more complicated—because we now saw just how many grey areas and unanswered questions existed.
Conflicting Medical Opinions
One neurosurgeon leaned toward early surgery, while an endocrinologist preferred long-term medication. AI could summarize typical pros and cons, but it couldn’t answer the question that mattered most: What is best for Amy, given her unique tumor, history, and priorities?
AI is very good at telling you what usually happens. It is much less good at telling you what should happen in your specific case.
We learned to use AI as a way to ask for second opinions from humans, not as a second opinion itself.
Emotional and Cognitive Overload
Information is not always empowering—sometimes it’s overwhelming. There were nights when we had twenty browser tabs open: journal articles, patient forums, AI-generated summaries, and hospital webpages. Instead of feeling in control, we felt more anxious.
We eventually created a simple rule: no more than three major questions per appointment, and no more than one big decision under serious consideration at a time. AI helped us consolidate dozens of worries into a handful of prioritized questions.
The Risk of Overconfidence
Perhaps the biggest danger of AI in health is the illusion of certainty. A well‑phrased paragraph can sound authoritative even when it oversimplifies or misinterprets nuanced data. Our defense was the rule we set on day one: nothing AI produced counted as an answer until we could trace it back to a guideline, a reputable institution, or Amy's own doctors.
Before vs. After: What Actually Changed When We Used AI
AI didn’t cure Amy’s prolactinoma, and it didn’t suddenly turn us into medical experts. But it did change the texture of our interactions with the healthcare system.
In practical terms, here’s how life looked before and after AI became part of our toolkit:
| Without AI Support | With AI as an Assistant |
|---|---|
| Googling symptoms late at night, landing on worst‑case scenarios. | Using AI to summarize high‑quality sources and remind us of typical, not just extreme, outcomes. |
| Leaving appointments unsure what was decided or why. | Bringing a printed or digital list of 3–5 priority questions generated and refined with AI. |
| Feeling powerless when doctors disagreed. | Using AI to understand guideline ranges and formulate non‑confrontational questions like, “Can you help us understand why your recommendation differs from X guideline?” |
| Tracking side effects in scattered notes and memories. | Logging symptoms systematically and asking AI to help organize them into a brief timeline for the doctor. |
None of this changed the biology of Amy’s tumor, but it did change our sense of agency. That matters more than it might sound. Studies in BMJ and other journals have linked patient engagement and clear communication with better adherence to treatment and, in some cases, improved outcomes.
How You Can Safely Use AI to Navigate a Complex Diagnosis
If you or someone you love is facing a brain tumor, hormonal disorder, or any complex chronic illness, you can borrow pieces of our approach without copying it blindly. Here’s a practical, safety‑first blueprint.
Step 1: Define What AI Is For—and What It’s Not
- Use AI to explain terms, tests, and general treatment options.
- Use AI to brainstorm questions for your doctors.
- Use AI to organize information you’ve already received.
- Do not use AI to start or stop medications, override medical advice, or choose a surgeon.
Step 2: Protect Your Privacy
- Remove names, dates, and ID numbers from any text you share.
- Avoid uploading full medical records to general‑purpose tools.
- Look for AI services with clear healthcare‑grade privacy protections if available in your region.
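Before pasting any report text into a general-purpose tool, I ran it through a quick scrubbing pass. Here's a rough sketch of that idea; the patterns are illustrative and deliberately aggressive, and no script replaces re-reading the output yourself before sharing:

```python
import re

# Illustrative de-identification pass: strip obvious dates, record
# numbers, and titled names before sharing text with an AI tool.
# Patterns like these will miss things -- always re-read the result.
PATTERNS = [
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),   # 03/14/1999
    (re.compile(r"\b\d{4}-\d{2}-\d{2}\b"), "[DATE]"),         # 1999-03-14
    (re.compile(r"\bMRN[:# ]*\d+\b", re.I), "[ID]"),          # MRN: 123456
    (re.compile(r"\b(?:Mr|Ms|Mrs|Dr)\.? [A-Z][a-z]+\b"), "[NAME]"),
]

def deidentify(text: str) -> str:
    """Replace matches of each pattern with a neutral placeholder."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(deidentify("Dr. Smith reviewed the 2023-06-01 MRI for MRN: 884213."))
```

This is a floor, not a ceiling: a simple regex pass catches the obvious identifiers, but free-text reports can leak identity in subtler ways, which is why we also avoided uploading full records at all.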
Step 3: Always Cross‑Check AI Against Authoritative Sources
When AI cites facts or treatment approaches, verify them using:
- Major academic centers (e.g., Mayo Clinic, Johns Hopkins).
- Professional societies (e.g., Endocrine Society).
- Databases like PubMed or MedlinePlus.
Step 4: Bring AI‑Generated Notes Into the Clinic—Openly
Instead of hiding that you used AI, we found it more productive to be transparent. We’d say something like:
“We used an AI tool to help us understand some of the terminology and come up with questions. Could we go through these together and you tell us what’s most relevant to our situation?”
Most clinicians appreciated that we were engaged and gave them a chance to correct anything misleading.
What the Science Says About AI, Brain Tumors, and Endocrine Care
AI in neuroendocrine and oncology care is still emerging, but several trends are worth noting:
- Imaging analysis: Deep learning models are being studied to help radiologists identify subtle changes in brain MRIs, potentially improving tumor detection and monitoring. Many of these tools are still in research or early clinical deployment and are not meant to replace radiologist interpretation.
- Decision support: Clinical decision support systems can suggest guideline‑based options to clinicians, but regulatory bodies emphasize they must remain assistive, not autonomous.
- Patient education: Large language models show promise for generating plain‑language summaries of complex medical information, but they require human review to avoid inaccuracies.
As of 2026, major organizations like the U.S. Food and Drug Administration (FDA) and the World Health Organization (WHO) stress transparent, human‑centered use of AI in healthcare. That aligns with what we learned firsthand: AI can be powerful, but only when grounded in ethics, evidence, and real human care.
Moving Forward: Letting AI Help Without Letting It Take Over
Amy’s story is still unfolding. Like many people living with a prolactinoma, she continues to navigate follow‑up scans, lab checks, medication adjustments, and the emotional toll of a diagnosis that doesn’t fit neatly into “cured” or “sick.”
What has changed is how alone we feel in that process. AI has become:
- A research assistant that never gets tired of explaining the same term three different ways.
- A scribe helping us capture questions and patterns we might otherwise forget.
- A bridge between dense medical literature and the limited minutes we have with specialists.
It has not become a doctor, a diagnostic oracle, or a way to bypass the hard work of building trust with real clinicians.
If you’re considering using AI to help with a brain tumor or any serious health condition, you might start with three small steps:
- Pick one confusing report or concept and ask an AI tool to explain it in plain language.
- Turn that explanation into 3–5 questions for your next appointment.
- Share openly with your clinician that you used AI and invite them to correct or clarify anything.
You deserve clear information, compassionate care, and a real voice in decisions about your body. AI, used wisely, can help you claim that voice—without pretending it has all the answers.