An ML-powered analytics platform for a healthcare client, improving diagnostic accuracy by 31% and reducing manual review time by 60%.
The Challenge
Clinicians were spending 4+ hours daily manually reviewing patient records to identify high-risk cases. The review process was inconsistent across clinicians, and the team lacked a systematic way to prioritise which cases needed urgent attention.
Three previous attempts with off-the-shelf BI tools had failed — they couldn't ingest the unstructured clinical notes that contained the most valuable signals.
Our Approach
We partnered with the client's clinical team to define what 'high risk' meant in their specific context. Four weeks of workshops with clinicians produced a labelled dataset and a feature engineering specification.
We chose an explainable ML approach (gradient boosting with SHAP values) rather than a black-box model, so clinicians could see why a case was flagged — a critical trust requirement for clinical adoption.
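To make the pattern concrete, here is a minimal sketch (synthetic data; the feature names are illustrative placeholders, not the client's schema): a gradient boosting classifier wrapped with shap's TreeExplainer, so every flagged case carries per-feature contributions a clinician can inspect.

```python
# Minimal sketch: gradient boosting + SHAP explanations.
# Synthetic data and illustrative feature names, not the client's schema.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = ["age", "bp_systolic", "hba1c", "bmi",
                 "prior_admissions", "note_risk_terms"]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# For one flagged patient, rank the features that pushed the risk
# score up or down, so the clinician sees why the case was surfaced.
i = 0
for name, value in sorted(zip(feature_names, shap_values[i]),
                          key=lambda fc: abs(fc[1]), reverse=True):
    print(f"{name:>18}: {value:+.3f}")
```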
The Solution
A custom ML pipeline ingested structured EHR data and unstructured clinical notes (with NLP preprocessing) and produced per-patient risk scores, updated in real time. A React dashboard presented risk-stratified worklists to clinicians, with SHAP-powered explanations for every flagged case.
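As a rough sketch of that pipeline shape, assuming scikit-learn (the column names, toy data, and the TF-IDF step are stand-ins for the actual feature engineering specification):

```python
# Hedged sketch of the pipeline shape: TF-IDF features from free-text notes
# combined with structured EHR columns, yielding a per-patient risk score.
# Column names, toy data, and the TF-IDF choice are illustrative assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline

preprocess = ColumnTransformer([
    # Unstructured clinical notes -> sparse NLP features.
    ("notes", TfidfVectorizer(max_features=2000), "clinical_note"),
    # Structured EHR fields pass through unchanged.
    ("ehr", "passthrough", ["age", "bp_systolic", "prior_admissions"]),
])

risk_pipeline = Pipeline([
    ("features", preprocess),
    ("model", GradientBoostingClassifier(random_state=0)),
])

# Tiny stand-in for the labelled dataset produced in the workshops.
train_df = pd.DataFrame({
    "clinical_note": ["chest pain, shortness of breath",
                      "routine follow-up, stable",
                      "elevated hba1c, fatigue",
                      "no complaints"],
    "age": [67, 45, 58, 39],
    "bp_systolic": [162, 118, 141, 112],
    "prior_admissions": [3, 0, 1, 0],
})
risk_pipeline.fit(train_df, [1, 0, 1, 0])

# Per-patient risk scores in [0, 1], ready for a risk-stratified worklist.
scores = pd.Series(risk_pipeline.predict_proba(train_df)[:, 1],
                   index=train_df.index)
print(scores.sort_values(ascending=False))
```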
Deployed on AWS SageMaker with model retraining triggered automatically on performance degradation.
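One plausible shape for that trigger, sketched with boto3 (the namespace, metric, threshold, and pipeline name are assumptions, not the client's actual configuration): a CloudWatch alarm on a rolling model-quality metric notifies an SNS topic, and a small Lambda subscribed to it starts a SageMaker Pipelines retraining run.

```python
# Hedged sketch of an automatic retraining trigger on AWS. The namespace,
# metric, threshold, and pipeline name are illustrative assumptions.
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the rolling AUROC published by the monitoring job degrades.
cloudwatch.put_metric_alarm(
    AlarmName="risk-model-auroc-degraded",
    Namespace="RiskModel/Monitoring",   # assumed custom namespace
    MetricName="rolling_auroc",         # assumed custom metric
    Statistic="Average",
    Period=3600,                        # hourly datapoints
    EvaluationPeriods=6,                # degradation sustained for six hours
    Threshold=0.85,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:..."],   # SNS topic wired to the Lambda below
)

def lambda_handler(event, context):
    """Invoked via SNS when the alarm fires; kicks off retraining."""
    sagemaker = boto3.client("sagemaker")
    sagemaker.start_pipeline_execution(PipelineName="risk-model-retrain")
```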
The Results
- 31% improvement in diagnostic accuracy, measured against the pre-deployment baseline
- 60% reduction in manual review time per clinician per day
- Zero increase in the false negative rate, the key clinical safety metric
- Full clinical adoption within 60 days of go-live
"The DebMedia team built our predictive analytics dashboard, which increased our accuracy by 31%. They are technically brilliant and genuinely invested in our product."
Key Learnings
Explainability is a Clinical Requirement
A black-box model with 95% accuracy will not be adopted in a clinical setting. Explainable predictions with SHAP values turned a tool into a trusted colleague.
Label Quality Beats Data Volume
Four weeks building a high-quality labelled dataset with clinical input outperformed months of trying to train on low-quality legacy labels.
Deploy to Learn
Getting a working MVP into clinicians' hands in month 3 generated feedback that improved the final model more than any additional training data.
