Step | Process | Detail |
---|---|---|
1 | Review Model Accuracy | Evaluate AI model performance on local data. Pay close attention to the user-facing metrics, namely positive predictive value (PPV) and negative predictive value (NPV), as these are what radiologists experience and they drive user engagement. Use this information, along with case-based examples, to craft educational content that helps radiologists mitigate human-AI bias |
2 | Calculate Optimized Enhanced Detection Rate (EDR) | EDR = (number of AI-positive exams not mentioned in the radiologist's report) / (number of radiology reports containing the identified pathology). This value represents the improvement in sensitivity, and hence in patient care, that could be achieved by optimally combining the radiologist's and the AI's results |
3 | Identify “WOW” Cases | “WOW” cases are those that could affect patient care or hospital operations, as seen through the lens of any radiology stakeholder: the radiologist, referring clinician, hospital administrator, patient, or payor |
4 | Categorize Model Pitfalls | Every AI model produces false positives (FP) and false negatives (FN). Categorize the FP cases and, if possible, the FN cases so they can be used to set radiologist expectations and help mitigate human-AI bias |
5 | Summarize & Decide | Based on the data above, decide whether the model is clinically worthwhile to deploy in your environment |
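The metrics in steps 1 and 2 can be sketched as simple ratios over locally collected counts. This is a minimal illustration, not a validated implementation: the function and variable names are assumptions, and the counts (true/false positives and negatives, AI-positive exams missed by the report, and reports containing the pathology) are placeholders you would derive from your own ground-truth review.

```python
def ppv(tp: int, fp: int) -> float:
    """Positive predictive value: of the exams the AI flags,
    the fraction that truly contain the pathology."""
    return tp / (tp + fp)

def npv(tn: int, fn: int) -> float:
    """Negative predictive value: of the exams the AI clears,
    the fraction that are truly free of the pathology."""
    return tn / (tn + fn)

def edr(ai_pos_not_in_report: int, reports_with_pathology: int) -> float:
    """Enhanced Detection Rate as defined in step 2:
    AI-positive exams absent from the radiologist's report,
    divided by reports containing the identified pathology."""
    return ai_pos_not_in_report / reports_with_pathology

# Illustrative counts from a hypothetical local validation set
print(f"PPV: {ppv(tp=90, fp=10):.2f}")   # 0.90
print(f"NPV: {npv(tn=180, fn=20):.2f}")  # 0.90
print(f"EDR: {edr(ai_pos_not_in_report=5, reports_with_pathology=100):.2f}")  # 0.05
```

An EDR of 0.05 here would mean the AI could add roughly 5 detections for every 100 reported cases of the pathology, which is one concrete way to weigh the decision in step 5.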