Introduction: Peer review is an essential process for physicians because it facilitates improved quality of patient care and continuing physician learning and improvement. However, peer review is often poorly received by radiologists, who note that it is time intensive, subjective, and lacks demonstrable impact on patient care. Recent advances in peer review include the RADPEER system, with its standardized classification of discrepancies, and the incorporation of the peer review process into the PACS itself. Our purpose was to build on RADPEER and similar systems by using a mathematical model to optimally select the types of cases to be reviewed for each radiologist undergoing review, based on the past frequency of interpretive error, the likelihood of morbidity from an error, the financial cost of an error, and the time required for the reviewing radiologist to interpret the study.

Methods: We compiled 612,890 preliminary radiology reports authored by residents and attendings of a large tertiary-care medical center from 1999 to 2004. Discrepancies between preliminary and final interpretations were classified by severity and validated by re-review of major discrepancies. A mathematical model was then used to calculate, for each author of a preliminary report, the combined morbidity and financial costs of expected errors across three modalities (MRI, CT, and CR) and four departmental divisions (Neuroradiology and Abdominal, Musculoskeletal, and Thoracic Imaging).

Results: A customized report was generated for each on-call radiologist that identified the category (modality and body part) with the highest total cost function. A universal total cost based on probability data from all radiologists was also compiled.

Conclusion: The use of mathematical models to guide case selection could optimize the efficiency and effectiveness of physician time spent on peer review and produce more concrete and meaningful feedback to radiologists undergoing peer review.
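The abstract does not give the explicit form of the cost function, but the case-selection idea it describes can be sketched as follows. This is a minimal illustration, assuming a simple expected-cost score per category (modality and division): error rate times the sum of morbidity and financial costs, normalized by the review time the category demands. All function names, category parameters, and numeric values here are hypothetical, not taken from the study.

```python
# Hypothetical sketch of the case-selection model described in the abstract.
# Assumed score: expected cost of an undetected error per minute of
# peer-review time. All parameter values below are invented for illustration.

def category_score(error_rate, morbidity_cost, financial_cost, review_minutes):
    """Expected error cost per minute of reviewer time (assumed form)."""
    expected_cost = error_rate * (morbidity_cost + financial_cost)
    return expected_cost / review_minutes

# Illustrative per-radiologist history:
# (modality, division) -> (error_rate, morbidity_cost, financial_cost, review_minutes)
categories = {
    ("CT", "Neuroradiology"): (0.04, 9.0, 5.0, 12.0),
    ("MRI", "Musculoskeletal"): (0.02, 6.0, 8.0, 15.0),
    ("CR", "Thoracic"): (0.01, 3.0, 1.0, 3.0),
}

# Select the category whose review is expected to yield the greatest benefit
# per unit of reviewing-radiologist time.
best = max(categories, key=lambda k: category_score(*categories[k]))
print(best)  # → ('CT', 'Neuroradiology')
```

A per-radiologist report would rank all modality/division categories by this score, while a "universal" version would pool error probabilities across all radiologists.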