RADPEER™ is a simple tool developed to allow radiologists to perform peer review during the course of a day's work. When a new study is interpreted with a prior study available for comparison, a peer review of the accuracy of the interpretation of the previous examination occurs.
RADPEER™
Pilot program in 2001
Offered to members in 2002
Electronic version in August 2005
RADPEER™
Easy-to-use QA tool
Cost effective
Summary statistics and comparisons for each radiologist, by modality
Summary data for your facility, by modality
RADPEER™
Four-point scoring system:
1. Concur with interpretation
2. Difficult diagnosis, not ordinarily expected to be made
3. Diagnosis should be made most of the time
4. Diagnosis should be made almost every time / misinterpretation of findings
Scores of 3 or 4 should be reviewed through the facility's internal QA process prior to submission to the ACR.
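As an illustration only, not part of the RADPEER™ program materials, the minimal sketch below encodes the four-point scale and the rule that scores of 3 or 4 are flagged for the facility's internal QA review before submission; the names RADPEER_SCORES and needs_internal_qa_review are hypothetical.

# Minimal sketch (assumed, not official): the RADPEER four-point scale
# and the rule that scores of 3 or 4 require internal QA review.
RADPEER_SCORES = {
    1: "Concur with interpretation",
    2: "Difficult diagnosis, not ordinarily expected to be made",
    3: "Diagnosis should be made most of the time",
    4: "Diagnosis should be made almost every time / misinterpretation of findings",
}

def needs_internal_qa_review(score: int) -> bool:
    """Return True when a score must pass through the facility's
    internal QA process before it is submitted to the ACR."""
    if score not in RADPEER_SCORES:
        raise ValueError(f"Invalid RADPEER score: {score}")
    return score >= 3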
eRADPEER™
Electronic version of RADPEER offered in August 2005
Web based; no additional software needed
eRADPEER™
Data submitted to the ACR via website
Reports for individual radiologists and the group available electronically
eRADPEER™
Statistics as of July:
Practices enrolled in program
121 actively submitting data (45 eRADPEER™, 76 paper version)
Approximately 2,700 radiologists
RADPEER™ workflow:
The physician reads the new study, compares it with the prior images, and completes the scoring card.
Scores of 1 or 2: peer review cards are sent quarterly to the ACR.
Scores of 3 or 4: the peer review committee conducts review and arbitration; the score is either changed to 1 or 2 or remains 3 or 4, and the cards are then sent quarterly to the ACR.
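Purely as a hedged illustration of the routing just described, not an official implementation, the sketch below models how a scored case might move through the workflow; route_score and committee_revised_score are hypothetical names.

from typing import Optional

# Minimal sketch (assumed, not official) of RADPEER score routing.
def route_score(initial_score: int, committee_revised_score: Optional[int] = None) -> int:
    """Return the score that goes on the quarterly submission to the ACR.

    Scores of 1 or 2 are reported as given. Scores of 3 or 4 go to the
    peer review committee, which may change the score to 1 or 2 or leave
    it as 3 or 4 after review and arbitration.
    """
    if initial_score in (1, 2):
        return initial_score
    # Score of 3 or 4: committee review and arbitration may revise it.
    if committee_revised_score is not None:
        return committee_revised_score
    return initial_score

# Example: a score of 3 arbitrated down to 2 is reported as 2.
assert route_score(3, committee_revised_score=2) == 2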