Assessment of patient management skills and clinical skills of practising doctors using computer-based case simulations and standardised patients (2004)
BACKGROUND Standardised assessments of practising doctors are receiving growing support, but theoretical and logistical issues pose serious obstacles.

OBJECTIVE To obtain reference performance levels from experienced doctors on computer-based case simulation (CCS) and standardised patient (SP) methods, and to evaluate the utility of these methods in diagnostic assessment.

METHODS The study was carried out at a military tertiary care facility and involved 54 residents and credentialed staff from the emergency medicine, general surgery and internal medicine departments. Doctors completed 8 CCS and 8 SP cases targeted at the level of doctors entering the profession. Standardised patient performances were compared with archived Year 4 medical student data.

RESULTS Although staff doctors and residents performed well on both CCS and SP cases, scores varied widely across all cases. There were no significant differences between the scores of participants from different specialties or with different levels of experience. Among participants who completed both CCS and SP testing (n = 44), there was a moderate positive correlation between CCS and SP checklist scores, and a negative correlation between doctor experience and SP checklist scores. Whereas the time students spent with SPs varied little with the clinical task, doctors appeared to spend more time on communication/counselling cases than on cases involving acute or chronic medical problems.

CONCLUSIONS Computer-based case simulations and standardised patient-based assessments may be useful as part of a multimodal programme for evaluating practising doctors. Further study of SP standard-setting and scoring methods is needed, and establishing empirical benchmarks for the range of performances on assessments of this kind should be a priority.