Comparative effectiveness of instructional design features in simulation-based education: Systematic review and meta-analysis

David A. Cook, Stanley J. Hamstra, Ryan Brydges, Benjamin Zendejas, Jason H. Szostek, Amy T. Wang, Patricia J. Erwin, Rose Hatala
2012 Medical Teacher  
Background: Although technology-enhanced simulation is increasingly used in health professions education, features of effective simulation-based instructional design remain uncertain. Aims: Evaluate the effectiveness of instructional design features through a systematic review of studies comparing different simulation-based interventions. Methods: We systematically searched MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. We included
original research studies that compared one simulation intervention with another and involved health professions learners. Working in duplicate, we evaluated study quality and abstracted information on learners, outcomes, and instructional design features. We pooled results using random effects meta-analysis. Results: From a pool of 10 903 articles we identified 289 eligible studies enrolling 18 971 trainees, including 208 randomized trials. Inconsistency was usually large (I² > 50%). For skills outcomes, pooled effect sizes (positive numbers favoring the instructional design feature) were 0.68 for range of difficulty (20 studies; p < 0.001), 0.68 for repetitive practice (7 studies; p = 0.06), 0.66 for distributed practice (6 studies; p = 0.03), 0.65 for interactivity (89 studies; p < 0.001), 0.62 for multiple learning strategies (70 studies; p < 0.001), 0.52 for individualized learning (59 studies; p < 0.001), 0.45 for mastery learning (3 studies; p = 0.57), 0.44 for feedback (80 studies; p < 0.001), 0.34 for longer time (23 studies; p = 0.005), 0.20 for clinical variation (16 studies; p = 0.24), and −0.22 for group training (8 studies; p = 0.09). Conclusions: These results confirm quantitatively the effectiveness of several instructional design features in simulation-based education.

Participants
Health professions learner: a student, postgraduate trainee, or practitioner in a profession directly related to human or animal health, including physicians, dentists, nurses, veterinarians, physical, occupational, and respiratory therapists, and emergency medical technicians and other first responders.

Instructional design key features
Clinical variation: Variation in the clinical context, for example multiple different patient scenarios (absent if no clinically relevant context was stated).
Cognitive interactivity: Training that promotes learners' cognitive engagement using strategies such as multiple repetitions, feedback, task variation, or intentional task sequencing.
Curricular integration: Incorporation of the simulation intervention as an integral part (required or formal element) of the curriculum or training program.
Distributed practice: Training spread over a period of time. For this review, we counted this as present for interventions that involved >1 day of simulation training.
Feedback: Information on performance provided to the learner by the instructor, a peer, or a computer, either during or after the simulation activity.
Group (vs independent) practice: Training activities involving two or more learners (as compared with training alone).
Individualized learning: Training responsive to individual learner needs (i.e. tailored or adapted depending on performance).
Mastery learning: Training model in which learners must attain a clearly-defined standard of performance before qualifying or advancing to the next task.
Multiple learning strategies: The number of different instructional strategies used to facilitate learning, such as patient case, worked example, discussion, feedback, intentional sequencing, or task variation.
Range of task difficulty: Variation in the difficulty or complexity of the task (explicitly stated).
Repetitive practice: The opportunity for more than one task performance.

Outcomes
Satisfaction: Learners' reported satisfaction with the course.
Knowledge: Subjective (e.g. learner self-report) or objective (e.g. multiple-choice question knowledge test) assessments of factual or conceptual understanding.
Skills: Subjective (e.g. learner self-report) or objective (e.g. faculty ratings, or objective tests of clinical skills such as computer-scored technique in a virtual reality surgery simulator, or number of masses detected when examining a breast model) assessments of learners' ability to demonstrate a procedure or technique in an educational setting (typically a simulation task). We further classified skills as measures of time (how long it takes a learner to complete the task), process (e.g. global rating scales, efficiency, or minor errors), and product (successful completion of the task, evaluation of the finished product, or major errors that would impact a real patient's well-being). For purposes of meta-analysis we combined process and product skills into a single outcome, non-time skills.
Behaviors and patient effects: Subjective (e.g. learner or patient self-report) or objective (e.g. chart audit or faculty ratings) assessments of behaviors in practice (such as test ordering) or effects on patients (such as medical errors). We used a classification system similar to that used for Skills, with time and process measures being counted as behaviors (e.g. procedure time, test ordering, or interviewing technique with real patients) and products being counted as patient effects (e.g. complications, patient discomfort, or procedure completion rates).
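
The review pooled standardized effect sizes with random effects meta-analysis and reported inconsistency as I². As a rough illustration of how such pooling works, the Python sketch below implements a DerSimonian-Laird random-effects model; it is a minimal sketch, not the authors' analysis code, and the function name random_effects_pool and all numeric inputs are hypothetical rather than data from the review.

# Minimal sketch of DerSimonian-Laird random-effects pooling and I².
# Inputs: per-study standardized mean differences and their variances.
import math

def random_effects_pool(effects, variances):
    """Pool study effect sizes and return (pooled effect, SE, I² in %)."""
    k = len(effects)
    # Fixed-effect (inverse-variance) weights and pooled estimate
    w = [1.0 / v for v in variances]
    sum_w = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum_w
    # Cochran's Q and the between-study variance tau²
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum_w - sum(wi ** 2 for wi in w) / sum_w
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights incorporate tau²
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    # I²: share of total variation attributable to between-study heterogeneity
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, se, i2

# Hypothetical per-study values, for illustration only
effects = [0.9, 0.4, 0.7, 0.2, 1.1]
variances = [0.04, 0.06, 0.05, 0.08, 0.07]
pooled, se, i2 = random_effects_pool(effects, variances)
print(f"pooled effect = {pooled:.2f}, SE = {se:.2f}, I² = {i2:.0f}%")

Running the example prints a pooled effect size, its standard error, and I²; by the usual convention, I² above 50% (as reported for most analyses in this review) indicates substantial between-study inconsistency.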
doi:10.3109/0142159x.2012.714886 pmid:22938677