Which skills really matter? Proving face, content, and construct validity for a commercial robotic simulator

Calvin Lyons, David W. Goldfarb, Stephen L. Jones, Niraj Badhiwala, Brian J. Miles, Richard Link, Brian J. Dunkin

Research output: Contribution to journal › Article › peer-review

73 Scopus citations

Abstract

Background: A novel computer simulator is now commercially available for robotic surgery using the da Vinci® System (Intuitive Surgical, Sunnyvale, CA). Initial investigations into its utility have been limited by a lack of understanding of which of its many skills modules and metrics are useful for evaluation. In addition, construct validity testing has been done using medical students as a "novice" group, a clinically irrelevant cohort given the complexity of robotic surgery. This study systematically evaluated the simulator's skills tasks and metrics and established face, content, and construct validity using a relevant novice group.

Methods: Expert surgeons deconstructed the task of performing robotic surgery into eight separate skills. The content of the 33 modules provided by the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA) was then evaluated for these deconstructed skills, and 8 of the 33 were determined to be unique. These eight tasks were used to evaluate the performance of 46 surgeons and trainees on the simulator (25 novices, 8 intermediates, and 13 experts). Novice surgeons were general surgery and urology residents or practicing surgeons with clinical experience in open and laparoscopic surgery but limited exposure to robotics. Performance was measured using 85 metrics across all eight tasks.

Results: Face and content validity were confirmed using global rating scales. Of the 85 metrics provided by the simulator, 11 were found to be unique, and these were used for further analysis. Experts performed significantly better than novices in all eight tasks and for nearly every metric. Intermediates were inconsistently better than novices, with only four tasks showing a significant difference in performance. Intermediate and expert performance did not differ significantly.

Conclusion: This study systematically determined the important modules and metrics on the da Vinci Skills Simulator and used them to demonstrate face, content, and construct validity with clinically relevant novice, intermediate, and expert groups. These data will be used to develop proficiency-based training programs on the simulator and to investigate predictive validity.

Original language: English (US)
Pages (from-to): 2020-2030
Number of pages: 11
Journal: Surgical Endoscopy
Volume: 27
Issue number: 6
DOIs
State: Published - Jan 1 2013

Keywords

  • Computing
  • Human/robotic
  • Imaging & VR
  • Surgical

ASJC Scopus subject areas

  • Surgery
