A vision transformer for decoding surgeon activity from surgical videos

Dani Kiyasseh, Runzhuo Ma, Taseen F. Haque, Brian J. Miles, Christian Wagner, Daniel A. Donoho, Animashree Anandkumar, Andrew J. Hung

Research output: Contribution to journal › Article › peer-review

26 Scopus citations

Abstract

The intraoperative activity of a surgeon has a substantial impact on postoperative outcomes. However, for most surgical procedures, the details of intraoperative surgical actions, which can vary widely, are not well understood. Here we report a machine learning system leveraging a vision transformer and supervised contrastive learning for the decoding of elements of intraoperative surgical activity from videos commonly collected during robotic surgeries. The system accurately identified surgical steps, actions performed by the surgeon, the quality of these actions and the relative contribution of individual video frames to the decoding of the actions. Through extensive testing on data from three different hospitals located on two different continents, we show that the system generalizes across videos, surgeons, hospitals and surgical procedures, and that it can provide information on surgical gestures and skills from unannotated videos. Decoding intraoperative activity via accurate machine learning systems could be used to provide surgeons with feedback on their operating skills, and may allow for the identification of optimal surgical behaviour and for the study of relationships between intraoperative factors and postoperative outcomes.
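The supervised contrastive learning mentioned in the abstract typically follows the objective of Khosla et al. (2020): embeddings of clips sharing a label (e.g. the same surgical step or gesture) are pulled together, while embeddings with different labels are pushed apart. The sketch below is a generic NumPy illustration of that loss, not the authors' implementation; the array shapes and the `temperature` value are assumptions for the example.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss (Khosla et al., 2020) over one batch.

    embeddings: (N, D) array, one row per video-clip embedding.
    labels:     (N,) integer class labels (e.g. surgical step IDs).
    """
    # L2-normalize so the dot product is cosine similarity.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature

    n = len(labels)
    logits = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    exp = np.exp(logits)
    np.fill_diagonal(exp, 0.0)                     # exclude self-similarity
    log_prob = logits - np.log(exp.sum(axis=1, keepdims=True))

    # Positives: other samples in the batch sharing the anchor's label.
    pos_mask = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    pos_counts = pos_mask.sum(axis=1)

    # Average log-probability over each anchor's positives;
    # anchors with no positives in the batch are skipped.
    valid = pos_counts > 0
    per_anchor = -(log_prob * pos_mask).sum(axis=1)[valid] / pos_counts[valid]
    return per_anchor.mean()
```

As a sanity check, embeddings whose geometry agrees with the labels (same-class points close together) should yield a lower loss than the same embeddings with mismatched labels.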

Original language: English (US)
Pages (from-to): 780-796
Number of pages: 17
Journal: Nature Biomedical Engineering
Volume: 7
Issue number: 6
Early online date: Mar 30 2023
DOIs
State: Published - Jun 2023

Keywords

  • Humans
  • Surgeons
  • Robotic Surgical Procedures/methods

ASJC Scopus subject areas

  • Bioengineering
  • Biotechnology
  • Biomedical Engineering
  • Medicine (miscellaneous)
  • Computer Science Applications
