Last updated: 4/16/18
The goal of this project is to develop an intelligent system that can objectively assess surgical skill using performance data about how surgeons move their hands and connected instruments, as well as how the instruments interact with the surgical workspace.
Robot-assisted minimally invasive surgery (RAMIS) is quickly becoming the prescribed method of treatment for many routine and non-routine surgical procedures. There is a need to ensure that all robotic surgeons reach a minimum level of proficiency before they operate on patients (Pradarelli et al.). Current methods of skill assessment rely almost exclusively on structured human grading, which can be subjective, tedious, and time consuming, and is cost ineffective because the raters are practicing physicians (Curry et al.).
Previous work has investigated the relationship between robot dynamics and operator skill. Integrating force sensors and accelerometers into surgical skill assessments, such as peg transfer and similar tasks, allows surgical skill to be categorized with reasonable accuracy. By analyzing time-series data of the contact forces between surgical tools and the workspace, as well as high-frequency vibration patterns, statistical measures can be extracted for use in both regression and classification machine learning algorithms. This approach has been shown to accurately predict Global Evaluative Assessment of Robotic Skills (GEARS) ratings with high inter-rater correlation scores (Brown et al.). Agreement with GEARS is desirable because this metric is considered clinically relevant (Goh et al.). In a similar vein, the kinematics of the robot end effectors have also been used to predict robotic surgical skill (Malpani et al.). By integrating these two data modalities, kinematic and force data, we hope to work toward a system capable of automated, objective, and time-efficient surgical skill assessment.
The first phase of the project will involve the development of a combined hardware and software platform using ROS (Python) that collects synchronized streams of motion and physical-interaction data (forces and vibrations). This will combine the two previously developed surgical skill assessment platforms created by Dr. Brown and Dr. Malpani.
Our system will interface directly with the da Vinci via an Ethernet connection and collect kinematic data of both the end effectors and the manipulators, as well as discrete time-stamped event data, using Intuitive Surgical's API. Simultaneously, the computer will receive serial data from the smart task board microcontroller describing the forces exerted on the environment during the skill task. Our ROS platform will need to combine the two sources of data, accounting for their different output frequencies. The visual portion of our code will also grab video frames from the da Vinci camera and synchronize this footage with the time-series data collection. The video will not only allow a retrospective GEARS assessment (providing ground-truth classification labels for algorithm training) but will also help validate the synchronization of the data collection.
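In ROS this kind of fusion is typically handled with an approximate-time synchronization policy (e.g., message_filters.ApproximateTimeSynchronizer). The sketch below illustrates the underlying idea outside of ROS, assuming each stream arrives as timestamped samples: each kinematic sample is paired with the force sample nearest in time, and pairs further apart than a tolerance are dropped. The function name and data layout are hypothetical, not part of our platform.

```python
import bisect

def align_streams(kin_samples, force_samples, tol=0.01):
    """Pair each kinematic sample with the force sample closest in time.

    kin_samples, force_samples: lists of (timestamp_sec, value) tuples,
    each sorted by timestamp. Pairs whose timestamps differ by more than
    `tol` seconds are dropped, mimicking an approximate-time policy.
    """
    force_times = [t for t, _ in force_samples]
    aligned = []
    for t, kin in kin_samples:
        i = bisect.bisect_left(force_times, t)
        # Candidates: the neighbor on each side of the insertion point.
        best = None
        for j in (i - 1, i):
            if 0 <= j < len(force_times):
                if best is None or abs(force_times[j] - t) < abs(force_times[best] - t):
                    best = j
        if best is not None and abs(force_times[best] - t) <= tol:
            aligned.append((t, kin, force_samples[best][1]))
    return aligned

# Example: 100 Hz kinematic stream vs. a ~60 Hz force stream.
kin = [(i / 100.0, i) for i in range(10)]        # t = 0.00 ... 0.09 s
force = [(i / 60.0, 10 * i) for i in range(6)]   # t = 0.000, 0.0167, ... s
pairs = align_streams(kin, force, tol=0.006)
```

Only kinematic samples that land within 6 ms of a force sample survive, which is the trade-off of any approximate-time policy: a tighter tolerance gives cleaner pairings but discards more data.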
Figure 1: High level system overview
Once the system has been validated, pilot data collection can begin. Participants will include members of the LCSR group and clinicians such as Dr. Gina Adrales, providing baseline skilled and unskilled data. This pilot data can then be analyzed. Previous work has shown that discrete features can be computed from the time-series signals for use in classification- or regression-based machine learning, including the mean, standard deviation, minimum, maximum, range, RMS, TSS, and time integral (J. D. Brown et al.). We will create programs to visualize the data and calculate basic statistics for each trial.
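A minimal sketch of that feature extraction, using only the standard library, is shown below. The function name is illustrative, and "TSS" is assumed here to mean the total sum of squares of the signal; the time integral is approximated with the trapezoidal rule.

```python
import math

def extract_features(t, x):
    """Summarize a signal x sampled at times t (seconds) with the
    discrete features listed above."""
    n = len(x)
    mean = sum(x) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    rms = math.sqrt(sum(v * v for v in x) / n)
    tss = sum(v * v for v in x)  # assumed reading of "TSS": total sum of squares
    # Trapezoidal approximation of the time integral of the signal.
    integral = sum((x[i] + x[i + 1]) / 2 * (t[i + 1] - t[i]) for i in range(n - 1))
    return {
        "mean": mean, "std": std,
        "min": min(x), "max": max(x), "range": max(x) - min(x),
        "rms": rms, "tss": tss, "integral": integral,
    }

# Toy force trace sampled at 10 Hz.
feats = extract_features([0.0, 0.1, 0.2, 0.3], [0.0, 1.0, 2.0, 1.0])
```

Each trial then reduces to a fixed-length feature vector, which is what downstream regression or classification algorithms expect.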
The final phase of this project consists of preparing for large-scale data collection. This begins by examining the pilot data for features or metrics that differ statistically between skilled and unskilled users. We will identify these significant features and suggest their use in future machine learning methods. We will then suggest changes to the study IRB protocol based on our findings.
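One simple way to screen features is a two-sample comparison per feature, e.g. Welch's t statistic, which does not assume equal variances between the groups. The sketch below is illustrative: the skilled/unskilled values are made-up numbers, and in practice a library routine such as scipy.stats.ttest_ind(equal_var=False) would be used to get a p-value as well.

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((v - ma) ** 2 for v in a) / (len(a) - 1)  # sample variance of a
    vb = sum((v - mb) ** 2 for v in b) / (len(b) - 1)  # sample variance of b
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical per-trial RMS force values for the two groups.
skilled = [1.1, 1.0, 0.9, 1.2, 1.0]
unskilled = [1.8, 2.1, 1.9, 2.3, 2.0]
t_stat = welch_t(skilled, unskilled)
```

A large |t| flags a feature worth carrying forward into the machine learning stage; with many features, a multiple-comparison correction would also be appropriate.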
The ultimate goal of this endeavor is to transport our system, comprising the PC, smart task board, and clip-on sensors, to the JHU Minimally Invasive Surgical Teaching Innovation Center, interface with the da Vinci system and monitor setup there, and collect data on a large scale from many clinicians of varying skill levels. With this larger dataset, one will be able to more exhaustively evaluate the system's ability to objectively classify surgical skill.
References
Curry, Martin, et al. "Objective Assessment in Residency-Based Training for Transoral Robotic Surgery." The Laryngoscope, vol. 122, no. 10, 2012, pp. 2184-2192. doi:10.1002/lary.23369.
Gomez, E. D., et al. "Objective Assessment of Robotic Surgical Skill Using Instrument Contact Vibrations." Surgical Endoscopy, vol. 30, 2015, pp. 1419-1431.
Goh, Alvin C., et al. "Global Evaluative Assessment of Robotic Skills: Validation of a Clinical Assessment Tool to Measure Robotic Surgical Skills." The Journal of Urology, vol. 187, no. 1, 2012, pp. 247-252. doi:10.1016/j.juro.2011.09.032.
Brown, J. D., C. E. O'Brien, S. C. Leung, K. R. Dumon, D. I. Lee, and K. J. Kuchenbecker. "Using Contact Forces and Robot Arm Accelerations to Automatically Rate Surgeon Skill at Peg Transfer." IEEE Transactions on Biomedical Engineering, vol. 64, no. 9, Sept. 2017, pp. 2263-2275.
Bark, K., et al. "Surgical Instrument Vibrations Are a Construct-Valid Measure of Technical Skill in Robotic Peg Transfer and Suturing Tasks." Proc. Hamlyn Symposium on Medical Robotics, 2012, pp. 50-51.
Pradarelli, Jason C., Darrell A. Campbell, and Justin B. Dimick. "Hospital Credentialing and Privileging of Surgeons: A Potential Safety Blind Spot." JAMA, vol. 313, no. 13, 7 Apr. 2015, pp. 1313-1314.
Kumar, Rajesh, et al. "Objective Measures for Longitudinal Assessment of Robotic Surgery Training." The Journal of Thoracic and Cardiovascular Surgery, vol. 143, no. 3, 2012, pp. 528-534. ISSN 0022-5223.
Shackelford, Stacy, and Mark Bowyer. "Modern Metrics for Evaluating Surgical Technical Skills." Current Surgery Reports, vol. 5, no. 10, 2017. doi:10.1007/s40137-017-0187-0.