Last updated: 05/09/2019 - 10:30 AM
We would like to investigate the effect of haptic guidance and brain stimulation on motor learning in Virtual Reality surgical simulators.
The number of robotic-assisted minimally invasive surgeries (RMIS) performed annually is rapidly increasing, and new surgeons must be trained to meet this demand. In the current standard of training, novices often spend many hours completing practice tasks that are graded with observational feedback. Providing this feedback requires trained surgeons to spend time reviewing videos or observing in real time, leading to high operational costs and low efficiency. Furthermore, if not corrected early, trainees can develop poor habits that become ingrained over several days of training and take longer to break.
There is a need for new technologies that can lower these mentorship barriers and speed up training. Current work in this space involves building virtual reality simulators on the da Vinci Research Kit (dVRK) for basic tasks such as suturing (see Figure 1). While these simulators provide real-time visual feedback (in the example, the green-colored ring indicates a proficient needle entry), they do not provide any corrective guidance for wrong motions. The goal of this project is to add real-time corrective guidance to these virtual simulators with reference to the optimal path for the task. Coupled with the visual feedback, this will let us determine the efficacy of haptic feedback in teaching RMIS principles.
The first phase of this project will include the development and implementation of models for force feedback. Previous work has led us to consider two methods: a forbidden region (Figure 2) and guidance along the optimal path (Figure 3). Using virtual fixtures, the optimal path will be bounded by an allowable-region tunnel, and a spring force will be applied to the dVRK manipulators should the virtual needle deviate from this region. With guidance, forces will be applied to the dVRK manipulators under certain conditions to steer users toward the optimal path based on the task-space error. Defining the conditions for these forces and tuning their parameters will require significant experimentation and testing.
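As a rough illustration of the forbidden-region method (a sketch, not the project's actual controller), the fixture can be modeled as a spring force that engages only once the tool tip leaves the allowable tunnel around the optimal path. The tunnel radius and stiffness below are hypothetical placeholders that would be tuned experimentally:

```python
import numpy as np

def forbidden_region_force(tip_pos, path_points, tunnel_radius=0.005, stiffness=200.0):
    """Spring force pushing the tip back toward the optimal path once it
    leaves the allowable tunnel (forbidden-region virtual fixture).

    tip_pos:       (3,) current suturing-tip position [m]
    path_points:   (N, 3) discretized optimal trajectory [m]
    tunnel_radius: allowable deviation before the fixture engages [m]
    stiffness:     virtual spring constant [N/m]
    """
    # Closest waypoint on the (discretized) optimal path.
    dists = np.linalg.norm(path_points - tip_pos, axis=1)
    nearest = path_points[np.argmin(dists)]
    deviation = tip_pos - nearest
    d = np.linalg.norm(deviation)
    if d <= tunnel_radius:
        return np.zeros(3)  # inside the tunnel: no force applied
    # Spring force proportional to penetration into the forbidden region,
    # directed back toward the path.
    return -stiffness * (d - tunnel_radius) * (deviation / d)
```

In the real system the force would be rendered through the dVRK manipulators; here the closest-point lookup is brute force over waypoints for clarity.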
Node 1: Inputs are a list of poses on the optimal trajectory and the current pose of the suturing tip. Outputs a goal pose for the suturing tip based on the desired velocity.
Node 2: Inputs are the transformation from the current pose of the suturing tip to the current pose of the end effector, the current pose of the end effector, the task state, and the goal pose. The task state is the current action of the suture, e.g., insertion or handoff. This node calculates the goal pose of the end effector and outputs the task-space error as the transformation between the current and goal poses of the end effector.
Node 3: Inputs are the task-space error and parameters governing force generation (visco-elastic, non-energy-storing, etc.). Outputs forces and torques, which are sent to a low-level controller.
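The node pipeline above can be sketched as follows. This is a simplified illustration under assumed conventions (4x4 homogeneous transforms, positions-only guidance, hypothetical gains); node names, signatures, and the omission of torques are all simplifications, not the project's actual interfaces:

```python
import numpy as np

def node1_goal_tip_pos(path, tip_pos, v_des, dt):
    """Node 1 (sketch, positions only): advance along the discretized
    optimal path at the desired speed to produce the next tip goal.
    path: (N, 3) waypoints [m], v_des: desired speed [m/s], dt: period [s]."""
    i = int(np.argmin(np.linalg.norm(path - tip_pos, axis=1)))
    step = v_des * dt
    while i + 1 < len(path) and step > 0:
        seg = np.linalg.norm(path[i + 1] - path[i])
        if seg > step:
            return path[i] + step * (path[i + 1] - path[i]) / seg
        step -= seg
        i += 1
    return path[-1]  # end of trajectory reached

def node2_task_space_error(T_tip_goal, T_tip_to_ee, T_ee_current):
    """Node 2 (sketch): compose the goal end-effector pose from the tip
    goal and the tip-to-end-effector transform, then return the task-space
    error as the transform from current to goal end-effector pose."""
    T_ee_goal = T_tip_goal @ T_tip_to_ee
    return np.linalg.inv(T_ee_current) @ T_ee_goal

def node3_guidance_force(T_err, ee_vel, k=100.0, b=5.0):
    """Node 3 (sketch): visco-elastic (spring-damper) force on the
    translational part of the error; torques from the rotational part
    are omitted here. Gains k, b are hypothetical placeholders."""
    return k * T_err[:3, 3] - b * ee_vel
```

A low-level controller would then render the resulting wrench on the dVRK manipulators each control cycle.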
After these models have been tested and tuned, the second phase of the project, set up in parallel with the first, can begin. This phase will involve a pilot user study with fellow LCSR members and other dVRK novices. Metrics such as time to completion, accuracy, and smoothness will be used to compare the feedback conditions (none, forbidden region, guidance) and their effects on performance and learning. The study can later be extended to evaluate the effect of brain stimulation on robotic surgery training.
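As one possible smoothness measure for the study (a sketch; the literature offers several jerk-based definitions, and the exact normalization would be chosen during study design), integrated squared jerk can be computed from sampled tool-tip trajectories by finite differences:

```python
import numpy as np

def smoothness_metric(positions, dt):
    """Jerk-based smoothness proxy: integrated squared jerk, normalized by
    movement duration and peak speed. More negative = less smooth; a value
    near zero indicates a smooth (e.g., constant-velocity) motion.

    positions: (N, 3) sampled tool-tip positions [m], dt: sample period [s].
    """
    vel = np.gradient(positions, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    jerk = np.gradient(acc, dt, axis=0)
    duration = dt * (len(positions) - 1)
    v_peak = np.max(np.linalg.norm(vel, axis=1))
    integrated_sq_jerk = np.sum(np.linalg.norm(jerk, axis=1) ** 2) * dt
    return -(duration ** 3 / v_peak ** 2) * integrated_sq_jerk
```

Time to completion and accuracy (e.g., deviation from the optimal path) would be logged alongside this metric for each feedback condition.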
* here list references and reading material
Here give list of other project files (e.g., source code) associated with the project. If these are online give a link to an appropriate external repository or to uploaded media files under this name space (2019-09).
GitLab Repo: https://git.lcsr.jhu.edu/atar_cis2
Google Drive: https://drive.google.com/drive/folders/14SCRv1vqzIYPqj8pMy0QRrX2cFIgayOF?usp=sharing
Documentation: https://drive.google.com/drive/folders/1EVcFGfJDSnpfe2z_TSQZeiI6QWZ7xP48?usp=sharing