Ultrasonographers routinely suffer repetitive musculoskeletal microtrauma in performing their daily occupational tasks, which require holding an ultrasound (US) probe against a patient in contorted positions while simultaneously applying large forces to enhance image quality. This project aims to provide ultrasonographers with robotic "power-steering" for US: a hand-guidable, probe-wielding robot that, once navigated to a point of interest, does the strenuous probe holding for them.
Previous work has approached this challenge, but the hand-over-hand control algorithms implemented to date were reportedly sluggish and lacked motion transparency for the user. This work revises those algorithms to create a smoother hand-over-hand control experience.
Precise and transparent hand-over-hand control is also an important first step toward augmented and semi-autonomous US procedures such as synthetic aperture imaging and co-robotic transmission US tomography.
Typically, ultrasound (US) guided procedures require a sonographer to hold an US probe against a patient in static, contorted positions for long periods of time while also applying large forces [Schoenfeld, et al., 1999]. As a result, 63%-91% of sonographers develop occupation-related musculoskeletal disorders compared to only about 13%-22% of the general population [Rousseau, et al., 2013].
The vision of this work is to provide sonographers with "power-steering" via a hand-guidable robot that they can maneuver to a point of interest and then release, leaving the robot to do all the strenuous holding on their behalf. While previous work at JHU has shown promising results using a MATLAB implementation and basic filtering [Finocchi 2016; Finocchi, et al., 2017; Fang, et al., 2017], past prototypes have lacked the power-steering transparency necessary for practical clinical use.
Therefore, the specific aim of this work is to improve upon the previous robotic ultrasound assist prototypes via a C++ implementation and adaptive Kalman filtering, creating a more transparent power-steering, cooperative-control experience for sonographers. The result will be evaluated in a user study that quantitatively measures the effort exerted during sonography, along with questionnaires surveying participant-perceived operator workload.
If successful and validated, this work will be an important step toward mitigating sonographers' susceptibility to work-related musculoskeletal disorders. It also has consequences for all procedures under the umbrella of robotic ultrasound, as the control algorithms developed here will underlie and improve every application built on top of them. Examples include enforcing virtual fixtures for synthetic aperture procedures, imaging with respiratory gating, replicating a position/force profile for repeatable biopsies, and conducting co-robotic ultrasound tomography scans.
This project will improve robot motion transparency through the following two approaches.
Previous work by Finocchi [Finocchi 2016; Finocchi, et al., 2017] and Fang [Fang, et al., 2017] used algorithms that primarily focused on filtering the received F/T signals to produce more stable velocities, namely through nonlinear F/T-velocity gains and the 1€ filter for smoothing hand-guided motion. While they achieved an adequate result, their work does not consider data sparsity and latency, which greatly affect the user experience in real-time robotics.
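For context, the 1€ filter (Casiez et al., 2012) is an adaptive low-pass filter whose cutoff frequency rises with signal speed, suppressing jitter at rest while limiting lag during fast motion. Below is a minimal one-axis sketch of the filter; the class, parameter names, and default values are illustrative assumptions, not the tuning used in the prior work.

```cpp
#include <cmath>

// Illustrative one-axis 1€ filter -- parameter values are placeholders,
// not the tuning used in the prior work.
class OneEuroFilter {
public:
    OneEuroFilter(double minCutoff = 1.0, double beta = 0.1, double dCutoff = 1.0)
        : minCutoff_(minCutoff), beta_(beta), dCutoff_(dCutoff),
          initialized_(false), xPrev_(0.0), dxPrev_(0.0) {}

    // x: raw sample (e.g. one force axis); dt: seconds since the last sample.
    double filter(double x, double dt) {
        if (!initialized_) {                 // first sample passes through
            initialized_ = true;
            xPrev_ = x;
            return x;
        }
        // 1) Estimate and smooth the derivative (fixed cutoff dCutoff_).
        const double dx = (x - xPrev_) / dt;
        const double dxHat = lowPass(dx, dxPrev_, alpha(dCutoff_, dt));
        // 2) Speed-adaptive cutoff: fast motion raises it, reducing lag;
        //    slow motion lowers it, suppressing jitter.
        const double cutoff = minCutoff_ + beta_ * std::fabs(dxHat);
        // 3) Smooth the signal itself with the adaptive cutoff.
        const double xHat = lowPass(x, xPrev_, alpha(cutoff, dt));
        xPrev_ = xHat;
        dxPrev_ = dxHat;
        return xHat;
    }

private:
    static double alpha(double cutoff, double dt) {
        const double tau = 1.0 / (2.0 * 3.14159265358979323846 * cutoff);
        return 1.0 / (1.0 + tau / dt);       // first-order low-pass coefficient
    }
    static double lowPass(double x, double prev, double a) {
        return a * x + (1.0 - a) * prev;
    }
    double minCutoff_, beta_, dCutoff_;
    bool initialized_;
    double xPrev_, dxPrev_;
};
```

In practice, one filter instance would run per force/torque axis.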
The issue with data sparsity and latency arises primarily from the 6-DoF F/T sensor, which samples at 100 Hz but transmits its data in TCP packets that arrive at 20 Hz (i.e., a packet arrives every 50 ms containing the previous 5 samples of F/T data). A naive controller could command a new robot velocity immediately upon receiving each incoming F/T packet, but the UR5 would then be commanded at only 20 Hz, much lower than its maximum supported rate of 125 Hz. This is illustrated below.
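To make the bottleneck concrete, the naive packet-driven loop looks roughly like the following sketch. The client types and method names are hypothetical placeholders, not the project's actual networking code; the point is only that tying the command rate to packet arrival caps it at 20 Hz.

```cpp
#include <array>

// Hypothetical placeholder interfaces -- not the project's actual networking code.
struct FtPacket {
    std::array<std::array<double, 6>, 5> samples;  // 5 buffered 6-DoF F/T samples
};
struct FtSensorClient {
    virtual FtPacket blockingReceive() = 0;        // blocks ~50 ms for the next TCP packet
};
struct Ur5Client {
    virtual void commandVelocity(const std::array<double, 6> &v) = 0;
};

// Illustrative linear admittance gain mapping F/T to Cartesian velocity.
inline std::array<double, 6> admittanceLaw(const std::array<double, 6> &ft) {
    std::array<double, 6> v{};
    for (int i = 0; i < 6; ++i) v[i] = 0.001 * ft[i];
    return v;
}

void naiveControlLoop(FtSensorClient &ft, Ur5Client &ur5) {
    while (true) {
        // The loop is throttled by packet arrival: one iteration per 50 ms.
        FtPacket packet = ft.blockingReceive();
        // Only the newest sample is useful; the other four are already stale.
        const std::array<double, 6> &latest = packet.samples.back();
        // One velocity command per packet => the UR5 runs at 20 Hz, not 125 Hz.
        ur5.commandVelocity(admittanceLaw(latest));
    }
}
```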
In this work, an adaptive Kalman filter will be used to generate inter-packet F/T inferences, allowing the robot to be commanded at its full 125 Hz potential. It is also suspected that the Kalman filter can be tuned to help alleviate the TCP latency between when an F/T packet is sent and when the robot is commanded in response. The filter will be made "adaptive" by automatically updating its covariance matrices whenever a new F/T packet arrives, based on how well its predictions matched the measured F/T values, thereby improving future predictions. This is illustrated below.
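A minimal one-axis sketch of this predict/update split follows. It models each F/T axis as a slowly varying value, runs the time update at the 125 Hz command rate, and adapts the measurement-noise variance from innovation statistics when a packet arrives; the model, adaptation rule, and constants are illustrative assumptions, not the project's final filter (which would operate on full covariance matrices across all six axes).

```cpp
// Illustrative one-axis adaptive Kalman filter -- a sketch, not the final design.
// Model: the F/T value is a slowly varying constant driven by process noise q.
class AdaptiveKalman1D {
public:
    AdaptiveKalman1D(double q, double r) : x_(0.0), p_(1.0), q_(q), r_(r) {}

    // Time update, called every 8 ms command cycle (125 Hz):
    // the estimate holds while its uncertainty grows.
    double predict() {
        p_ += q_;
        return x_;
    }

    // Measurement update, called when a 20 Hz packet arrives.
    void update(double z) {
        const double innovation = z - x_;   // measured minus predicted
        const double k = p_ / (p_ + r_);    // Kalman gain
        // One-sample guess at r: E[innovation^2] = p + r, so innovation^2 - p
        // estimates r (innovation-based adaptive estimation).
        const double rSample = innovation * innovation - p_;
        x_ += k * innovation;
        p_ *= (1.0 - k);
        // Blend the guess into r: consistently large innovations mean the
        // measurements are noisier than assumed, so trust them less.
        const double alpha = 0.05;          // illustrative forgetting factor
        if (rSample > 0.0)
            r_ = (1.0 - alpha) * r_ + alpha * rSample;
    }

private:
    double x_;  // state estimate for one F/T axis
    double p_;  // estimate variance
    double q_;  // process noise variance
    double r_;  // measurement noise variance (adapted online)
};
```

Between packets, predict() supplies the inter-packet inferences that keep the 125 Hz command stream fed; each arriving sample then corrects the estimate and refines the noise model.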
Previous work by Finocchi [Finocchi 2016; Finocchi, et al., 2017] and Fang [Fang, et al., 2017] used MATLAB and client-side software running on the UR5 to relay F/T values to the computer. While they achieved an adequate result, using an interpreted language such as MATLAB and running unnecessary client-side code introduces latency and overhead that are detrimental to the user experience of any real-time system. In this work, C++ will be used in combination with the open-source CISST/SAW libraries to read data from, and command, the UR5 without any client-side code. A simplified diagram is shown below.
As shown, there will be three SAW components listening for data from the robot and the F/T sensors and storing it in objects accessible from main.cpp. The main program, in addition to performing component initialization, will essentially run an infinite loop of fetching readings, filtering, and commanding a velocity through the UR5 SAW component. It is worth noting that the CISST/SAW libraries natively support accessing shared data in a way that prevents race conditions, which is very useful since this program relies on asynchronous, multitask execution and would otherwise be prone to data corruption.
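The fetch-filter-command loop might look roughly like the sketch below. The component wrappers and method names are hypothetical placeholders standing in for the actual CISST/SAW component interfaces; what matters is the structure: thread-safe reads from component state and a fixed 8 ms cycle.

```cpp
#include <array>
#include <chrono>
#include <thread>

// Hypothetical wrappers over the SAW components -- placeholder names, not
// the actual CISST/SAW API. Each is assumed to front a thread-safe state
// table that the component's own thread keeps up to date.
struct FtComponent {
    virtual bool newPacketAvailable() = 0;                 // fresh 20 Hz packet?
    virtual std::array<double, 6> latestWrench() = 0;      // newest F/T sample
};
struct Ur5Component {
    virtual void setCartesianVelocity(const std::array<double, 6> &v) = 0;
};
struct AdaptiveKalman6D {
    virtual void update(const std::array<double, 6> &z) = 0;  // measurement update
    virtual std::array<double, 6> predict() = 0;              // time update / inference
};

// The core fetch -> filter -> command loop, paced at the UR5's 125 Hz limit.
void runControlLoop(FtComponent &ft, AdaptiveKalman6D &filter, Ur5Component &ur5) {
    const auto period = std::chrono::microseconds(8000);   // 1 / 125 Hz = 8 ms
    auto next = std::chrono::steady_clock::now();
    while (true) {
        if (ft.newPacketAvailable())
            filter.update(ft.latestWrench());  // real measurement, roughly every 50 ms

        // Inter-packet inference keeps the command stream fed between packets.
        const std::array<double, 6> wrench = filter.predict();

        std::array<double, 6> v{};
        for (int i = 0; i < 6; ++i)
            v[i] = 0.001 * wrench[i];          // illustrative linear admittance gain
        ur5.setCartesianVelocity(v);

        next += period;
        std::this_thread::sleep_until(next);   // hold the 8 ms cadence
    }
}
```

Pacing the loop on its own clock, rather than on packet arrival, is what decouples the 125 Hz command rate from the 20 Hz sensor stream.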
All resources have been uploaded and linked individually above (for instance, instead of giving a link to an entire shared Google Drive of videos, videos are shared and linked individually where necessary).
A GitHub repository exists for this project, but it is being kept private until all IP considerations have been resolved toward the end of the project. Please contact kevingilboy@jhu.edu personally for access.