
Robotic Ultrasound Assistance via Hand-Over-Hand Control

Last updated: May 9th, 2019

Summary

Ultrasonographers typically experience repetitive musculoskeletal microtrauma in their daily occupational tasks, which require holding an ultrasound (US) probe against a patient in contorted positions while simultaneously applying large forces to enhance image quality. This project aims to provide ultrasonographers with “robotic power-steering” for US: a hand-guidable, US probe-wielding robot that, once guided to a point-of-interest, does the strenuous probe holding on their behalf.

While previous work has approached this challenge, the hand-over-hand control algorithms implemented to date were reportedly sluggish and lacked motion transparency for the user. This work redesigns those algorithms to create a smoother hand-over-hand control experience.

Precise and transparent hand-over-hand control is also an important first step toward augmented and semi-autonomous US procedures such as synthetic aperture imaging and co-robotic transmission US tomography.

Background, Specific Aims, and Significance

Typically, ultrasound (US) guided procedures require a sonographer to hold a US probe against a patient in static, contorted positions for long periods of time while also applying large forces [Schoenfeld, et al., 1999]. As a result, 63%-91% of sonographers develop occupation-related musculoskeletal disorders, compared to only about 13%-22% of the general population [Rousseau, et al., 2013].

The vision of this work is to provide sonographers with “power-steering” via a hand-guidable robot that they can maneuver to a point-of-interest and then release, letting the robot do all the strenuous holding on their behalf. While previous work performed at JHU has shown promising results using a MATLAB implementation and basic filtering [Finocchi 2016; Finocchi, et al., 2017; Fang, et al., 2017], past prototypes have lacked the motion transparency necessary for practical clinical usage.

Therefore, the specific aim of this work is to improve upon the previous robotic ultrasound assist prototypes via a C++ implementation and adaptive Kalman filtering, creating a more transparent power-steering, cooperative-control experience for sonographers. The result will be evaluated in a user study that quantitatively measures the effort exerted during sonography, supplemented by questionnaires surveying participant-perceived operator workload.

If successful and validated, this work will be an important progression toward mitigating sonographers' susceptibility to work-related musculoskeletal disorders. It also has consequences for all procedures under the umbrella of robotic ultrasound, as the control algorithms developed for this work will underlie and improve all applications built on top of them. Examples include enforcing virtual fixtures for synthetic aperture procedures, imaging with respiratory gating, replicating a position/force profile for repeatable biopsy procedures, and conducting co-robotic ultrasound tomography scans.

Deliverables

Technical Approach

This project will improve robot motion transparency through the following two approaches.

Control System Approach - Kalman Filtering

Previous work by Finocchi [Finocchi 2016; Finocchi, et al., 2017] and Fang [Fang, et al., 2017] used algorithms that primarily focused on filtering the received F/T signals to produce more stable velocities, namely through nonlinear F/T-to-velocity gains and the 1€ filter for smoothing hand-guided motion. While they achieved adequate results, their work does not consider data sparsity and latency, which greatly affect the user experience in real-time robotics.
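For reference, the 1€ filter (Casiez et al., 2012) is an adaptive low-pass filter whose cutoff frequency rises with signal speed, suppressing jitter when the probe is nearly still while remaining responsive during fast hand-guided motion. A minimal per-axis sketch follows; the class structure and parameter values are illustrative assumptions, not the prior work's actual code.

    #include <cmath>

    // Per-axis 1€ filter sketch: an adaptive low-pass whose cutoff grows with
    // the (filtered) signal speed, so output is smooth at rest and lag-free
    // during fast motion. Parameter values here are illustrative assumptions.
    class OneEuroFilter {
    public:
        OneEuroFilter(double minCutoff, double beta, double derivCutoff)
            : minCutoff_(minCutoff), beta_(beta), derivCutoff_(derivCutoff) {}

        double Filter(double x, double dt) {
            if (first_) {
                first_ = false;
                prevX_ = x;
                return x;
            }
            // Low-pass the derivative with a fixed cutoff.
            const double dx = (x - prevX_) / dt;
            prevDx_ = Lowpass(dx, prevDx_, Alpha(derivCutoff_, dt));
            // Speed-dependent cutoff: smooth when slow, responsive when fast.
            const double cutoff = minCutoff_ + beta_ * std::fabs(prevDx_);
            prevX_ = Lowpass(x, prevX_, Alpha(cutoff, dt));
            return prevX_;
        }

    private:
        static double Alpha(double cutoff, double dt) {
            const double tau = 1.0 / (2.0 * 3.14159265358979 * cutoff);
            return 1.0 / (1.0 + tau / dt);
        }
        static double Lowpass(double x, double prev, double alpha) {
            return alpha * x + (1.0 - alpha) * prev;
        }
        double minCutoff_, beta_, derivCutoff_;
        double prevX_ = 0.0;
        double prevDx_ = 0.0;
        bool first_ = true;
    };

Note that this filter only smooths each incoming sample; it cannot produce estimates between samples, which is exactly the gap the approach below targets.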

The data sparsity and latency issues arise primarily from the 6-DoF F/T sensor, which samples at 100 Hz but transmits its readings in TCP packets that arrive at 20 Hz (i.e., a packet arrives every 50 ms containing the previous 5 samples of F/T data). A naive control scheme would command a new robot velocity immediately upon receiving each incoming F/T packet, but this means the UR5 is commanded at 20 Hz, much lower than its maximum supported rate of 125 Hz. This is shown below.

Figure: A naive robot control scheme that commands the robot only when an F/T packet arrives. Since packets arrive at 20 Hz and the robot can be commanded at 125 Hz, this approach does not utilize the robot to its full capability.
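For concreteness, a minimal sketch of this naive, packet-driven loop follows. ReceiveFTPacket and CommandVelocity are hypothetical stand-ins for the real sensor and robot interfaces, and the admittance gain is an illustrative value.

    #include <array>
    #include <vector>

    using Wrench = std::array<double, 6>;  // Fx, Fy, Fz, Tx, Ty, Tz

    // Hypothetical stand-ins for the real F/T and UR5 interfaces.
    std::vector<Wrench> ReceiveFTPacket() {   // blocks ~50 ms per TCP packet
        return std::vector<Wrench>(5);        // stub: 5 zeroed samples
    }
    void CommandVelocity(const Wrench &) {}   // stub: UR5 velocity command

    int main() {
        const double gain = 0.01;  // admittance gain (illustrative value)
        while (true) {
            // One packet every 50 ms means the robot is commanded at only
            // 20 Hz, far below its 125 Hz capability.
            const std::vector<Wrench> samples = ReceiveFTPacket();
            Wrench velocity{};
            for (int i = 0; i < 6; ++i) {
                velocity[i] = gain * samples.back()[i];  // newest sample only
            }
            CommandVelocity(velocity);
        }
    }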

In this work, an adaptive Kalman filter will be used to generate inter-packet F/T inferences, thereby allowing the robot to be commanded at its full 125 Hz potential. It is also suspected that the Kalman filter can be tuned to help alleviate the effects of TCP latency between when an F/T packet is sent and when the robot is commanded in response. The filter will be made “adaptive” in that, whenever a new F/T packet arrives, it automatically updates its covariance matrices based on how well its predictions matched the real measured F/T values, thereby improving future predictions. This is shown below.

Figure: A more advanced robot control scheme that uses adaptive Kalman filtering to predict inter-packet F/T readings so that the robot can be commanded as fast as possible.
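Below is a minimal per-axis sketch of such a filter, assuming a constant-rate state model [force, force-rate] and an innovation-based adaptation of the measurement noise R; the actual state model, tuning, and matrix dimensions in the implementation may differ.

    #include <algorithm>

    // Per-axis adaptive Kalman filter sketch: Predict() runs at the 125 Hz
    // command rate to infer inter-packet F/T values; Update() runs on each
    // real sample when a 20 Hz packet arrives. The constant-rate model and
    // innovation-based R adaptation are illustrative assumptions.
    class AdaptiveKalman1D {
    public:
        AdaptiveKalman1D(double q, double r) : Q(q), R(r) {}

        // Propagate the state x = [f, df/dt] forward by dt seconds.
        double Predict(double dt) {
            f += dt * rate;
            // P = F P F' + diag(0, Q), with F = [[1, dt], [0, 1]]
            P00 += dt * (P01 + P10) + dt * dt * P11;
            P01 += dt * P11;
            P10 += dt * P11;
            P11 += Q;
            return f;
        }

        // Correct with a measured sample z; measurement model H = [1, 0].
        void Update(double z) {
            const double innovation = z - f;
            const double S = P00 + R;               // innovation covariance
            const double K0 = P00 / S, K1 = P10 / S;
            f += K0 * innovation;
            rate += K1 * innovation;
            const double p00 = P00, p01 = P01;      // P = (I - K H) P
            P00 = (1.0 - K0) * p00;
            P01 = (1.0 - K0) * p01;
            P10 -= K1 * p00;
            P11 -= K1 * p01;
            // Adaptive step: nudge R toward the observed innovation power so
            // poor predictions loosen the filter and good ones tighten it.
            const double alpha = 0.05;  // forgetting factor (tuning assumption)
            R = std::max(1e-6,
                         (1.0 - alpha) * R + alpha * (innovation * innovation - P00));
        }

    private:
        double f = 0.0, rate = 0.0;                          // state estimate
        double P00 = 1.0, P01 = 0.0, P10 = 0.0, P11 = 1.0;   // covariance
        double Q, R;                                  // process / measurement noise
    };

In use, six such filters (one per F/T axis) would run inside the 125 Hz control loop: Predict(0.008) supplies the value behind each velocity command, and when a packet arrives its five samples are fed through Update() (with a Predict() over their 10 ms spacing between consecutive samples) before prediction resumes.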

Programmatic Approach - C++ Implementation

Previous work by Finocchi [Finocchi 2016; Finocchi, et al., 2017] and Fang [Fang, et al., 2017] used MATLAB programming and client-side software running on the UR5 to relay F/T values to the computer. While they achieved adequate results, an interpreted language such as MATLAB and unnecessary client-side code introduce latency and overhead that are detrimental to the user experience of any real-time system. In this work, C++ will be used in combination with the open-source CISST/SAW libraries to get data from, and command, the UR5 without any client-side code. A simplified diagram is shown below.

Figure: A simplified code flow diagram showing the asynchronous component listeners and main.cpp, which will perform the admittance control, filtering, and robot commanding.

As shown, there will be three SAW components listening for data from the robot and F/T sensors and storing it in objects accessible by main.cpp. The main program, in addition to performing component initialization, will essentially run an infinite loop of fetching readings, filtering, and commanding a velocity to the UR5 SAW component, as sketched below. It is worth noting that the CISST/SAW libraries natively support accessing shared data in a way that prevents race conditions, which is very useful since this program relies on asynchronous, multitasked execution and is therefore prone to data corruption.
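A hedged sketch of this structure follows, using cisst's mtsTaskPeriodic and mtsManagerLocal machinery. The component, interface, and command names here ("FT", "Robot", "GetWrench", "SetVelocity", "FTSensor", "UR5", and the provided-interface names) are placeholders rather than the project's actual identifiers, and creation of the F/T and UR5 SAW components is elided.

    #include <cisstCommon/cmnUnits.h>
    #include <cisstMultiTask/mtsManagerLocal.h>
    #include <cisstMultiTask/mtsTaskPeriodic.h>
    #include <cisstMultiTask/mtsInterfaceRequired.h>
    #include <cisstMultiTask/mtsVector.h>

    // 125 Hz periodic control task: fetch F/T, filter, command the UR5.
    class ControlTask : public mtsTaskPeriodic {
    public:
        ControlTask() : mtsTaskPeriodic("Control", 8.0 * cmn_ms) {
            mtsInterfaceRequired *ft = AddInterfaceRequired("FT");
            ft->AddFunction("GetWrench", GetWrench);       // command name assumed
            mtsInterfaceRequired *rob = AddInterfaceRequired("Robot");
            rob->AddFunction("SetVelocity", SetVelocity);  // command name assumed
        }
        void Run(void) override {
            ProcessQueuedCommands();
            mtsDoubleVec wrench;
            wrench.SetSize(6);
            GetWrench(wrench);              // thread-safe read from F/T component
            mtsDoubleVec velocity;
            velocity.SetSize(6);
            for (int i = 0; i < 6; ++i) {
                // Kalman filtering + admittance gains would go here.
                velocity[i] = 0.01 * wrench[i];
            }
            SetVelocity(velocity);          // thread-safe command to UR5 component
        }
    private:
        mtsFunctionRead GetWrench;
        mtsFunctionWrite SetVelocity;
    };

    int main(void) {
        mtsManagerLocal *manager = mtsManagerLocal::GetInstance();
        manager->AddComponent(new ControlTask());
        // The F/T and UR5 SAW components would be added here, then connected:
        // manager->Connect("Control", "FT",    "FTSensor", "ProvidesFT");
        // manager->Connect("Control", "Robot", "UR5",      "ProvidesControl");
        manager->CreateAllAndWait(2.0 * cmn_s);
        manager->StartAllAndWait(2.0 * cmn_s);
        // ... run until shutdown ...
        manager->KillAllAndWait(2.0 * cmn_s);
        return 0;
    }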

Dependencies

Information regarding project dependencies is best organized in table form as shown below.

Milestones and Status

  1. Milestone name: Interfacing of components (UR5, Robotiq, computer)
    • Start Date: 2/1
    • Planned Date: 2/4
    • Expected Date: 2/4
    • Status: Completed
  2. Milestone name: Admittance control (rudimentary)
    • Start Date: 2/6
    • Planned Date: 2/10
    • Expected Date: 2/10
    • Status: Completed
  3. Milestone name: Gravity compensation
    • Start Date: 2/11
    • Planned Date: 2/16
    • Expected Date: 2/16
    • Status: Completed
  4. Milestone name: Kalman Filtering
    • Start Date: 2/18
    • Planned Date: 4/4
    • Expected Date: 4/4
    • Status: Completed
  5. Milestone name: Incorporate 3DoF contact force sensor for probe force compensation
    • Start Date: 3/11
    • Planned Date: 3/27
    • Expected Date: 4/30
    • Status: Unable to proceed; although interfacing was performed, there were unfixable issues with the force sensor readings (a nonlinear, stochastic relationship among the three measured forces)
  6. Milestone name: User Study
    • Start Date: 4/6
    • Planned Date: 4/26
    • Expected Date: 5/3
    • Status: HIRB accepted; user study postponed due to the malfunctioning probe contact force sensor
  7. Milestone name: Virtual Fixtures
    • Start Date: 4/15
    • Planned Date: 5/3
    • Expected Date: 5/3
    • Status: Started at a basic level as a “hack” to encourage probe contact force for a demo in the absence of a working contact force sensor. More advanced functionality should certainly be implemented moving forward.

Reports and presentations

Project Bibliography

Other Resources and Project Files

All resources have been uploaded and linked individually above (for instance, instead of linking to an entire shared Google Drive of videos, I have shared and linked each video where necessary).

A GitHub repository does exist for this project, but is being kept private until all IP considerations have been made toward the end of this project. Please personally contact kevingilboy@jhu.edu for access.