Last updated: 2015-05-11 11:59am
The goal of this project is to develop visual servoing for intraoperative robotic ultrasound. Using a workstation with a GUI, the doctor would select a volume of interest from a reference image, such as a CT or MRI scan obtained before the procedure. The robot would then image the patient with an ultrasound probe at that volume of interest. This technique would allow the doctor to obtain accurate anatomy in real time, even when deformations and organ movement cause the reference image to differ from reality. An ImFusion plugin will be developed that compounds a 3D ultrasound volume from individual B-mode images and calculates the 3D transformation between the current ultrasound volume and the volume of interest, which is then used to move a KUKA iiwa lightweight robot to the corresponding location.
Certain medical procedures, such as tumor removal or neurosurgery, require a high level of anatomical precision. Although preoperative imaging modalities such as CT and MRI offer high contrast and spatial resolution, these data can differ from the patient's actual anatomy during surgery due to factors such as breathing. As a result, intraoperative imaging is necessary to obtain the real-time anatomy of a patient's organs.
Among medical imaging modalities, ultrasound has several advantages: it is cheap, portable, provides images in real time, and does not use ionizing radiation. CT and MRI provide much higher contrast than ultrasound, which gives them considerable clinical utility for preoperative imaging, but unlike ultrasound they produce static images that are difficult to update during surgery. As a result, ultrasound is commonly used for intraoperative imaging to obtain real-time anatomical data. Doctors may also combine preoperative ultrasound with intraoperative ultrasound for certain procedures. However, a handheld 2D ultrasound scan is user-dependent, because acquiring high-quality images requires specialized skills and training.
Robotic ultrasound is a thriving branch of research on computer-assisted medical interventions. Because a robot can position an ultrasound probe with greater precision and dexterity than a human operator, robotic ultrasound can improve patient outcomes and shorten procedure times. By acquiring 2D B-mode slices and tracking the probe location, it is possible to compound a 3D ultrasound volume. Prior work has focused on ultrasound probes controlled by a human operator via a telemanipulation system; this project instead focuses on automatic guidance of the ultrasound probe by visual servoing. Visual servoing registers the current input image to a volume of interest to obtain the transformation that will move the robot toward that volume. Once the robot has moved, it acquires a new image and repeats the process until the current image corresponds to the volume of interest.
To combine the advantages of intraoperative ultrasound and preoperative imaging, a hybrid solution is proposed that allows the doctor to select a particular volume in the reference image and have the robot acquire the ultrasound image at the corresponding location. The robot performs visual servoing by three-dimensional registration of ultrasound with the reference image, yielding the transformation between the current and desired probe locations. This transformation is used to update the robot's position and acquire a new ultrasound image, repeating until the current probe location is sufficiently close to the desired one.
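The closed loop can be summarized in a short sketch. This is illustrative only: the helpers registerCurrentVolumeToTarget() and commandRelativeMotion() are hypothetical placeholders for the ImFusion registration and robot communication modules, not actual API names.

<code cpp>
#include <Eigen/Geometry>

// Hypothetical interfaces, assumed for illustration only:
//  - registerCurrentVolumeToTarget(): compounds the current 3D ultrasound
//    volume and returns the rigid transform to the volume of interest.
//  - commandRelativeMotion(): forwards that transform to the robot controller.
Eigen::Isometry3d registerCurrentVolumeToTarget();
void commandRelativeMotion(const Eigen::Isometry3d& delta);

// Iterate until the residual translation falls below the tolerance
// (1 mm in this project; see Results).
void visualServo(double toleranceMm = 1.0) {
    while (true) {
        Eigen::Isometry3d delta = registerCurrentVolumeToTarget();
        if (delta.translation().norm() <= toleranceMm)
            break; // close enough to the volume of interest
        commandRelativeMotion(delta);
    }
}
</code>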
The ImFusion software has extensive medical image processing functionality, with existing modules for calculating the transformation between 3D volumes. As a result, the main task is to integrate ImFusion modules with the KUKA robot to create a visual servo controller that moves the ultrasound probe.
The specific aims of the project are:
* Develop an ImFusion plugin that compounds a 3D ultrasound volume from tracked 2D B-mode images and registers it to a volume of interest in the reference image.
* Establish communication between ImFusion on the workstation and the KUKA Sunrise controller.
* Implement closed-loop visual servoing that moves the probe until the registration residual is sufficiently small.
* Evaluate the system on an ultrasound phantom using ultrasound-ultrasound and ultrasound-MRI registration.
The project consists of two main software components: (i) an image analysis module, which compounds a 3D volume from the ultrasound probe images and calculates the transformation between the 3D ultrasound volume and the reference image, and (ii) a robot communication module, which transfers information between ImFusion and KUKA regarding the current robot state and the commanded movement.
<fs medium>Image Analysis</fs>
A workstation is used to run ImFusion, a medical imaging software suite created by research fellows at CAMP. It provides existing C++ modules for three-dimensional ultrasound-MRI and ultrasound-ultrasound registration, as well as functions that obtain the transformation between images and reconstruct a 3D ultrasound volume from planar ultrasound scans with known poses. The software also provides an SDK that allows the addition of new plugins and the development of a graphical user interface.
The ImFusion SDK is used to create a new plugin that communicates with the KUKA robot controller. The plugin takes as input the 2D B-mode images from the ultrasound probe along with the current location of the probe at each position. It stores a 3D reference image, the volume of interest, and a 3D ultrasound volume at the probe's current location obtained by aligning the 2D B-mode images based on position. Built-in modules are used to obtain the three-dimensional rotation and translation between the ultrasound volume at the probe's current location and the volume of interest within the reference image. The plugin then communicates instructions to move the ultrasound probe according to the computed transformation.
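As a rough illustration of how the registration result maps to a robot command, the commanded probe pose can be obtained by composing the current pose with the registration transform. The frame conventions below are assumptions for illustration, not taken from the ImFusion API.

<code cpp>
#include <Eigen/Geometry>

// basePoseProbe:       current probe pose in the robot base frame.
// regCurrentToTarget:  rigid transform from the current ultrasound volume to
//                      the volume of interest, expressed in the probe/image
//                      frame (assumed convention).
// Their composition is the pose the probe should be moved to.
Eigen::Isometry3d commandedProbePose(const Eigen::Isometry3d& basePoseProbe,
                                     const Eigen::Isometry3d& regCurrentToTarget) {
    return basePoseProbe * regCurrentToTarget;
}
</code>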
<fs medium>Robot Communication</fs>
The KUKA iiwa robot, which has 7 degrees of freedom, manipulates an ultrasound probe mounted on its tool tip using motion plans from the workstation running ImFusion. Since movement and image data must be sent over the LAN between the KUKA robot and the workstation, a network protocol is needed to transfer information between them. Robot control is provided by the KUKA Sunrise.Connectivity framework, and communication between Sunrise and ImFusion uses ROS to pass messages between the two processes. ROS uses a peer-to-peer topology in which processes run as independent nodes; a master server handles name registration, after which nodes exchange messages directly. ROS provides packages for commonly-used robot functionality, and ROS nodes can be written in C++, Java, and other programming languages.
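A minimal roscpp node for this kind of pose exchange might look as follows. The topic and frame names are illustrative assumptions, not the names actually used by the Sunrise node.

<code cpp>
#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>

// Print the pose reported by the robot (illustrative callback).
void onRobotPose(const geometry_msgs::PoseStamped::ConstPtr& msg) {
    ROS_INFO("Robot at (%.3f, %.3f, %.3f)",
             msg->pose.position.x, msg->pose.position.y, msg->pose.position.z);
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "imfusion_servo_bridge");
    ros::NodeHandle nh;

    // Assumed topic names for commanded and reported poses.
    ros::Publisher cmdPub =
        nh.advertise<geometry_msgs::PoseStamped>("/iiwa/command_pose", 1);
    ros::Subscriber stateSub = nh.subscribe("/iiwa/state_pose", 1, onRobotPose);

    geometry_msgs::PoseStamped cmd;
    cmd.header.frame_id = "iiwa_base";  // assumed base frame name
    cmd.pose.position.z = 0.5;          // example target, 0.5 m above the base
    cmd.pose.orientation.w = 1.0;       // identity orientation

    ros::Rate rate(10);  // publish at 10 Hz
    while (ros::ok()) {
        cmd.header.stamp = ros::Time::now();
        cmdPub.publish(cmd);
        ros::spinOnce();
        rate.sleep();
    }
    return 0;
}
</code>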
Since ImFusion only runs on Windows, a possible pitfall was that the core ROS system might have compatibility issues with Windows, as the documentation states it is designed only for Unix-based platforms. An alternative would be to run a ROS server within MATLAB and use OpenIGTLink to communicate between ImFusion and the ROS server, with a ROS node for Sunrise. This would preserve the ability to use ROS functionality, and ImFusion already contains a module for OpenIGTLink. If this approach does not work, a third alternative is to avoid ROS entirely and create an OpenIGTLink interface for Sunrise that communicates with the OpenIGTLink interface for ImFusion. This approach has the benefit of avoiding Windows compatibility issues, but ROS provides more existing functionality than OpenIGTLink.
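Had the ROS-free fallback been pursued, sending a pose over OpenIGTLink would use its standard transform message. The sketch below uses the OpenIGTLink C++ API; the host, port, and device name are illustrative.

<code cpp>
#include "igtlClientSocket.h"
#include "igtlTransformMessage.h"
#include "igtlMath.h"

int main() {
    // Connect to an OpenIGTLink server (18944 is the conventional port).
    igtl::ClientSocket::Pointer socket = igtl::ClientSocket::New();
    if (socket->ConnectToServer("127.0.0.1", 18944) != 0)
        return 1;  // connection failed

    // Build a 4x4 rigid transform: identity rotation, 50 mm along z.
    igtl::Matrix4x4 matrix;
    igtl::IdentityMatrix(matrix);
    matrix[2][3] = 50.0;

    igtl::TransformMessage::Pointer msg = igtl::TransformMessage::New();
    msg->SetDeviceName("ProbeCommand");  // illustrative device name
    msg->SetMatrix(matrix);
    msg->Pack();  // serialize header and body

    socket->Send(msg->GetPackPointer(), msg->GetPackSize());
    socket->CloseSocket();
    return 0;
}
</code>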
The final approach was to use an experimental version of ROS called WinRos that allows native development on Windows. Since it compiles with Microsoft Visual Studio, this package is compatible with ImFusion.
* Permission to use ImFusion
* J-Card access to the Robotorium and Mock OR
* Computer with GPU powerful enough for ImFusion
* Ultrasound probe and phantoms
* Kinect sensor
* Reliance on other students to make ROS node for KUKA Sunrise
Updated timeline after checkpoint presentation (color key for the timeline chart):
* Green - minimum deliverables
* Yellow - expected deliverables
* Red - maximum deliverables
* Black - milestones
Communication between ImFusion and the KUKA robot has been successfully achieved on Windows using ROS. The WinRos MSVC SDK allows ROS binaries to be built from source in Microsoft Visual C++ 2012, which is required for compatibility with the ImFusion SDK. A ROS plugin has been developed for ImFusion that sends a pose message to the KUKA robot consisting of the position and orientation the robot should move to, and accepts a pose message from the robot with the updated position. This is also useful for other projects within the CAMP research group, since reusing existing ROS modules reduces development effort and code duplication. An existing CAMP Sunrise.Connectivity module that performs visual servo control of the KUKA robot was reused.
ImFusion has also been used to compound a 3D volume from 2D ultrasound probe images with known poses via an existing ImFusion plugin. Registration of this volume to the volume of interest within the reference image yields the transformation to which the robot should move. This transformation is sent to the robot to obtain a new position of the ultrasound probe, and the image registration is repeated until the transformation between the ultrasound probe location and the volume of interest is 1 mm or less.
Visual servoing was performed on an ultrasound phantom. The experiment was conducted using ultrasound-MRI and ultrasound-ultrasound registration given an initial ultrasound position, with both rigid and freeform registration. The visual servoing algorithm was applied until the transformation between the current ultrasound image and the volume of interest was sufficiently small. The accuracy of the registration between the ultrasound image and the reference image was then measured as a function of the number of iterations.
In all cases, the robot accurately obtains the ultrasound scan corresponding to the selected volume in the preoperative image, as observed from the registration of the ultrasound volume with the volume of interest from the reference image. These images are shown in Figure 3. Ultrasound-ultrasound registration gives an image with less misalignment than ultrasound-MRI registration.
Accuracy was quantitatively defined using normalized cross-correlation (NCC) for ultrasound-ultrasound registration and linear correlation of linear combination (LC2) for ultrasound-MRI registration. These metrics measure the intensity-based similarity between two images, with the intuition that the images become more similar as their alignment improves. Registration accuracy was observed to converge to a stable value after a large number of iterations.
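For concreteness, a minimal NCC implementation over two equally-sized volumes (flattened to vectors) is sketched below; LC2 is more involved, fitting a local linear combination of MRI intensity and gradient to the ultrasound intensities, and is not shown.

<code cpp>
#include <cmath>
#include <cstddef>
#include <vector>

// Normalized cross-correlation between two intensity volumes of equal size.
// Returns a value in [-1, 1]; higher values indicate greater similarity,
// which is why NCC rises as the alignment of the two volumes improves.
double ncc(const std::vector<double>& a, const std::vector<double>& b) {
    const std::size_t n = a.size();
    double meanA = 0.0, meanB = 0.0;
    for (std::size_t i = 0; i < n; ++i) { meanA += a[i]; meanB += b[i]; }
    meanA /= n;
    meanB /= n;

    double num = 0.0, varA = 0.0, varB = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        const double da = a[i] - meanA;
        const double db = b[i] - meanB;
        num  += da * db;
        varA += da * da;
        varB += db * db;
    }
    return num / std::sqrt(varA * varB);
}
</code>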
Rigid and freeform registration give comparable accuracy, which may be because the phantom's stiff design prevents deformation.
The results provide a proof-of-concept that ultrasound-based visual servoing is feasible for obtaining the updated anatomy of a patient during surgery. Although prior work has focused on visual servoing using registration between two ultrasound images, the results present evidence that visual servoing between ultrasound and MRI might be feasible.
Since testing was only done using a single ultrasound phantom, future work should involve testing with additional phantoms to see if results are reproducible. Additional testing also needs to be done with more preoperative images.
Future work will involve integrating the existing ImFusion plugin with a Kinect sensor. This would provide visual feedback of the patient when the ultrasound probe is away from the patient's body, which is necessary because no ultrasound feedback is available when the probe is separated from the patient by air. Another future task is to integrate this project with a current project that also uses visual servoing with ImFusion but communicates with the KUKA robot using OpenIGTLink. Both projects require similar image registration functionality, which should be combined into a general-purpose module for visual servoing.