Eye-In-Hand Depth Image Registration for Surgical Robot

Last updated: 5/5/16, 11pm

Summary

Our goal is to provide image-depth registration for the Robotic ENT Microsurgery System (REMS) robot to assist a surgeon with aligning the robot in preparation for surgery. We use an Intel RealSense depth camera to obtain a point cloud, construct a surface mesh from the patient's CT scans, and align the two with a matching algorithm (such as Iterative Closest Point or Coherent Point Drift), allowing a higher-precision alignment of the robot.

Background, Specific Aims, and Significance

The Robotic ENT Microsurgery System (REMS) robot is a surgical robot that performs minimally invasive head and neck surgery through the body's natural openings. Currently, the surgeon must manually move and align the robot in preparation for surgery.

The range-image camera from Intel provides color and depth data, which we parse into point-cloud data using the company's SDK.
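The conversion from a depth image to a point cloud follows the standard pinhole camera model; the sketch below illustrates the idea in Python/numpy rather than through the RealSense SDK, and the intrinsic parameters (fx, fy, cx, cy) are placeholders that would come from the camera in practice.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an Nx3 point cloud
    using the pinhole model. (fx, fy) are focal lengths in pixels and
    (cx, cy) is the principal point; the SDK supplies these values."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading
```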

We want to leverage this range-image camera to help the surgeon align the robot more accurately and efficiently when preparing for surgery.

Deliverables

Technical Approach

The overall technical approach is outlined by this chart documenting the flow of information during a typical run of our software.

Obtain Mesh from CT Scans using 3DSlicer

We obtained a set of CT scans in their raw DICOM data format and used a program called 3DSlicer to convert them to the standard STL mesh format, which we in turn simplified using MeshLab.

Obtain Point Cloud from Camera and Process

Using the Intel RealSense SDK, we pull raw point clouds off the camera and process them with the Point Cloud Library (PCL) to downsample the clouds and remove the background planes. Each intermediate step can be seen below: the full original dense point cloud, the downsampled point cloud, and the downsampled cloud with its background plane removed.
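The two processing steps above can be sketched in a few lines of numpy; this is an illustrative stand-in for PCL's VoxelGrid filter and RANSAC plane segmentation (SACSegmentation + ExtractIndices), not the project's actual C++ pipeline.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Replace all points in each occupied voxel with their centroid
    (the idea behind PCL's VoxelGrid filter)."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inv)
    out = np.zeros((counts.size, 3))
    for d in range(3):
        out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return out

def remove_plane(points, dist_thresh=0.01, iters=200, seed=None):
    """RANSAC: fit the dominant plane, then return the points NOT on it
    (a stand-in for PCL's plane segmentation)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        n /= norm
        inliers = np.abs((points - p[0]) @ n) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]
```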

Register Point Cloud to Mesh

Once we have a processed point cloud, we compute the transformation between the cloud and the reference mesh (from the pre-operative CT scan). To obtain this registration we use the standard Iterative Closest Point (ICP) algorithm. We will not discuss ICP in great detail; briefly, it alternates between matching points across the two objects and refining its estimate of the rigid transformation between them, (hopefully) improving on each iteration. The results of one such application of ICP can be seen below.
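For concreteness, a minimal point-to-point ICP looks like the sketch below: nearest-neighbour matching alternated with the closed-form least-squares rigid fit (Kabsch/SVD). This is a toy version for intuition; in practice a library implementation (e.g. PCL's) with a k-d tree and outlier rejection is used.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form least-squares rigid transform (R, t) mapping src onto dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iters=30):
    """Point-to-point ICP: alternate nearest-neighbour matching with the
    closed-form rigid fit, accumulating the total transform."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (a k-d tree in real code)
        d = np.linalg.norm(cur[:, None] - dst[None], axis=2)
        matches = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```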

Calibration

Being able to calibrate the camera, so that we know the pose of the camera relative to the wrist of the robot, makes the registration much more useful. To do this, we set up a system of equations of the form AX = XB, where each A is the relative motion between two poses of the robot's wrist, each B is the corresponding relative motion of the calibration object as seen by the camera, and X is the unknown camera-to-wrist transformation.
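One standard way to solve AX = XB is to recover the rotation first from the axis-angle vectors of the A and B rotations (since alpha_i = R_X beta_i), then solve a linear least-squares system for the translation. The sketch below follows that two-step approach in numpy; it assumes at least two motion pairs with non-parallel rotation axes, and is illustrative rather than the project's actual implementation.

```python
import numpy as np

def rot_log(R):
    """Axis-angle vector (rotation 'logarithm') of a rotation matrix."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-10:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2 * np.sin(theta)) * w

def solve_ax_xb(As, Bs):
    """Solve A_i X = X B_i for the 4x4 transform X, given lists of 4x4
    relative robot motions A_i and camera-observed motions B_i."""
    alphas = np.array([rot_log(A[:3, :3]) for A in As])
    betas = np.array([rot_log(B[:3, :3]) for B in Bs])
    # rotation: least-squares R with R @ beta_i ~ alpha_i (SVD/Kabsch)
    H = betas.T @ alphas
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:
        Vt[-1] *= -1
        R = Vt.T @ U.T
    # translation: stack (R_Ai - I) t_X = R t_Bi - t_Ai, solve least squares
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([R @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R, t
    return X
```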

Dependencies

Milestones and Status

  1. Mount Camera to Robot
    • Planned Date: 2/28/16
    • Expected Date: 3/21/16
    • Status: Completed
  2. Construct Phantom from CT Scans
    • Planned Date: 3/6/16
    • Expected Date: 3/9/16
    • Status: Completed
  3. Perform Mesh to Point Cloud Registration
    • Planned Date: 3/11/16
    • Expected Date: 3/28/16
    • Status: Completed
  4. Validate Accuracy of Registration
    • Planned Date: 3/25/16
    • Expected Date: 4/1/16
    • Status: Completed
  5. Get Pose Information from Robot
    • Planned Date: 4/1/16
    • Expected Date: 4/8/16
    • Status: Completed
  6. Implement AX = XB Algorithm
    • Planned Date: 4/1/16
    • Expected Date: 4/8/16
    • Status: Completed
  7. Implement Full Calibration
    • Planned Date: 4/8/16
    • Expected Date: 4/30/16
    • Status: Incomplete
  8. Implement Guidance System
    • Planned Date: 4/8/16
    • Expected Date: 4/19/16
    • Status: Not Begun
  9. Decide on Maximum Deliverables
    • Planned Date: 4/8/16
    • Expected Date: 4/15/16
    • Status: Not Begun
  10. Deformable Registration OR Motion Tracking OR Determine Ideal Pose
    • Planned Date: 5/1/16
    • Expected Date: 5/1/16
    • Status: Not Begun

Reports and presentations

Project Bibliography

External Links

Our public GitHub repo contains our code and documentation:

https://github.com/zpaines/EyeInHand