Eye-In-Hand Depth Image Registration for Surgical Robot

Last updated: 5/5/16, 11pm

Summary

Our goal is to provide depth-image registration for the Robotic ENT Microsurgery System (REMS) robot to assist a surgeon in aligning the robot in preparation for surgery. We use an Intel RealSense depth camera to obtain a point cloud, construct a surface mesh from the patient's CT scans, and align the two with a matching algorithm (such as Iterative Closest Point or Coherent Point Drift), allowing higher-precision alignment of the robot.

  • Students: Joseph Min and Zachary Sabin
  • Mentor(s): Russell Taylor, Yunus Sivimli, and Bernhard Fuerst

Background, Specific Aims, and Significance

The Robotic ENT Microsurgery System (REMS) robot is a surgical robot that performs minimally invasive head and neck surgery through the body's natural openings. Currently, the surgeon must manually move and align the robot in preparation for surgery.

The Intel RealSense camera provides image and depth data, which we parse into point-cloud data using the company's SDK.
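As a rough sketch of what this parsing involves (not the SDK's actual API), each depth pixel can be back-projected through the standard pinhole camera model; the intrinsics fx, fy, cx, cy below are placeholders for the camera's real calibration values:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an N x 3 point cloud
    using the pinhole model. Pixels with zero depth are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # keep only pixels with valid depth
```

In practice the SDK handles this conversion for us; the sketch only shows the geometry behind it.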

We want to use this depth data to help the surgeon align the robot more accurately and efficiently when preparing for surgery.

Deliverables

  • Minimum: (Completed)
    1. Register between pre-operative model and camera point cloud
  • Expected: (Incomplete)
    1. Test registration accuracy on a phantom with a CT image
    2. Provide some type of guidance to robot operator
    3. AX = XB calibration to get camera position relative to robot
  • Maximum: (Incomplete)
    1. Find ideal starting pose for robot and assist in initial setup OR
    2. Track robot motions using camera throughout operation OR
    3. Deformable registration using statistical atlas

Technical Approach

The overall technical approach is outlined by the chart below, which documents the flow of information during a typical run of our software.

Obtain Mesh from CT Scans using 3DSlicer

We obtained a set of CT scans in their raw DICOM data format and used a program called 3DSlicer to convert them to the standard STL mesh format, which we in turn simplified using MeshLab.

  • 3DSlicer

  • MeshLab

Obtain Point Cloud from Camera and Process

Using the Intel RealSense SDK, we pull raw point clouds off the camera and process them with the Point Cloud Library (PCL) to downsample the clouds and remove the background planes. Each intermediate step can be seen below: the full original, dense point cloud; the downsampled point cloud; and the downsampled cloud with its background plane removed.
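Our pipeline uses PCL's filters for these two steps, but their logic can be sketched in Python/NumPy as follows. The voxel size and distance threshold are illustrative values, not our tuned parameters:

```python
import numpy as np

def voxel_downsample(points, voxel=0.01):
    """Keep one point (the centroid) per occupied voxel,
    similar in spirit to PCL's VoxelGrid filter."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inv)
    out = np.zeros((inv.max() + 1, 3))
    for d in range(3):
        out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return out

def remove_plane(points, dist=0.01, iters=300, rng=None):
    """RANSAC plane segmentation: fit the dominant plane and return the
    points that do NOT lie on it (as with PCL's SACSegmentation plus
    ExtractIndices with the negative flag set)."""
    rng = np.random.default_rng(0) if rng is None else rng
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        n = n / norm
        inliers = np.abs((points - p[0]) @ n) < dist
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]
```

Downsampling keeps ICP tractable on dense RealSense frames, and removing the dominant plane (e.g. the table under the phantom) keeps it from dominating the registration.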

Register Point Cloud to Mesh

Once we have a processed point cloud, we attempt to compute the rigid transformation between the cloud and the reference mesh (from the pre-operative CT scan). For this registration we use the standard ICP algorithm. We will not discuss ICP in great detail; in brief, it alternates between matching each cloud point to its closest point on the mesh and computing the rigid transformation that best aligns those matches, (hopefully) improving the estimate with each iteration. The results of one such application of ICP can be seen below.
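The two alternating steps can be sketched as a minimal point-to-point ICP in NumPy. This is an illustration, not our implementation: it matches against a point set rather than a mesh surface and uses brute-force nearest neighbours where a real implementation would use a k-d tree:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch/Procrustes: least-squares rotation R and translation t
    mapping src onto dst for known point correspondences."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50):
    """Point-to-point ICP: alternate closest-point matching with the
    optimal rigid transform for those matches."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (a k-d tree in practice)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Like any ICP variant, this only converges to the correct alignment from a reasonable initial pose, which is why the operator's rough initial placement of the robot still matters.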

Calibration

Calibrating the camera so that we know its pose relative to the wrist of the robot makes the registration much more useful, since camera-frame measurements can then be expressed in the robot's frame. To do this we set up a system of equations of the form AX = XB, where each A is the relative motion between two robot wrist poses, each B is the corresponding relative motion of the calibration object as seen by the camera, and X is the unknown camera-to-wrist transformation.
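One standard way to solve AX = XB is a Park-Martin-style least-squares method: the rotation axes of each A and B pair are related by the unknown rotation, and the translation then falls out of a linear system. The sketch below illustrates that approach under the assumption of 4x4 homogeneous transforms; it is not necessarily the exact variant we implemented:

```python
import numpy as np

def rot_log(R):
    """Axis-angle vector (matrix logarithm) of a rotation matrix."""
    th = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if th < 1e-12:
        return np.zeros(3)
    return th / (2 * np.sin(th)) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def solve_axxb(As, Bs):
    """Least-squares solution X of A_i X = X B_i.
    As: relative robot wrist motions; Bs: matching relative motions of
    the calibration object in the camera frame (4x4 homogeneous)."""
    # 1) Rotation: A X = X B implies axis(A_i) = R_X @ axis(B_i),
    #    so fit R_X to the axis pairs by the Kabsch method.
    alphas = np.array([rot_log(A[:3, :3]) for A in As])
    betas = np.array([rot_log(B[:3, :3]) for B in Bs])
    H = betas.T @ alphas
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    Rx = Vt.T @ D @ U.T
    # 2) Translation: (R_Ai - I) t_X = R_X t_Bi - t_Ai, stacked and
    #    solved in the least-squares sense.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tx = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tx
    return X
```

At least two motion pairs with non-parallel rotation axes are needed for a unique solution, which is why the calibration routine moves the wrist through several distinct rotations.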

Dependencies

  • Intel RealSense Camera - Completed
  • Camera SDK (Intel RealSense) - In Progress. We have a Unix driver for the camera, but we are experiencing errors when using it. It works well enough for testing at the moment, but we will need to resolve these errors eventually.
  • Access to REMS Robot - Completed
  • CT Scans for Phantom - Completed
  • Camera to Robot Mount - Completed

Milestones and Status

  1. Mount Camera to Robot
    • Planned Date: 2/28/16
    • Expected Date: 3/21/16
    • Status: Completed
  2. Construct Phantom from CT Scans
    • Planned Date: 3/6/16
    • Expected Date: 3/9/16
    • Status: Completed
  3. Perform Mesh to Point Cloud Registration
    • Planned Date: 3/11/16
    • Expected Date: 3/28/16
    • Status: Completed
  4. Validate Accuracy of Registration
    • Planned Date: 3/25/16
    • Expected Date: 4/1/16
    • Status: Completed
  5. Get Pose Information from Robot
    • Planned Date: 4/1/16
    • Expected Date: 4/8/16
    • Status: Completed
  6. Implement AX = XB Algorithm
    • Planned Date: 4/1/16
    • Expected Date: 4/8/16
    • Status: Completed
  7. Implement Full Calibration
    • Planned Date: 4/8/16
    • Expected Date: 4/30/16
    • Status: Incomplete
  8. Implement Guidance System
    • Planned Date: 4/8/16
    • Expected Date: 4/19/16
    • Status: Not Begun
  9. Decide on Maximum Deliverables
    • Planned Date: 4/8/16
    • Expected Date: 4/15/16
    • Status: Not Begun
  10. Deformable Registration OR Motion Tracking OR Determine Ideal Pose
    • Planned Date: 5/1/16
    • Expected Date: 5/1/16
    • Status: Not Begun

Reports and Presentations

Project Bibliography

  • [1] S. Billings, A. Kapoor, M. Keil, B. J. Wood, and E. Boctor, “A hybrid surface/image-based approach to facilitate ultrasound/CT registration”, in SPIE Medical Imaging 2011: Ultrasonic Imaging, Tomography, and Therapy, Lake Buena Vista, Florida, Feb 13, 2011. pp. 79680V-1 to 79680V-12.
  • [2] S. Billings, E. Boctor, and R. H. Taylor, “Iterative Most-Likely Point Registration (IMLP): A Robust Algorithm for Computing Optimal Shape Alignment”, PLOS ONE, vol. 10, no. 3, e0117688, pp. 1-45, 2015. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0117688 doi:10.1371/journal.pone.0117688
  • [3] S. Billings and R. Taylor, “Generalized Iterative Most-Likely Oriented Point (G-IMLOP) Registration”, Int. J. Computer Assisted Radiology and Surgery, vol. 10, no. 8, pp. 1213-1226, 2015. DOI 10.1007/s11548-015-1221-2
  • [4] K. C. Olds, P. Chalasani, P. Pacheco-Lopez, I. Iordachita, L. M. Akst, and R. H. Taylor, “Preliminary Evaluation of a New Microsurgical Robotic System for Head and Neck Surgery”, in IEEE Int. Conf on Intelligent Robots and Systems (IROS), Chicago, Sept 14-18, 2014. pp. 1276-1281.
  • [5] K. Olds, Robotic Assistant Systems for Otolaryngology-Head and Neck Surgery, PhD thesis in Biomedical Engineering, Johns Hopkins University, Baltimore, March 2015.

External Links

Our public GitHub repository contains our code and its documentation.

https://github.com/zpaines/EyeInHand

courses/446/2016/446-2016-09/project_09_main_page.txt · Last modified: 2016/05/06 00:42 by jmin9@johnshopkins.edu



