Our goal is to provide image-depth registration for the Robotic ENT Microsurgery System (REMS) to assist the surgeon in aligning the robot in preparation for surgery. We use an Intel RealSense depth camera to obtain a point cloud and the patient's CT scans to construct a surface mesh, then align the two with a matching algorithm (such as Iterative Closest Point or Coherent Point Drift), allowing a higher-precision alignment of the robot.
The Robotic ENT Microsurgery System (REMS) is a surgical robot that performs minimally invasive head and neck surgery through the body's natural openings. Currently, the surgeon must manually move and align the robot in preparation for surgery.
Intel's range-image camera provides image and depth data, which we parse into point-cloud data using the company's SDK.
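As an illustration, here is a minimal sketch of grabbing a depth frame and converting it into a point cloud. It uses the current librealsense2 API (rs2::pipeline, rs2::pointcloud), which differs in naming from the RealSense SDK available when this project was done, so treat it as a sketch rather than our exact acquisition code.

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    // Start streaming with the camera's default depth configuration.
    rs2::pipeline pipe;
    pipe.start();

    // Grab one frameset and pull out its depth frame.
    rs2::frameset frames = pipe.wait_for_frames();
    rs2::depth_frame depth = frames.get_depth_frame();

    // Project the depth image into 3D points (camera coordinates, in meters).
    rs2::pointcloud pc;
    rs2::points points = pc.calculate(depth);

    // points.get_vertices() exposes the (x, y, z) vertices, which can be copied
    // into a PCL cloud for the processing described below.
    const rs2::vertex* vertices = points.get_vertices();
    (void)vertices;  // hand off to downstream processing here

    return 0;
}
```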
We want to leverage this range-image camera to help the surgeon align the robot more accurately and efficiently in preparation for surgery.
The overall technical approach is outlined in the chart below, which documents the flow of information during a typical run of our software.
Obtain Mesh from CT Scans using 3DSlicer
We obtained a set of CT scans in their raw DICOM data format and used a program called 3DSlicer to convert them to the standard STL mesh format, which we in turn simplified using MeshLab.
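For the registration step we need the mesh in a form PCL can work with; one straightforward option is to load the simplified STL and treat its vertices as a point cloud. A minimal sketch of that approach (the file path is illustrative):

```cpp
#include <string>
#include <pcl/point_types.h>
#include <pcl/conversions.h>
#include <pcl/io/vtk_lib_io.h>

// Load the simplified STL mesh and expose its vertices as a PCL point cloud.
pcl::PointCloud<pcl::PointXYZ>::Ptr loadMeshVertices(const std::string& stlPath)
{
    pcl::PolygonMesh mesh;
    pcl::io::loadPolygonFileSTL(stlPath, mesh);  // e.g. "patient_ct.stl" (illustrative name)

    pcl::PointCloud<pcl::PointXYZ>::Ptr meshCloud(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::fromPCLPointCloud2(mesh.cloud, *meshCloud);
    return meshCloud;
}
```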
Obtain Point Cloud from Camera and Process
Using the Intel RealSense SDK, we were able to pull raw point clouds off the camera, which we then processed with the Point Cloud Library (PCL) to downsample the clouds and remove the background planes. Each intermediate step can be seen below: the full, dense original point cloud; the downsampled point cloud; and the downsampled cloud with its background plane removed.
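A minimal sketch of these two processing steps with PCL, using a voxel-grid filter for downsampling and RANSAC plane segmentation to strip the background plane; the leaf size and distance threshold are illustrative values, not our tuned parameters:

```cpp
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/segmentation/sac_segmentation.h>

// Downsample a raw camera cloud and strip the dominant background plane.
pcl::PointCloud<pcl::PointXYZ>::Ptr
preprocessCloud(const pcl::PointCloud<pcl::PointXYZ>::Ptr& raw)
{
    // 1. Voxel-grid downsampling to thin out the dense camera cloud.
    pcl::PointCloud<pcl::PointXYZ>::Ptr downsampled(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::VoxelGrid<pcl::PointXYZ> voxel;
    voxel.setInputCloud(raw);
    voxel.setLeafSize(0.005f, 0.005f, 0.005f);  // 5 mm voxels (example value)
    voxel.filter(*downsampled);

    // 2. RANSAC plane segmentation to find the dominant (background) plane.
    pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
    pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setModelType(pcl::SACMODEL_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.01);  // 1 cm (example value)
    seg.setInputCloud(downsampled);
    seg.segment(*inliers, *coefficients);

    // 3. Keep the complement of the plane inliers, i.e. everything but the plane.
    pcl::PointCloud<pcl::PointXYZ>::Ptr noPlane(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::ExtractIndices<pcl::PointXYZ> extract;
    extract.setInputCloud(downsampled);
    extract.setIndices(inliers);
    extract.setNegative(true);
    extract.filter(*noPlane);

    return noPlane;
}
```

Removing the plane by keeping the complement of the RANSAC inliers leaves only the points belonging to the subject, which is what the registration step below operates on.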
Register Point Cloud to Mesh
Once we have a processed point cloud, we attempt to compute the transformation between the cloud and the reference mesh (from the pre-operative CT scan). For this registration we use the standard ICP algorithm. We will not discuss ICP in great detail, but it iteratively estimates the transformation between two objects, (hopefully) improving the estimate on each iteration. The results of one such application of ICP can be seen below.
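A minimal sketch of this step with PCL's IterativeClosestPoint, aligning the processed camera cloud to a cloud of mesh vertices (for example, one loaded as in the mesh sketch above); the stopping criteria are illustrative, not our actual settings:

```cpp
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

// Estimate the rigid transform that aligns the camera cloud to the CT-derived mesh cloud.
Eigen::Matrix4f
registerCloudToMesh(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cameraCloud,
                    const pcl::PointCloud<pcl::PointXYZ>::Ptr& meshCloud)
{
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(cameraCloud);    // cloud to be transformed
    icp.setInputTarget(meshCloud);      // reference mesh vertices
    icp.setMaximumIterations(100);      // example stopping criteria
    icp.setTransformationEpsilon(1e-8);

    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned);

    // icp.hasConverged() and icp.getFitnessScore() indicate registration quality.
    return icp.getFinalTransformation();
}
```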
Calibration
Calibrating the camera so that we know its pose relative to the robot's wrist makes the registration much more useful. To do this, we set up a system of equations of the form AX = XB, where each A is the relative motion between two poses of the robot's wrist, each B is the corresponding relative motion of the calibration object as seen by the camera, and X is the unknown camera-to-wrist transformation.
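For illustration, one standard way to solve an AX = XB system is a hand-eye calibration solver such as Tsai-Lenz. The sketch below uses OpenCV's cv::calibrateHandEye, which post-dates this project (OpenCV 4.1+), purely to show the shape of the computation; the inputs are the recorded wrist poses in the robot base frame and the corresponding calibration-object poses in the camera frame.

```cpp
#include <vector>
#include <opencv2/calib3d.hpp>

// Solve AX = XB for the camera-to-wrist transform X.
// One rotation/translation pair per robot station.
void estimateCameraToWrist(const std::vector<cv::Mat>& R_wrist2base,
                           const std::vector<cv::Mat>& t_wrist2base,
                           const std::vector<cv::Mat>& R_target2cam,
                           const std::vector<cv::Mat>& t_target2cam,
                           cv::Mat& R_cam2wrist,
                           cv::Mat& t_cam2wrist)
{
    // OpenCV internally forms the relative motions A_i (robot) and B_i (camera)
    // and solves A_i X = X B_i, here with the Tsai-Lenz method.
    cv::calibrateHandEye(R_wrist2base, t_wrist2base,
                         R_target2cam, t_target2cam,
                         R_cam2wrist, t_cam2wrist,
                         cv::CALIB_HAND_EYE_TSAI);
}
```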
Our public GitHub repo contains our code documentation.