Image Processing for Video-CT Registration in Sinus Surgery

Last updated: 3/24/2015 at 12:58AM

Summary

For this project, we will develop computer vision software to track tool tip locations during sinus surgeries. We will detect occluding contours in endoscopic video and register them to CT image data to provide real-time tracking.

Background, Specific Aims, and Significance

Currently, magnetic trackers are common in robotic microsurgery, where maintaining line of sight is difficult. While these trackers are relatively inexpensive, interference from metal instruments in the operating room limits their accuracy and effectiveness. We aim to enhance tracker registration, and hence accuracy, by combining high-resolution patient CT data with real-time video from the endoscopic surgical camera. Through these inexpensive, software-based algorithms, we hope to substantially improve tracking accuracy. We hope to demonstrate these improvements using a real-time augmented-reality overlay that will provide the surgeon with valuable anatomical data. This project will deliver significant accuracy improvements to magnetic trackers, enabling new uses for this existing technology at low cost.

We will be contributing to the registration algorithm currently being developed by Seth Billings. Specifically, we will work on extracting occluding contours from video. Occluding contours are the edges of physical features that obscure the background; they delineate distinct “layers” in a scene. They are distinct from texture contours, which merely outline small surface deviations and discolorations. By extracting the occluding contours seen by the endoscope and mapping them to the CT model, we can achieve an accurate registration. Once our occluding contour algorithm interfaces properly with the registration algorithm, we hope to implement an augmented reality overlay.

Specific Aims

  1. An algorithm that accurately and efficiently extracts occluding contours from sinus surgery videos
  2. Proper integration with existing registration software
  3. Real-time augmented reality overlay on endoscope video feed

Deliverables

Technical Approach

Our CIS II project on image processing can be divided into three steps:

  1. Contour Detection: The goal of this project is to be able to rigorously register and track surgical tool position in CT coordinates by extracting occluding contours from video data.
    • Develop a new algorithm, based on existing optical flow algorithms, specifically to extract occluding contours efficiently and accurately. The algorithm will be prototyped in MATLAB and then ported to C/C++ for efficiency.
    • Using Horn-Schunck optical flow in combination with Canny edge detection, we expect to find accurate contours of tissue in sinus surgery video and filter out extraneous detail, yielding well-defined, clear edges in the real-time video feed (a sketch of this combination appears after this list).
    • Another approach is to train an SVM on a training set of images using a number of image-related features (TBD). Using this binary classifier, we can determine which edges belong to the hand-drawn ground-truth edges and which are spurious. Once the SVM is trained, we can apply it to new images to extract the occluding contours.
    • Currently, the SVM method above has shown the most promising results. The product is a package of MATLAB code that reads input images from a directory, trains the SVM to distinguish true positives from false positives, and uses the resulting classifier to predict edge pixels in any sinus surgery image (a sketch of this workflow appears after this list).
      1. The program uses hand-labeled ground truths during training. The labeled images are found in the 'labeled' directory of the package.
      2. The inputs of the package are found in the 'input' directory. The program automatically reads these images and predicts where their occluding contours are.
      3. The outputs for each image are:
        1. A binary image of occluding contours (variable name: testFeatures)
        2. Normal vectors for each pixel in the above binary image (variable name: normals)
        3. X and Y positions in the binary image of each normal vector (variable names: x, y)
  2. Integration: Contour detection will integrate with existing registration algorithm developed by Seth Billings. (Expected Deliverable)
    • Once integration is complete and we develop an efficient algorithm, we will be able to use real-time contour detection to track the position of the tool tip in CT coordinates.
  3. Augmented Reality: An AR overlay will be developed to provide real-time information for the surgeon
    • Use CT data with registration to overlay an augmented reality interface over the video in real-time.
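
As a concrete illustration of the optical-flow approach above, the sketch below combines Horn-Schunck optical flow with Canny edge detection, keeping only edges that lie on flow discontinuities. This is a minimal MATLAB sketch, not the project code: it assumes the Computer Vision System Toolbox (opticalFlowHS, estimateFlow) and the Image Processing Toolbox (edge, imgradient), and the video file name and threshold value are placeholders.

    % Sketch: filter Canny edges using Horn-Schunck flow discontinuities.
    vid       = VideoReader('sinus_clip.avi');   % placeholder file name
    opticFlow = opticalFlowHS;                   % Horn-Schunck flow estimator

    while hasFrame(vid)
        frame = rgb2gray(im2single(readFrame(vid)));

        % Dense Horn-Schunck flow between the previous and current frame.
        flow = estimateFlow(opticFlow, frame);

        % Occluding contours should coincide with discontinuities in the
        % flow field, so measure how sharply the flow magnitude changes.
        flowJump = imgradient(flow.Magnitude);

        % Canny edges give candidate contours; keep only those lying on a
        % strong flow discontinuity, discarding texture edges that move
        % with the surrounding tissue.
        cannyEdges     = edge(frame, 'Canny');
        occludingEdges = cannyEdges & (flowJump > 0.05);   % threshold is a guess
    end

The flow-magnitude gradient used here as the discontinuity measure is an assumption for illustration; the actual filtering criterion is part of the algorithm design described above.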
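
The sketch below mirrors the SVM package workflow described above: train on a hand-labeled image from the 'labeled' directory, predict contour pixels for an image from the 'input' directory, and produce the testFeatures, normals, x, and y outputs. The per-pixel features (intensity, gradient magnitude, Canny response), the file names, and the use of fitcsvm/predict from the Statistics and Machine Learning Toolbox are illustrative assumptions, as is taking each normal from the image gradient direction at the contour pixel.

    % Sketch of the SVM-based contour classifier and its outputs.
    % Assumed per-pixel features: intensity, gradient magnitude, Canny response.
    pixelFeatures = @(I) [I(:), ...
                          reshape(imgradient(I), [], 1), ...
                          double(reshape(edge(I, 'Canny'), [], 1))];

    % --- Training on a hand-labeled ground truth image (placeholder names) ---
    trainImg    = im2double(rgb2gray(imread(fullfile('labeled', 'frame01.png'))));
    groundTruth = imread(fullfile('labeled', 'frame01_gt.png')) > 0;
    % In practice the training pixels would likely be subsampled for speed.
    svmModel    = fitcsvm(pixelFeatures(trainImg), groundTruth(:));

    % --- Prediction on an image from the 'input' directory ---
    testImg      = im2double(rgb2gray(imread(fullfile('input', 'frame02.png'))));
    predicted    = predict(svmModel, pixelFeatures(testImg));
    testFeatures = reshape(logical(predicted), size(testImg));   % binary contour image

    % --- Outputs for registration: normals and their (x, y) positions ---
    [gx, gy] = imgradientxy(testImg);        % gradient points across the contour
    [y, x]   = find(testFeatures);           % positions of contour pixels
    nx  = gx(sub2ind(size(testImg), y, x));
    ny  = gy(sub2ind(size(testImg), y, x));
    len = hypot(nx, ny) + eps;               % avoid division by zero
    normals = [nx ./ len, ny ./ len];        % one unit normal per contour pixel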

Dependencies

* Sinus Surgery Videos provided by Dr. Reiter

* Video-CT Registration algorithm from Seth Billings

* Sinus surgery video with corresponding CT images

Milestones and Status

  1. Research: Decide on which technique we want to use and which paper(s) we will develop our algorithm from
    • Planned Date: 2/27/15
    • Expected Date: 2/27/15
    • Status: Complete
  2. Design: Construct and modify a draft of our algorithm with pseudocode
    • Planned Date: 3/06/15
    • Expected Date: 3/10/15
    • Status: Complete
  3. Implementation: Have a package that can successfully run contour detection on an image or video
    • Planned Date: 3/20/15
    • Expected Date: 3/23/15
    • Status: Complete
  4. Testing: Make sure our package is successful with our surgical video data based on hand labeled ground truth
    • Planned Date: 3/27/15
    • Completed Date: 5/4/15
    • Status: Complete
  5. Integration with CT Registration: Ensure our generated occlusion contours correctly register to the CT images by calculating edge normals
    • Planned Date: 4/10/15
    • Completed Date: 5/4/15
    • Status: Complete

Reports and presentations

Project Bibliography

Other Resources and Project Files