EchoSure: Detecting Blood Clots Post-Operatively in Blood Vessel Anastomoses

Last updated: 5/7/2014

Summary

The project for the semester involves creating an intuitive and accurate user guidance system that ensures the nurse returns the ultrasound probe to the correct monitoring location each time.

  • Students: Michael Ketcha (mketcha3) and Alessandro Asoni (aasoni1)
  • Mentor(s): Dr. Jerry Prince (prince@jhu.edu), Dr. Emad Boctor (eboctor@jhmi.edu), Dr. Devin O'Brien-Coon (dcoon@jhmi.edu), Dr. Nathanael Kuo (nkuo8@jhmi.edu)

Background, Specific Aims, and Significance

In skin flap transplant surgeries, the flap of skin is large enough to need its own blood supply, so a blood vessel anastomosis is required. However, approximately 8-15% of these anastomoses form a blood clot in the days that follow the surgery. If the clot is caught in time, the patient returns to surgery, where the surgeon can clear the clot and save the skin flap. Approximately half of the time, however, the clot is not detected quickly enough and the flap of skin undergoes necrosis [1,2]. Current methods for detecting clots rely on examining pulse oximetry in the flap of skin, which detects a clot only indirectly and therefore with an inherent delay. Our approach instead aims to detect the clot directly.

This project originated as a CBID master's project in the Biomedical Engineering department, during which time much of the market value and proof of concept was established, leading to several grants for funding and a provisional patent on the technology.

Our approach will be to use ultrasound Doppler imaging to track the velocity of blood flow at the anastomosis site. The location of the anastomosis will be tracked using a biodegradable PLGA fiducial placed under the vessel during surgery. A nurse will then return every hour to monitor the change in velocity over time. The project for the semester involves creating an intuitive and accurate guidance system that ensures the nurse returns to the correct location each time. Specifically, we will focus on developing an algorithm that processes an ultrasound video file and returns the key points for the fiducial pose evaluation. This includes analysis and implementation of tracking algorithms for video processing.

Deliverables

  • Minimum: April 3rd
    1. Same as expected, but with a slower run time (not real-time processing).
  • Expected: April 3rd
    1. Develop a point location estimator system that processes ultrasound video data and returns interest points for the fiducial. This system will include pure detection and point tracking, which will interact to allow faster point detection in sequential frames.
    2. Rough estimation of the confidence in each detected point.
  • Maximum: May 8th
    1. Use statistically rigorous frameworks to optimize the estimation of confidence in each detected point.
    2. Both the point location estimator system and the pose estimator system.

Technical Approach

System Overview:

We will implement a set of MATLAB routines that process a sequence of frames of ultrasound (US) video. The diagram above shows how we expect the components of the system to interact. Our approach is as follows:

• First we will develop a point location estimator system. One component will use image processing and computer vision algorithms to analyze a single frame of the video and identify interest points: the corners of the fiducial model's projection onto the imaging plane slice. Since processing one image at a time is both slow and unreliable, we also plan to develop a point tracker system, which will use information from previous frames to ease the job of the point detector. It will do this by shrinking the region of interest (ROI): by estimating how the points are moving, it predicts where they are expected to be in the following frame.
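As a concrete illustration of how tracked point motion can shrink the detector's search region, the sketch below predicts the next-frame ROI under a constant-velocity assumption. It is written in Python for illustration only (the actual routines are in MATLAB), and the function and variable names are our own:

```python
import numpy as np

def predict_roi(prev_pts, curr_pts, margin=20):
    """Predict a rectangular search ROI for the next frame.

    prev_pts, curr_pts: (N, 2) arrays of matched point locations in the
    two most recent frames. A constant-velocity assumption shifts the
    current points by their mean displacement; the ROI is the bounding
    box of the predicted points, padded by `margin` pixels.
    """
    velocity = np.mean(curr_pts - prev_pts, axis=0)  # mean per-frame motion
    predicted = curr_pts + velocity                  # expected next positions
    x_min, y_min = predicted.min(axis=0) - margin
    x_max, y_max = predicted.max(axis=0) + margin
    return (x_min, y_min, x_max, y_max)

# Example: two points drifting 3 px right and 1 px down per frame
prev_pts = np.array([[100.0, 50.0], [120.0, 60.0]])
curr_pts = prev_pts + np.array([3.0, 1.0])
roi = predict_roi(prev_pts, curr_pts, margin=10)
print(roi)  # (96.0, 42.0, 136.0, 72.0)
```

The detector then searches only inside this box instead of the full frame, which is the speed-up the tracker provides.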

• The points determined in this way are fed to the pose estimator. This system will use the points, together with knowledge of the 3D model, to determine which plane slice they came from. Ideally the pose estimator will use some form of tracking as well, in order to speed up the process. This relies on the assumptions that the pose will not change significantly from one frame to the next and that the ultrasound probe is moved smoothly.
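One simple way a pose estimator could compare detected points against candidate plane slices of the 3D fiducial model is to score each candidate by the mean nearest-neighbour distance to the detections. This is only a sketch of the idea in Python (not the project's actual pose algorithm, and the names are hypothetical):

```python
import numpy as np

def slice_score(detected, model_slice):
    """Mean nearest-neighbour distance from each detected 2D point to
    the model points of one candidate plane slice (lower is better)."""
    d = np.linalg.norm(detected[:, None, :] - model_slice[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def best_slice(detected, candidate_slices):
    """Return the index and score of the best-matching candidate slice."""
    scores = [slice_score(detected, s) for s in candidate_slices]
    return int(np.argmin(scores)), min(scores)

# Toy example: the detections exactly match the second candidate slice
detected = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
candidates = [np.array([[5.0, 5.0], [15.0, 5.0]]),
              np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])]
idx, score = best_slice(detected, candidates)
print(idx, score)  # 1 0.0
```

A tracking-based version would restrict the candidate slices to poses near the previous frame's estimate, under the smooth-probe-motion assumption above.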

We plan to build the system as follows:

• Our focus will first be on the point location estimator system. To begin, we will construct a set of shell functions that may do very little processing but can read in the US video one frame at a time. We will then define the two separate subsystems and determine how they are connected. This means determining how the point tracker communicates a posterior belief to the point detector and how that belief is used to speed up detection.
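The skeleton below sketches this structure: a stand-in detector searches only within the current ROI, and the tracker's posterior belief narrows that ROI for the next frame, falling back to a full search when the track is lost. It is written in Python for illustration (the real shell functions will be MATLAB), and the toy "frame" representation and all names are hypothetical:

```python
def detect_points(frame, roi):
    """Stand-in detector: return the points of `frame` (here just a list
    of (x, y) tuples) that fall inside roi = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = roi
    return [(x, y) for (x, y) in frame if x0 <= x <= x1 and y0 <= y <= y1]

def next_roi(points, margin):
    """Tracker's posterior belief: bounding box of the latest detections,
    padded by `margin`, used to restrict the next frame's search."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs) - margin, min(ys) - margin, max(xs) + margin, max(ys) + margin)

def process_video(frames, full_roi, margin=15):
    """Read one frame at a time; after each detection, shrink the ROI."""
    roi, results = full_roi, []
    for frame in frames:
        pts = detect_points(frame, roi)
        if pts:                  # tracker update only when detection succeeds
            roi = next_roi(pts, margin)
        else:
            roi = full_roi       # lost track: fall back to a full search
        results.append(pts)
    return results

# Toy "video": each frame holds one point drifting right by 2 px per frame
frames = [[(50 + 2 * t, 80)] for t in range(5)]
out = process_video(frames, full_roi=(0, 0, 640, 480))
print(out[-1])  # [(58, 80)]
```

The real detector and tracker slot into `detect_points` and `next_roi` without changing the surrounding loop, which is the point of starting from shell functions.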

• We will then iterate on this first implementation, trying different detection and tracking algorithms to determine which are most efficient and most effective for our problem.
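One candidate for the tracking step is normalized cross-correlation (Lewis, 1995), which scores each candidate window against a template after removing each patch's mean. A minimal exhaustive-search sketch, in Python for illustration rather than the project's MATLAB:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation score between two equal-size patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom else 0.0

def match(image, template):
    """Slide `template` over `image` and return the top-left offset of
    the best-scoring window (brute-force search; Lewis's method speeds
    this up with running sums and FFT-based correlation)."""
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            s = ncc(image[i:i + th, j:j + tw], template)
            if s > best:
                best, best_pos = s, (i, j)
    return best_pos, best

rng = np.random.default_rng(0)
image = rng.random((30, 30))
template = image[12:18, 7:13].copy()   # plant the template at (12, 7)
pos, score = match(image, template)
print(pos)  # (12, 7), with a score of ~1.0
```

Because NCC normalizes out local brightness and contrast, it tolerates gain changes in the US image, though as noted in the milestones it can still be fragile under speckle noise.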

• The pose estimator system is part of our maximum deliverables, and we have not yet determined a technical approach for building it. If we finish the point location estimator quickly, however, we will begin investigating how best to build the pose estimation system.

Dependencies

Our first dependency is access to a 3D printer to rapid-prototype our fiducial design. We have access to the 3D printer in the basement of Wyman, with a budget code already set up.

The second dependency is access to an ultrasound machine for gathering test data with the rapid-prototyped fiducial. Dr. Boctor’s MUSIIC Lab contains ultrasound machines that we can use.

The final dependency is access to computers for developing and testing our algorithms. Both team members have personal laptops, and we also have access to Dr. Prince’s servers if we need to run large data tests.

Therefore all of our dependencies have been met.

Milestones and Status

  1. Milestone name: (Min/Expected) Working Prototype For Detection and Tracking
    • Planned Date: Apr 3
    • Expected Date: Min Apr 3, Expected Apr 23
    • Status: Min Done, Currently improving speed and accuracy for Expected.
  2. Milestone name: Kalman Filter Tracking Incorporated
    • Planned Date: Apr 3
    • Expected Date: Apr 3
    • Status: Done
  3. Milestone name: Local Point Tracking Incorporated
    • Planned Date: Apr 3
    • Expected Date: Apr 15
    • Status: Normalized cross-correlation implemented, but looking for a method that is more robust to noise
  4. Milestone name: (Max): Optimized Pose Estimation Incorporated
    • Planned Date: May 1
    • Expected Date: May 1
    • Status: Done By Collaborator
  5. Milestone name: Code Polished and Documented
    • Planned Date: May 5
    • Expected Date: May 5
    • Status: Complete
  6. Milestone name: Poster Presentation and Final Report
    • Planned Date: May 9
    • Expected Date: May 9
    • Status: Complete

Reports and presentations

  • Project Final Presentation

Project Bibliography

Welch, G., and G. Bishop (1995), An introduction to the Kalman Filter. Technical Report TR 95-041, University of North Carolina, Department of Computer Science

Rahul Raguram, Jan-Michael Frahm, and Marc Pollefeys. A Comparative Analysis of RANSAC Techniques Leading to Adaptive Real-Time Random Sample Consensus. In Proceedings of the European Conference on Computer Vision (ECCV), 2008

J.P. Lewis, Fast Normalized Cross-Correlation, Vision Interface, 1995.

T.F. Cootes, G.J. Edwards, and C.J. Taylor. Active Appearance Models. In Burkhardt and Neumann, editors, Computer Vision – ECCV’98, Vol. II, Freiburg, Germany, 1998. Springer, Lecture Notes in Computer Science 1407.

R. E. Kalman, “A New Approach to Linear Filtering and Prediction Problems,” TRANS. ASME, Series D, JOURNAL OF BASIC ENGINEERING, vol. 82, 1960, pp. 35-45.

A. Baumberg and D. Hogg, “An Efficient Method for Contour Tracking Using Active Shape Models,” Proc. Workshop Motion of Nonrigid and Articulated Objects. Los Alamitos, Calif.: IEEE CS Press, 1994

A. Jepson, D. Fleet, and T. El-Maraghi, Robust Online Appearance Models for Visual Tracking, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, no. 10, pp. 1296–1311, 2003.

T.F. Cootes, C.J. Taylor, D. Cooper, and J. Graham, Active Shape Models - Their Training and Application, Computer Vision and Image Understanding, vol. 61, no. 1, pp. 38-59, Jan. 1995.

A. Myronenko and X.B. Song, Point-Set Registration: Coherent Point Drift, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2262-2275, Dec. 2010.

Other Resources and Project Files
