In skin flap transplant surgeries, the flap of skin is large enough to require its own blood supply, so a blood vessel anastomosis is performed. However, approximately 8-15% of these anastomoses form a blood clot in the days following surgery. If the clot is caught in time, the patient returns to surgery, where the surgeon can clear the clot and save the skin flap. Approximately half of the time, however, the clot is not detected quickly enough and the flap undergoes necrosis.1,2 Current methods for detecting these clots rely on examining pulse oximetry in the skin flap, which is an inherently delayed indicator; our approach therefore aims to detect the clot directly.
This project originated as a CBID master’s project in the Biomedical Engineering department, during which much of the market value and proof of concept were established, leading to several funding grants and a provisional patent on the technology.
Our approach is to use ultrasound Doppler imaging to track the velocity of blood flow at the anastomosis site. The location of the anastomosis will be tracked using a biodegradable PLGA fiducial placed under the vessel during surgery. A nurse will then return every hour to monitor the change in velocity over time. The project for the semester involves creating an intuitive and accurate guidance system that ensures the nurse returns to the correct location each time. Specifically, we will focus on developing an algorithm that processes an ultrasound video file and returns the key points for fiducial pose evaluation. This includes the analysis and implementation of tracking algorithms for video processing.
System Overview:
We will implement a set of MATLAB routines that process a sequence of frames of ultrasound (US) video. The diagram above shows how we expect the components of the system to interact. Our approach is as follows:
• First, we will develop a point location estimator system. One component will use image processing and computer vision algorithms to analyze a single frame of the video and identify interest points; these are the corners of the cross-section produced where the imaging plane slices the fiducial model. Since processing one image at a time is both slower and less reliable, we also plan to develop a point tracker system. The tracker will use information from previous frames to facilitate the job of the point detector, reducing the size of the region of interest (ROI) by estimating how the points are moving and where they are expected to be in the following frame. A minimal sketch of the single-frame detection step is given after this list.
• The points determined in this way are fed to the pose estimator. This system will use the points, together with knowledge of the 3D fiducial model, to determine which plane slice they came from. Ideally the pose estimator will use some form of tracking as well in order to speed up the process, under the assumption that the pose does not change significantly from one frame to the next and that the ultrasound probe is moved smoothly.
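To make the detection component concrete, below is a minimal MATLAB sketch of a single-frame point detector. The function name detectFiducialPoints, the choice of Harris corners (detectHarrisFeatures, from the Computer Vision Toolbox), and the fixed number of returned points are illustrative assumptions on our part, not a committed design.

function points = detectFiducialPoints(frame, roi)
% detectFiducialPoints  Find candidate fiducial corner points in one US frame.
%   frame : ultrasound image (grayscale or RGB)
%   roi   : [x y width height] search region suggested by the tracker; may be
%           empty, in which case the whole frame is searched.
% Illustrative sketch only; the detector and its parameters are still open.

    if size(frame, 3) == 3
        frame = rgb2gray(frame);                   % work on the intensity image
    end
    if nargin < 2 || isempty(roi)
        roi = [1 1 size(frame, 2) size(frame, 1)]; % default: whole frame
    end

    % Harris corner detection restricted to the ROI
    corners = detectHarrisFeatures(frame, 'ROI', roi);

    % Keep only the strongest responses as candidate fiducial corners
    corners = corners.selectStrongest(10);
    points  = corners.Location;                    % N-by-2 array of [x y]
end

Passing the tracker's predicted ROI narrows the search; calling detectFiducialPoints(frame) with no ROI falls back to scanning the entire frame.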
We plan to build this system as follows:
• Our focus will initially be the point location estimator system. To begin, we will construct a set of shell functions that may do very little processing but can read in the US video one frame at a time. We will then define the two separate subsystems and determine how they are connected; this means determining how the point tracker communicates a posterior belief to the point detector and how that belief is used to speed up detection (see the sketch after this list).
• We will then iterate on this first implementation, trying different detection and tracking algorithms to determine which are the most efficient and the most effective for our problem.
• The pose estimator system is part of our maximum deliverables. We have not yet determined a technical approach for building it, but if we find ourselves finishing the point location estimator quickly, we will start looking into how best to begin building the pose estimation system.
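As a sketch of how these shell functions could fit together, the loop below reads the US video one frame at a time, lets the tracker turn the previous detections into a predicted ROI, and hands that ROI to the detector. The file name, the detectFiducialPoints helper from the sketch above, and the simple constant-velocity prediction (which a Kalman filter could later replace) are assumptions for illustration only.

% Top-level shell: process a US video one frame at a time.
% Sketch only; the file name and helper functions are placeholders.
reader  = VideoReader('us_sequence.avi');
prevPts = [];                  % detections from the previous frame
roi     = [];                  % empty ROI means "search the whole frame"

while hasFrame(reader)
    frame = readFrame(reader);

    % Point location estimator: detector guided by the tracker's ROI
    pts = detectFiducialPoints(frame, roi);

    % Point tracker: predict where the points will be in the next frame
    % with a constant-velocity model, then pad their bounding box.
    if ~isempty(prevPts) && isequal(size(prevPts), size(pts))
        predicted = pts + (pts - prevPts);         % constant-velocity prediction
        pad  = 20;                                 % pixels of margin
        xmin = max(1, min(predicted(:,1)) - pad);
        ymin = max(1, min(predicted(:,2)) - pad);
        xmax = min(size(frame,2), max(predicted(:,1)) + pad);
        ymax = min(size(frame,1), max(predicted(:,2)) + pad);
        roi  = round([xmin, ymin, xmax - xmin, ymax - ymin]);
    else
        roi = [];                                  % fall back to a full-frame search
    end
    prevPts = pts;

    % (Max deliverable) pts would be handed to the pose estimator here.
end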
Our first dependency is access to a 3D printer to rapidly prototype our fiducial design. We have access to the 3D printer in the basement of Wyman, with a budget code already set up.
The second dependency is access to an ultrasound machine for gathering test data with the rapid-prototyped fiducial. Dr. Boctor’s MUSIIC Lab contains ultrasound machines that we can access.
The final dependency is access to computers on which to develop and test our algorithms. Both team members have personal laptops, and we also have access to Dr. Prince’s servers if we need to process large test datasets.
Therefore all of our dependencies have been met.
* Project Final Presentation
References:
G. Welch and G. Bishop, "An Introduction to the Kalman Filter," Technical Report TR 95-041, Department of Computer Science, University of North Carolina, 1995.
R. Raguram, J.-M. Frahm, and M. Pollefeys, "A Comparative Analysis of RANSAC Techniques Leading to Adaptive Real-Time Random Sample Consensus," Proc. European Conference on Computer Vision (ECCV), 2008.
J.P. Lewis, "Fast Normalized Cross-Correlation," Vision Interface, 1995.
T.F. Cootes, G.J. Edwards, and C.J. Taylor, "Active Appearance Models," in Burkhardt and Neumann, editors, Computer Vision - ECCV'98, vol. II, Freiburg, Germany, Springer, Lecture Notes in Computer Science 1407, 1998.
R.E. Kalman, "A New Approach to Linear Filtering and Prediction Problems," Trans. ASME, Series D, Journal of Basic Engineering, vol. 82, pp. 35-45, 1960.
A. Baumberg and D. Hogg, "An Efficient Method for Contour Tracking Using Active Shape Models," Proc. Workshop on Motion of Nonrigid and Articulated Objects, IEEE CS Press, Los Alamitos, Calif., 1994.
A. Jepson, D. Fleet, and T. El-Maraghi, "Robust Online Appearance Models for Visual Tracking," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, no. 10, pp. 1296-1311, 2003.
T.F. Cootes, C.J. Taylor, D. Cooper, and J. Graham, "Active Shape Models - Their Training and Application," Computer Vision and Image Understanding, vol. 61, no. 1, pp. 38-59, Jan. 1995.
A. Myronenko and X.B. Song, "Point-Set Registration: Coherent Point Drift," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2262-2275, Dec. 2010.