Mobile-Based Blood Flow Analysis of Chronic Wounds

Last updated: 10:00 PM, 3/21/2015

Summary

Measuring perfusion, the blood flow to a capillary bed in tissue, is vital in assessing the healing of a chronic wound; left improperly treated, a chronic wound can result in amputation or, in the worst case, death. However, laser Doppler imaging (LDI), the current gold standard for assessing perfusion, is expensive and inaccessible, and patients' treatments are often critically delayed while waiting for access to an LDI system. Tissue Analytics, a Baltimore-based startup, is sponsoring our team to develop a low-cost, smartphone-based alternative for perfusion measurement to augment its existing chronic wound evaluation and monitoring iPhone application.

  • Students: Rohit Bhattacharya (rbhatta8@jhu.edu), Yvonne Jiang (yjiang23@jhu.edu), Azwad Sabik (asabik1@jhu.edu)
  • Mentor(s): Dr. Emad Boctor, Joshua Budman

Background, Specific Aims, and Significance

The goal of the project is to develop an integrated software-and-hardware solution that allows a clinician to use a mobile device to extract a usable metric of local blood flow. Measures of local blood flow (perfusion) help characterize the healing of chronic wounds and assist physicians in developing appropriate treatment plans for patients. Currently, laser Doppler imaging (LDI) is the standard method of assessing perfusion, but it is expensive, inaccessible, and inefficient. Consequently, wound prognoses are often poor in quality, and poorly monitored wounds can necessitate skin grafts or amputation of the limb, or even lead to death. Our project seeks to develop a classification algorithm that takes smartphone-collected data and returns an assessment of perfusion, ideally with at least three bins: poor, moderate, and good.

Deliverables

  • Minimum: (Deliver by: 03/02) Status: Complete
    1. Proof (or disproof)-of-concept of EVM as a method of perfusion assessment.
  • Expected: (Deliver by: 04/12) Status: Complete (binary classifier)
    1. Classification algorithm that applies EVM to smartphone collected images and categorizes perfusion into at least 3 bins.
    2. If EVM alone is insufficient: integrate thermal infrared imaging (in place of the originally planned compact single-point laser Doppler system) for assessing perfusion.
  • Maximum: (Deliver by: 04/30) Status: Incomplete
    1. Full integration of algorithm (with or without additional hardware) with existing smartphone application.
    2. Use of LDI technology for image stabilization/localization and depth measurement.

Technical Approach

Our first approach to the problem is to analyze video data collected via smartphone with Eulerian Video Magnification (EVM), an algorithm developed at MIT (with a reference MATLAB implementation) that detects and magnifies minute temporal changes in video that are invisible to the naked eye. One proposed way of extracting perfusion information from EVM output is to measure the rate and magnitude of change in intensity of the RGB channels across successive frames produced by the algorithm. We can then compare this with the perfusion measured by Laser Doppler Imaging (LDI) in selected areas of interest and determine whether the two are correlated. If there is good correlation, we will build a classifier that characterizes areas in an image as having low, medium, or high perfusion.
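The RGB-intensity-change feature described above can be sketched as follows. This is a minimal illustration in Python/NumPy rather than the team's MATLAB code; the frame array shape and the synthetic clip are assumptions for demonstration.

```python
import numpy as np

def mean_rgb_derivative(frames):
    """Average absolute frame-to-frame intensity change per RGB channel.

    frames: array of shape (T, H, W, 3) holding an EVM-magnified clip
            over a region of interest.
    Returns one scalar per channel, averaged over time and space.
    """
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))  # (T-1, H, W, 3)
    return diffs.mean(axis=(0, 1, 2))  # shape (3,)

# Hypothetical usage: 30 frames of a 64x64 region of interest.
clip = np.random.rand(30, 64, 64, 3)
features = mean_rgb_derivative(clip)
print(features.shape)  # (3,)
```

The resulting per-channel scalars can then be correlated against the LDI reading for the same region of interest.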

We then use Support Vector Machines (SVMs), a form of supervised learning. The LDI ground truth provides "labelled" examples for our SVM; we use a portion of these as our training set and set the rest aside as our test set. To prevent overfitting, we assess our classifier's performance with cross-validation, randomizing which data points form the training and test sets each time we train. The features we feed our SVM include an "average time derivative", i.e. the average rate of change of intensity in the RGB channels at a particular data point, and, given a steady pulse, the magnitude of certain frequencies in the data after taking a Fourier transform.
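The SVM-with-cross-validation pipeline above can be sketched as follows. This uses Python with scikit-learn as an illustrative stand-in for the team's actual implementation; the 0.8 to 2.0 Hz pulse band, the 1.2 Hz synthetic pulse, and all of the generated data are assumptions made purely for demonstration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def pulse_band_magnitude(signal, fps, lo=0.8, hi=2.0):
    """Summed FFT magnitude in an assumed resting-pulse band (~48-120 bpm)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    mags = np.abs(np.fft.rfft(signal - signal.mean()))
    band = (freqs >= lo) & (freqs <= hi)
    return mags[band].sum()

# Synthetic stand-in data: each sample is a mean-intensity time series;
# "high perfusion" samples carry a stronger 1.2 Hz (pulse) component.
fps, n_frames = 30, 90
t = np.arange(n_frames) / fps
X, y = [], []
for label in (0, 1):
    for _ in range(40):
        amp = 0.2 + 0.8 * label
        series = amp * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.3, n_frames)
        deriv = np.abs(np.diff(series)).mean()            # average time derivative
        X.append([deriv, pulse_band_magnitude(series, fps)])
        y.append(label)
X, y = np.asarray(X), np.asarray(y)

# Randomized 5-fold cross-validation guards against overfitting.
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print(scores.mean())
```

On real data, the labels would come from binned LDI perfusion values rather than a synthetic generator, and a multi-class SVM would handle the three perfusion bins.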

We also incorporated thermal infrared readings (in place of the originally planned single-point laser Doppler velocimetry) into the classification system as an extra feature to aid classification. These readings are obtained using a FLIR ONE iPhone camera attachment.
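Incorporating the thermal reading amounts to appending one more scalar to each sample's feature vector. A minimal sketch, assuming the FLIR ONE frame has already been cropped to the wound region of interest (the function name and all values are hypothetical):

```python
import numpy as np

def add_thermal_feature(features, thermal_roi):
    """Append the mean thermal-IR intensity of the wound ROI as one more feature.

    features: existing EVM-derived feature vector, shape (n,)
    thermal_roi: 2-D array of FLIR ONE pixel intensities over the region
    """
    return np.append(features, thermal_roi.mean())

evm_features = np.array([0.42, 17.3])   # e.g. time derivative + pulse-band magnitude
thermal = np.full((32, 32), 30.5)       # hypothetical 32x32 thermal patch
print(add_thermal_feature(evm_features, thermal))
```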

Dependencies


  1. Acquire additional EVM/LDI data from Tissue Analytics
    • Planned Date: 02/23
    • Expected Date: 03/13
    • Effect due to delay: Delayed feature extraction from EVM data
    • Status: Resolved
  2. Acquire single point laser doppler
    • Planned Date: 03/08
    • Expected Date: 03/19
    • Effect due to delay:
    • Status: No longer a dependency
  3. Acquire infrared attachment for iPhone camera
    • Planned Date: 03/23
    • Expected Date: 03/26
    • Effect due to delay: One less feature for the SVM (not too significant)
    • Status: Resolved

Milestones and Status

  1. Milestone name: Extract features from EVM data
    • Planned Date: 02/26
    • Expected Date: 03/19
    • Status: Complete
  2. Milestone name: Correlate EVM features with ground truth LDI data
    • Planned Date: 03/02
    • Expected Date: 03/26
    • Status: Complete
  3. Milestone name: Implement SVM classifier for EVM features
    • Planned Date: 03/22
    • Expected Date: 03/29
    • Status: Complete
  4. Milestone name: Test SVM classifier against LDI ground truth
    • Planned Date: 04/12
    • Expected Date: 04/12
    • Status: Complete
  5. Milestone name: Project report
    • Planned Date: 04/24
    • Expected Date: 04/24
    • Status: Complete
  6. Milestone name: Poster presentation
    • Planned Date: 05/08
    • Expected Date: 05/08
    • Status: Complete

Updated Timeline

Reports and presentations

Project Bibliography

  1. Wu, H., et al. "Eulerian Video Magnification for Revealing Subtle Changes in the World." ACM Transactions on Graphics (TOG), Proceedings of ACM SIGGRAPH 2012, 31(4), July 2012.
  2. Liu, C., Torralba, A., Freeman, W. T., Durand, F., & Adelson, E. H. (2005). Motion magnification. ACM Transactions on Graphics, 24(3), 519–526.
  3. G. Balakrishnan, F. Durand, and J. Guttag, “Detecting Pulse from Head Motions in Video,” in 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 3430–3437.
  4. M. Z. Poh , D. J. McDuff and R. W. Picard “Non-contact, automated cardiac pulse measurements using video imaging and blind source separation”, Opt. Expr., vol. 18, pp.10762 -10774 2010.
  5. B.K.P. Horn and B. Schunk, “Determining Optical Flow,” Artificial Intelligence, vol. 17, pp. 185-203, 1981.
  6. Russell, S. and Norvig, P. "Chapter 18: Learning from Examples." Artificial Intelligence: A Modern Approach, Englewood Cliffs, NJ: Prentice-Hall, 2009.
  7. Hua S, Sun Z: Support vector machine approach for protein subcellular localization prediction. Bioinformatics. 2001; 17(8): 721–728

Other Resources and Project Files
