======Content Generation for HMD-Based Medical Training======
**Last updated: 2/26/2018 9:20pm**
======Summary======
{{:courses:456:2018:456-2018-09:project_logo.jpg?600|}}
The goal of our project is to create a smart training module that provides automated feedback for basic medical procedures using the Microsoft HoloLens.
* **Students:** Benjamin Pikus, Patrick Myers, Alexander Chang
* **Mentor(s):** Ehsan Azimi, Chien-Ming Huang, Peter Kazanzides, Camilo Molina
======Relevance/Importance======
A recent study published in the BMJ found that medical error is the third leading cause of death in the United States, with an estimated 251,000 deaths in 2013. The study cites “inadequate skill” as one of the primary contributors to these errors. Traditional non-interactive instruction has not adequately prepared medical and paramedical students to perform life-saving procedures. There is a need for an interactive training system that lets the user efficiently learn and perfect a medical skill before ever performing it on a real patient. This project aims to address that need and works toward the long-term goal of eliminating the need for a medical professional as instructor, which would save considerable resources.
======Background======
This project builds upon a previous study on the use of the Microsoft HoloLens to train nursing and university students, staff, and faculty in two procedures: needle chest decompression and initiating a direct-line IV. The participants in that study were split into two groups for each task: one group used the HoloLens while the other did not. On the HoloLens, users saw text and images telling them what to do during each step of each procedure.
======Deliverables======
* **Minimum:** (Expected by **March 18th, 2018**)
- Remote monitoring of live HoloLens video feed
- Simple UI on remote system - provides set feedback
- Documentation of user interface
- Test existing training module on new remote monitoring system
* **Expected:** (Expected by **April 15th, 2018**)
- Replace 2D instruction images with 3D overlays
- Tracking of tool tip and planar orientation upon insertion
- Documentation of tracking method
- Performance score analysis
- Documentation of algorithm
- Test improved system against existing training module
* **Maximum:** (Expected by **May 11th, 2018**)
- 3D orientation of tool upon insertion
- Update documentation of tracking method
- Implement levels of difficulty based on scoring analysis
- Documentation of level system
- 3D overlay feedback to replace text
- Documentation of improved feedback system
======Technical Approach======
==Surgical Tool Tracking==
The goal of tool tracking is to detect the position and orientation of the tool (in the case of the direct-line IV, a needle). Previous works have done this in ultrasound images using optical flow and the Hough transform. Our first attempt will be to adapt those methods to the HoloLens:
- First, detect the marker on the mannequin using ARToolKit and limit the scope of the image to within a specified distance of the marker.
- Then, use Canny edge detection to find edges and a Hough line transform to find lines.
- Next, filter the lines by length (the needle is about 2-3 inches long), since exceptionally long or short lines are unlikely to be the needle.
- Finally, threshold in HSV color space, which is more robust to brightness changes than RGB.
Ideally, we will eventually replace color thresholding with sparse optical flow. This algorithm is rudimentary, so it can be implemented quickly to gauge the HoloLens's capabilities, and it is easy to adapt. If it proves infeasible, we will instead try to adapt the tool-tracking convolutional neural networks used for pose estimation in minimally invasive surgery. A Python/OpenCV sketch of the pipeline follows.
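The sketch below is a minimal illustration of the Canny -> Hough -> length-filter -> HSV-threshold stages, assuming the ARToolKit marker step has already produced a region of interest. Every threshold here (Canny bounds, Hough parameters, pixel-length limits, HSV range) is a placeholder to be calibrated on real HoloLens frames, not a value from a finished implementation.

<code python>
import cv2
import numpy as np

def detect_needle(frame_bgr, roi=None,
                  min_len_px=40, max_len_px=200,
                  hsv_lo=(0, 0, 180), hsv_hi=(180, 60, 255)):
    """Rudimentary needle detector: Canny edges -> probabilistic Hough
    lines -> length filter -> HSV color check along each candidate line.
    `roi` is an (x, y, w, h) box from the marker-detection step (assumed
    given); returns candidate segments in full-frame coordinates."""
    x0 = y0 = 0
    img = frame_bgr
    if roi is not None:
        x0, y0, w, h = roi
        img = frame_bgr[y0:y0 + h, x0:x0 + w]

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=min_len_px, maxLineGap=5)
    if lines is None:
        return []

    # Mask of "needle-colored" pixels; the HSV range loosely targets a
    # bright metallic tone and must be tuned for the actual needle.
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))

    candidates = []
    for x1, y1, x2, y2 in lines[:, 0]:
        length = np.hypot(x2 - x1, y2 - y1)
        if not (min_len_px <= length <= max_len_px):
            continue  # exceptionally long or short lines are not the needle
        # Sample the color mask along the line and keep lines that are
        # mostly needle-colored under the HSV threshold.
        n = max(int(length), 2)
        xs = np.linspace(x1, x2, n).astype(int)
        ys = np.linspace(y1, y2, n).astype(int)
        if mask[ys, xs].mean() > 0.5 * 255:
            candidates.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0))
    return candidates
</code>

In practice, the pixel-length bounds would be derived from the needle's known 2-3 inch length and the apparent scale of the ARToolKit marker in the frame, rather than fixed constants.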
==Adaptive Leveling System==
The adaptive leveling system will be implemented with a recommender that uses a score to determine how much guidance the user should receive during training. The recommender's scoring metrics are time and accuracy. Time is measured with the system clock; accuracy is determined with the aid of tool tracking. At specific intervals, the placement of the tool will be checked: the tool-tracking software will compute the pose error of the tool relative to its goal pose and combine it with the elapsed time to produce a cost for each step.
The recommender will use these scores to determine the user's level and what type of feedback they need. As the user gains experience, the overlays and text will be reduced so the user does not come to rely on the HoloLens as a crutch. A sketch of this scoring-and-leveling logic follows.
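As an illustration of the scoring idea, the sketch below combines each step's elapsed time and pose error into a single cost and maps the user's average cost to a guidance level. The weights, thresholds, level definitions, and the linear form of the cost are all assumptions made for illustration; the real values would be tuned during user testing.

<code python>
from dataclasses import dataclass

@dataclass
class StepResult:
    time_s: float       # elapsed time for the step, from the system clock
    pos_err_mm: float   # tool-tip position error relative to the goal pose
    ang_err_deg: float  # tool orientation error relative to the goal pose

# Illustrative weights for the cost function; to be tuned empirically.
W_TIME, W_POS, W_ANG = 0.1, 1.0, 0.5

def step_cost(r: StepResult) -> float:
    """Per-step cost combining time and accuracy (lower is better)."""
    return W_TIME * r.time_s + W_POS * r.pos_err_mm + W_ANG * r.ang_err_deg

def recommend_level(history: list) -> int:
    """Map the average cost over completed steps to a guidance level:
    0 = full 3D overlays plus text, 1 = text only, 2 = minimal prompts.
    The thresholds are placeholders for illustration."""
    if not history:
        return 0  # no data yet: give the user full guidance
    avg = sum(step_cost(r) for r in history) / len(history)
    if avg > 20.0:
        return 0  # novice: show all guidance
    if avg > 8.0:
        return 1  # intermediate: reduce the overlays
    return 2      # experienced: minimal guidance, avoiding the crutch effect
</code>

For example, a user whose recent steps average low pose error and short completion times would advance to level 2, at which point the overlays and most text prompts would be suppressed.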
======Dependencies======
^ Dependency ^ Solution/Backup ^ Date/Progress ^
| Microsoft HoloLens | Get access to Dr. Huang's and/or Dr. Kazanzides's labs | Completed; 15 February |
| Code for modules from the previous study | Get GitHub access to the code from Ehsan | Completed; 18 February |
| Surgical Training Mannequins | Get access to Dr. Kazanzides’s pod in LCSR | Completed; 19 February |
| HoloLens Streaming Application | Get GitHub access to code | Completed; 23 February |
| Surgical Training Content | Work closely with Allan and Prateek; content for the first two procedures is already available | In progress throughout semester |
| Ventriculostomy Training Model | Order. Backup: use Dr. Molina’s model | Completed; 15 March |
| Structure Sensor for 3D images | None Needed | Completed; 2 April |
| iPad | None Needed | Completed; 2 April |
======Milestones and Status======
- Feb 26, 2018: Project Proposal (Completed)
- Mar 6, 2018: Live video feed from HoloLens (Completed)
- Mar 6, 2018: Interfacing specifications completed (Completed)
- Mar 16, 2018: UI added to remote system (Completed)
- Apr 6, 2018: 3D overlays imported as static objects in scene (Completed)
- Apr 13, 2018: Scoring system integrated (Almost done; waiting on tool tracking)
- Apr 13, 2018: Software for tip position and planar orientation (Almost done)
- Apr 20, 2018: Complete testing for initial tool tracking and overlays
- Apr 27, 2018: Difficulty levels completed
- May 4, 2018: 3D overlays dynamic objects to provide feedback
- May 4, 2018: 3D orientation of tool tracking completed
- May 11, 2018: Scoring updated based on new tool tracking
- May 11, 2018: Overlays updated based on new tool tracking
======Reports and presentations======
* Project Plan
* {{https://drive.google.com/file/d/1SQvZQznboq6zw16H7v9tHQyj6XwFpVmp/view?usp=sharing|Project Plan Presentation}}
* {{https://drive.google.com/file/d/1VjFGf0o_Vh_5af_0LtggzeN6hxu1kpaZ/view?usp=sharing|Project Plan Proposal}}
* Project Background Reading
* {{https://docs.google.com/presentation/d/1OHY7n7rWsCKiX4a9wNOc58smbw5lGJF_AiKn4MyyfdQ/edit?usp=sharing|Alex's Background Presentation}}
* {{https://docs.google.com/document/d/1sv19baFybRmlmVtRrrCYb1A3pIYFJmuUqnKz4fWh5Z8/edit?usp=sharing|Alex's Background Report}}
* {{https://link.springer.com/content/pdf/10.1007%2Fs11548-017-1564-y.pdf|Alex's Background Paper}}
* {{https://drive.google.com/file/d/1Ia7MfCtsW0iN9cHg87hrZ3F-l7ZxfgWE/view?usp=sharing|Patrick's Background Paper}}
* {{https://drive.google.com/file/d/1IjU_KNniOB6CJOR8gCB26CMi2L1jZWcD/view?usp=sharing|Patrick's Background Presentation}}
* {{https://drive.google.com/file/d/1p_8ApfPZjBLnu9_Tkp-UIM4EqfUGRjTu/view?usp=sharing|Patrick's Background Report}}
* {{https://drive.google.com/file/d/16FZxUj7KPzV08CO7ryUbPv7v174gRHYo/view?usp=sharing|Benjamin's Background Paper}}
* {{https://drive.google.com/file/d/1zfDPLOq-imTdCpvM7A-ohWH4a847rAYM/view?usp=sharing|Benjamin's Background Presentation}}
* {{https://drive.google.com/file/d/1mlpvD4AiDOC7YLFBpp1nurJGffOq1IG7/view?usp=sharing|Benjamin's Background Report}}
* Project Checkpoint
* {{:courses:456:2018:456-2018-09:checkpoint_presentation.pdf| Project checkpoint presentation}}
* Project Final Preview
* {{https://drive.google.com/file/d/1xDIUGC04OB7PA0D2bcuJZ648d7IOnqrR/view?usp=sharing|PDF of FF Slide}}
* Project Final Presentation
* {{https://drive.google.com/file/d/1C2PhgcDkjLxt8tHgSUGnPeYY3mHUfEaS/view?usp=sharing|PDF of Poster}}
* Project Final Report
* {{https://drive.google.com/file/d/1D1vtQoiluX07ydKVMoJgs9waIR9NvDdN/view?usp=sharing|Final Report}}
======Project Bibliography======
- Ayvali, Elif and Jaydev P. Desai. “Optical Flow-Based Tracking of Needles and Needle-Tip Localization Using Circular Hough Transform in Ultrasound Images.” Ann Biomed Eng. 43(8): 1828–1840, 2015
- Herron, Jennifer. “Augmented Reality in Medical Education and Training.” Journal of Electronic Resources in Medical Libraries, 13:2, 51-55, 2016.
- Pratt, Philip, Mathew Ives, Graham Lawton, Jonathan Simmons, Nasko Radev, Liana Spyropoulou, and Dimitri Amiras. “Through the HoloLens looking glass: augmented reality for extremity reconstruction surgery using 3D vascular models with perforating vessels.” European Radiology Experiment. 2:2, 2018
- Qiu, Wu and Mingyue Ding. “Needle Segmentation Using 3D Quick Randomized Hough Transform.” First International Conference on Intelligent Networks and Intelligent Systems.
- E. Azimi et al., “Evaluation of Optical See-Through Head-Mounted Displays in Training for Critical Care and Trauma,” In Virtual Reality (VR), IEEE, 2018. (accepted)
- Wright, T., de Ribaupierre, S., & Eagleson, R. (2017). Design and evaluation of an augmented reality simulator using leap motion. Healthcare Technology Letters, 4-5. DOI: 10.1049/htl.2017.0070
- R. Dockter, R. Sweet and T. Kowalewski, "A fast, low-cost, computer vision approach for tracking surgical tools," 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, 2014, pp. 1984-1989. doi: 10.1109/IROS.2014.6942826
- Qian, L., Barthel, A., Johnson, A., Osgood, G., Kazanzides, P., Navab, N., & Fuerst, B. (2017). Comparison of optical see-through head-mounted displays for surgical interventions with object-anchored 2D-display. International Journal of Computer Assisted Radiology and Surgery, 1-10. DOI: 10.1007/s11548-017-1564-y
======Other Resources and Project Files======
* Documents
* {{https://drive.google.com/file/d/1MDl7hMLteh53PYPAdbEhAsNoBh-klMqC/view?usp=sharing|User Manual}}
* {{https://drive.google.com/file/d/1m_DWjFwRWW-kQa2IpSvuilZCNr4F5I_I/view?usp=sharing|Interface Specs}}
* {{https://drive.google.com/file/d/10_CIx8LozhYx1yMMYyCrpyrd4rCLmhFc/view?usp=sharing|Needed Improvements}}
* Code
* {{https://drive.google.com/file/d/1lD0lkugqVCDIOq2BFsz0GPNJAr6mO8KF/view?usp=sharing|Unity Code}}
* Private GitHub Repo (Ehsan has access)
* {{https://github.com/AlexDaIii/hmd_remote_monitor|HMD Remote Monitor}}