======Project Name======
**Intraoperative Fiducial Tracking in TORS**
======Summary======
This project develops and implements an intraoperative fiducial tracking method, an important component of an image-guided TORS system.
* **Student:** Xiao Hu
* **Mentors:** Wen P. Liu, Anton Deguet
======Background, Specific Aims, and Significance======
===== Background =====
* Background of TORS
  * TORS: TransOral Robotic Surgery
{{ :courses:446:2014:446-2014-15:advantages_of_tors_1_.jpg?direct&200 |}}
http://www.ohsu.edu/xd/health/services/comprehensive-robotics-program/surgical-services/transoral-robotic-surgery-tors.cfm
  * Base-of-tongue tumors have become a significant health care concern. Because most of these tumors are buried deep in the musculature of the tongue, expert surgeons performing transoral surgery must rely on experience to stay correctly oriented with respect to critical anatomy.
  * This practice leaves considerable room for improvement and motivated TORS, a minimally invasive surgical intervention for resection of base-of-tongue tumors.
* Background of image guidance in TORS
  * An image guidance system with intraoperative stereoscopic video augmentation in TORS has been proposed. Its purpose is to give the surgeon intuitive knowledge of the position of the tumor in the stereoscopic view during TORS.
{{ :courses:446:2014:446-2014-15:64b6b82f0dd1f3edad425af3eda2fb9f_media_500x415.png?direct |}} Image courtesy of Wen P. Liu
* The system works in the following rough steps:
- First, the surgeon makes a detailed surgical plan from the patient's preoperative CT or MR. Essentially, the plan is the position of the tongue tumor in the CT or MR coordinate frame.
- Second, once the patient is positioned for surgery, a CBCT image is acquired to capture the intraoperative deformation.
- Third, deformable registration between the preoperative CT (or MR) and the intraoperative CBCT transfers the surgical plan onto the CBCT.
- Then, a rigid transformation between the CBCT and the endoscopic video is computed by tracking a fiducial that is attached directly above the resection area at the beginning of the surgery.
- Finally, the planned data are registered to the robotic endoscopic video using this transformation, producing the augmented video during the surgery.
===== Specific Aims =====
* The goal of this project is to design and implement an intraoperative fiducial tracking method for TORS that can track the fiducial under the stereo endoscope without additional user input.
{{:courses:446:2014:446-2014-15:fidocl13.jpg?direct&300 |}}
{{ :courses:446:2014:446-2014-15:fiddet7.jpg?direct&300 |}}
The green triangular frame is the fiducial frame. It carries three fiducials, each a colored sphere: one white, one yellow, and one black.
===== Significance =====
* Project Relevance:
  * To perform stereo video augmentation, several coordinate transformations need to be calculated.
  * In summary: <sup>Video</sup>T<sub>CT</sub> = <sup>Video</sup>T<sub>CBCT</sub> · <sup>CBCT</sup>T<sub>CT</sub>. We need to find <sup>Video</sup>T<sub>CBCT</sub>.
  * Using the fiducial: <sup>Video</sup>P<sub>fiducial</sub> = <sup>Video</sup>T<sub>CBCT</sub> · <sup>CBCT</sup>P<sub>fiducial</sub>, so <sup>Video</sup>T<sub>CBCT</sub> = <sup>Video</sup>P<sub>fiducial</sub> · (<sup>CBCT</sup>P<sub>fiducial</sub>)<sup>−1</sup>.
  * Once <sup>Video</sup>P<sub>fiducial</sub> is obtained, the transformation between the video and the CBCT can be computed.
  * This project tracks the fiducials and obtains their positions in the stereo video, i.e., <sup>Video</sup>P<sub>fiducial</sub>.
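In practice the transformation is recovered as the least-squares rigid fit between the three fiducial positions expressed in the two coordinate frames. A common way to compute such a fit is the SVD method of Arun et al.; the sketch below assumes that approach (function names and point values are illustrative, not from the project code):

```python
# Sketch: recover the Video<-CBCT rigid transform from corresponding
# fiducial positions via least-squares fitting (SVD method).
import numpy as np

def rigid_transform(p_cbct, p_video):
    """Find R, t such that p_video ~= R @ p_cbct + t (points are Nx3)."""
    c_a = p_cbct.mean(axis=0)
    c_b = p_video.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (p_cbct - c_a).T @ (p_video - c_b)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_b - R @ c_a
    return R, t
```

Three non-collinear fiducials are the minimum needed for a unique solution, which is why the fiducial frame carries three distinctly colored spheres.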
* Project Significance:
  * The existing method to find <sup>Video</sup>T<sub>CBCT</sub> is a manual process based on user input: either the operator uses 3DUI virtual cursors of the MTMs, or places the tool tips of the PSMs directly onto the spheres, which requires robot-to-tool-tip calibration. The goal of this project is to develop a method that needs no user input.
======Deliverables======
* **Minimum:** (Expected by 4/10)
- Implementation of fiducial detection on intraoperative endoscopic images (recorded images) (done 4/14)
- Implementation of fiducial detection on intraoperative stereo video (recorded videos) (done 4/22)
- Test and optimize the implementation to confirm better results than the existing detection method
* **Expected:** (Expected by 4/22)
- Real-time fiducial tracking under the endoscopic camera
- Real-time fiducial tracking on recorded endoscopic video (tracking done 4/25, but with roughly a one-second delay)
- Testing the system under the real-time robot endoscopic camera
- Optimization and intraoperative real-time tracking results (optimization done 5/7, but tracking still fails at large viewing angles)
* **Maximum:** (Expected by 4/29)
- Optimization of the implementation under experimental surgery scenario
- A video recorded for the tracking in experimental surgery scenario
- A new fiducial for better and more accurate tracking
======Technical Approach======
* The approach consists of three main steps: use color and edge characteristics to detect the frame that encompasses the three fiducials, use color and position characteristics to detect the fiducials, and apply a Kalman filter to track the fiducials in videos.
* Fiducial frame detection
- Use color information to detect the green pixels
- Use edge detector to detect the edges in the image
- Combine the two images to get the possible fiducial frame contour
{{ :courses:446:2014:446-2014-15:framedet6.jpg?nolink&300 |}}
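The color-plus-edge combination above can be sketched as follows. This is a simplified NumPy stand-in rather than the project's actual code (a real implementation would more likely use OpenCV's ''cv2.inRange'' and ''cv2.Canny''); all names and thresholds are illustrative:

```python
# Sketch of the frame-detection step: green-color mask AND edge mask.
import numpy as np

def green_mask(img):
    """Mark pixels where green clearly dominates red and blue (img: HxWx3 uint8)."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (g > r + 40) & (g > b + 40)

def edge_mask(gray, thresh=30):
    """Crude gradient-magnitude edge detector (stand-in for Canny)."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy) > thresh

def frame_candidates(img):
    """Combine the two cues: keep edge pixels that are also green."""
    gray = img.mean(axis=-1)
    return green_mask(img) & edge_mask(gray)
```

The intersection keeps only edges that lie on green pixels, which suppresses both non-green edges (instruments, tissue boundaries) and the green frame's uniform interior, leaving a contour candidate for the fiducial frame.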
* Fiducial Detection
- Use the frame contour and color information to detect the three colored fiducials, and remove noise points
- Use connectivity to group pixels that might belong to a fiducial
- Use a weighting function to select the fiducial group from the candidate groups. The geometric center of the group is the center of the detected fiducial, and the size of the group reflects the size of the fiducial. Detected fiducials are marked as black squares on the image
{{ :courses:446:2014:446-2014-15:fiddet7.jpg?nolink&300 |}}
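The connectivity-grouping step might look like the self-contained sketch below (a real pipeline could instead use ''cv2.connectedComponentsWithStats''; names here are illustrative). Each connected component yields a centroid, used as the fiducial center, and a pixel count, used as its size:

```python
# Sketch: 4-connected components over a binary fiducial-candidate mask,
# returning (centroid_row, centroid_col, size) per component.
from collections import deque
import numpy as np

def fiducial_groups(mask):
    seen = np.zeros_like(mask, dtype=bool)
    groups = []
    h, w = mask.shape
    for r0 in range(h):
        for c0 in range(w):
            if mask[r0, c0] and not seen[r0, c0]:
                q, pix = deque([(r0, c0)]), []
                seen[r0, c0] = True
                while q:                      # BFS flood fill
                    r, c = q.popleft()
                    pix.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < h and 0 <= cc < w and mask[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            q.append((rr, cc))
                rows, cols = zip(*pix)
                groups.append((sum(rows) / len(pix), sum(cols) / len(pix), len(pix)))
    return groups
```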
* Fiducial tracking in videos
- Apply a Kalman filter, based on the motion of the fiducials, to track them across frames and to bridge frames in which detections are missing
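A minimal constant-velocity Kalman filter for one fiducial's 2D image position could look like the following; when a detection is missing, the prediction alone carries the track forward. The noise parameters are illustrative, not the project's tuned values:

```python
# Sketch: constant-velocity Kalman filter for one fiducial (2D position).
import numpy as np

class FiducialTrack:
    def __init__(self, x0, y0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])   # state: [x, y, vx, vy]
        self.P = np.eye(4) * 10.0               # initial uncertainty
        self.F = np.eye(4)                      # constant-velocity model
        self.F[0, 2] = self.F[1, 3] = 1.0
        self.H = np.eye(2, 4)                   # we observe position only
        self.Q = np.eye(4) * q                  # process noise
        self.R = np.eye(2) * r                  # measurement noise

    def step(self, z=None):
        """Advance one frame; z is the detection (x, y) or None if missing."""
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update only when a detection is available
        if z is not None:
            y = np.asarray(z, float) - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

Because the update step is simply skipped for missing detections, the filter coasts on the estimated velocity through short detection gaps, which is exactly the failure mode the tracking step is meant to cover.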
======Dependencies======
* Understanding of the cisst and SAW libraries (current state: still working on it, paused)
  * Solution: read tutorials and ask Wen and Anton
* Access to the robot and the system (current state: obtained)
  * Solution: ask Wen and Prof. Taylor for permission
======Milestones and Status ======
- Milestone name: Complete Software Installation
* Planned Date: Feb. 20
* Expected Date: Feb. 20
* Status: completed
- Milestone name: Get the new fiducial (optional)
* Planned Date: March 1
* Expected Date: April 25
* Status: not started
- Milestone name: completion of preliminary algorithm design
* Planned Date: March. 7
* Expected Date: March. 12
* Status: completed
- Milestone name: Begin algorithm implementation (coding) with C++
* Planned Date: March. 10
* Expected Date: March. 15
* Status: started, currently paused
- Milestone name: (Minimum deliverable) Complete algorithm implementation on recorded images
* Planned Date: April. 10
* Expected Date: April. 15
* Status: Completed
- Milestone name: (Minimum deliverable) Complete algorithm implementation on recorded videos
* Planned Date: April. 15
* Expected Date: April. 17
* Status: Completed
- Milestone name: (Expected deliverable) Complete algorithm implementation for tracking on recorded videos
* Planned Date: April. 17
* Expected Date: April. 22
* Status: Completed
- Milestone name: (Expected deliverable) Complete algorithm implementation for Real-time fiducial tracking on recorded videos
* Planned Date: April. 17
* Expected Date: April. 25
* Status: Partially Completed
- Milestone name: (Expected deliverable) Optimization for the implementation of tracking
* Planned Date: April. 22
* Expected Date: April. 27
* Status: Partially Completed
- Milestone name: (Maximum deliverable) Optimization for the implementation of tracking under intraoperative video
* Planned Date: April 29
* Expected Date: May 3
* Status: not started
- Milestone name: Post session and project report
* Planned and Expected Date: May. 9
======Reports and presentations======
* Project Plan
* {{:courses:446:2014:446-2014-15:project_plan_presentation.pdf| Project plan presentation}}
* {{:courses:446:2014:446-2014-15:project_proposal.pdf| Project plan proposal}}
* Project Background Reading
* See Bibliography below for links.
* Project Checkpoint
* {{:courses:446:2014:446-2014-15:project_check_point_presentation_.pdf|Project checkpoint presentation}}
* {{:courses:446:2014:446-2014-15:project_mini_check_point_presentation.pdf|Project mini checkpoint presentation}}
* Paper Seminar Presentations
* {{:courses:446:2014:446-2014-15:seminar_paper_summary.pdf|}}
* {{:courses:446:2014:446-2014-15:paper_seminar_presentation.pdf|}}
* Project Final Presentation
* {{:courses:446:2014:446-2014-15:project_poster_teaser.pdf|PDF of Poster teaser}}
* {{:courses:446:2014:446-2014-15:project_presentation_poster.pdf|PDF of Poster}}
* Project Final Report
* {{:courses:446:2014:446-2014-15:project_report.pdf|Final Report}}
* links to any appendices or other material
======Project Bibliography======
* Tony F. Chan and Luminita A. Vese, "Active Contours Without Edges", IEEE Transactions on Image Processing, vol. 10, no. 2, February 2001 {{:courses:446:2014:446-2014-15:active_contours_without_edges.pdf|}}
* Wen P. Liu et al., "Toward intraoperative image-guided transoral robotic surgery", J Robotic Surg, 2013
* Wen P. Liu et al., "Intraoperative Cone Beam CT Guidance for Transoral Robotic Surgery"
======Other Resources and Project Files======