**Method**
  
{{ :ar_goggle.png?240}}
As illustrated in Figure 1, the user wears optical see-through goggles (Juxtopia LLC, Baltimore, MD) and a helmet that supports a compact optical tracking system (MicronTracker, Claron Technology, Toronto, Canada). The first step is a registration procedure (Figure 1), in which the surgeon uses a tracked probe to touch markers that were affixed to the patient prior to the preoperative imaging. A paired-point rigid registration technique computes the transformation that aligns the preoperative data (i.e., the tumor outline) with the real world.
  
After registration, the surgeon can see the registered preoperative model overlaid on the real anatomy through the optical see-through goggles (Figure 2, which illustrates the concept with a skull model rather than the tumor margin that would be used in an actual surgery).
{{ ::overlaypic.png?360 |}}
  
The software is written in C++, using a component-based architecture with three main interconnected components: 1) tracking, 2) registration, and 3) 3D graphical rendering. The rendering component uses the Visualization Toolkit (VTK). The transformation matrix obtained from the registration step is applied to the preoperative model in order to represent a correctly posed scene in the tracker reference frame (Figure 2). The model is rendered after defining a VTK actor and camera. Based on the tracker information, the relative position and orientation between the MicronTracker and the tracked object (i.e., the skull) is known. We then reorient the VTK camera by adjusting its parameters so that it has the same relative pose with respect to the actor as the tracker has with respect to the skull. This is achieved by computing the camera's focal point, position, view angle, and view-up direction. The resulting image is illustrated in Figure 2.
**Future**
  
  * Inertial Sensing (e.g., accelerometers, gyroscopes)
  * Kalman Filter for Sensor Fusion
  * Magnification
  * Eye Tracking
  
===== Publications =====
  
  - Ehsan Azimi, Jayfus Doswell, Peter Kazanzides, "Augmented Reality Goggles with an Integrated Tracking System for Navigation in Neurosurgery", IEEE Virtual Reality, 4-8 March 2012.
  - Praneeth Sadda, Ehsan Azimi, George Jallo, Jayfus Doswell and Peter Kazanzides, "Surgical Navigation with a Head-Mounted Tracking System and Display", Studies in Health Technology and Informatics, vol. 184, p. 363.
  
research.image_overlay.1390842580.txt.gz · Last modified: 2019/08/07 16:05 (external edit)



