Image Overlay


The aim of this research is to investigate the feasibility of combining a head-mounted tracking system with an augmented reality environment that shows the surgeon both the tumor margin and the surgical instrument, providing a more accurate and natural delineation of affected versus healthy tissue. The system lets the surgeon see the precise boundaries of the tumor during neurosurgical procedures while also providing a contextual overlay of the surgical tools intraoperatively, displayed on optical see-through goggles worn by the surgeon. This plays to the strength of augmented reality: the overlay presents the most pertinent information without unduly cluttering the visual field. The approach offers the navigation and visualization capabilities of existing modalities and is expected to be comfortable and intuitive for the surgeon.


As illustrated in Figure 1, the user wears optical see-through goggles (Juxtopia LLC, Baltimore, MD) and a helmet that supports a compact optical tracking system (MicronTracker, Claron Technology, Toronto, Canada). The first step is a registration procedure (Figure 1), in which the surgeon uses a tracked probe to touch markers that were affixed to the patient prior to preoperative imaging. A paired-point rigid registration technique computes the transformation that aligns the preoperative data (i.e., the tumor outline) with the real world.
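Paired-point rigid registration of this kind has a well-known closed-form solution; one standard choice is Horn's quaternion method. The sketch below illustrates that method only, under the assumption of exact point correspondences; it is not the project's actual implementation, and all type and function names (`Vec3`, `Rigid`, `horn_register`) are hypothetical.

```cpp
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

struct Rigid {   // q = R * p + t maps preoperative points into the tracker frame
    Mat3 R;
    Vec3 t;
};

// Paired-point rigid registration (Horn's quaternion method): given matched
// point lists p (preoperative) and q (tracker space), find R, t minimizing
// sum_i |R*p[i] + t - q[i]|^2.
inline Rigid horn_register(const std::vector<Vec3>& p, const std::vector<Vec3>& q) {
    const double n = static_cast<double>(p.size());
    Vec3 cp{0, 0, 0}, cq{0, 0, 0};                  // centroids
    for (size_t i = 0; i < p.size(); ++i)
        for (int k = 0; k < 3; ++k) { cp[k] += p[i][k] / n; cq[k] += q[i][k] / n; }

    double S[3][3] = {};                             // cross-covariance: S[a][b] = sum p'_a * q'_b
    for (size_t i = 0; i < p.size(); ++i)
        for (int a = 0; a < 3; ++a)
            for (int b = 0; b < 3; ++b)
                S[a][b] += (p[i][a] - cp[a]) * (q[i][b] - cq[b]);

    // Horn's 4x4 symmetric matrix; its dominant eigenvector is the optimal quaternion.
    double N[4][4] = {
        { S[0][0]+S[1][1]+S[2][2], S[1][2]-S[2][1], S[2][0]-S[0][2], S[0][1]-S[1][0] },
        { S[1][2]-S[2][1], S[0][0]-S[1][1]-S[2][2], S[0][1]+S[1][0], S[2][0]+S[0][2] },
        { S[2][0]-S[0][2], S[0][1]+S[1][0], -S[0][0]+S[1][1]-S[2][2], S[1][2]+S[2][1] },
        { S[0][1]-S[1][0], S[2][0]+S[0][2], S[1][2]+S[2][1], -S[0][0]-S[1][1]+S[2][2] } };

    // Shifted power iteration for the dominant eigenvector (adequate for a sketch;
    // the shift makes all eigenvalues positive without changing the eigenvectors).
    double shift = 0;
    for (auto& row : N) for (double v : row) shift += std::fabs(v);
    for (int k = 0; k < 4; ++k) N[k][k] += shift;

    std::array<double, 4> v{1, 0, 0, 0};
    for (int it = 0; it < 200; ++it) {
        std::array<double, 4> w{};
        for (int r = 0; r < 4; ++r)
            for (int c = 0; c < 4; ++c) w[r] += N[r][c] * v[c];
        double norm = std::sqrt(w[0]*w[0] + w[1]*w[1] + w[2]*w[2] + w[3]*w[3]);
        for (int k = 0; k < 4; ++k) v[k] = w[k] / norm;
    }
    const double w0 = v[0], x = v[1], y = v[2], z = v[3];  // unit quaternion (w, x, y, z)

    Rigid T;
    T.R = {{{1-2*(y*y+z*z), 2*(x*y-w0*z),  2*(x*z+w0*y)},
            {2*(x*y+w0*z),  1-2*(x*x+z*z), 2*(y*z-w0*x)},
            {2*(x*z-w0*y),  2*(y*z+w0*x),  1-2*(x*x+y*y)}}};
    for (int r = 0; r < 3; ++r)                      // t = cq - R * cp
        T.t[r] = cq[r] - (T.R[r][0]*cp[0] + T.R[r][1]*cp[1] + T.R[r][2]*cp[2]);
    return T;
}
```

At least three non-collinear marker points are required; in practice more markers are touched and the least-squares fit averages out probe-placement error.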

After registration, the surgeon sees the registered preoperative model overlaid on the real anatomy through the optical see-through goggles (Figure 2, which illustrates the concept with a skull model rather than the tumor margin that would be used in an actual surgery).

The software is written in C++, using a component-based architecture with three main interconnected components: 1) a tracking module, 2) a registration module, and 3) 3D graphical rendering. The 3D graphical rendering component uses the Visualization Toolkit (VTK). The transformation matrix obtained from the registration step is applied to the preoperative model so that the scene is correctly posed in the tracker reference frame (Figure 2). The model is rendered after defining a VTK actor and camera. Based on the tracker information, the relative position and orientation between the MicronTracker and the tracked object (i.e., the skull) is known. The VTK camera must then be reoriented, by adjusting its parameters, so that it has the same pose relative to the actor as the tracker has relative to the skull. This is achieved by computing the focal point, position, view angle, and view-up direction of the camera. The resulting image is illustrated in Figure 2.


This is the first time that head-mounted tracking, registration, and display have been integrated into a single system for surgical navigation. This integration reduces the line-of-sight problem (because the tracker's line of sight is the same as the surgeon's), the difficulty of associating preoperative images with the patient, and the bulkiness of current systems.


  • Inertial Sensing (e.g., accelerometers, gyroscopes)
  • Kalman Filter for Sensor Fusion
  • Magnification
  • Eye Tracking
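The Kalman filtering item above can be illustrated with a minimal sketch: a one-dimensional filter that predicts a tilt angle by integrating a gyroscope rate and corrects it with an accelerometer-derived angle. This is a generic textbook example, not the system's design; the noise parameters and the `KalmanAngle` name are hypothetical, and a real head-tracking fusion would estimate full 3D orientation (and typically gyro bias as well).

```cpp
#include <cmath>

// Minimal 1-D Kalman filter fusing a gyroscope rate (prediction) with an
// accelerometer-derived tilt angle (correction).
struct KalmanAngle {
    double angle = 0.0;  // state estimate (radians)
    double P = 1.0;      // estimate variance
    double Q;            // process noise variance per step (gyro drift)
    double R;            // measurement noise variance (accelerometer)

    KalmanAngle(double q, double r) : Q(q), R(r) {}

    // One filter step: integrate the gyro rate over dt (predict), then blend
    // in the accelerometer angle weighted by the Kalman gain (correct).
    double update(double gyroRate, double accelAngle, double dt) {
        angle += gyroRate * dt;             // predict
        P += Q;
        double K = P / (P + R);             // Kalman gain
        angle += K * (accelAngle - angle);  // correct
        P *= (1.0 - K);
        return angle;
    }
};
```

The appeal for head tracking is complementary noise characteristics: the gyro is smooth but drifts, while the accelerometer is drift-free but noisy; the filter weights each by its modeled variance.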


  1. Ehsan Azimi, Jayfus Doswell, Peter Kazanzides, "Augmented Reality Goggles with an Integrated Tracking System for Navigation in Neurosurgery", IEEE Virtual Reality, March 4-8, 2012.
  2. Praneeth Sadda, Ehsan Azimi, George Jallo, Jayfus Doswell, Peter Kazanzides, "Surgical Navigation with a Head-Mounted Tracking System and Display", Studies in Health Technology and Informatics, vol. 184, p. 363.
research.image_overlay.txt · Last modified: 2019/08/07 16:01 by