Last updated: 5/8/2019, 11:50PM
In this project, we present the basic design of the modified head-mounted display (HMD) and the method and results of calibrating the Magic Leap One displays to a real-world scene. We have developed a calibration method that associates the magnified field of view, the HMD screen space, and the task workspace. The final calibration errors are measured in both the magnified and regular views. The mean target augmentation error is 3.47 ± 1.03 mm in the magnified view and 2.59 ± 1.29 mm in the regular view.
Background
Aims
Significance
The success of this project can potentially increase the clinical acceptance of AR, and the proposed system can be used to provide accurate guidance and navigation in a wide range of computer-aided surgical procedures.
HMD Choice
There are three common optical see-through HMDs: the Microsoft HoloLens, the Magic Leap One, and the Epson BT-300. For this project, I will use the Magic Leap One or the BT-300 instead of the HoloLens, because the HoloLens' bulky, rounded design makes it hard to attach a loupe in front of the display. In contrast, the Magic Leap One and the BT-300 are lighter on the head, and both have a flat surface in front of the display, making it easier to attach a loupe.
Mechanical Design
HMD Calibration
In order to determine the distortion parameters of the loupe and the intrinsic parameters of the system, a system calibration process is needed. A mini USB camera module with 1080p resolution and a 100-degree field of view is used. The camera module was first calibrated using Zhang's method [10] with a checkerboard of 9×7 squares (8×6 inner vertices) and a 30 mm square size. We then attached the camera module to the back of the HMD display and captured 30 images, from different poses, of a checkerboard of 9×7 squares (8×6 inner vertices) and a 5 mm square size. During image capture, the checkerboard is kept within the working distance of the loupe so that the whole checkerboard is visible in the magnified view.
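The calibration step above follows the standard checkerboard procedure, so a minimal sketch of it using OpenCV's implementation of Zhang's method is shown below. The file naming, board parameters, and variable names are illustrative assumptions, not the project's exact code.

```python
# Minimal sketch of the checkerboard calibration step (Zhang's method) with
# OpenCV. File names and board parameters are assumptions for illustration.
import glob
import cv2
import numpy as np

PATTERN_SIZE = (8, 6)   # inner vertices of the 9x7-square board
SQUARE_SIZE_MM = 5.0    # board seen through the loupe

# 3D coordinates of the board corners in the board frame (z = 0 plane).
objp = np.zeros((PATTERN_SIZE[0] * PATTERN_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN_SIZE[0], 0:PATTERN_SIZE[1]].T.reshape(-1, 2)
objp *= SQUARE_SIZE_MM

obj_points, img_points = [], []
image_size = None

for path in glob.glob("capture_*.png"):  # the ~30 captured poses (assumed names)
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN_SIZE)
    if not found:
        continue
    # Refine corner locations to sub-pixel accuracy.
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    image_size = gray.shape[::-1]

# Estimate the intrinsic matrix and distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error (px):", rms)
print("Intrinsics:\n", K)
print("Distortion coefficients:", dist.ravel())
```

The recovered `K` and `dist` are then reused for the magnified-view rendering and marker tracking described below.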
AR Rendering Pipeline
In the Unity scene, each eye is assigned two cameras for rendering. One camera renders a non-magnified view of the augmented content directly to the display, and the other renders the distorted, magnified view of the AR content to a texture. The texture is then displayed on the 2D screen within a constrained region.
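The distortion recovered during calibration has to be baked into the magnified render. Below is a minimal offline sketch (in Python/OpenCV rather than the Unity shader or material actually used) of how an undistorted render texture could be resampled according to the calibrated distortion model; depending on the optical path of the loupe, the inverse mapping may be what is needed, so treat this only as an illustration of the remapping idea, not the project's implementation.

```python
# Offline illustration of warping a render texture with the calibrated
# distortion model. K and dist are assumed to come from cv2.calibrateCamera.
import cv2
import numpy as np

def warp_render_texture(render_texture, K, dist):
    """Resample an undistorted render into the distorted magnified view."""
    h, w = render_texture.shape[:2]
    # Pixel grid of the output (distorted) image.
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).reshape(-1, 1, 2)
    # For each output pixel, undistortPoints (re-projected with P=K) gives the
    # source location to sample from in the undistorted render.
    undist = cv2.undistortPoints(pts, K, dist, P=K).reshape(h, w, 2)
    map_x = undist[..., 0].astype(np.float32)
    map_y = undist[..., 1].astype(np.float32)
    return cv2.remap(render_texture, map_x, map_y, cv2.INTER_LINEAR)
```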
Tracking
To enable AR, a tracking system with good accuracy is necessary. Although the HMDs chosen for this project ship with their own visual-SLAM-based tracking systems that work off the shelf, these typically have errors of a few centimeters, which is not accurate enough for surgery. Therefore, I will use marker-based tracking for this project.
Marker designed for the project: 15cm planar marker:
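As an illustration of how such a planar marker could be detected and its pose estimated with the calibrated USB camera, below is a minimal sketch using OpenCV's ArUco module (requires opencv-contrib-python). The dictionary choice, corner ordering, and function names are assumptions; the project's actual marker pattern and tracking code may differ.

```python
# Hedged sketch of marker-based tracking: detect a planar marker and solve
# for its pose in the camera frame. K and dist come from the calibration.
import cv2
import numpy as np

MARKER_LENGTH_M = 0.15  # side length of the 15 cm planar marker

# Assumed ArUco dictionary for illustration only.
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def estimate_marker_pose(frame, K, dist):
    """Return (rvec, tvec) of the first detected marker, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    if ids is None or len(ids) == 0:
        return None
    # 3D corners of a square marker centred at the origin, ordered to match
    # the detector's corner ordering (top-left, top-right, bottom-right,
    # bottom-left).
    obj_pts = np.array([[-0.5,  0.5, 0.0],
                        [ 0.5,  0.5, 0.0],
                        [ 0.5, -0.5, 0.0],
                        [-0.5, -0.5, 0.0]], dtype=np.float32) * MARKER_LENGTH_M
    img_pts = corners[0].reshape(4, 2).astype(np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
    return (rvec, tvec) if ok else None
```

The resulting pose would then be chained with the camera-to-display calibration to place virtual content relative to the marker.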
| Dependencies | Solution | Alternative | Estimated Date |
|---|---|---|---|
| Access to Magic Leap One | Ask Dr. Navab for access | Ask Ehsan for Epson BT-300 | Resolved |
| Access to surgical loupe | Ask Long for access | | Resolved |
| Access to CAD Software (SolidWorks or PTC Creo) | Download from JHU software catalog | | Resolved |
| Access to 3D printer | Access to LCSR 3D printer | Use DMC 3D printer | Resolved |
| Access to USB Camera | Ask Long for access | Buy one from Amazon | Resolved |
Other project files (e.g., source code) associated with the project are listed here.