Augmented Reality Assisted Orbital Floor Reconstruction

Last updated: 2020 Feb 24, 11:11 pm

Summary

Develop a visualization system that uses augmented reality to give surgeons a better view during orbital floor reconstruction.

  • Students: Yihao Liu and Nikhil Dave
  • Mentors: Dr. Peter Kazanzides, Ehsan Azimi, Dr. Cecil Qiu, and Dr. Sashank Reddy

Figure 1. The fractured eye socket

Figure 2. The placement of the orbital floor implant

Background, Specific Aims, and Significance

  • Orbital Floor Fracture:

Due to pressure on the eye from blunt trauma, the medial wall and orbital floor can fracture. Fracture repair requires manipulation of delicate and complex structures in a tight, compact space. Surgeons struggle with visibility in the confined region. 

  • Reconstruction:

A concave plate is placed along the wall of the eye socket to prevent tissue from entering the fracture cavity. The major complexities are the difficulty of placing the implant, low visibility, the risk that misplacement will injure sensitive tissue, and long operating times.

Orbital floor reconstruction surgery is a long and arduous process, requiring significant attention from the surgeon and manipulation of delicate and complex structures in a tight, compact space. Due to the nature of the orbital floor bone and the manner in which it may fracture, shattered bone fragments may be scattered in the maxillary sinus and in other regions within the operative field.

The orbital aperture is a tightly confined space because its primary purpose is to provide structural support for the ocular system. As a result, a fracture in the orbital aperture is difficult to access. Additionally, any incision made to reach the tissue underneath is relatively small, giving surgeons limited visibility into the orbital aperture. It is therefore difficult for surgeons to maintain context and orientation of the anatomy once they have dissected along the orbital wall. This sense of orientation is necessary, as placement of the orbital implant plate requires precise shaping of the implant and identification of the posterior lip of the fracture. The implant must rest on this bony structure in order to remain securely in place. Since the posterior lip is toward the back of the orbital aperture, it can take surgeons multiple attempts before they feel confident that the implant is resting on the posterior wall, as they struggle to see its location through the relatively small incision and subsequent dissection.

This relative operative blindness indicates a clear need for improved surgical navigation and visualization techniques specific to orbital floor reconstruction. An augmented reality assisted orbital floor reconstruction system is proposed to resolve the problem of low visibility of the distal orbital wall during the procedure, so that misplacement may be avoided. Introducing a head-mounted display that provides navigation during the orbital floor implant process would reduce operation time and increase surgeon confidence in secure implant placement.

Deliverables

The following deliverables are all expected before the end of the semester (final presentation).

  • Point/surface registration method for orbital socket
    1. Min: Target registration error (TRE) <4mm
    2. Expected: TRE <3mm
    3. Max: TRE <2mm (Achieved)
  • Calibration of implant with respect to tracked hemostat
    1. Min: Pivot Calibration of Hemostat and Implant Corner.
    2. Expected: Pivot Calibration of Hemostat and Point Collection Implant Bottom Edge.
    3. Max: Use Calibrated Pointer to make Model of Entire Implant (Achieved).
  • Application to visualize the position of the tracked implant with respect to CT
    1. Min: Visualization on 3D-Slicer, OpenIGTLink
    2. Expected: Visualization in Unity and in an AR system, HoloLens (Achieved: built in Unity; HoloLens app not built)
    3. Max: A comparison between the 3D-Slicer implementation and the HoloLens implementation

Results

Two anatomic features are selected to calculate the target registration error: the temporal bone tip and the intersection of the zygomatic bone and the supraorbital margin. The CT model coordinates (converted to left handedness) for these two anatomic features are shown in the following figures. The TREs of the fiducial-based registration at these two anatomic features are 1.6366 mm and 1.0700 mm, respectively.
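As a rough illustration of how such a TRE value can be computed, the following Python sketch applies a registration transform to a CT-space landmark and measures its distance to the corresponding landmark measured in the physical (Polaris) space. The transform name and the coordinates are placeholder assumptions, not the project's actual data.

```python
# Minimal TRE sketch, assuming the fiducial-based registration is available as a
# 4x4 homogeneous transform T_ct_to_skull (CT model frame -> physical skull frame).
import numpy as np

def target_registration_error(T_ct_to_skull, target_ct, target_measured):
    """Distance (mm) between a registered CT landmark and its measured position."""
    p = T_ct_to_skull @ np.append(target_ct, 1.0)   # map the CT point into the skull frame
    return np.linalg.norm(p[:3] - target_measured)

# Example with made-up coordinates for one anatomic feature
T_ct_to_skull = np.eye(4)                            # placeholder transform
temporal_tip_ct = np.array([10.0, -32.5, 41.2])
temporal_tip_measured = np.array([11.1, -31.8, 40.5])
print(target_registration_error(T_ct_to_skull, temporal_tip_ct, temporal_tip_measured))
```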

Figure 3. Temporal bone tip target coordinate

Figure 4. The intersection of the zygomatic bone and supraorbital margin coordinate

Figure 5. The target registration errors calculated for both anatomic features

The average registration residual is calculated to be 0.63 mm and 2.33 mm for fiducial-based registration and fiducial-initialized ICP registration, respectively.

Although the average residual of the ICP registration was low, its TRE was high. We conclude that the ICP did not converge for the specific points we collected. Comparing the transformations derived from the fiducial-based registration and from ICP, the rotation matrices did not differ much, but the translation offset changed considerably (around 10 mm in both the x and y axes, and 1 mm in the z axis). We conclude that ICP made the fiducial-based registration worse; the fiducial-based registration alone, however, works well in our case.
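This comparison between the two derived transformations can be quantified as in the sketch below, which reports the rotation difference as a single angle and the translation offset per axis. The transforms used here are placeholders for illustration.

```python
# Sketch of comparing two registrations, assuming both are 4x4 homogeneous transforms.
import numpy as np

def compare_transforms(T_a, T_b):
    R_a, R_b = T_a[:3, :3], T_b[:3, :3]
    dR = R_a.T @ R_b                                   # relative rotation
    angle = np.degrees(np.arccos(np.clip((np.trace(dR) - 1) / 2, -1.0, 1.0)))
    dt = T_b[:3, 3] - T_a[:3, 3]                       # per-axis translation offset
    return angle, dt

T_fiducial = np.eye(4)                                 # placeholder values
T_icp = np.eye(4); T_icp[:3, 3] = [10.0, 10.0, 1.0]
angle_deg, offset = compare_transforms(T_fiducial, T_icp)
print(f"rotation difference: {angle_deg:.2f} deg, translation offset: {offset} mm")
```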

The visualization is done with a Unity webcam application. It can also be compiled into a HoloLens application so the user can use the system on a head-mounted display. The skull model overlay is successfully made, while the implant motion is not stable, although the static pose and position were tested to be correct. This can be solved by a better moving method in the Unity frame update. The skull model overlay can be seen in the following figure.

Figure 6. The visualization

The implant used to validate the calibration process of the system was an orbital floor implant sample from the manufacturer, DePuy Synthes. To complete the calibration process, points were captured along the sides and on the edges of the grids within the implant. These points were captured using the tracked pointer tool of the Polaris system. Prior to this step, the implant was securely clamped by the hemostat, as shown below.

After points were captured, the data was parsed by the calibration code and digitized. Using a number of Python packages, including pyvista, pymeshfix, and vtk, a surface mesh of triangles was created from the digitized points. It was observed that collecting a greater number of points resulted in a more accurate surface mesh. The accuracy of the implant model is a key part of making the system functional. Though point collection can take 1 to 2 minutes, it is paramount that a large number of points be collected to ensure that the model is as accurate as possible.
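The surface-meshing step could look roughly like the pyvista sketch below. The input file name and the choice of a 2D Delaunay triangulation for the sheet-like point set are assumptions for illustration and may differ from the project's calibration code.

```python
# Illustrative sketch: digitized implant points -> triangulated surface mesh.
import numpy as np
import pyvista as pv

points = np.loadtxt("implant_points.txt")   # hypothetical (N, 3) file of digitized points
cloud = pv.PolyData(points)

# A 2D Delaunay triangulation is one way to mesh a thin, roughly sheet-like
# point set such as the implant surface; denser sampling gives a better mesh.
surface = cloud.delaunay_2d()
surface.save("implant_surface.vtp")
```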

Once a surface mesh is made, a thickness factor is added to the mesh. First, the surface mesh is extruded using the vtk package. It is important to note that this thickness must be measured directly from the implant in order to maintain the best accuracy. Once the thickness is added, pymeshfix is used to rid the new mesh of any non-manifold faces. This step is crucial to ensure that the mesh has no faces that cannot exist in the real world, such as faces within faces and triangles that intersect each other. After the mesh has been rid of these deformities, it is tetrahedralized to give it a 3D shape. This step is crucial to ensure that the implant is visible in the visualization portion of the system; a plain surface mesh is not visible in the visualization because it has no thickness.
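A hedged sketch of this thickness / repair / tetrahedralization pipeline is shown below. The thickness value, file names, and the specific filters (pyvista extrusion, pymeshfix repair, and the tetgen package for tetrahedralization) are illustrative assumptions; the report also mentions MeshLab's Uniform Mesh Resampling as an alternative, and does not name the tetrahedralization tool.

```python
# Illustrative pipeline: surface mesh -> thick shell -> repaired manifold -> tetrahedra.
import numpy as np
import pyvista as pv
import tetgen
from pymeshfix import MeshFix

surface = pv.read("implant_surface.vtp")        # surface mesh from the previous step
thickness = 0.4                                 # mm, measured from the physical implant

# 1. Extrude the sheet to give it physical thickness (capping needs a recent pyvista).
shell = surface.extrude((0.0, 0.0, thickness), capping=True).triangulate()

# 2. Repair non-manifold faces so the shell can be tetrahedralized.
verts = shell.points
faces = shell.faces.reshape(-1, 4)[:, 1:]
mf = MeshFix(verts, faces)
mf.repair()
faces_flat = np.hstack([np.full((mf.f.shape[0], 1), 3), mf.f]).ravel()
repaired = pv.PolyData(mf.v, faces_flat)

# 3. Tetrahedralize the repaired shell so the implant renders as a solid.
tgen = tetgen.TetGen(repaired)
tgen.tetrahedralize()
tgen.grid.save("implant_volume.vtk")
```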

Figure 7. The physical and calibrated implant

Technical Approach

The project was originally attempted using existing libraries (mostly from the cisst software library); however, the final implementation also involves customized software. The major components can be seen in the following chart, which also doubles as our coordinate system diagram. Below that is a system diagram.

Figure 8. The coordinate system diagram

The transformation map can be viewed in the coordinate system diagram (Figure 8). The three unknown transformations are: from the Unity/HMD space to the physical skull space, from the tracked AR marker to the physical skull space, and from the CT space to the physical skull space. The Python server is implemented to solve these transformations. The iterative closest point algorithm is used to get the transformation from CT to skull, and the AR marker is tracked by the Vuforia augmented reality engine integrated in Unity. To get the transformation from the AR marker to the skull, we simply used a point cloud registration that maps the four corners of the AR marker in skull space to the corresponding points in virtual space.
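The corner-based registration can be written as a standard least-squares rigid fit (Kabsch/Horn), as in the sketch below. The corner coordinates are placeholders; only the method is intended to match the description above.

```python
# Sketch of the corner-based point-cloud registration, assuming the four AR-marker
# corners are known in both the skull space and the virtual (Unity) space.
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid transform mapping source points onto target points."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

corners_virtual = np.array([[0, 0, 0], [50, 0, 0], [50, 50, 0], [0, 50, 0]], float)
corners_skull = corners_virtual + np.array([5.0, -2.0, 10.0])   # made-up offset
T_marker_to_skull = rigid_register(corners_virtual, corners_skull)
print(T_marker_to_skull)
```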

The user procedure includes five steps, as shown in Figure 12. The calibration method is pivot calibration; a least-squares sketch of this step is shown after the system diagram below. Then, the model sampling method allows the user to specify the frequency and the number of points to collect. The sampling method also allows the user to specify which tracked tool serves as the base coordinate system. After that, the implant model is created by Delaunay triangulation. In order to give the implant surface a thickness, we can use Uniform Mesh Resampling in MeshLab or the pymeshfix library available in Python. The registration and transformation calculation is then carried out based on the transformation map discussed in Section 5.1 Transformation Map. Among these steps, handedness handling must be performed; the two different methods for handling handedness are discussed in the Left Handedness section. Finally, the transformations (the poses of the skull and implant models) can be sent to the Unity client for visualization.

Figure 9. System diagram
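The pivot calibration mentioned above can be posed as a small least-squares problem. The sketch below is a generic numpy formulation, not the cisst pivot calibration used in the project.

```python
# Generic least-squares pivot calibration, assuming a list of tracked 4x4 poses of
# the hemostat rigid body recorded while pivoting the tip about a fixed point.
import numpy as np

def pivot_calibration(poses):
    """poses: list of 4x4 marker-to-tracker transforms recorded while pivoting."""
    A_rows, b_rows = [], []
    for T in poses:
        R, t = T[:3, :3], T[:3, 3]
        A_rows.append(np.hstack([R, -np.eye(3)]))   # [R_i | -I] [p_tip; p_pivot] = -t_i
        b_rows.append(-t)
    A = np.vstack(A_rows)
    b = np.concatenate(b_rows)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]                             # (p_tip, p_pivot)
```

Here p_tip is the tip offset expressed in the rigid-body (marker) frame and p_pivot is the fixed pivot point in the camera frame.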

Data collection is done through the NDI Polaris camera and transmitted via an adapted UDP process. A Windows PC is used to run the algorithms. The system may require additional computers to run the Unity application and the registration/calibration algorithms separately.
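The UDP hand-off between the Python server and the Unity client could be as simple as the following sketch; the host, port, and comma-separated message format are assumptions, not the project's actual protocol.

```python
# Minimal UDP sender sketch for pushing a pose to the Unity client.
import socket

UNITY_HOST, UNITY_PORT = "127.0.0.1", 8051           # hypothetical endpoint

def send_pose(pose_flat):
    """Send a flattened 4x4 pose (16 floats) as a comma-separated UDP datagram."""
    message = ",".join(f"{v:.6f}" for v in pose_flat).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, (UNITY_HOST, UNITY_PORT))

send_pose([1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1])
```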

The Polaris camera is used to get the tracker coordinates via the scikit-surgerynditracker package for Python, with adaptations specific to our application. For the registration process, the received coordinate data are the coordinates of the passive trackers on a pointer, which is used to collect the coordinates of the pointer tip. The pointer tip collects the point cloud used for registration. Once a sufficient point cloud is collected, it is sent, along with the CT model data, to the Iterative Closest Point (ICP) algorithm. For the calibration process, the received Polaris data are again the coordinates of the passive trackers attached to the pointer. The pointer is used to collect the coordinates of the distal edge (expected deliverable) or a complete point cloud (maximum deliverable) of the implant.
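For illustration, a compact ICP loop over the collected point cloud and the CT model points might look like the numpy/scipy sketch below. This is not the cisst ICP used in the project; it only shows the nearest-neighbor / rigid-fit iteration with a fiducial-based initial guess.

```python
# Minimal ICP sketch, assuming source and ct_model_points are (N, 3) numpy arrays.
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    sc, dc = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

def icp(source, ct_model_points, T_init=np.eye(4), iterations=30):
    tree = cKDTree(ct_model_points)
    T = T_init.copy()
    for _ in range(iterations):
        moved = (T[:3, :3] @ source.T).T + T[:3, 3]
        _, idx = tree.query(moved)              # closest CT model point for each sample
        R, t = rigid_fit(source, ct_model_points[idx])
        T[:3, :3], T[:3, 3] = R, t
    return T
```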

Unity and/or the HoloLens is used to model and visualize the implant.

Figure 10. Visualization block diagram

Figure 11. The registration block diagram

Unity uses a left-handed convention, while all the other coordinate systems (Polaris, CT models, etc.) are right-handed. This problem is solved by two different methods, corresponding to two branches in the GitHub repository. The “master” branch handles the handedness by “calculate, convert, send”, while the “lefthanded” branch handles it by “convert, calculate, send”. More specifically, the “calculate, convert, send” method computes the transformations under the assumption that all coordinate systems are right-handed, and only the final results are converted to left-handedness and sent to the Unity client. The “convert, calculate, send” method, on the other hand, converts all coordinate systems to left-handedness first and then carries out the needed procedures. In the “convert, calculate, send” method, the skull model used in ICP is flipped along the x axis to match Unity, and in the Polaris collection all of x, y, and z are flipped to obtain a left-handed coordinate, without changing the rotation matrix.
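In code, converting a pose from the right-handed convention to the left-handed convention can be expressed as a similarity transform that mirrors the x axis, as in the sketch below. This is a generic formulation under that assumption; the two branches apply the conversion at different points in the pipeline, as described above.

```python
# Right-handed -> left-handed conversion by mirroring the x axis.
import numpy as np

F = np.diag([-1.0, 1.0, 1.0, 1.0])          # mirror across the x axis

def pose_to_left_handed(T_right):
    """Convert a 4x4 right-handed pose into the mirrored (left-handed) convention."""
    return F @ T_right @ F

def point_to_left_handed(p):
    """Convert a 3D point by flipping its x coordinate."""
    return np.array([-p[0], p[1], p[2]])
```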

We used an older version of the Polaris optical tracker by Northern Digital Inc. In this project, the raw Polaris data collection hardware interface is the scikit-surgerynditracker library developed by the Wellcome EPSRC Centre for Interventional and Surgical Sciences at University College London. It is a lightweight package that is easy to customize. There are two known issues when working with this package. One is that in a Windows development environment it only installs properly under Python 3.6, whereas Python 3.7 works in both Linux and macOS development environments. The other is that a UnicodeDecodeError is reported when using 3 tracked bodies with “BX Transforms”; this can be fixed by changing the source code in nditracker.py to “TX Transforms”. The same problem also happens when using 1 tracked body with “TX Transforms”. To customize the interface, a polarisUtilityScript.py was created; this script is located in the serverPython/util folder of the project repository. To improve code reusability, the point collection method can collect points directly with respect to another tracked tool, which greatly reduces the transformation calculation procedure. Details can be seen in the polarisUtilityScript.py script.
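A minimal, hedged example of collecting a pointer-tip point with respect to another tracked tool via scikit-surgerynditracker is sketched below. The ROM file names and the tip offset are placeholders; the project's own logic lives in polarisUtilityScript.py.

```python
# Sketch: read one frame from the Polaris and express the pointer tip in the
# coordinate frame of a second tracked tool (the reference rigid body).
import numpy as np
from sksurgerynditracker.nditracker import NDITracker

settings = {
    "tracker type": "polaris",
    "romfiles": ["pointer.rom", "reference.rom"],     # hypothetical ROM files
}
tracker = NDITracker(settings)
tracker.start_tracking()

port_handles, timestamps, framenumbers, tracking, quality = tracker.get_frame()
T_pointer, T_reference = tracking[0], tracking[1]      # 4x4 tool-to-camera poses

p_tip_local = np.array([0.0, 0.0, 100.0, 1.0])         # tip offset from pivot calibration (placeholder)
p_camera = T_pointer @ p_tip_local                     # tip in the Polaris camera frame
p_in_reference = np.linalg.inv(T_reference) @ p_camera # tip expressed in the reference tool frame

tracker.stop_tracking()
tracker.close()
```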

Vuforia is an augmented reality engine integrated in Unity. One important feature of Vuforia is image tracking: Vuforia tracks an image and estimates its pose. We utilize this feature to complete our transformation chain. However, the Vuforia engine sometimes reports a flipped pose for a horizontal marker; the effect can be seen in the following figure. We found that Vuforia more reliably calculates the pose of a vertical AR marker.

Figure 12. The wrong Vuforia tracking

Although the transformation can be accurately calculated, the accuracy is still limited by the Vuforia image tracking. To improve the accuracy of the model overlay, a calibration procedure may need to be performed. The calibration work by Azimi et al. (E. Azimi, L. Qian, N. Navab, and P. Kazanzides. Alignment of the virtual scene to the tracking space of a mixed reality head-mounted display. Retrieved from: https://arxiv.org/pdf/1703.05834.pdf, 2019.) can be a solution. This calibration procedure requires a tracked head-mounted display.

After parsing the point calibration data collected from the tracked pointer on the implant clamped to the hemostat, a surface mesh of the digitized points could be created using the pymesh package; however, this model would not appear in the visualization. After inspection, it was determined that thickness would need to be added to the surface mesh. This proved to be difficult, as a simple extrusion of the surface mesh would simply yield another surface mesh. In order to tetrahedralize the surface, several packages were tried, yet all yielded the same error: Cannot tetrahedralize non-manifold mesh. After researching the definition of a manifold mesh, it was difficult to determine a remedy without loading the surface mesh into computer-aided design software to remove all non-manifold vertices in the model. However, after some more research, it was determined that this could be done in code using the pymeshfix package. Unfortunately, after this step, the tetrahedralized mesh turned out not to be representative of the actual geometry; up to this point, thickness had only been added in one cardinal direction. After some experimentation, it was discovered that in order to obtain the most accurate 3D model, thickness must be added in all three cardinal directions.

* Please note that technical details will be added and may be modified during the development process.

Dependencies

Each item below lists the dependency, its solution, alternative(s), and status.

  • Computer with Linux, Use Personal computers, Use LCSR Lab Computer, Resolved
  • Computer with Windows (HMD development), Use Personal computers, Request Lab Computer, Resolved
  • Data Back-ups, Use Microsoft OneDrive, Use personal hard drive, Resolved
  • Learn Workflow from Surgeons, Shadow surgery in OR, Meet with Surgeons, Resolved
  • STL Files for Implants, Coordinate with Clinical Partners, Find Potential Online Source, Resolved
  • CT Scans of Skulls for Corresponding STL Files, Coordinate with Clinical Partners, Obtain other model from Dr. Kazanzides, Resolved
  • Polaris Camera, Coordinate with Anton, Coordinate with Dr. Kazanzides, Resolved
  • Learn cisst Library ICP, Refer to Online Material, Work with Anton, Resolved
  • Passive Rigid Body Pointer, Coordinate with Ehsan & Dr. Kazanzides, Coordinate with Peter, Resolved
  • Installation of SAWSocketStreamer, Discuss with Anton, Discuss with Long, Resolved
  • Learning Python Wrapper, Refer to Online Material, Seek mentorship from Anton and Ehsan, Resolved
  • Hemostat, Coordinate with LCSR, Seek from Clinical Mentors, Resolved
  • Attachable Rigid Body for Polaris and Hemostat, Seek from Ehsan or Dr. Kazanzides, Coordinate with LCSR, Potentially Make Our Own, Resolved
  • Learn cisst Pivot Calibration, Read Online Material, Coordinate with Anton or Dr. Kazanzides, Resolved
  • HoloLens, Coordinate with Ehsan, Coordinate with Dr. Kazanzides, Resolved
  • Unity Installation and Hololens Set-up, Utilization of Online Resources, Help from Ehsan, Resolved
  • OpenGL/3D-Slicer Installation and Set-up, Utilization of Online Resources, Coordinate with Dr. Kazanzides and Ehsan, Resolved

Milestones and Status

Main project repo: https://github.com/bingogome/CISII_Orbital

  1. Milestone name: Project Selection
    • Planned Date: Feb 15
    • Expected Date: Completed
    • Status: Completed
  2. Milestone name: Project Plan and Report
    • Planned Date: Feb 25
    • Expected Date: Mar 1
    • Status: Completed
  3. Milestone name: ICP Registration Implementation
    • Planned Date: Mar 25
    • Expected Date: Mar 25
    • Status: Completed
  4. Milestone name: ICP Registration Validation
    • Planned Date: Mar 25
    • Expected Date: Mar 25
    • Status: Completed
  5. Milestone name: Tracked Hemostat
    • Planned Date: Apr 1
    • Expected Date: Apr 1
    • Status: Completed
  6. Milestone name: Calibration Implementation
    • Planned Date: Apr 10
    • Expected Date: Apr 15
    • Status: Completed
  7. Milestone name: Calibration Validation
    • Planned Date: Apr 10
    • Expected Date: Apr 15
    • Status: Completed
  8. Milestone name: Visualization in 3D Slicer
    • Planned Date: May 1
    • Expected Date: May 1
    • Status: In progress
  9. Milestone name: Visualization in Unity
    • Planned Date: May 6
    • Expected Date: May 6
    • Status: Completed

Team Management

The figure below illustrates the portions of the project that were done by each member. The figure is broken down to reflect the amount done by each party for each of the three deliverable categories. Both parties worked in conjunction on the presentations and reports. Documentation of the system was also done in conjunction, depending on the party involved in that aspect of the project. Due to the coronavirus outbreak, the project had to be split up according to the resources available to each member. Fortunately, we were still able to achieve the declared deliverables.

Figure 13. Breakdown of the portions of the project done by each party; the yellow encircled letter N represents Nikhil Dave and the brown encircled letter Y represents Yihao Liu

Lessons Learned

Both members of the team learned a considerable amount from the implementation of this project, and not just the technical skills required to develop it. In particular, the following lessons were learned.

  • It is incredibly important to know how to properly use the tools you intend to use in order to develop a project successfully. This includes software, programming languages, and hardware.
  • Project planning is crucial to ensuring that you are able to meet your deliverables in the assigned amount of time. This includes developing an effective Gantt chart outlining your goals and the time you estimate it will take to complete them. This gives you a visual tool to come back to and analyze whether you are on track.
  • The time spent validating and debugging your application can often be far longer than the time spent developing it.
  • Proper and continual communication between team members as well as mentors is incredibly important in ensuring that issues are resolved efficiently and that all members understand where the application stands in development.
  • More often than not, through thorough investigation of possible solutions, one can find the source of an issue and resolve it without the need for help. It is important to investigate your problem thoroughly before asking for help.
  • Interfacing with a tracking system is non-trivial. We spent a considerable amount of time developing a method to extract data from the NDI Polaris. Looking back, prioritizing this first would have given us more time to accomplish other goals.
  • Being able to pitch your application is critical. In order to illustrate the value of your project to others, it is important to have a concise pitch.
  • Technical writing as well as presenting skills are hugely important in conveying what you accomplished and what your goals are.
  • Documentation is critical in the development of a large project. Without proper documentation, the repeatability of your achievements is limited, and you may be forced to repeat development tasks or mistakes because you have forgotten your exact previous approach.

Future Works

While achieving the assigned deliverables was an accomplishment for the semester, the primary objective of the project is to develop a functional, highly accurate augmented reality navigation system, ideally visualized through Microsoft's HoloLens. Having created a functional Unity application, it is not unrealistic to say that a functional HoloLens application is possible in the near future. Developing such an application on a head-mounted display (HMD) would provide surgeons with the most practical visualization method. While previous studies have shown that these systems struggle to provide accurate enough visualization for surgical procedures, our consultations with surgeons suggest that an HMD-based visualization would provide the surgeon with the most benefit compared to alternatives. A clear view of the operative field, in addition to the information provided by the system, would make for an excellent and desirable tool in the operating room, with little discomfort for surgeons, and lead to better patient outcomes.

Having made significant headway into this part of the project, it would also not be unrealistic to continue to develop this method of visualization. As stated in previous sections, the development of this portion of the project was not too far from completion, and it could provide a great deal of validation for the future HoloLens application by providing a direct comparison that identifies the strengths and weaknesses of both systems. By developing the 3D-Slicer visualization further, a baseline standard of visualization can be established. Thus, a more empirical validation of the future HoloLens application's ability to visualize the system information can be achieved.

The goal of any computer-integrated system is to eventually see use. As an engineer, it is very common to get lost in the technicality of a project. While such an intense focus can be beneficial in developing a functional system, other aspects of the overall system can be neglected, particularly usability. In order to ensure that the system is simple and intuitive for surgeons to use, it is imperative that user studies be conducted to empirically evaluate aspects of the system that are not user friendly or that actually turn out to prevent the accomplishment of system goals. Testing the system in this manner is a clear and effective way to ensure that the workflow of the system is sensible and therefore leads to a more functional product. Such user studies need not be done on a patient, but can be conducted on an anatomically similar phantom.

Once the system has been validated through user and cadaver studies, clinical trials have to be conducted, along with approval from the U.S. Food and Drug Administration, in order to see clinical use. Clinical trials provide a snapshot of how the system actually impacts patient outcomes. This is the final critical validation step in developing a functional and successful product. Since a primary goal of this project is to develop a computer-integrated surgical system that improves patient outcomes and reduces operating time, clinical trials provide a snapshot of how the system executes its goals and will be necessary to bring this system into use in the operating room.

Acknowledgment

We would like to acknowledge our mentors Dr. Peter Kazanzides and Ehsan Azimi for continually meeting with us to answer our questions and giving us development suggestions. We would also like to acknowledge Anton Deguet for helping us use the cisst libraries and providing suggestions for the development of the project. We would like to thank our clinical mentors, Dr. Cecil Qiu and Dr. Sashank Reddy, for allowing us to shadow a procedure as well as providing us with information regarding the nature of the procedure and the usability of our application. Our clinical mentors were also instrumental in providing us with the clinical material necessary to develop the project. Finally, we would like to thank Wenhao Gu for his help in developing the Unity visualization application.

Reports, Presentations, and Documentation

Project Bibliography

  • Please see final report citations as well.
  • Hussain, R., Lalande, A., Guigou, C., and Bozorg Grayeli, A. (2019). Contribution of Augmented Reality to Minimally Invasive Computer-Assisted Cranial Base Surgery, IEEE Journal of Biomedical and Health Informatics.
  • Gsaxner C., Pepe, A., Wallner, J., Schmalstieg, D., and Egger, J. (2019). Markerless Image-to-Face Registration for Untethered Augmented Reality in Head and Neck Surgery, MICCAI 2019.
  • Li, Y., Chen, X., Wang, N., Zhang, W., Li, D., Zhang, L., Qu, X., Cheng, W., Xu, Y., Chen, W., and Yang, Q. (2018). A wearable mixed-reality holographic computer for guiding external ventricular drain insertion at the bedside, Journal of Neurosurgery JNS, 131(5), 1599-1606.
  • Chen, X., Xu, L., Wang, Y., Wang, H., Wang, F., Zeng, X., Wang, Q., and Egger, J. (2015). Development of a surgical navigation system based on augmented reality using an optical see-through head-mounted display, Journal of Biomedical Informatics, Volume 55, 2015, Pages 124-131
  • Bong, J. H., Song, H. J., Oh, Y. , Park, N., Kim, H., and Park, S.. (2018). Endoscopic navigation system with extended field of view using augmented reality technology, Int. J. Med. Robot., vol. 14, no. 2, pp. e1886
  • Inoue, D., Cho, B., Mori, M., Kikkawa, Y., Amano, T., Nakamizo, A., Yoshimoto, K., Mizoguchi, M., Tomikawa, M., Hong, J., and Hashizume, M.. (2013). Preliminary study on the clinical application of augmented reality neuronavigation, J. Neurol. Surg. Part A, vol. 74, no. 02, pp. 71-76
  • Lapeer, R. J., Jeffrey, S. J., Dao, J. T., García, G. G., Chen, M., Shickell, S. M., Rowland, R. S., and Philpott, C. M.. (2014). Using a passive coordinate measurement arm for motion tracking of a rigid endoscope for augmented‐reality image‐guided surgery, Int. J. Med. Robot., vol. 10, no. 1, pp. 65-77
  • Tokuda, J., Fischer, G. S., Papademetris, X., Yaniv, Z., Ibanez, L., Cheng, P., Liu, H., Blevins, J., Arata, J., Golby, A. J., Kapur, T., Pieper, S., Burdette, E. C., Fichtinger, G., Tempany, C. M., and Hata, N. (2009). OpenIGTLink: an open network protocol for image-guided therapy environment, The International Journal of Medical Robotics and Computer Assisted Surgery (MRCAS), 5(4), 423-434. https://doi.org/10.1002/rcs.274
  • Lasso, A., Heffter, T., Rankin, A., Pinter, C., Ungi, T., and Fichtinger, G. (2014). PLUS: Open-source toolkit for ultrasound-guided intervention systems, IEEE Transactions on Biomedical Engineering, 61(10), 2527-2537. doi: 10.1109/TBME.2014.2322864

Project Files




