Last updated: 3-30-12
The purpose of this project is to design and implement an improved user interface for the Robo-ELF, specifically a new robust GUI and vision-based, point-and-click motion control. The current system is undergoing review for FDA approval for clinical trials, and finishing the requirements for that approval is also a major project goal.
The Robotic EndoLaryngeal Flexible Scope (Robo-ELF) solves the problem of visualization in the throat during endoscopic surgery. It provides a means to hold and position a flexible endoscope during surgery so the surgeon has both hands free to operate. A prototype robot was built and tested in 2011 using phantoms and human cadavers. It functioned well in testing, and surgeons were pleased with the results. One complaint was that the control mechanism was unintuitive and difficult to use.
The ultimate goal of the project started in 2011 was to produce a clinically viable system and put it through human trials. The documentation and approval process to do this is almost complete. The largest remaining task is passing FDA requirements for a clinical system. Finishing these requirements is the first goal of this project.
A secondary goal of this project is to produce a more user-friendly control interface. Vision-based guidance using a point-and-click display is much more intuitive and easier to use than the current joystick interface. A more robust GUI will also be of use to the surgeons and is a natural progression for the system.
To complete the requirements for clinical trials, several things must be accomplished. All of the safety features, especially those implemented in software, must be tested and documented for validation. This includes the software-activated emergency stop, heartbeat signal, encoder/potentiometer checking, joystick failure detection, and Galil error detection. Most of these features are already implemented, but they require more testing and documentation before being submitted for approval. Better handling of system errors must also be implemented. Errors will be split into two categories: serious and non-serious. Serious errors indicate a serious system failure and will require a full system restart to continue. Non-serious errors are recoverable without restarting the system and do not present a danger to the patient. Mechanical changes to the system must also be completed to allow for easier draping and disassembly. Renata Smith and Kevin Olds are responsible for completing these changes, as well as designing a required draping system for the robot.
Once the changes and documentation are completed, we must complete an FMEA risk analysis of the system and validate that all risks have been properly mitigated. Part of this mitigation is a full software review, which will be completed first. These system reviews will be conducted with all team members and senior members of the LCSR faculty and staff.
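In a conventional FMEA, each failure mode is rated for severity, occurrence, and detectability, and the three ratings are multiplied into a risk priority number (RPN) used to rank which risks need further mitigation. A minimal sketch of that bookkeeping is below; the 1-10 scales and the mitigation threshold are standard FMEA conventions used for illustration, not the project's actual review criteria.

```cpp
// One failure mode in the FMEA worksheet. Ratings follow the common
// 1-10 convention; the scales here are illustrative assumptions.
struct FailureMode {
    int severity;    // 1 (negligible effect) .. 10 (catastrophic)
    int occurrence;  // 1 (rare)              .. 10 (frequent)
    int detection;   // 1 (always detected)   .. 10 (undetectable)
};

// RPN = severity * occurrence * detection.
inline int RiskPriorityNumber(const FailureMode& f) {
    return f.severity * f.occurrence * f.detection;
}

// Hypothetical triage rule: anything at or above the threshold needs
// additional compensating measures before the review can close.
inline bool RequiresMitigation(const FailureMode& f, int threshold = 100) {
    return RiskPriorityNumber(f) >= threshold;
}
```

The value of the exercise is less the arithmetic than the forced enumeration of failure modes; the numbers only decide where review effort goes first.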
Vision-based navigation will be implemented using algorithms developed and tested in summer 2011 by visiting student Hongho Kim. The algorithm uses an approximate kinematic model of the flexible scope to estimate its orientation, and uses template matching to confirm the scope is moving in the intended direction. It was developed and tested using OpenCV. Our task is to reimplement the same algorithm as a CISST svlFilter and integrate the results into the robot's current control code. The new GUI will be implemented in Qt with the CISST libraries. We will meet with the surgeons to discuss the exact design and layout of the GUI and which features they would find most useful.
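The template-matching step at the core of the navigation algorithm can be sketched without OpenCV or CISST: slide a small template over each video frame and find the offset where it matches best, then compare that offset to the motion predicted by the kinematic model. The sketch below uses sum of squared differences over nested vectors of grayscale pixels; the real filter would operate on the library's image buffers instead, and this brute-force search is far slower than what OpenCV's `cv::matchTemplate` does.

```cpp
#include <limits>
#include <utility>
#include <vector>

using Image = std::vector<std::vector<int>>;  // grayscale pixel rows

// Brute-force template matching by sum of squared differences (SSD):
// slide the template over every valid position in the image and return
// the top-left (row, col) of the position with the lowest SSD, i.e. the
// best match. Assumes both image and template are non-empty and the
// template fits inside the image.
std::pair<int, int> MatchTemplateSSD(const Image& image, const Image& tmpl) {
    const int ih = static_cast<int>(image.size());
    const int iw = static_cast<int>(image[0].size());
    const int th = static_cast<int>(tmpl.size());
    const int tw = static_cast<int>(tmpl[0].size());

    long best = std::numeric_limits<long>::max();
    std::pair<int, int> bestPos{0, 0};

    for (int r = 0; r + th <= ih; ++r) {
        for (int c = 0; c + tw <= iw; ++c) {
            long ssd = 0;
            for (int y = 0; y < th; ++y) {
                for (int x = 0; x < tw; ++x) {
                    long d = image[r + y][c + x] - tmpl[y][x];
                    ssd += d * d;
                }
            }
            if (ssd < best) {
                best = ssd;
                bestPos = {r, c};
            }
        }
    }
    return bestPos;
}
```

Tracking the best-match position from frame to frame gives the observed image motion; if it disagrees with the direction the kinematic model predicted, the estimated scope orientation can be corrected.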
describe dependencies and effect on milestones and deliverables if not met