Robotic Operation of ICU Equipment in Contagious Environments

Last updated: May 6, 2021

Summary

Our project aims to design an integrated tele-operable robotic system, based on a 2D Cartesian robot and an object recognition algorithm, that enters the ICU room on behalf of an ICU team member. The robot is designed to recognize key ICU equipment, operate it, and relay key information from that equipment straight back to the operator, thus reducing the time, protective gear, and exposure risk an ICU team member incurs when entering an ICU room during the COVID-19 pandemic.

Background, Specific Aims, and Significance

The COVID-19 pandemic has caused a tremendous global surge in demand for ICU devices and staff. Since the virus is highly contagious and directly attacks the respiratory system, it is believed that about ⅓ of patients who have COVID-19 will require ICU admission at some point [1]. This poses great challenges to the medical system, as hospitals are overloaded with COVID patients in the ICU. As of February, average national ICU occupancy was still 73% [2], an almost 10% rise compared to 2010 [3]. Most significantly, given COVID-19's mode of transmission, medical workers are at risk every time they enter an ICU room to perform routine adjustments and check-ups for patients. Furthermore, each entry into the ICU consumes a full set of PPE, not to mention the time it takes to properly don it.

Therefore, ICU teams need a novel way to remotely monitor and adjust settings on key ICU equipment, so they can provide consistent care to all patients while reducing material costs and the risk of exposure for ICU staff during routine checkups.

Our goal is thus to build a 2D Cartesian robot that recognizes the device, loads its pre-configured interface layout, and manipulates an oscilloscope (a stand-in for ICU equipment) using its end-effector. The robot will be driven through a user interface that allows the operator to control it remotely.

Concept Sketch for Proposed Robotics System

Technical Approach

Summary

To make our solution feasible, we model target devices with oscilloscopes and simplify the problem from 6 to 3 DOF by assuming the robot is already aligned with the device at an ideal position. We then break the problem down into two components: device identification and device operation.

For device identification, we trained an object recognition model to recognize the device from a distance and defined a JSON structure that loads the corresponding interface configuration for the device based on user input. The model currently has a classification accuracy of 86%.
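
For illustration, a minimal sketch of loading such an interface configuration is shown below. The JSON field names, control types, and coordinates are hypothetical placeholders, not our actual configuration.

    # A minimal sketch of loading a per-device interface configuration from
    # JSON. All field names and values below are hypothetical placeholders.
    import json

    SAMPLE_CONFIG = """
    {
        "device": "oscilloscope",
        "controls": [
            {"name": "horizontal_scale", "type": "knob",   "x_mm": 120, "y_mm": 45},
            {"name": "run_stop",         "type": "button", "x_mm": 180, "y_mm": 20}
        ]
    }
    """

    config = json.loads(SAMPLE_CONFIG)
    for control in config["controls"]:
        print(f'{control["type"]:6s} "{control["name"]}" at '
              f'({control["x_mm"]} mm, {control["y_mm"]} mm)')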

For device operation, we built a user interface based on a Python-ROS system that allows the user to remotely control the robot. In addition, we designed a novel end-effector to interact effectively and accurately with the target device and built a 2D Cartesian robot for testing the end-effector.

Lastly, we incorporated live camera feedback to relay important information to the user and to monitor the end-effector's interaction with the device. This portion is still under construction and not yet optimized.

Identify Device

1. Trained an object recognition model to recognize the device from a distance, and defined a JSON structure to load the corresponding interface configuration for the device based on user input.

The overall approach is to take pictures/videos of the desired equipment and use these data to train a neural network in TensorFlow that can discern the equipment. Note that due to restrictions on medical appliances, we eventually chose to use an oscilloscope in place of a real ventilator, given that the oscilloscope also has buttons, switches, knobs, and a screen.

Sample image of an oscilloscope being labeled for training.

 General steps of object recognition include:
  a. Collecting and labeling the data;
  b. Training the model;
  c. Testing the model on new data sets.

The recognition of the oscilloscope adopts the YOLO (You Only Look Once) algorithm. The benefit of YOLO is that it achieves relatively fast recognition speed, and the accuracy of the general representations it learns is also high compared with other object detection methods, including DPM and R-CNN.

General workflow of YOLO network
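
As a rough illustration of this detection step, the sketch below runs a Darknet-style YOLO model through OpenCV's DNN module. The file names, input size, and confidence threshold are illustrative assumptions, not our exact training setup.

    # A minimal sketch of YOLO inference via OpenCV's DNN module, assuming a
    # Darknet-style config/weights pair trained to detect the oscilloscope.
    import cv2
    import numpy as np

    net = cv2.dnn.readNetFromDarknet("yolo_scope.cfg", "yolo_scope.weights")
    layer_names = net.getUnconnectedOutLayersNames()

    image = cv2.imread("scope_test.jpg")
    h, w = image.shape[:2]

    # YOLO expects a square, normalized input blob (416x416 is a common size).
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(layer_names)

    # Each detection row is [cx, cy, w, h, objectness, class scores...],
    # with box coordinates normalized to the image size.
    for output in outputs:
        for det in output:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence > 0.5:
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                print(f"class {class_id}: conf {confidence:.2f}, "
                      f"center ({cx:.0f}, {cy:.0f}), size {bw:.0f}x{bh:.0f}")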

Operate Device

1. Built a user interface based on a Python-ROS system that allows the user to remotely control the robot.
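
A minimal sketch of how the UI side could publish motion commands over ROS is shown below; the node name, topic, and message type are placeholder assumptions rather than our actual interface.

    # A minimal sketch of the teleoperation side of the UI, assuming a
    # rospy-based setup. The node name, topic, and message type are
    # hypothetical placeholders.
    import rospy
    from geometry_msgs.msg import Point

    def main():
        rospy.init_node("teleop_ui")
        pub = rospy.Publisher("/cartesian_robot/goal", Point, queue_size=1)
        rospy.sleep(0.5)  # let the publisher register with the ROS master
        # Example: command the end-effector to the control at (0.12 m, 0.30 m).
        pub.publish(Point(x=0.12, y=0.30, z=0.0))

    if __name__ == "__main__":
        main()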

2. Designed a novel end-effector to interact effectively and accurately with the target device.

Design Criteria used to evaluate ideation

Sample Ideation

3. Built a 2D Cartesian robot for testing the end-effector.

To construct a Cartesian robot with an end-effector optimized for turning knobs, pressing buttons, and interacting with the screen, we will take the following steps.

 1. Finding the optimum end-effector geometry
  A. To find the optimum end-effector geometry, we would first conduct a literature review to source currently popular mechanical designs suited to our purpose.
  B. Then, to compare the selected mechanisms, we would develop need criteria and evaluation charts based on the proposed end goals. A preliminary need statement can be found below.
  
  End Effector
      Turn knobs within the precision requirement
      Press buttons within the precision requirement
      Press the screen within the precision requirement
      Leave a sufficient viewing angle for the camera
      Remain mechanically stable while driven by a motor

  Cartesian Robot
      Move to a given point on the 2D plane within the precision requirement
      Remain steady while the end-effector operates
      Operate in a time-efficient manner
  
  With the selected mechanisms, we would build rapid prototypes and conduct preliminary manual tests to make sure they pass proof of concept.
 
 2. Actuating the end-effector with a motor
  Since we need fast, accurate motions with only small forces, an electric stepper motor or servo is our preferred means of actuating the end-effector. This portion of the proposal therefore focuses on the manual assembly and testing of the motorized end-effector: sourcing parts and devising a plan to assemble a system that moves and actuates it. The current plan is to use an Arduino with servos attached to a breadboard, powered by AAA batteries; detailed implementation guides are readily available online. A hedged sketch of how the host PC might command such a setup appears below, after the sample belt-drive figure.
  
 3. Assemble and test with the Cartesian robot
  To test the accuracy and efficacy of the end-effector, we will use the Cartesian robot to conduct all functionality and accuracy testing. The robot will be given tasks formed by a combination of: moving to a specific point, interacting with the geometry at that point, and monitoring how much the whole mount shifts during operation, to assess the accuracy, efficacy, and repeatability of the robot. The Cartesian robot will be built from T-slots and a stepper motor powered by a 12 V DC battery. The guide to such robots was generously provided by Dr. Krieger and can be found in our shared folder. Based on the results and analysis of the testing, we will choose to compensate for error either physically or digitally.

Sample 2D Cartesian Belt Drive System
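
As referenced in step 2 above, below is a minimal sketch of how the host PC could command the Arduino-driven servos over a serial link. The port, baud rate, and one-line command protocol ("TURN", "PRESS") are hypothetical and depend on the firmware actually flashed to the board.

    # A minimal sketch of commanding the end-effector servos over pyserial,
    # assuming hypothetical one-line firmware commands "TURN <deg>" and "PRESS".
    import time
    import serial

    with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as arduino:
        time.sleep(2)  # the Arduino resets when the serial port is opened
        arduino.write(b"TURN 30\n")  # rotate the knob-turning servo 30 degrees
        time.sleep(0.5)
        arduino.write(b"PRESS\n")    # actuate the button-pressing servo
        print(arduino.readline().decode().strip())  # e.g., an "OK" acknowledgment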

4. Incorporated live camera feedback to relay important information to the user and to monitor the end-effector's interaction with the device.
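
A minimal sketch of the camera feedback loop is shown below, assuming an OpenCV-readable USB camera at index 0; the real system would stream frames into the operator UI rather than a local window.

    # A minimal sketch of live camera feedback with OpenCV.
    import cv2

    cap = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imshow("end-effector view", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()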

Dependencies

Milestones and Status

Deliverables

Responsibility:

Deliverables:

Minimum: (Expected date: March 20. Deadline: March 25)

Expected: (Expected date: April 10. Deadline: April 15)

Maximum: (Expected date: April 25. Deadline: April 29)

Reports and presentations

Project Bibliography

Other Resources and Project Files


Object detection: object_recognition.pdf

Feature matching: feature_matching.pdf

Meeting notes: meeting_notes.pdf

Design journal: design_journel.pdf

Evaluation chart of end effector design: evaluation_chart.xlsx

Prototype of end effector: testv4_drawing_v1.pdf