Endoscopic Vision Challenge – Micro-Surgical Anastomose Workflow recognition on training sessions (MISAW), MICCAI 2020

We achieved 1st place in Multi Recognition and tied for 1st place in Activity Recognition at the Endoscopic Vision Challenge – Micro-Surgical Anastomose Workflow recognition on training sessions (MISAW), held during the 23rd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2020), 4th to 8th October 2020, Lima, Peru.

The team comprised C-B Chng, W Lin, J Zhang, Y Hu, Yan Hu, J Liu and C-K Chui.

For the competition, we employed the approach described by Jin et al. (2020) and extended it to step and activity recognition. We used EfficientNet, developed by Tan and Le (2019), as the feature extractor to obtain spatial features from each video frame. EfficientNet allows for faster training, greatly improving the practicality of the network.
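As a rough illustration of this feature-extraction stage (our exact EfficientNet variant and training configuration are not detailed here), the sketch below uses torchvision's EfficientNet-B0 with its classification head removed, so each frame yields one spatial feature vector:

```python
# Illustrative sketch (assumes torchvision >= 0.13); not the exact challenge setup.
import torch
import torchvision.models as models

backbone = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
backbone.classifier = torch.nn.Identity()  # keep only the convolutional feature extractor

frames = torch.randn(8, 3, 224, 224)       # a batch of 8 video frames
with torch.no_grad():
    feats = backbone(frames)               # (8, 1280) spatial feature vectors
```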

Since temporal information is crucial for video data, we used a long short-term memory (LSTM) network to model the sequential dependencies. The sequential features were then passed to a fully connected layer for prediction. Because the dataset also contains kinematic data recorded at 30 Hz from encoders mounted on the two robotic arms of the master-slave robotic platform, we hypothesized that the kinematic data are related to the verb and step. We therefore employed a second LSTM to model the sequential features of the kinematic data. The two types of sequential features were then concatenated and fed into fully connected layers to predict the verb and step.
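A minimal sketch of this two-branch design is shown below. The feature dimensions, hidden sizes and class counts are placeholders rather than the values used in our submission, and the kinematic samples are assumed to have been temporally aligned with the video frames:

```python
import torch
import torch.nn as nn

class TwoStreamLSTM(nn.Module):
    """Hypothetical two-branch model: one LSTM over per-frame visual features,
    one LSTM over kinematic signals, fused by concatenation for verb/step heads."""
    def __init__(self, vis_dim=1280, kin_dim=28, hidden=256, n_verbs=10, n_steps=8):
        super().__init__()
        self.vis_lstm = nn.LSTM(vis_dim, hidden, batch_first=True)
        self.kin_lstm = nn.LSTM(kin_dim, hidden, batch_first=True)
        self.verb_head = nn.Linear(2 * hidden, n_verbs)
        self.step_head = nn.Linear(2 * hidden, n_steps)

    def forward(self, vis_seq, kin_seq):
        v, _ = self.vis_lstm(vis_seq)          # (B, T, hidden)
        k, _ = self.kin_lstm(kin_seq)          # (B, T, hidden)
        fused = torch.cat([v[:, -1], k[:, -1]], dim=-1)  # last time step of each branch
        return self.verb_head(fused), self.step_head(fused)

model = TwoStreamLSTM()
verb_logits, step_logits = model(torch.randn(2, 16, 1280), torch.randn(2, 16, 28))
```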

Dr Y. Jin joined NUS as an Assistant Professor in Biomedical Engineering in January 2023. We look forward to collaborating with her on Intelligent Surgical Robotics research.

References:

Y. Jin et al. ‘Multi-task recurrent convolutional network with correlation loss for surgical video analysis’, Med. Image Anal., vol. 59, 2020, doi: 10.1016/j.media.2019.101572.

M. Tan and Q. V. Le, ‘EfficientNet: Rethinking model scaling for convolutional neural networks’, in Int. Conf. Mach. Learn., ICML, 2019, vol. 2019-June, pp. 10691–10700, [Online]. Available: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85077515832&partnerID=40&md5=b8640eb4e9a606d0067b4a420ca73df1.


Robot-assisted Training – Panel Discussion and Presentation in IROS 2020 and ICRA 2021 Workshops

I was a Technical Panel Speaker at RoPat20, an IEEE IROS 2020 workshop. IEEE IROS 2020 took place from October 25, 2020 to January 24, 2021 as an on-demand conference.

In this first RoPat workshop titled “Robot-assisted Training for Primary Care: How can robots help train doctors in medical examinations?”, I briefly introduced my research on medical simulation and robot-assisted training for hand-eye coordination at the beginning of the panel discussion.

I began working on medical simulation in the 1990s through a research collaboration between a Singaporean publicly funded research institution and Johns Hopkins University in the US. Our aim was to develop a training simulator for interventional radiology, similar to a flight simulator for pilot training. Training a doctor to become a qualified interventional radiologist is a time-consuming process. Our focus was on providing realistic hand-eye coordination training for the trainees. We reconstructed the human vascular system from the Visible Human Project dataset, and then modelled the interaction between the vessel wall, catheter and guidewire using finite element methods. Our simulator functions like a flight simulator or a computer game – it does not provide direct instruction.

About 10 years ago, together with my collaborators at NUH and A*STAR research institutes in Singapore, we developed a robot trainer for laparoscopic surgery training. The robot would learn the motions from the master surgeon and guide the trainee to replicate them. The trainee could also perform the motions freely and compare them with the master's. We tested the VR-based training system with medical students and have since transitioned to an AR system.

The slide below shows the FLS peg transfer setup. Currently, we are focusing on robot motion learning using deep reinforcement learning; we also work on scene segmentation and workflow recognition.

The IROS 2020 RoPat workshop was a success. The 2nd RoPat workshop (RoPat21) on Robot-Assisted Systems for Medical Training was organized at the IEEE ICRA 2021 conference, held from May 30 to June 5, 2021 in Xi’an, China. It was a hybrid event.

This time I participated in the workshop as an invited technical speaker, delivering an online presentation on physical tissue modelling and augmented reality in medical training. I apologize for not being able to answer questions immediately after my oral presentation. If you have any questions regarding my talk, please feel free to email me.

Below are the title and abstract of my talk at the IEEE ICRA 2021 RoPat workshop.

Title: Medical Simulation and Deep Reinforcement Learning

Abstract: Medical simulation provides clinicians with real-time interactive simulations of surgical procedures to enhance training, pre-treatment planning, and the design and customization of medical devices. Robots are increasingly becoming integrated elements of surgical training systems. We propose a robot-assisted laparoscopy training system that extensively utilizes deep reinforcement learning (DRL). By combining exercises, demonstrations from human experts, and RL criteria, our training system aims to improve the trainee’s surgical tool manipulation skills. DRL plays a crucial role in modelling the interaction between biological tissue and surgical tools. Additionally, we explore the application of DRL in surgical gesture recognition. As a pathway to artificial general intelligence, DRL has the potential to transform traditional medical simulation into intelligent simulation.

AE-CAI | CARE | OR2.0 – Intelligent Cyber-Physical System for Patient-Specific Robot-Assisted Surgery and Training

Cyber-Physical Systems (CPS) are advanced mechatronic systems and more. In this 45-minute keynote lecture, I presented our work on CPS for robot-assisted surgical training and surgery. The intelligent CPS is a Cyber-Medical System that integrates medical knowledge to enable smart, personalized surgery. Human centricity extends to interaction and collaboration with the robot(s) in the robot-assisted environment. To be an effective and efficient assistant to the human surgical team, the robot in the operating room should possess intelligence. The intelligent CPS presents an opportunity to address the challenging tool-tissue interaction problems central to patient-specific surgery.

Patient-Specific Controller for an Implantable Artificial Pancreas by Dr. Yvonne Ho

The development of an artificial pancreas capable of providing tight blood glucose control for diabetics remains one of the most challenging problems in the field of medical engineering. Dr. Yvonne Ho’s book will serve as an invaluable reference for those engaged in research on diabetes mellitus and artificial pancreas. Following an introductory chapter that provides a concise overview of the research topic, Chapter 2 offers a comprehensive description of diabetes mellitus and its related physiology. Chapter 3 provides a state-of-the-art review of various treatment options. The subsequent chapters primarily focus on presenting novel research and development findings related to an implantable artificial pancreas.

The implantable artificial pancreas aims to regulate blood glucose levels by administering appropriate insulin dosages when required. By directly sensing blood glucose levels and delivering insulin into the vein, this implantable device seeks to eliminate the delays associated with subcutaneous blood glucose sensing and insulin delivery. Preliminary in vitro and in vivo experimental results suggest that the implantable approach to blood glucose control could be a clinically viable alternative to pancreas transplantation. The book demonstrates a deep understanding of blood glucose level modeling and control, as well as the design of implantable devices.
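The controller designs in the book are patient-specific; purely as a generic illustration of the closed-loop principle described above, here is a minimal sketch of a discrete PID insulin-dosing loop. The set point, gains and the one-line glucose response are hypothetical and not taken from the book:

```python
# Hypothetical sketch of closed-loop insulin dosing: a discrete PID controller
# drives measured blood glucose toward a set point. The toy "patient" model
# and all gains are illustrative; the book's patient-specific controller
# design is far more sophisticated.

SET_POINT = 5.5          # target blood glucose (mmol/L)
KP, KI, KD = 0.5, 0.05, 0.1

glucose = 9.0            # initial reading (mmol/L), hyperglycemic
integral, prev_error = 0.0, 0.0

for step in range(10):
    error = glucose - SET_POINT
    integral += error
    derivative = error - prev_error
    prev_error = error

    # Insulin lowers glucose; never deliver a negative dose.
    dose = max(0.0, KP * error + KI * integral + KD * derivative)

    # Toy glucose response: each unit of insulin lowers glucose by ~0.8 mmol/L,
    # while meals/hepatic output slowly push it upward.
    glucose = glucose - 0.8 * dose + 0.2
    print(f"step {step}: glucose = {glucose:.2f} mmol/L, dose = {dose:.2f} U")
```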

This book will serve as a valuable addition to the libraries of scientists and engineers working at the intersection of engineering and medicine.

Chee-Kong Chui, January 2018

Cyber-medical system for patient-specific medical devices development

Above is the title of my invited talk at the 13th Annual IEEE International Conference on Nano/Micro Engineered and Molecular Systems (IEEE NEMS 2018), held in Singapore from April 22-26, 2018.

Below is the abstract of my talk:

With increasing demands for quality and affordable healthcare services, organizations in the medical device manufacturing industry are embracing more intelligent and responsive systems through the integration and development of dynamic digital technologies. We propose a Cyber-Physical System (CPS)-based production system with integrated enabling digital technologies and robot assistance. This system has the potential to enhance productivity and sustainability, particularly in the production of patient-specific hybrid medical devices. Hybrid medical devices incorporate multiple components and materials that must function flawlessly over extended periods, often under the demanding conditions of the human body. Examples of hybrid medical devices include artificial tracheas, artificial pancreases for diabetes treatment, and information-delivering microchips.

The proposed CPS-based manufacturing system utilizes an integration of enabling digital technologies, including Augmented Reality (AR), Wireless Sensor Networks (WSN), the Internet of Things (IoT), and Artificial Intelligence (AI), for the fabrication of patient-specific medical devices. Visual and haptic cues are provided to the human operator in a timely manner, allowing for swift intervention. Importance-driven computer graphical rendering of visual cues is embedded into physics-based simulations for haptic rendering.

The study of human centricity in an immersive and robot-assisted environment will provide unique insights into human hand-eye coordination capabilities under external influences. Intriguing scientific questions include the extent to which individuals can learn and develop motor skills with external guidance.


Note: There is an IEEE SMC Technical Committee (TC) on Cyber-Medical Systems. More information about the TC can be found here.

Imaging and visualization of bone tissue properties

Accurate quantitative estimation of tissue mechanical properties remains an active research topic. For the measurement to be clinically viable, it should be achieved in a non-invasive manner. The estimation of bone density from clinical CT images was reported in 2010 [1]. We are investigating the application of computational intelligence methods to multimodal medical data to aid the estimation of bone material properties from images. This research has led to the development of opportunistic screening for the detection and management of osteopenia [2][3].
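To make the idea concrete, here is a hypothetical sketch of ensemble-based regression in the spirit of [3]; the features and data below are synthetic stand-ins, not the multimodal clinical data of the cited studies:

```python
# Hypothetical sketch: ensemble regression of bone mineral density (BMD)
# from CT-derived features. Features, data and coefficients are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))   # e.g. mean HU, cortical thickness, patient age
y = 0.8 * X[:, 0] - 0.2 * X[:, 2] + rng.normal(scale=0.1, size=200)  # synthetic "BMD"

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:150], y[:150])
print("held-out R^2:", model.score(X[150:], y[150:]))
```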

There are limited studies on the visualization of tissue mechanical properties. However, a visually informative map of the spine (spine informative map) could provide clinicians with an intuitive representation of the underlying bone properties. Visualization of bone properties can be achieved using material-sensitive transfer functions and coloring schemes that represent the different properties:

A clustering-based framework for automatic generation of transfer functions for medical visualization was reported in [4].
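As a toy illustration of a material-sensitive transfer function (the thresholds and colors below are invented for this sketch; [4] generates transfer functions automatically via clustering):

```python
# Toy material-sensitive transfer function: map CT values (Hounsfield units)
# to colour and opacity so that low-density bone stands out. Thresholds are
# illustrative only, not clinically calibrated.
import numpy as np

def transfer_function(hu):
    """Map an array of Hounsfield units to RGBA values."""
    rgba = np.zeros(hu.shape + (4,))
    low  = (hu >= 100) & (hu < 300)    # "low-density bone" (illustrative)
    high = hu >= 300                   # "normal bone" (illustrative)
    rgba[low]  = (1.0, 0.2, 0.2, 0.6)  # red, semi-transparent
    rgba[high] = (0.9, 0.9, 0.8, 0.9)  # bone white, mostly opaque
    return rgba

print(transfer_function(np.array([50.0, 200.0, 500.0])))
```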

References:
[1] Zhang, J, CH Yan, CK Chui and SH Ong, “Accurate measurement of bone mineral density using clinical CT imaging with single energy beam spectral intensity correction”. IEEE Transactions on Medical Imaging, 29, 7 (2010): 1382-1389.
[2] Tay, WL, CK Chui, SH Ong and ACM Ng, “Osteopenia screening using areal bone mineral density estimation from diagnostic CT images”. Academic Radiology, 19, no. 10 (2012): 1273-1282.
[3] Tay, WL, CK Chui, SH Ong and ACM Ng, “Ensemble-based regression analysis of multimodal medical data for osteopenia diagnosis”. Expert Systems with Applications, 40, no. 2 (2013): 811-819.
[4] Nguyen, BP, WL Tay, CK Chui and SH Ong, “A Clustering-Based System to Automate Transfer Function Design for Medical Image Visualization”. Visual Computer, 28, no. 2 (2012): 181-191.

Robotic surgery: hand-eye coordination, cognition and biomechanics

Around 2013-2014, I contributed an article with the above title to Engineering Research News (ISSN 0217-7870). The theme of the issue was “The Changing Faces of ME”. Mechanical Engineering (ME) is multi-disciplinary.

Following is an edited version:

One key research area pursued by my group is intelligent surgical robotic systems, which augment and enhance the hand-eye coordination capability of the surgeon during operations to achieve the desired outcome and reduce invasiveness.

Hand-eye coordination refers to the ability of our vision system to coordinate and process the information received through the eyes to control, guide and direct our hands in accomplishing a given task. In this work, we studied hand-eye coordination to build a medical simulator for surgical training and to develop a medical robot that duplicates the best surgeon’s hand-eye coordination skills.

Our research adopts an integrated view of surgical simulators and robot-assisted surgery. The former is a simulation game for surgical training and treatment planning, while the latter involves one or more devices assisting the surgical team in precise patient operations. With the computer simulator, a patient-specific surgical plan can be derived with robot manipulation included. By combining patient-specific simulation with robotic execution, we can develop highly autonomous robot(s).

In an automated system, providing proper feedback is crucial to keep the human operator engaged in the decision-making process. The necessary visual, audio and haptic cues should be provided to the human operator in a timely manner, enabling swift intervention. The study of human centricity in an immersive and robot-assisted environment will provide unique insights into human hand-eye coordination capabilities under external influences.

A cognitive engine provides the high level of intelligence the autonomous robot needs to be an effective collaborator with human(s). The engine possesses knowledge about relevant aspects of surgery, including the dynamics of the surgery, the robot’s actions and the behavior of biological tissue in response to those actions. The actions of the surgical team contribute to the dynamics and, at times, introduce uncertainty into the operation. The self-learning process of the cognitive engine requires inherent knowledge of tissue biomechanics. Biological tissues within the human patient’s body cavity are living elements that may be preserved, repaired, or destroyed using mechanical and thermal methods.

Surgery can be planned with a virtual robot in a simulator with realistic biomechanical models, and the procedure can then be performed on the patient using the robot with the assistance of advanced man-machine interfaces. Augmented reality technologies with intelligent visual, haptic and audio cues will provide a medium for the surgical team to effectively control the robot.

The figure depicting the architecture of an intelligent surgical robotic system with a cognitive engine, as mentioned in the original article, is still a work in progress. Its latest version can be found in:

Tan, X, C B Chng, B Duan, Y Ho, R Wen, X Chen, K B Lim and C K Chui, “Cognitive engine for robot-assisted radio-frequency ablation system”, Acta Polytechnica Hungarica 14, no. 1 (2017): 129-145.

https://uni-obuda.hu/journal/Tan_Chng_Duan_Ho_Wen_Chen_Lim_Chui_72.pdf


Liver tissue properties and frequency-control of RF ablation

A computational model consisting of an equivalent circuit of resistors and capacitors was proposed in [1] to investigate the changes in the electrical properties of liver tissue during radio-frequency (RF) ablation. The variations in tissue mechanical properties are correlated with those of the tissue’s electrical properties. Besides liver tumor treatment, RF ablation can be used to halt blood flow during liver resection. In [2], we further developed the multi-scale model to study the bioimpedance dispersion of liver tissue. The figure below, taken from [3], compares our model with the Cole-Cole model employed in Gabriel’s study. Both models fit the experimental data well in the high-frequency region; at lower frequencies, our model provides a better fit.
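For reference, the sketch below evaluates the standard Cole-Cole dispersion mentioned above; the parameter values are illustrative placeholders, not the fitted liver-tissue values from [2] or [3]:

```python
# Cole-Cole dispersion: Z(w) = R_inf + (R0 - R_inf) / (1 + (j*w*tau)^alpha).
# Parameter values below are illustrative, not fitted to liver tissue.
import numpy as np

def cole_cole(omega, r_inf, r0, tau, alpha):
    return r_inf + (r0 - r_inf) / (1.0 + (1j * omega * tau) ** alpha)

freqs = np.logspace(2, 7, 6)   # 100 Hz to 10 MHz
z = cole_cole(2 * np.pi * freqs, r_inf=50.0, r0=500.0, tau=1e-6, alpha=0.8)
for f, zi in zip(freqs, z):
    print(f"{f:>12.0f} Hz  |Z| = {abs(zi):7.1f} ohm")
```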

Using the accurate multi-scale model and a 3D finite element model, we performed RF ablation simulations at various frequencies in [3]. The size of the ablation region increases with frequency. The frequency-control method may prove more effective than the duration-control method in RF ablation.

Earlier, in [4], we had conducted preliminary work on the application of a multi-scale/multi-level model for simulating molecular medicine through electroporation.

References:

[1] W-H Huang et al. Multi-scale model for investigating the electrical properties and mechanical properties of liver tissue undergoing ablation, Int J CARS (2011) 6:601-607.

[2] W-H Huang et al. A multiscale model for bioimpedance dispersion of liver tissue, IEEE Trans Biomed Eng (2012) 59(6):1593-1597.

[3] B Duan and CK Chui, Multiscale modeling of liver bio-impedance and frequency control for radiofrequency ablation, 2016 IEEE Region 10 Conference (TENCON) – Proceedings of the International Conference, pp. 1532-1535, November 2016.

[4] Chui et al. A medical simulation system with unified multilevel biomechanical model, Proc of 12th International Conference on Biomedical Engineering ICBME 2002, Singapore, 4-7 December 2002.

Constitutive modeling of biological soft tissue

Stress–strain curves of porcine liver tissue samples under combined compression and elongation, from Journal of Biomechanics 47 (2014) 2430–2435: (a) mean values of experimental data, with standard deviations from the mean indicated by horizontal bars; (b) median values of experimental data; (c) simulation using the 5-constant Mooney-Rivlin model with parameters calculated by the inverse finite element method; and (d) simulation using the 5-constant Mooney-Rivlin model with parameters calculated by curve fitting.

For modeling and simulating soft tissue indentation, it is important to consider both compression and elongation stress-strain data, as tissue deformation is influenced by both its compressive and tensile characteristics.
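To illustrate, the sketch below fits the 5-constant Mooney-Rivlin model from the caption above to synthetic uniaxial stress-stretch data spanning both compression (stretch below 1) and elongation (stretch above 1); the constants and "data" are placeholders, not the porcine liver values from the study:

```python
# Curve-fitting the 5-constant Mooney-Rivlin model (cf. panel (d) above) to
# synthetic uniaxial data covering compression and elongation. Incompressible
# uniaxial Cauchy stress: sigma = 2*(lam^2 - 1/lam) * (dW/dI1 + dW/dI2 / lam).
import numpy as np
from scipy.optimize import curve_fit

def mooney_rivlin_5(lam, c1, c2, c3, c4, c5):
    i1 = lam**2 + 2.0 / lam
    i2 = 2.0 * lam + 1.0 / lam**2
    w1 = c1 + c3 * (i2 - 3) + 2 * c4 * (i1 - 3)   # dW/dI1
    w2 = c2 + c3 * (i1 - 3) + 2 * c5 * (i2 - 3)   # dW/dI2
    return 2.0 * (lam**2 - 1.0 / lam) * (w1 + w2 / lam)

lam = np.linspace(0.7, 1.3, 25)                          # compression and elongation
stress = mooney_rivlin_5(lam, 1.0, 0.5, 0.1, 0.2, 0.05)  # synthetic "data"
params, _ = curve_fit(mooney_rivlin_5, lam, stress, p0=np.ones(5))
print("fitted constants:", np.round(params, 3))
```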

An alternative to the Mooney-Rivlin model is the combined logarithmic and polynomial model originally proposed in Medical & Biological Engineering and Computing 42 (2004) 787-798. The combined logarithmic and polynomial model is superior to the 5-constant Mooney-Rivlin model as a constitutive model for the simulation of soft tissue indentation.

GPU Technology Conference (GTC) 2015 poster

This poster, “Accelerated Medical Computing Toolkit and GPU Accelerated Importance-Driven Volume Visualization”, was presented in the Medical Imaging category at the GPU Technology Conference (GTC 2015), held in Silicon Valley from March 17-20, 2015. The content of the poster was adapted from “GPU Accelerated Transfer Function Generation for Importance-Driven Volume Visualization”, which received the Best Poster Award (Grand Prize) at the GPU Technology Workshop South East Asia (GTW SEA 2014), held in Singapore on July 10, 2014.