Endoscopic Vision Challenge – Micro-Surgical Anastomose Workflow recognition on training sessions (MISAW), MICCAI 2020

We achieved 1st Place in Multi Recognition and tied for 1st Place in Activity Recognition at the Endoscopic Vision Challenge – Micro-Surgical Anastomose Workflow recognition on training sessions (MISAW) during the 23rd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2020), held from 4th to 8th October 2020 in Lima, Peru.

The team comprises C-B Chng, W Lin, J Zhang, Y Hu, Yan Hu, J Liu and C-K Chui.

For the competition, we employed the approach described by Jin et al. (2020) and extended it to step and activity recognition. We used EfficientNet, developed by Tan and Le (2019), as the feature extractor for the spatial features of each video frame. EfficientNet allows for faster training, greatly improving the practicality of the network.
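
For illustration, the sketch below shows one way such a per-frame spatial feature extractor could be set up in PyTorch; the EfficientNet-B0 variant, the torchvision API (the weights argument assumes torchvision 0.13 or newer) and the 1280-dimensional feature size are assumptions for the sketch, not a reproduction of our challenge submission.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class FrameFeatureExtractor(nn.Module):
        """Per-frame spatial features from an EfficientNet-B0 backbone (classifier head removed)."""
        def __init__(self):
            super().__init__()
            backbone = models.efficientnet_b0(weights=None)  # pretrained weights optional
            self.trunk = backbone.features                   # convolutional trunk only
            self.pool = nn.AdaptiveAvgPool2d(1)              # global average pooling

        def forward(self, frames):                # frames: (N, 3, H, W), one row per video frame
            x = self.pool(self.trunk(frames))     # (N, 1280, 1, 1) for EfficientNet-B0
            return torch.flatten(x, 1)            # (N, 1280) spatial feature vector per frame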

Since temporal information is crucial for video data, we utilized long short-term memory (LSTM) networks to model the sequential dependencies. The sequential features were then passed to a fully connected layer to make predictions. As the dataset contains kinematic data recorded at 30 Hz from encoders mounted on the two robotic arms of the master-slave robotic platform, we hypothesized that the kinematic data are related to the verb and step. We therefore employed a second LSTM to model the sequential features of the kinematic data. The two types of sequential features were then concatenated and fed into fully connected layers to make the verb and step predictions.
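
As a sketch of this two-stream design, the snippet below (PyTorch) models the visual and kinematic sequences with separate LSTMs and fuses them for the verb and step heads; the feature sizes, hidden size and class counts are placeholders rather than the values used in our submission.

    import torch
    import torch.nn as nn

    class TwoStreamWorkflowNet(nn.Module):
        """Visual-feature LSTM + kinematic LSTM, concatenated for verb/step prediction."""
        def __init__(self, vis_dim=1280, kin_dim=14, hidden=256, n_verbs=10, n_steps=6):
            super().__init__()                               # all dimensions are placeholders
            self.vis_lstm = nn.LSTM(vis_dim, hidden, batch_first=True)
            self.kin_lstm = nn.LSTM(kin_dim, hidden, batch_first=True)
            self.verb_head = nn.Linear(2 * hidden, n_verbs)
            self.step_head = nn.Linear(2 * hidden, n_steps)

        def forward(self, vis_seq, kin_seq):                 # (B, T, vis_dim) and (B, T, kin_dim)
            v, _ = self.vis_lstm(vis_seq)
            k, _ = self.kin_lstm(kin_seq)
            fused = torch.cat([v[:, -1], k[:, -1]], dim=1)   # last time step of each stream
            return self.verb_head(fused), self.step_head(fused)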

Dr Y. Jin joined NUS as an Assistant Professor in Biomedical Engineering in January 2023. We look forward to collaborating with her on Intelligent Surgical Robotics research.

References:

Y. Jin et al., ‘Multi-task recurrent convolutional network with correlation loss for surgical video analysis’, Med. Image Anal., vol. 59, 2020, doi: 10.1016/j.media.2019.101572.

M. Tan and Q. V. Le, ‘EfficientNet: Rethinking model scaling for convolutional neural networks’, in Proc. Int. Conf. Mach. Learn. (ICML), 2019, pp. 10691–10700. [Online]. Available: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85077515832&partnerID=40&md5=b8640eb4e9a606d0067b4a420ca73df1

 

Imaging and visualization of bone tissue properties

Accurate quantitative estimation of tissue mechanical properties is an active research topic. For the measurement to be clinically viable, it should be achieved in a non-invasive manner. The estimation of bone density from clinical CT images was reported in 2010 [1]. We are investigating the application of computational intelligence methods to multimodal medical data to aid the estimation of bone material properties from images. This research has led to the development of opportunistic screening for the detection and management of osteopenia [2][3].
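
As a hedged illustration of ensemble regression on image-derived features (in the spirit of [3], not a reproduction of it), the following scikit-learn sketch fits a random forest to synthetic data; the feature names and values are entirely hypothetical.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 200
    X = np.column_stack([
        rng.normal(300, 60, n),    # mean CT intensity (HU) in a vertebral ROI (hypothetical)
        rng.normal(0.4, 0.1, n),   # trabecular texture measure (hypothetical)
        rng.normal(65, 10, n),     # patient age
    ])
    y = 0.002 * X[:, 0] - 0.004 * X[:, 2] + rng.normal(0, 0.05, n)   # synthetic BMD target

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())   # cross-validated R^2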

There are limited studies on the visualization of tissue mechanical properties. However, a visually informative map of the spine (a spine informative map) could provide clinicians with an intuitive representation of the underlying bone properties. Visualization of bone properties can be achieved using material-sensitive transfer functions and coloring schemes that represent the different properties.

A clustering-based framework for automatic generation of transfer functions for medical visualization was reported in [4].
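
As a minimal sketch of a material-sensitive transfer function, the snippet below maps CT intensities (Hounsfield units) to RGBA colors using hypothetical thresholds; it illustrates the idea only and is not the clustering-based method of [4].

    import numpy as np

    def bone_transfer_function(hu):
        """Map Hounsfield units to RGBA in [0, 1]; thresholds are illustrative only."""
        hu = np.asarray(hu, dtype=float)
        rgba = np.zeros(hu.shape + (4,))
        soft = hu < 150                       # soft tissue: nearly transparent grey
        low = (hu >= 150) & (hu < 400)        # low-density trabecular bone: semi-transparent red
        dense = hu >= 400                     # dense cortical bone: near-white, opaque
        rgba[soft] = [0.5, 0.5, 0.5, 0.05]
        rgba[low] = [1.0, 0.3, 0.2, 0.40]
        rgba[dense] = [1.0, 1.0, 0.9, 0.90]
        return rgba

    print(bone_transfer_function([80, 250, 900]))   # one RGBA row per sample voxel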

References:
[1] Zhang, J, CH Yan, CK Chui and SH Ong, “Accurate measurement of bone mineral density using clinical CT imaging with single energy beam spectral intensity correction”. IEEE Transactions on Medical Imaging, 29, 7 (2010): 1382-1389.
[2] Tay, WL, CK Chui, SH Ong and ACM Ng, “Osteopenia screening using areal bone mineral density estimation from diagnostic CT images”. Academic Radiology, 19, no. 10 (2012): 1273-1282.
[3] Tay, WL, CK Chui, SH Ong and ACM Ng, “Ensemble-based regression analysis of multimodal medical data for osteopenia diagnosis”. Expert Systems with Applications, 40, no. 2 (2013): 811-819.
[4] Nguyen, BP, WL Tay, CK Chui and SH Ong, “A Clustering-Based System to Automate Transfer Function Design for Medical Image Visualization”. Visual Computer, 28, no. 2 (2012): 181-191.

Robotic surgery: hand-eye coordination, cognition and biomechanics

Around 2013-2014, I contributed an article with the above title to the Engineering Research News (ISSN 0217-7870). The theme of that issue was “The Changing Faces of ME”; Mechanical Engineering (ME) is multi-disciplinary.

The following is an edited version:

One key research area pursued by my group is intelligent surgical robotic systems, which augment and enhance the hand-eye coordination capability of the surgeon during operations to achieve the desired outcome and reduce invasiveness.

Hand-eye coordination refers to the ability of our vision system to coordinate and process the information received through the eyes in order to control, guide and direct our hands in accomplishing a given task. In this work, we studied hand-eye coordination to build a medical simulator for surgical training and to develop a medical robot that duplicates the best surgeon’s hand-eye coordination skills.

Our research adopts an integrated view of surgical simulators and robot-assisted surgery. The former is a simulation game for surgical training and treatment planning, while the latter involves one or more devices assisting the surgical team in precise patient operations. With a computer simulator, a patient-specific surgical plan can be derived with robot manipulation included. By combining patient-specific simulation with robotic execution, we can develop highly autonomous robot(s).

In an automated system, providing proper feedback is crucial to keeping the human operator engaged in the decision-making process. The necessary visual, audio and haptic cues should be provided to the human operator in a timely manner, enabling swift intervention. The study of human centricity in an immersive, robot-assisted environment will provide unique insights into human hand-eye coordination capabilities under external influences.

A cognitive engine provides a high level of intelligence in the autonomous robot, enabling it to be an effective collaborator with humans. The engine possesses knowledge about the relevant aspects of surgery, including the dynamics of the operation, the robot’s actions and the behavior of biological tissue in response to those actions. The actions of the surgical team contribute to these dynamics and, at times, introduce uncertainty into the operation. The self-learning process of the cognitive engine requires inherent knowledge of tissue biomechanics. Biological tissues within the patient’s body cavity are living elements that may be preserved, repaired, or destroyed using mechanical and thermal methods.

Surgery can be planned with a virtual robot in a simulator with realistic biomechanical models, and the procedure can then be performed on the patient using the robot with the assistance of advanced man-machine interfaces. Augmented reality technologies with intelligent visual, haptic and audio cues will provide a medium for the surgical team to effectively control the robot.

The figure depicting the architecture of an intelligent surgical robotic system with a cognitive engine, as mentioned in the original article, is still a work in progress. Its latest version can be found in:

Tan, X, C B Chng, B Duan, Y Ho, R Wen, X Chen, K B Lim and C K Chui, “Cognitive engine for robot-assisted radio-frequency ablation system”, Acta Polytechnica Hungarica 14, no. 1 (2017): 129-145.

https://uni-obuda.hu/journal/Tan_Chng_Duan_Ho_Wen_Chen_Lim_Chui_72.pdf

 

GPU Technology Conference (GTC) 2015 poster

This poster “Accelerated Medical Computing Toolkit and GPU Accelerated Importance-Driven Volume Visualization” was presented in the Medical Imaging category at the GPU Technology Conference (GTC 2015) held in Silicon Valley from March 17-20, 2015. The content of the poster was adapted from “GPU Accelerated Transfer Function Generation for Importance-Driven Volume Visualization,” which received the Best Poster Award (Grand Prize) at the GPU Technology Workshop South East Asia (GTW SEA 2014) held in Singapore on July 10, 2014.