Social Robotics Lab

Breathing Life into Machines

Author: e0031680

Robotic Welding


Sensing and perception are essential modules for any robotic welding and finishing system to sense, understand, and interact with its environment in real time and productively. Sensors with complementary features will be distributed across the workspace and fused to provide robust, reliable sensing for the tasks of interest in an automated factory, which include local operations such as welding and finishing over a large space, and the interactions among robots, machines, humans, and their environments. This project develops sensing and perception modules with the following main deliverables: (1) a software package that integrates different modules and devices for high-resolution 3D reconstruction using low-cost sensors; (2) software packages providing modules for inspection, process monitoring, and safety, which are essential for welding and finishing; and (3) advanced robust sensing techniques for improved welding and finishing process control in uncertain and outdoor environments. We aim to provide key sensing and perception modules for improving productivity at all stages of robotic welding/finishing, i.e., (i) pre-welding/finishing, (ii) intra-welding/finishing, and (iii) post-welding/finishing. The perception modules will enable the robots to acquire information from the environment and in turn adapt their actions to changing conditions, enhancing productivity through a fully automatic process.

URBAN-NAV: Urban-Navigation of Unmanned Platform under the GPS Challenged Environments


This project contains three main topics: “Location Estimation Using Panoramic View Vision”, “Pose Estimation and Localisation Method Using Visual Odometry”, and “Non-GPS Localization Using Local Geometrical Constraints”. We mainly focus on the third approach. The constrained solution approximately models the path of the vehicle in the urban canyon environment as a sequence of curve segments. By applying these constraints and fusing data from multiple sensors, such as visual odometry and an inertial measurement unit, the number of GPS satellites required for a fix can be reduced. The challenges, risks, and methodologies are described below.

Road Modeling. Fortunately, detailed maps, which can be modeled as junctions connected by piecewise continuous curves, are available for most cities. The model of the road on which the vehicle is traveling can be extracted and used as a constraint to facilitate positioning when no GPS signals are in view. Visual odometry and local simultaneous localization and mapping (SLAM) can be used to assist in positioning the vehicle.
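A minimal sketch of the road-as-constraint idea: if the road model is approximated as a polyline of map waypoints, a dead-reckoned position estimate can be snapped to the nearest point on the road. The function and waypoints below are illustrative, not the project's actual map format.

```python
import numpy as np

def project_onto_road(point, road):
    """Project a 2-D position estimate onto a road modeled as a polyline.

    `road` is an (N, 2) array of waypoints approximating the piecewise
    continuous curve; the closest point on any segment serves as the
    road constraint when GPS is unavailable.
    """
    best, best_d2 = None, np.inf
    for a, b in zip(road[:-1], road[1:]):
        ab = b - a
        # Parameter of the orthogonal projection, clamped to the segment.
        t = np.clip(np.dot(point - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        candidate = a + t * ab
        d2 = np.sum((point - candidate) ** 2)
        if d2 < best_d2:
            best, best_d2 = candidate, d2
    return best

# A straight road along the x-axis followed by a 90-degree turn.
road = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0]])
print(project_onto_road(np.array([4.0, 1.5]), road))  # snaps to [4, 0]
```

In practice the candidate segments would be limited to the road the vehicle is known to be on, rather than searched globally.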

Monte-Carlo Localization. Based on the proposed measurement and motion models, a set of particles is generated from the error model. The particle set covers all probable trajectories of the vehicle, and each particle is assigned an initial weight according to the raw measurement. In a real implementation, place recognition results can be used during initialization and in environments without drivable roads. This topological localization method, together with the odometry measurements, is integrated into the Monte-Carlo localization framework for a better solution. For better real-time performance, the parameter estimation step runs in parallel with positioning to limit the parameter range.
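The predict/update/resample cycle described above can be sketched as a basic particle filter. The Gaussian motion noise, the position-fix measurement model, and all numeric values below are simplifying assumptions for illustration, not the project's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_step(particles, weights, odometry, measurement, meas_std=1.0):
    """One predict/update/resample cycle of Monte-Carlo localization.

    `particles` is an (N, 2) array of hypothesized positions; `odometry`
    is the displacement reported by visual odometry/IMU; `measurement`
    is a noisy position fix, e.g. from place recognition.
    """
    # Predict: propagate every particle through the motion model + noise.
    particles = particles + odometry + rng.normal(0.0, 0.3, particles.shape)
    # Update: reweight each particle by the likelihood of the measurement.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
    weights = weights / weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Simulated run: the vehicle drives 1 m per step along the x-axis.
true_pos = np.zeros(2)
particles = rng.uniform(-5, 5, (500, 2))
weights = np.full(500, 1.0 / 500)
for _ in range(10):
    true_pos = true_pos + np.array([1.0, 0.0])
    meas = true_pos + rng.normal(0.0, 0.5, 2)
    particles, weights = monte_carlo_step(
        particles, weights, np.array([1.0, 0.0]), meas)
print(particles.mean(axis=0))  # estimate converges near the true (10, 0)
```

In the project setting, the likelihood would come from the road-model constraint and place recognition rather than a direct position fix.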

Shape Matching. To measure the similarity between the road model and probable vehicle trajectories, shape matching techniques from computer vision can be applied. The trajectory with the best matching score is used to estimate the true position of the vehicle, and the measurement errors of the other sensors can be reduced after the shape matching step.
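One standard curve-similarity measure that could serve here is the discrete Frechet distance; the candidate trajectory with the smallest distance to the road model is the best match. This is a generic sketch of the technique, not necessarily the matcher used in the project.

```python
import numpy as np

def discrete_frechet(p, q):
    """Discrete Frechet distance between two curves given as point arrays.

    Intuitively the shortest "leash" that lets two walkers traverse the
    two curves monotonically; smaller means more similar shapes.
    """
    n, m = len(p), len(q)
    ca = np.full((n, m), -1.0)  # memo table (-1 marks "not computed")

    def c(i, j):
        if ca[i, j] >= 0:
            return ca[i, j]
        d = np.linalg.norm(p[i] - q[j])
        if i == 0 and j == 0:
            ca[i, j] = d
        elif i == 0:
            ca[i, j] = max(c(0, j - 1), d)
        elif j == 0:
            ca[i, j] = max(c(i - 1, 0), d)
        else:
            ca[i, j] = max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
        return ca[i, j]

    return c(n - 1, m - 1)

road = np.array([[0, 0], [1, 0], [2, 0], [3, 0]], float)
traj = np.array([[0, 0.2], [1, 0.1], [2, 0.2], [3, 0.1]], float)
print(discrete_frechet(road, traj))  # approximately 0.2
```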

Learning Control of Semi-submersible Floatel under Shielding Effects


Offshore operations have been moving towards ultra-deep waters, more challenging environments, and arctic areas, where richer resources have been detected and are to be mined. One of the unavoidable challenges is the large shielding effect caused by a Floating Production Storage and Offloading (FPSO) unit in the vicinity. The safety and smoothness of operations between the floatel and the FPSO, as well as the uptime and lifespan of the gangway, are severely affected by these shielding effects.

The design of the next generation of floatel systems must account for the impact of all these factors. We are interested not only in designing control systems for the floatel that are highly robust and adaptive to environmental disturbances while maintaining or increasing the uptime of operations between the floatel and the FPSO, but also in providing guidelines for the industry's design of next-generation floatels.

Control for Offshore Oil and Gas Platforms

Recent years have seen the formation and growth of the global deepwater offshore industry, driven by increased demand for oil and gas stemming from years of economic growth, declining production from existing hydrocarbon fields, and depleting shallow-water reserves. These factors have encouraged operators to invest billions annually in this offshore frontier and in the development of floating production and subsea systems as solutions for deepwater hydrocarbon extraction.

Currently, 15% of total offshore oil production is carried out in deep waters, and this proportion is expected to rise to 20% in the next few years. The harsher marine environment and the need for subsea production systems in remote deepwater developments open a set of challenges and opportunities for the control theorist and engineer.


Mind Robotics

Project Title: Mind Robotics

NUS Investigator: S.S. Ge

Funding: S$0.25M funded by NUS

Duration: 04-2008 to 03-2011


Mind Robotics aims to develop a system that analyzes and extracts features from brain signals for communication with and control of robots. In a preliminary stage, we have already designed a prototype two-channel EEG acquisition device with novel active electrodes and a wireless communication module.
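One common way to extract features from EEG signals is band power, e.g. the energy in the alpha band (8–13 Hz). The FFT-based sketch below is a generic illustration under that assumption, not the feature pipeline actually used in the project.

```python
import numpy as np

def band_power(signal, fs, band):
    """Average spectral power of an EEG channel inside a frequency band.

    `fs` is the sampling rate in Hz; `band` is a (low, high) tuple.
    Band powers are a common feature for brain-signal-driven control.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

fs = 256                        # a typical EEG sampling rate
t = np.arange(fs * 2) / fs      # two seconds of samples
eeg = np.sin(2 * np.pi * 10 * t)          # a synthetic 10 Hz "alpha" rhythm
alpha = band_power(eeg, fs, (8, 13))
beta = band_power(eeg, fs, (13, 30))
print(alpha > beta)  # True: the signal's energy sits in the alpha band
```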

Social Robots: Breathing Life into Machines

This project aims to develop intelligent and socially aware robots able to interact and communicate with humans, and even live among humans. Robots will no longer just be industrial machines but companions too. Possible duties include stay-at-home companions for the elderly and medical robots.

NUS Investigator: S.S. Ge

Funding: S$1.5M funded by MDA

Duration: 02-2007 to 11-2011



Interactive Robot Usher

The aim of the proposed project is to build a prototype humanoid service robot for guest reception by integrating visual information for human-robot interaction, as part of a longer-term research effort on a range of intelligent service robots that can assist people in their daily living activities.

Robots Fall in Love

Chat with Lovotic, a robot located at the National University of Singapore that may fall in love through artificial intelligence.


3D Head

Talking heads are anthropomorphic representations of a software agent, used to facilitate interaction between humans and the agent. They can be thought of as virtual humans capable of carrying on conversations with humans by both understanding and producing facial expressions and speech. The issues involved in creating such agents span two major areas of research, namely animation and AI. With regard to the design of conversational agents, the problems faced are similar to those in natural language processing. AIML, a state-of-the-art technology that attempts to overcome these issues, has been used to design the brain of the agent developed in this project. It implements a pattern-matching algorithm and gives the user a great deal of functionality through tags that help make the agent seem context-aware and intelligent. Efficient visual presentation of the agent is made possible by superior facial deformation techniques and scripting languages that provide a high-level view of the animations that can be realized.
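The core of AIML's pattern matching can be illustrated with a toy matcher for patterns containing the `*` wildcard. This is a minimal sketch of the idea only; a real AIML engine also supports the `_` wildcard, `<that>`/`<topic>` context, and `<srai>` recursion.

```python
def match_pattern(pattern, sentence):
    """Match an AIML-style pattern (words plus the `*` wildcard) against
    a user sentence; returns the text captured by each `*`, or None.
    """
    p, s = pattern.upper().split(), sentence.upper().split()

    def walk(i, j, captured):
        if i == len(p):
            return captured if j == len(s) else None
        if p[i] == "*":
            # Try every possible non-empty span for the wildcard.
            for k in range(j + 1, len(s) + 1):
                out = walk(i + 1, k, captured + [" ".join(s[j:k])])
                if out is not None:
                    return out
            return None
        if j < len(s) and p[i] == s[j]:
            return walk(i + 1, j + 1, captured)
        return None

    return walk(0, 0, [])

print(match_pattern("MY NAME IS *", "my name is John Smith"))  # ['JOHN SMITH']
```

In AIML the captured text is what the `<star/>` tag substitutes into the agent's response template.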

Facial Expression Recognition

This is a real-time facial expression recognition system. Initially, a front-view image of the tester's neutral face is captured. This image is processed to detect the tester's face region and extract the eyebrow, eye, nose, and mouth features. The feature locations are then mapped to the real-time video according to the video's resolution. Once initialization is completed, the tester can express emotions freely. The feature points are predicted and tracked frame by frame using a Kalman filter and the Lucas-Kanade optical flow method, and the displacement and velocity of each feature point are recorded at every frame. When an expression occurs, the detection system chooses the maximum value among all the normalized dot products and displays its judgement on the video. When one expression is over, the tester can express the next emotion, or re-initialize the system if any tracker is lost.
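The dot-product decision step can be sketched as follows: the observed feature displacements are compared against per-expression template displacement patterns, and the expression with the largest normalized dot product wins. The two-feature templates below are illustrative toy vectors, not the project's trained data.

```python
import numpy as np

def classify_expression(displacement, templates):
    """Pick the expression whose template displacement pattern has the
    largest normalized dot product with the observed feature motion.

    `displacement` is a flattened vector of per-feature (dx, dy) motions;
    `templates` maps expression names to reference displacement vectors.
    """
    v = displacement / np.linalg.norm(displacement)
    scores = {name: float(np.dot(v, t / np.linalg.norm(t)))
              for name, t in templates.items()}
    return max(scores, key=scores.get), scores

# Toy templates over two mouth-corner features: (dx, dy) for each corner.
templates = {
    "smile":    np.array([1.0, -1.0, -1.0, -1.0]),  # corners pull up/out
    "surprise": np.array([0.0,  1.0,  0.0,  1.0]),  # mouth drops open
}
label, _ = classify_expression(np.array([0.9, -1.1, -1.0, -0.8]), templates)
print(label)  # smile
```

Normalizing both vectors makes the score depend on the direction of the motion rather than its magnitude, so small and large smiles score alike.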


© 2024 Social Robotics Lab

