Current Research Activities

AR-FEA System  

PhD Graduate: Dr HUANG Jiming

This system uses appropriate sensing and tracking technologies to facilitate the visualization of and interaction with finite element analysis (FEA) processes. It provides many possibilities for users to interact with simulation results in an AR environment, where graphically visualized FEA results can be augmented onto a real scene by tracking the user's viewpoint. The AR interfaces provided allow users to customize and control simulation parameters.

The system requires a 3D input device for users to interact with the simulated FEA results, such as adding virtual loads and exploring results using slicing and clipping functions. In addition to virtual loads, real loads can also be applied in the AR-FEA system: pressure sensors are attached at certain locations on the structure, so that loads can be applied by pressing these sensors.
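The real-load idea above can be sketched as a small conversion step: a pressure reading from a sensor patch of known area becomes an equivalent point load for the solver. This is a minimal illustrative sketch, assuming a sensor with a known position, patch area, and surface normal; the names and values are hypothetical, not the actual AR-FEA implementation.

```python
# Hypothetical sketch: mapping a pressure-sensor reading to a nodal load
# for an FEA solver. Sensor IDs, patch areas, and positions are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PressureSensor:
    sensor_id: str
    position: tuple        # (x, y, z) location on the structure, in metres
    patch_area: float      # effective sensing area, in m^2
    normal: tuple          # outward surface normal (unit vector)

def sensor_to_load(sensor: PressureSensor, pressure_pa: float) -> dict:
    """Convert a pressure reading (Pa) into an equivalent point load (N).

    Force = pressure * area, directed opposite to the surface normal
    (pressing on the sensor pushes into the structure).
    """
    magnitude = pressure_pa * sensor.patch_area
    fx, fy, fz = (-magnitude * n for n in sensor.normal)
    return {"position": sensor.position, "force": (fx, fy, fz)}

sensor = PressureSensor("S1", (0.5, 0.0, 0.1), patch_area=1e-4, normal=(0.0, 0.0, 1.0))
load = sensor_to_load(sensor, pressure_pa=2.0e5)  # a 200 kPa press
# load["force"][2] == -20.0: a 20 N load directed into the surface
```

The resulting load dictionary would then be handed to the FEA solver as a boundary condition at the nearest mesh node.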


Integration of Finite Element Analysis With Outdoor Augmented Reality

PhD Candidate: LI Wenkai

Finite Element Analysis (FEA) is a powerful numerical tool for solving engineering problems. The performance of FEA relies on computing power, visualization techniques, and numerical analysis performance. Traditionally, FEA-formulated problems are solved off-site in WIMP (windows, icons, menus, pointer)-style environments using high-performance computing tools. However, it is neither intuitive nor efficient to explore FEA results without a correct sense of scale, orientation, and the surrounding physical context. Integrating augmented reality (AR) with on-site engineering simulation helps users by enhancing their perception of, and interaction with, the engineering problems.

AR in Maintenance Applications

PhD Candidate: SIEW Chi Yung, Terence

By leveraging augmented reality, this research aims to develop tools that assist and improve existing AR maintenance applications and enhance the overall results achieved. Current AR maintenance applications and research tend to be passive in nature, with no substantial feedback to the user regarding the quality and result of the maintenance task. Using a wide variety of sensors and considering the various factors that may influence maintenance quality, this research aims to develop a closed-loop, active AR maintenance system that not only provides maintenance operators with dynamic information on the maintenance quality of the job at hand, but also bridges the experience gap between workers while ensuring the physical and mental well-being of the maintenance operator.

AR Applications in Disassembly (ARDIS)

PhD Candidate: Miko CHANG May Lee

This research investigates augmented reality applications in optimizing product disassembly operations. Notably, disassembly is not the exact reverse of assembly due to irreversible operations such as welding. In disassembly, objectives such as handling complexity and part accessibility, as well as uncertainties such as part condition due to prolonged usage, have to be considered. Despite active research in disassembly sequence planning, there is a lack of intuitive knowledge-sharing platforms that collaborate with the human operator to perform real-time disassembly. Based on product information obtained automatically from the CAD model, the proposed research takes human feedback into account to generate Pareto-optimal sequences in real time. The expected research outcome is a validated AR-assisted disassembly planning and guidance system that allows users to perform disassembly efficiently.
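The Pareto-optimality idea behind the sequence generation can be illustrated with a small filtering step: given candidate sequences scored on several objectives to be minimised, keep only the non-dominated ones. This is a generic sketch of Pareto filtering, not the actual ARDIS algorithm; the objective names and candidate sequences are invented for illustration.

```python
# Illustrative Pareto filter over candidate disassembly sequences.
# Each sequence is scored on objectives to be minimised, e.g.
# (handling complexity, time, tool changes) -- assumed names.

def dominates(a, b):
    """True if objective vector a dominates b: a <= b everywhere, < somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """candidates: dict of sequence -> objective tuple; returns non-dominated ones."""
    return {
        seq: obj
        for seq, obj in candidates.items()
        if not any(dominates(other, obj) for other in candidates.values())
    }

candidates = {
    "A-B-C": (3, 10, 2),
    "A-C-B": (2, 12, 2),
    "B-A-C": (3, 11, 3),   # dominated by "A-B-C" on every objective
}
front = pareto_front(candidates)
# front keeps "A-B-C" and "A-C-B"; "B-A-C" is dropped
```

In an interactive setting, human feedback could adjust the objective weights or prune candidates before the filter is re-run.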

IoT Manufacturing Personal Assistant System

PhD Candidate: HE Fan

This project aims to develop a personal assistant system that can help on-site personnel perform manufacturing workflows more efficiently. It features activity recognition based on information obtained from IoT sensors embedded in the environment, the operator, and the manufacturing equipment. With this information, the system learns the key actions of skilled workers and assists unskilled workers with AR information.
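The learn-from-skilled-workers step above can be sketched as a simple classifier over sensor feature windows: average the feature vectors recorded for each demonstrated action, then label a new window by its nearest centroid. This is a minimal sketch under assumed feature names and actions, not the project's actual recognition pipeline.

```python
# Minimal nearest-centroid activity recogniser over fixed-length
# sensor feature windows. Actions and features (e.g. wrist
# acceleration, tool torque) are illustrative assumptions.
import math

def centroid(windows):
    """Element-wise mean of equal-length feature vectors."""
    n = len(windows)
    return [sum(col) / n for col in zip(*windows)]

def train(labelled_windows):
    """labelled_windows: dict of action -> list of feature vectors."""
    return {action: centroid(ws) for action, ws in labelled_windows.items()}

def recognise(model, window):
    """Return the action whose centroid is closest to the observed window."""
    return min(model, key=lambda a: math.dist(model[a], window))

skilled_demos = {
    "tighten_bolt": [[0.9, 0.1], [1.0, 0.2]],   # assumed (wrist accel, torque)
    "fetch_part":   [[0.1, 0.9], [0.2, 1.0]],
}
model = train(skilled_demos)
action = recognise(model, [0.95, 0.15])
# action == "tighten_bolt"
```

Once the current action is recognised, the corresponding AR guidance for the next step could be pushed to the unskilled worker's display.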

Augmented Reality Operating System

Researcher: Dr Andrew Yew

In this research, a framework for implementing smart environments containing smart objects with augmented reality user interfaces has been developed. Smart objects are real and virtual objects that have processing and networking capabilities. The augmented user interfaces overlay the objects and are registered to physical locations in the environment. Smart objects are interoperable even if they are built on different platforms and communicate over different networks. Methods for viewing and interacting with the augmented user interfaces, using smartphones and head-mounted devices with bare-hand interaction, have also been developed in this research.
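The interoperability idea can be sketched as a registry that hides each object's native platform behind a common command interface, with an AR anchor pose attached to each entry. All names, fields, and commands here are hypothetical illustrations, not the framework's actual API.

```python
# Hypothetical smart-object registry: each object publishes an ID,
# an AR anchor pose for interface registration, and a platform-neutral
# set of commands. Field and command names are assumptions.
registry = {}

def register(obj_id, anchor_pose, commands):
    """commands: dict of command name -> callable wrapping the native platform."""
    registry[obj_id] = {"anchor": anchor_pose, "commands": commands}

def invoke(obj_id, command, *args):
    """Dispatch a command without knowing the object's platform or network."""
    return registry[obj_id]["commands"][command](*args)

# Objects built on different platforms expose the same "toggle" command
register("lamp-1", anchor_pose=(1.0, 0.5, 2.0), commands={"toggle": lambda: "lamp toggled"})
register("fan-1",  anchor_pose=(3.0, 0.5, 1.0), commands={"toggle": lambda: "fan toggled"})
result = invoke("lamp-1", "toggle")
# result == "lamp toggled"
```

An AR client would use the stored anchor pose to place each object's overlay interface at its physical location, and `invoke` to act on user input.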

Ubiquitous Augmented Reality

Researchers: Dr Andrew Yew, Dr WANG Xin

An extension of the previous work on the Augmented Reality Operating System, this research aims to build advanced capabilities, namely natural multimodal interaction, visual programming, and distributed computing, to enable rich and powerful applications that can be operated intuitively and embedded in smart environments. Multimodal interaction allows multiple sensors, interaction methods, and display devices to be used to interact with applications, so that the most natural methods can be applied for different applications and users. Visual programming gives users more power to create applications and control the smart environment. Distributed computing through the smart objects in the environment allows more powerful functions, such as physics simulations and photorealistic animations, to be used in ubiquitous augmented reality applications.

Augmented Reality Robot Programming by Demonstration

Researcher: Dr Andrew Yew

In this research, a robot workcell that enables robot programming to be carried out quickly and intuitively through augmented reality has been developed. Robot tasks, such as welding and pick-and-place, are defined using a handheld pointer to draw 3D paths or point at workpieces in the workcell. The reachability and manipulability of the robot are visualized as tasks are defined, through a virtual robot overlaying the real robot. Because the interface is essentially point-and-click, with collision-free path planning performed by software, the mental load on the user in programming robot tasks is significantly reduced.
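The reachability visualization above can be illustrated with a toy check: for a 2-link planar arm, a drawn path point is reachable iff its distance from the base lies between |L1 - L2| and L1 + L2. This is a simplified sketch under assumed link lengths, not the real workcell's kinematics, where a full inverse-kinematics solve over all joints would be needed.

```python
# Toy reachability check for a 2-link planar arm. Link lengths and
# the sampled path are illustrative assumptions.
import math

L1, L2 = 0.5, 0.4  # link lengths in metres (assumed)

def reachable(x, y):
    """A point is reachable iff |L1 - L2| <= distance-from-base <= L1 + L2."""
    r = math.hypot(x, y)
    return abs(L1 - L2) <= r <= L1 + L2

# A hand-drawn weld path, as 2D points sampled in the robot base frame
path = [(0.3, 0.2), (0.6, 0.4), (1.0, 0.3)]
results = [reachable(x, y) for x, y in path]
# results == [True, True, False]: the last point lies beyond the 0.9 m reach
```

In the AR interface, unreachable points along the drawn path could be highlighted on the virtual robot overlay as the user defines the task.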

Augmented Reality Robot Programming for Disassembly

PhD Candidate: GONG Leiliang

In this study, a tablet-based augmented reality (AR) interface is proposed to facilitate human-robot interaction (HRI) during the robotic disassembly process. With this interface, the operator is able to analyse the product structure from an exocentric view provided by the tablet camera, plan a proper disassembly process, and send commands to a virtual robot for disassembly task simulation before real execution. The human's knowledge, including knowledge of the disassembly sequence, can assist the robot in disassembling properly.