Implementation of Fuzzy Control Algorithm in Two-Wheeled Differential Drive Platform

Chen Guoyi

NUS Control and Simulation Laboratory, Singapore 117583

Abstract

Designing and developing Artificial Intelligence controllers on separate, dedicated chips has many advantages. This report reviews the development of a real-time fuzzy logic controller for optimizing locomotion control of a two-wheeled differential drive platform using an Arduino Uno board. On the Raspberry Pi board, fuzzy sets are used to optimize color recognition, enabling the color sensor to correctly recognize colors at long distances, across a wide range of light intensities, and with high fault tolerance.

Keywords: fuzzy logic; fuzzy inference system; fuzzification; defuzzification; PID algorithm.

Figure: Side view of ALEX, designed and assembled by Team ArduiNUS. Owing to the strengths of its algorithm and hardware design, this robot is preserved in the Digital Systems & Applications Laboratory for future engineers to study.

Introduction

This First Year Project (FYP) Report encapsulates my academic journey throughout my first year at the National University of Singapore and aims to illustrate a careful integration of concepts from modules ME3243/EE3305 Robotics System Design, EE4305 Fuzzy/Neural Systems for Intelligent Robotics, and CG2111A Engineering Principles and Practice. Through the practical application of these modules on the Arduino Uno & Raspberry Pi (RPi) robotic platform (referred to as robot ALEX in module CG2111A), I have engineered a rescue robot governed by fuzzy control algorithms.

In Module CG2111A, the rescue site is conceptualized as a Maze, where neither the rescue robot nor the human operator has prior knowledge of the Maze’s terrain; they can only identify the terrain through an onboard LiDAR system. The victim is represented by a green or red cylindrical object that can appear anywhere in the Maze. Since cylinders of other colors are also present in the Maze, the robot needs to determine the color of each cylinder to confirm whether it represents a victim.

The fuzzy control algorithm optimizes the robot’s locomotion and ability to identify victims. Chapters 1, 2, and 3 focus on locomotion, and Chapter 4 analyzes color recognition. Chapter 1 uses kinematic models and constraints to analyze the locomotion of the two-wheel differential drive platform, represented by the ALEX robot configuration. Chapter 2 employs the Proportional–Integral–Derivative (PID) control algorithm by using the equation of motion derived in Chapter 1 to optimize the ALEX robot’s movement. Chapter 3 implements fuzzy control algorithms to refine the control accuracy of the PID algorithm. When the robot approaches a victim, it will utilize an optical color sensor to assess the victim’s condition. Chapter 4 illustrates the optimization of the color-recognition process using a fuzzy algorithm. The acknowledgment, Guoyi’s First Adventure in Robotics, is the final part of this paper. It will narrate my learning journey in my Year 1 at the National University of Singapore.

Chapter 1 Analysis of Locomotion

As shown in Figure 1.1, the ALEX robot controls its locomotion by adjusting the driving velocities of the two active wheels on either side. If the left and right active wheels have the same driving velocity, the robot stays static or travels in rectilinear motion; if they have different driving velocities (namely, differential velocity), the robot performs circular motion. Based on these motion characteristics, ALEX is a two-wheeled differential-drive robot. This research only examines the motion control system of two-wheeled differential robots represented by the ALEX configuration.

Before applying the fuzzy-PID theory to control ALEX, we first need to analyze its equation of motion to design its motion control system. The equation of motion consists of the forward equation of motion and the inverse equation of motion. The forward equation of motion calculates the movement velocity of the robot’s spin center point using known driving velocities of the left and right active wheels. In contrast, the inverse equation of motion calculates the driving velocity of the left and right active wheels using the known movement velocity of the robot’s spin center point. This chapter will analyze the solutions to the forward and inverse equation of motion of ALEX. In Chapter 2, the PID control theory and the inverse equation of motion are applied to design the robot’s motion control system. Chapter 3 applies the forward equation of motion and the fuzzy control theory in tuning the motion control system.

Prior to solving the equation of motion, we first need to establish the Global Reference Frame ξI and the Robot Local Reference Frame ξR, as shown in Figure 1.2, to define the parameters of these equations.

1.1 Establishing the Global Reference Frame

In the Global Reference Frame, robotic motion can be regarded as a rigid body’s motion on a two-dimensional plane, which can be decomposed into a translational movement with two degrees of freedom (DOFs) and a rotary motion with one DOF. Therefore, the robotic Global Reference Frame has three theoretical degrees of freedom (DOFworkspace). The robot’s locational parameters in the Global Reference Frame can be set as a vector \xi_I = [x_I \ y_I  \ \theta_I]^T whose dimensions are the same as its degrees of freedom. Then, the velocity parameter of the robot in the Global Reference Frame can be set as the vector \dot{\xi}_I = [\dot{x}_I \ \dot{y}_I  \ \dot{\theta}_I]^T.

1.2 Establishing the Robot Local Reference Frame

The Robot Local Reference Frame ξR is established by taking the spin center point of the robot as the origin. To satisfy the right-hand rule, the forward motion direction of the robot is defined as the positive direction of the XR axis, the direction perpendicular to it on the robot’s left is the positive direction of the YR axis, and the ZR axis is perpendicular to the O-XY plane, pointing outward. In summary, the velocity \dot{\xi}_R of the robot’s spin center expressed in the Robot Local Reference Frame can be obtained by multiplying the velocity vector expressed in the Global Reference Frame, \dot{\xi}_I, by the orthogonal rotation matrix R(\theta):
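\dot{\xi}_R = R(\theta)\,\dot{\xi}_I, \qquad R(\theta) = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (1)

where \theta is the heading angle of the Robot Local Reference Frame relative to the Global Reference Frame.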

By now, we have obtained the equation of motion shown in Equation (1). However, this equation of motion contains parameters of the Global Reference Frame that the robot cannot directly measure, so we are unable to design the robot’s motion control system using Equation (1) alone. In reality, ALEX can only control the driving velocities of the left and right active wheels, namely v_r and v_l. In addition, as the active wheels of ALEX are unable to slide along the direction perpendicular to their rolling direction, the robot is incapable of immediately changing its motion along certain directions by altering the driving velocities of the active wheels. This means the robot’s DOF in motion is restricted by its mechanical structure. Therefore, to determine the dimensions of the equation of motion, we need to find the actual DOF of the moving robot, which equals the dimension of the equation of motion (namely, the differential degree of freedom, dDOF).

1.3 Analysis of the Dimensions of Equation of Motion

As noted above, the robot’s dDOF is not equal to its DOFworkspace due to the constraint on the sliding of the active wheels. The following equation expresses the relationship between the constraint and the DOF:
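\mathrm{dDOF} = \mathrm{DOF}_{workspace} - C_f \qquad (2)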

where Cf is the sliding constraint degree, denoting the number of directions along which the active wheels are subject to a sliding constraint. The active wheel coordinate frame ξW for ALEX is constructed as shown in Figure 1.3.

As shown, the distance between the rotation center point of the active wheel OW and the spin center point OC is a constant l, and the angle between the robot body and the wheel’s rolling direction is the constant α+β. Thus, the linear velocity of the active wheel vW can be expressed by:

Based on the constraints mentioned in Chapter 1.2, the velocity component of the wheel normal to its rolling direction can be expressed by:

As the sliding constraint of an active wheel during motion depends only on the wheel’s structure and its mounting position on the robot body, this parameter is constant. To calculate the sliding constraint of the active wheels more conveniently, we superimpose the reference frames ξI and ξR so that their three axes align and they share the same origin. At this point, the left wheel parameters are \alpha_l=-\frac{\pi}{2}, \beta_l=\pi, l_l=l, and the right wheel parameters are \alpha_r=\frac{\pi}{2}, \beta_r=0, l_r=l; then we have:
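\begin{bmatrix} \cos(\alpha_l+\beta_l) & \sin(\alpha_l+\beta_l) & l_l\sin\beta_l \\ \cos(\alpha_r+\beta_r) & \sin(\alpha_r+\beta_r) & l_r\sin\beta_r \end{bmatrix} \dot{\xi}_I = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} \dot{x}_I \\ \dot{y}_I \\ \dot{\theta}_I \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}

(this is the standard no-sliding constraint for a fixed wheel, [\cos(\alpha+\beta) \ \ \sin(\alpha+\beta) \ \ l\sin\beta]\,\dot{\xi}_R = 0, written for each wheel with the two frames aligned so that \dot{\xi}_R = \dot{\xi}_I).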

So we get \dot{y}_I=0. This means the robot body cannot move along the direction perpendicular to the rolling direction of the active wheels, i.e., the ALEX-structured robot has one degree of sliding constraint (Cf = 1). According to Equation (2), we then get dDOF = 3 - 1 = 2.

To sum up, the actual DOF can be determined as two by analyzing the constraint and DOF of ALEX. Hence, the equation of motion of the ALEX structured robot can be expressed as a two-dimensional vector.

1.4 Analytical Equation of Motion

The two-wheeled differential model of ALEX is shown in Figure 1.4, where the instantaneous center of rotation (ICR) is the steering center point, meaning that the robot performs only rotary motion around this point. When the left and right active wheels on the same rigid body have different velocities, the rigid body rotates as it moves. If the steering radius rC is not zero, the position of the rigid body also changes, so there is translational movement as well. Given the sliding constraint on the robot body, the translational movement of the robot can only take place along the rolling direction of the wheels (the XR axis direction) and not along the YR axis direction.

Therefore, the motion of every point in the Robot Local Reference Frame ξR can be described by the angular velocity and the linear velocity along the XR axis. Based on the parameters shown in Figure 1.4 and the sliding constraint of ALEX derived in Chapter 1.3, the spin center point velocity can be described by \dot{\xi}_R=[v_c \ w]^T. As the linear velocity directions of the left and right active wheels are the same as the XR axis and each linear velocity is perpendicular to its steering radius, the ICR is always located on the line connecting points L and R, and its specific location on line LR is determined by the left and right active wheel velocities [v_l \ v_r]. With v=w \cdot r, when w is constant, v is proportional to r. Hence, the velocities of points L, R, and OC can be expressed as:
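v_l = w\,(r_C - l), \qquad v_r = w\,(r_C + l), \qquad v_c = w\,r_C \qquad (6)

where r_C is taken as the signed distance from OC to the ICR along the YR axis (positive when the ICR lies to the robot’s left).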

By rearranging Equation (6), the angular velocity w can be expressed as:
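w = \frac{v_r - v_l}{2l} \qquad (7)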

Further, the relationship between the linear velocity vc of the spin center point and the left and right active wheel velocities [v_l \ v_r] can be calculated from Equations (6) and (7):
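v_c = \frac{v_r + v_l}{2} \qquad (8)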

1.5 Motion Equations and their Application

The Forward Motion Equation calculates the velocity of the spin center point OC from the velocities of the left and right driving wheels. Combining Equation (7) and Equation (8), the forward motion equation can be expressed as:
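\begin{bmatrix} v_c \\ w \end{bmatrix} = \begin{bmatrix} \frac{v_r + v_l}{2} \\ \frac{v_r - v_l}{2l} \end{bmatrix} \qquad (9)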

In a real-world application, an encoder is mounted on each driving wheel of the differential robot, and the linear velocities [v_l \ v_r] of the two driving wheels can be calculated from the encoder counts. The velocity of the robot’s OC can then be calculated by Equation (9).

The Inverse Motion Equation decomposes the velocity of the spin center point OC into left and right driving wheel velocities, which can be expressed as:
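\begin{bmatrix} v_r \\ v_l \end{bmatrix} = \begin{bmatrix} v_c + w\,l \\ v_c - w\,l \end{bmatrix} \qquad (10)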

The inverse motion equation is used to control robot OC to operate at the expected velocity [v_c \ w]^T; that is, the theoretical velocities of the two driving wheels are calculated by Equation (10). In the next chapter, we will utilize the wheel encoder and inverse motion equation obtained in this chapter to optimize the robot’s motion along the expected route in combination with the PID control theory.
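A minimal sketch of Equations (9) and (10) in Python is given below; HALF_WHEEL_BASE_M is an assumed name and value for the half wheel base l, not part of the actual ALEX firmware.

# Sketch of the forward and inverse motion equations; the constant below is
# a hypothetical value for the half wheel base l (in meters).
HALF_WHEEL_BASE_M = 0.065

def forward_kinematics(v_left, v_right):
    """Equation (9): wheel speeds -> (v_c, w) of the spin center O_C."""
    v_c = (v_right + v_left) / 2.0
    w = (v_right - v_left) / (2.0 * HALF_WHEEL_BASE_M)
    return v_c, w

def inverse_kinematics(v_c, w):
    """Equation (10): desired (v_c, w) -> required wheel speeds (v_l, v_r)."""
    v_left = v_c - w * HALF_WHEEL_BASE_M
    v_right = v_c + w * HALF_WHEEL_BASE_M
    return v_left, v_right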

Chapter 2 Implementation of PID Control Algorithm

The drive platform of the ALEX robot incorporates two DRV-8833 motor drivers, which drive Permanent Magnet DC (PMDC) motors, each equipped with an encoder. In this chapter, we first design a PID algorithm to optimize the control of the motors and then leverage the equation of motion derived in Chapter 1 to build an autopilot system that optimizes ALEX’s locomotion. To improve efficiency, we will initially tune the PID parameters within the Robot Operating System (ROS) environment. Subsequently, we will fine-tune them under real-world operating conditions to ensure optimal system performance and adaptability.

2.1 Drive Platform Architecture

The drive motors are Permanent Magnet DC (PMDC) motors (see Figure 2.1(a)). As the power output of the Arduino Uno is insufficient to drive these motors directly, the DRV-8833 motor driver carrier must be used to channel power from the external battery pack to the motors. The Arduino is connected to the motor driver, and the input power to the motors can be regulated by the Arduino via the Pulse Width Modulation (PWM) signals supplied to those connections.

In actual operation, we found that the motors could not drive ALEX at low torque output, indicating that the mass of ALEX is large relative to the torque the motors provide. Therefore, when we change the motion state of ALEX, especially when starting from a stationary state, we need to ensure that the motors can provide sufficient torque to counteract the load torque. According to the motor characteristics shown in Figure 2.1(b), this requires increasing the input power of the motors by raising the input voltage, thereby enabling the motors to generate a larger torque and avoid stalling.

When ALEX transitions from a stationary state to the desired speed, the input power is initially increased to prevent motor stalling. However, this could potentially cause ALEX to overspeed and overshoot the setpoint. This is particularly concerning because ALEX’s mapping and localization, which rely on the SLAM algorithm, require the robot to move with a high degree of steadiness. Abrupt or frequent changes in motion are problematic. Therefore, it is crucial that we precisely modulate ALEX’s speed. To accomplish this, we first designed a PID control algorithm for a set of motors and encoders. Then, using this initial PID control algorithm as a foundation, we designed an autopilot system to optimize the robot’s motion.

2.2 PID Algorithm Enhanced PMDC Motor Control System

Figure 2.2 presents a block diagram of the PID controller in a feedback loop configuration. In this system, ‘setEcd’ represents the target rotation of the motor, quantified in encoder counts, and ‘rcdEcd’ represents the motor’s current rotation. The error, denoted by ‘errEcd’ (or e(t) in Figure 2.2), is computed as the difference between ‘setEcd’ and ‘rcdEcd’.


The PID controller computes this error at discrete time intervals t’ and applies a corrective output u(t), which is the sum of the proportional (P), integral (I), and derivative (D) terms, expressed mathematically as:
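u(t) = k_p\,e(t) + k_i \int_{0}^{t} e(\tau)\,d\tau + k_d\,\frac{de(t)}{dt}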

The proportional (P) term responds to the instantaneous error of the system, the integral (I) term to the cumulative error, and the derivative (D) term to the rate of change of the error.

The tuning of the PID controller parameters (kp, ki, kd) is crucial, as it provides the desired correction and thereby achieves the target motor rotation. Finally, PWM values Vpwm are generated, enabling the Arduino to regulate the speed of the PMDC motors. (The robot is operated only within the Digital Systems and Applications Laboratory environment, because increasing the input power of the motors by raising the input voltage may lead to motor overheating or even damage.)
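A minimal discrete-time sketch of this controller is given below; the names set_ecd and rcd_ecd, the sampling period dt, and the PWM saturation limits are assumptions for illustration rather than the actual ALEX firmware API.

class PIDController:
    """Discrete PID loop: encoder error (setEcd - rcdEcd) -> saturated PWM value."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.err_sum = 0.0   # accumulated error for the I term
        self.prev_err = 0.0  # previous error for the D term

    def update(self, set_ecd, rcd_ecd):
        err = set_ecd - rcd_ecd                    # errEcd
        self.err_sum += err * self.dt              # integral of the error
        d_err = (err - self.prev_err) / self.dt    # rate of change of the error
        self.prev_err = err
        u = self.kp * err + self.ki * self.err_sum + self.kd * d_err
        # Saturate to the 8-bit PWM range used by analogWrite-style outputs.
        return int(max(-255, min(255, u)))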

2.3 PID Algorithm Enhanced Locomotion

The motion equations of ALEX analyzed in Chapter 1 and the PID Algorithm Enhanced PMDC Motor Control System designed in the previous section are now used to fine-tune the locomotion of ALEX.

Figure 2.3 illustrates ALEX searching for victims within a maze. The red points in the figure represent the potential locations of victims and the optimal stopping points (setpoints). At each such point, the robot’s center of rotation OC stops so that the robot can complete a turn or recognize a victim’s color. Since we have requirements on the robot’s orientation during turning and color recognition, we cannot let the robot move in a circular motion as shown in Figure 1.4. Moreover, when the robot moves to a setpoint, it should stop precisely at the setpoint without overshooting. To satisfy these two conditions, the optimal motion trajectory of the robot is the gray dashed line shown in Figure 2.3.

The robot should first rotate in place (with the ICR coinciding with the OC point) to face the setpoint and then move in a straight line. Therefore, we can control the robot’s turning and linear motion as two separate motion control systems: a steering control system and a forward control system. Decoupling these systems often leads to better locomotion stability, as it reduces the chance of instabilities arising from interactions between forward and rotational motion, especially in high-speed or highly dynamic environments. For ALEX, optimal performance can be achieved by tuning the parameters of the forward and steering control systems independently. For example, in a mobile robot it can be advantageous to slow down (reduce forward speed) while turning (increasing rotational speed) to prevent tipping over.

2.3.1 Steering Control

When the robot revolves around its center of rotation OC, its linear velocity vC is equal to 0. Therefore, by substituting vC = 0 into the inverse motion equation (10), we can derive the relationship between the angular velocity of rotation and the velocities of the left drive wheel vl and the right drive wheel vr:
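v_r = w\,l, \qquad v_l = -w\,l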

Thus, the driving speeds of the left and right drive wheels are equal in magnitude but opposite in direction, and the common speed magnitude is related to the angular velocity through the half wheel base: |v_l| = |v_r| = |w|\,l.

The number of rotations required for both wheels should also be equal. Therefore, after ensuring that the rotation speeds of both wheels are the same but in opposite directions, we can simply use the encoder of either the left or the right wheel to set up the PID control system.

2.3.2 Forward Control

When the robot moves in a straight line, the angular velocity w of rotation around the center point OC is 0. Therefore, by substituting w = 0 into the inverse motion equation (10), we can derive the relationship between the linear velocity of point OC and the velocities of the left drive wheel vl and the right drive wheel vr:
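v_l = v_r = v_c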

The number of rotations required for both wheels should also be equal. Therefore, after ensuring that the rotation speeds and directions of both wheels are the same, we can simply use the encoder of either the left or the right wheel to set up the PID control system.

2.4 Autopilot System

The relative distances between the robot’s center of rotation OC and the setpoints OLF, OL, OLB, OF, OB, ORF, OR, and ORB, along with the robot’s initial angle of motion with respect to each setpoint, are pre-stored in the program’s global variables in the form of encoder counts. A human operator can command ALEX to move to any setpoint, and ALEX will engage the Steering Control and Forward Control systems to do so. During this motion, ALEX utilizes the PID Algorithm Enhanced Locomotion described above for autonomous driving.

In Autopilot mode, the robot regularly checks whether the values from the left and right encoders are equal. If the discrepancy between the encoder values exceeds a defined limit, the robot disengages autonomous driving and is manually taken over by the human operator.
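A sketch of this consistency check is given below; read_left_encoder, read_right_encoder, ENCODER_DISCREPANCY_LIMIT, and hand_over_to_operator are hypothetical names used only for illustration.

ENCODER_DISCREPANCY_LIMIT = 40  # counts; assumed threshold for illustration

def autopilot_step(read_left_encoder, read_right_encoder, hand_over_to_operator):
    """Return True while autopilot stays engaged, False once control is handed over."""
    left = read_left_encoder()
    right = read_right_encoder()
    if abs(left - right) > ENCODER_DISCREPANCY_LIMIT:
        # Discrepancy too large (e.g. uneven terrain): disengage autonomous driving.
        hand_over_to_operator()
        return False
    return True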

2.5 Tuning PID Parameters in ROS Environment

To EE3305 students: The robot’s controller in the Gazebo environment uses a single set of PID parameters, which govern both the Forward and Steering controllers. Their coupling makes PID tuning challenging. However, a method has been developed to decouple these controllers, resulting in individual PID parameter sets for each. Details on this decoupling method can be found in the EE3305 Final Report: Tuning Strategy of PID Parameters for Coupled PID Control Systems and its Applications.

To determine the optimal parameters (kp, ki, kd) of PID efficiently, we migrated the parameters of the real-world environment into the ROS simulation environment. Figure 2.4 (a) and (b) respectively demonstrate the debugging of the PID parameters for the Steering Control and Forward Control systems using the ROS simulation environment.

2.5.1 Tuning Methods

Tuning methods are designed to tune a PID controller by adjusting kp, ki and kd parameters. In this project, PID controllers are tuned using a new method adapted from the Manual Tuning Method and the Ziegler–Nichols Method.

2.5.1.1 Manual Tuning Method

The Manual Tuning Method is based on the effects of increasing or decreasing the kp, ki, and kd gains separately. By analyzing the error curve plotted with MatPlot, the Manual Tuning Method can easily be applied to tune the parameters according to Table 2-1 below:

The Manual Tuning Method can be relatively time-consuming because it requires increasing or decreasing each parameter independently multiple times to confirm that it is optimal. Furthermore, the tuning process is subjective and cannot be quantified. Hence, we used another heuristic tuning method called the Ziegler–Nichols method, introduced by John G. Ziegler and Nathaniel B. Nichols in the 1940s.

2.5.1.2 Ziegler–Nichols Method

The Ziegler–Nichols Method is performed by first setting the P, I, and D gains to zero. The P gain is then increased from zero until it reaches the ultimate gain Ku, at which the control loop output exhibits stable and consistent oscillations. The oscillation period Pu, together with Ku, is then used to set the P, I, and D gains depending on the type of controller used and the desired behavior. For this project, the gains for P, PI, and PID control systems can be set according to Table 2-2.
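As a sketch of how such a gain table is applied in code, the snippet below uses the classic Ziegler–Nichols coefficients computed from the measured Ku and Pu; the coefficients in Table 2-2 may differ slightly.

def ziegler_nichols(ku, pu, controller="PID"):
    """Classic Ziegler-Nichols gains from the ultimate gain Ku and period Pu."""
    if controller == "P":
        return {"kp": 0.5 * ku, "ki": 0.0, "kd": 0.0}
    if controller == "PI":
        kp = 0.45 * ku
        return {"kp": kp, "ki": 1.2 * kp / pu, "kd": 0.0}
    # Classic PID row: Kp = 0.6*Ku, Ti = Pu/2, Td = Pu/8.
    kp = 0.6 * ku
    return {"kp": kp, "ki": 2.0 * kp / pu, "kd": kp * pu / 8.0}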

2.5.1.3 The New Tuning Method

Although the Ziegler–Nichols Method is well-established, it is based on trial and error and cannot be applied in every scenario. In this project, a new tuning method based on the Manual Tuning Method and the Ziegler–Nichols Method is used to implement an optimized PID controller:

  1. Increase Kp until the forward error oscillates.
  2. Halve Kp; this approximately yields a quarter-amplitude-decay response.
  3. Increase Ki until the offset is corrected (there is not much steady-state error to begin with).
  4. Increase Kd if required, but not too much, because the derivative term is susceptible to noise.

The forward control and steering control systems can be tuned by six parameters listed in Table 2-3 below:

2.5.2 Tuning Process

We then applied the new tuning method to tune the forward control system.

Once the forward controller’s tuning procedure was completed, the new tuning method was used to tune the steering control system.

Applying the PID parameters obtained in the ROS simulation environment to the real-world robot control has resulted in quicker response times when the robot reaches the setpoint, smoother velocity changes, and prevention of overshooting the designated point. However, in real-world rescue environments, the robot often encounters uneven maze surfaces, leading to a large discrepancy between the left and right encoder readings, which in turn disconnects autonomous driving. To enhance the robot’s obstacle-crossing ability and reduce the frequency of disconnections, a PID controller optimized using a fuzzy algorithm will be designed and applied to improve ALEX robot’s movements, which will be covered in Chapter 3.

Chapter 3 Fuzzification of Autopilot System

Uneven terrain is a significant factor leading to disconnection of the robot’s autopilot system. Small hills within the maze may cause the robot to deviate from its set direction of motion or leave the drive system without enough torque to carry the robot over the hills. When the robot encounters these situations, it often triggers the autopilot disconnection conditions designed in Chapter 2.4. This chapter fuzzifies the Autopilot system designed in Chapter 2, using a fuzzy PID control system to overcome obstacles and avoid the need for manual operation during obstacle crossing.

Fuzzification of the Autopilot System means using a fuzzy controller for online self-tuning of the PID controller’s parameters. The process is as follows: first, establish the fuzzy relationship between the three PID parameters kp, ki, kd and the error E and the rate of change of error EC. Then, by continuously measuring the error and its rate of change during operation, adjust the three parameters online according to the principles of fuzzy control, so as to meet the different requirements on the controller parameters at different errors and error rates. Figure 3.1 presents a block diagram of the Fuzzy-PID controller in a feedback loop configuration:

To find the fuzzy relationship between the three PID parameters and the error and rate of change of error, we first define their membership functions.

3.1 Membership Functions

3.1.1 Input Functions

This paper defines seven fuzzy sets on the fuzzy domain of each parameter: Negative Big (NB), Negative Medium (NM), Negative Small (NS), Zero (ZO), Positive Small (PS), Positive Medium (PM), and Positive Big (PB).
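As a sketch of how these seven sets can be evaluated in software, the snippet below places triangular membership functions on a normalized domain of [-3, 3]; the domain and breakpoints are illustrative assumptions rather than the exact membership functions used for ALEX.

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

CENTERS = {"NB": -3, "NM": -2, "NS": -1, "ZO": 0, "PS": 1, "PM": 2, "PB": 3}

def fuzzify(x):
    """Membership degree of a normalized input (E or EC) in each of the seven sets."""
    x = max(-3.0, min(3.0, x))  # clamp to the assumed universe of discourse
    return {label: tri(x, c - 1, c, c + 1) for label, c in CENTERS.items()}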

3.1.2 Output Functions

3.2 Implementation of Fuzzy Rules

When the robot is climbing a slope, the drive motors need to provide more torque to get over the hill because the robot must work against gravity. Because kinetic energy is partially converted into gravitational potential energy, the robot slows down when going uphill.

When the robot is coming down a slope, its gravitational potential energy is converted into kinetic energy, which can cause the robot to exceed its speed limit. To control the speed and prevent overshooting, the drive motors need to decelerate the robot.

Based on the above observations, we formulated the following fuzzy rules to generate better PID parameters for controlling the robot’s movement.
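The sketch below shows one way such rules can be evaluated online to adjust kp (the same pattern applies to ki and kd); the rule entries, output scaling, and weighted-average defuzzification are illustrative assumptions rather than the exact rule table used for ALEX, and tri/fuzzify mirror the sketch in Section 3.1.1 so the example runs on its own.

def tri(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

CENTERS = {"NB": -3, "NM": -2, "NS": -1, "ZO": 0, "PS": 1, "PM": 2, "PB": 3}

def fuzzify(x):
    x = max(-3.0, min(3.0, x))
    return {label: tri(x, c - 1, c, c + 1) for label, c in CENTERS.items()}

# Illustrative excerpt of a rule table: (E label, EC label) -> output label for delta-kp.
RULE_KP = {
    ("NB", "NB"): "PB", ("NB", "ZO"): "PM", ("NS", "NS"): "PS",
    ("ZO", "ZO"): "ZO", ("PS", "PS"): "NS", ("PB", "ZO"): "NM",
    ("PB", "PB"): "NB",
}

def delta_kp(e, ec, scale=0.01):
    """Weighted-average defuzzification of the fired rules: returns the kp adjustment."""
    mu_e, mu_ec = fuzzify(e), fuzzify(ec)
    num = den = 0.0
    for (label_e, label_ec), label_out in RULE_KP.items():
        strength = min(mu_e[label_e], mu_ec[label_ec])   # AND of the two antecedents
        num += strength * CENTERS[label_out]
        den += strength
    return scale * num / den if den > 0.0 else 0.0

# Online self-tuning: kp = kp0 + delta_kp(E, EC), and likewise for ki and kd.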

3.3 Defuzzification of Fuzzy Controllers in Simulink®

This paper uses the fuzzy logic editor provided with MATLAB/Simulink to defuzzify the designed fuzzy controllers. The following image is a screenshot of the Simulink software. At the instant shown, both the error and the rate of change of error are 0; after defuzzification, the PID parameters generated by the fuzzy controller are also close to 0.

Chapter 4 Fuzzification of Color Detection Algorithm

There are inherent difficulties with the color sensor that cannot be completely controlled. Factors such as lighting and shadows, and even the quantum electrical effects in the sensor chip, make it virtually impossible to ensure that the color being analyzed remains constant when the robot is detecting the color of the victim. Since color is greatly influenced by many potential factors, fuzziness is incorporated into the system to address the uncertainty problem of color object classification.

Given these inherent challenges in color detection—factors like lighting, shadows, and quantum electrical effects causing uncertainty—fuzziness, or fuzzy logic, proves to be a valuable asset. Fuzzy logic mimics human decision-making by considering degrees of truth, rather than absolute binary values. This allows the color detection system to express uncertainty, effectively stating that a color is, for instance, “75% similar to red” instead of being strictly “red” or “not red”. This is accomplished by defining fuzzy sets for each color, which include possible color variations and their degree of membership to the original color. When a color is detected, the system determines its membership to these sets. To translate these fuzzy values back to a singular, definitive color decision—a process known as defuzzification—the system typically selects the color with the highest membership value. Therefore, the incorporation of fuzziness into the system is a logical way to address the uncertainty problem in color identification in robotics.

4.1 Color Detector Architecture

The TCS3200 is a color sensor capable of detecting and measuring the intensity of red, green, and blue (RGB) light. It operates via a grid of photodiodes with color filters, converting light into an electrical signal proportional to the light intensity. The sensor houses an 8×8 array of photodiodes, with 16 photodiodes for each of the red, green, blue, and clear (no filter) filters, and uses an internal oscillator to generate a square wave on its output (OUT) pin whose frequency is proportional to the detected light intensity. When the sensor is integrated with the Raspberry Pi via the onboard General Purpose Input Output (GPIO) pins, with the wiring architecture demonstrated in Figure 4.1 below, the output frequency corresponding to each color’s intensity can be measured by connecting the OUT pin to a Raspberry Pi GPIO pin and timing the square wave’s edges, thereby obtaining the intensity values for color recognition.

Using these properties of the TCS3200 color sensor, we can run a Python script on the Raspberry Pi to obtain the reflected light intensity for each of the R, G, and B channels. The pseudocode below shows the flow of color detection:
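A minimal sketch of this flow using the RPi.GPIO library is shown below; the BCM pin numbers for S2, S3, and OUT are assumptions and must be matched to the actual wiring in Figure 4.1, and the pulse count is an arbitrary choice.

import time
import RPi.GPIO as GPIO

S2, S3, OUT = 23, 24, 25                  # hypothetical BCM pin assignments
FILTERS = {"R": (GPIO.LOW, GPIO.LOW),     # TCS3200 filter selection via S2/S3
           "G": (GPIO.HIGH, GPIO.HIGH),
           "B": (GPIO.LOW, GPIO.HIGH)}

def setup():
    GPIO.setmode(GPIO.BCM)
    GPIO.setup([S2, S3], GPIO.OUT)
    GPIO.setup(OUT, GPIO.IN)

def read_intensity(color, pulses=50):
    """Select one color filter and return the OUT-pin frequency in Hz (higher = brighter)."""
    GPIO.output(S2, FILTERS[color][0])
    GPIO.output(S3, FILTERS[color][1])
    time.sleep(0.01)                      # let the filter selection settle
    start = time.time()
    for _ in range(pulses):               # time a fixed number of rising edges
        GPIO.wait_for_edge(OUT, GPIO.RISING, timeout=100)
    return pulses / (time.time() - start)

def read_rgb():
    return {color: read_intensity(color) for color in ("R", "G", "B")}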

From the light intensity of each color channel returned by the sensor, we can develop a color recognition algorithm based on fuzzy logic.

4.2 Fuzzified Color Detection

Through continuous calibration and adjustment of color data in the Digital Systems and Applications Laboratory, we obtained the following comparison table of colors and light intensity values xR/G/B at specific detection distances (from 2 cm to 8 cm). Additionally, we calculated the RGB values and fuzzy logic membership degrees for each color.

In Chapter 4.2.1, we will set up membership functions between light intensity and color. The horizontal axis represents light intensity under red, green, or blue filters, while the vertical axis represents membership degrees. The membership function of each color is represented by a triangle, in which the height of the triangle indicates the membership degree. The two points at the base of the triangle represent the minimum and maximum intensity thresholds for the color, while the point at the top signifies the RGB value at which the color intensity reaches the highest membership degree. The fill color of each triangle represents the color of the corresponding fuzzy membership function.

4.2.1 Membership Functions

The triangular membership function is used in this project because it is a common functional form in fuzzy logic and represents the transition between membership degrees well. The membership functions used here are based on experience and experimentation; in practical applications, the choice of membership function may vary depending on the specific application.

4.2.2 Defuzzification of Fuzzy Color Detection

Below is an example of a fuzzy rule set based on fuzzy logic membership degrees (see Table 4.2), used to determine the color category to which an RGB color value belongs. The fuzzy rule set has a total of 7 color categories, and the fuzzy rule for each category is determined by the membership degrees of the R, G, and B color components. For example, in the fuzzy rule for black, the membership degrees of R, G, and B are “Low, Low, Low,” indicating that all three color components have a “low” membership degree, so the color is judged to be black. The values of each membership degree can be modified according to the actual situation.

Based on the information provided in the table above, we can derive the following predicate logic for determining colors:
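A sketch of how such predicate rules can be evaluated in Python is shown below; the Low/Medium/High breakpoints and the rule entries are illustrative assumptions rather than the calibrated values of Tables 4.1 and 4.2.

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def grade(intensity):
    """Membership of one channel intensity in the Low/Medium/High fuzzy sets."""
    return {"Low": tri(intensity, -1, 0, 60),
            "Medium": tri(intensity, 40, 100, 160),
            "High": tri(intensity, 140, 255, 256)}

# Each rule maps a color to the required (R, G, B) membership labels, as in Table 4.2.
RULES = {"black": ("Low", "Low", "Low"),
         "red":   ("High", "Low", "Low"),
         "green": ("Low", "High", "Low"),
         "white": ("High", "High", "High")}

def classify(r, g, b):
    """Fire every rule with AND (min) and defuzzify by taking the strongest color."""
    mr, mg, mb = grade(r), grade(g), grade(b)
    scores = {color: min(mr[lr], mg[lg], mb[lb])
              for color, (lr, lg, lb) in RULES.items()}
    return max(scores, key=scores.get)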

4.3 Limitation of Fuzzified Color Detection

If we regard Table 4.2 as a 7×9 two-dimensional fuzzy logic matrix, we find that the rank of this matrix is 6, which is less than the number of colors (7) that the sensor needs to recognize. Upon inspection, we notice that the predicate logic conditions for determining red and white are the same, i.e., the R, G, and B membership degrees for red and white in the fuzzy rules are both “Medium, Medium, Medium.” If we judge colors based on the aforementioned fuzzy rules, it is therefore possible to confuse red with white. Fortunately, during the ALEX robot’s rescue operation it will not encounter white dummies, so we can use the fuzzy rules described above to determine colors in the laboratory environment. Because the detection distance is difficult to fix when the robot is sensing the colors of victims, the R, G, and B values detected by the TCS3200 sensor may change slightly, which could potentially lead to erroneous color detection results. By applying fuzzy logic to the system, strict color definitions are relaxed, and flexibility in the detection distance (2 cm - 8 cm) is incorporated by defining colors based on fuzzy sets and rules.

Appendix: Hardware Architecture

Acknowledgment

“What kind of robots will you design?”

Prof. Chang asked me this question in the interview for the Science & Technology Scholarship. With a strong desire to find the answer, I came to my first semester at NUS and enrolled in module EE3305/ME3243 at the suggestion of Mr. Shi Zhansen, a warm-hearted senior. This decision became the foundation for this First Year Project (FYP) report. I titled this acknowledgement “Guoyi’s First Adventure in Robotics,” a testament to the invaluable experiences gained throughout my research endeavor, nestled at the end of this report.

The senior recommended that I take module EE3305/ME3243 because he considered that ‘improvement should be your objective as you have a basic command of robotic knowledge.’ Before becoming a student at NUS, I had participated in RoboCup on behalf of my home country and had already applied the PID control algorithm to optimize a robot’s locomotion. But the PID tuning strategy I used previously failed to tune the parameters systematically and strategically; rather, it was based entirely on experience and guessing. The lectures delivered by Prof. Vadakkepat and Dr. Putra during module EE3305 introduced me to a more thoughtful tuning strategy, which I tested intensively with the ROS system. My academic report submitted for this module, Tuning Strategy of PID Parameters for Coupled PID Control Systems and its Applications, is a summary of that strategy.

“The curriculum (Computer Engineering) aims to bring real-world problems, solutions, and experiences into the university environment. ”

This is a quote from the official NUS website of my home course, Computer Engineering, and it also sums up my research process in my first semester. Specifically, I treated the mutual interference between the outputs of the coupled control systems in the ROS environment as a practical problem rather than ignoring it, and I received generous support from lecturers and teaching assistants during the research process. In particular, my discussions with Dr. Guo Haoren and Dr. Xian Yuanjie from the NUS Control & Simulation Laboratory helped me learn about the decoupling method based on analyzing the motion equations. On this basis, a tuning strategy for the coupled PID control systems was developed, in combination with my real-life driving experience.

After completing my first semester, I continued my exploration of the field of robotics. I had the privilege of accompanying Dr. Zhang Yang to the 2023 Computer Science Research Week held at the NUS School of Computing. This event gave me the opportunity to engage with renowned computer scientists from around the world. One scholar in the field of robot vision remarked that my solution in EE3305 bore similarities to fuzzy neural theory. Encouraged and assisted by these scholars, I began exploring fuzzy neural theory. As a freshman, I could only audit the EE4305 lectures and independently seek out related materials on fuzzy theory in the NUS library.

“…I am extremely enthusiastic to become an NUS student and a life-long learner under the Interdisciplinary Learning Scheme…”

My experience of self-studying fuzzy neural theory and of independently reading the literature for the first time made me appreciate how important the lectures and experiments carefully prepared by the professors are. Thanks to the lecture notes prepared by Prof. Xiang for module EE4305, the content he summarized significantly improved my efficiency in learning fuzzy control systems. After studying engineering linear algebra in the second semester, I revisited Prof. Chew Chee Meng’s lectures in module ME3243. I successfully derived the motion equations of the ALEX robot drive system following the examples he provided, and thus used a more rigorous mathematical method to decouple the robot drive control system. These laid a solid foundation for me to reproduce the fuzzy control system on the ALEX robot in the real environment of module CG2111A.

“My aim is to bring the knowledge I learned in university to solve real-world problems.”

Like the fuzzy-PID control system, my learning goals and the teaching objectives of my home course also form a closed loop. I have solved real-world problems by applying the knowledge and experience I gained from learning and thinking in my campus life. This FYP report was cited by the RoboCup national team in developing A Robust PID Line Patrol Algorithm Based on Four Optical Sensors. The successful completion of this paper would not have been possible without the help of Ms. Arcinas Camille Esther Walet, an academic writing tutor from Yale-NUS. She accompanied me through mock presentations, correcting and improving the grammatical and logical errors in my writing. I am grateful for her suggestions on the wording of this paper and for reviewing it from the perspective of a different professional background.

My experiences in taking modules EE3305, ME3243, EE4305, and CG2111A comprise my First Adventure in Robotics, which has strengthened my resolve in the discipline of robotics throughout this exploration. I was fortunate to gain a preliminary understanding of the subject I love in my freshman year and to begin learning the Core Modules according to my plan. I look forward to joining the Undergraduate Research Opportunities Programme and continuing to pursue the research I am interested in after accumulating further knowledge.

In the end, to answer the question Prof. Chang asked me when I graduated from high school, I realize I still have a long journey ahead. However, I am confident that three years of knowledge accumulation at NUS will provide a satisfactory answer.

路漫漫其修远兮,吾将上下而求索。

The way ahead is long; I see no ending, yet high and low I will search with my will unbending.



This work is licensed under CC BY-NC-ND 4.0

This research report has been submitted to and preprinted on arXiv.org. You may cite this work and download the PDF version from arXiv.org.

Warning: This project report has been submitted as the CG2111A Final Project Report. Plagiarism is an academic offense taken very seriously by the University, as stated in the NUS Code of Student Conduct.
