Since the birth of the first robot, robots have developed from low-level to high-level systems. First-generation robots were computer-controlled mechanical devices. Second-generation robots added sensing comparable to human force sensing, touch, hearing, and vision. Third-generation, intelligent robots acquire environmental information through a variety of sensors and use artificial intelligence for recognition, understanding, reasoning, and decision-making to complete their tasks. Beyond environment perception, therefore, intelligent robots also need strong capabilities for recognition, understanding, decision-making, and planning.
Localization is the basic step in autonomous robot navigation
Autonomous mobile robots are currently a hot spot and focus of development, especially in the field of service robots, where the operating scenes are relatively complex and robots must be capable of autonomous localization and navigation. Autonomous localization and navigation of robots can be reduced to three questions posed by MIT professor John J. Leonard and former University of Sydney professor Hugh Durrant-Whyte:
(1) Where am I?
(2) Where am I going?
(3) How should I go there?
The first question is robot localization: how to determine the robot's position in the current environment from the observed information. The latter two questions amount to establishing a goal and then planning a path to reach it. For an ordinary mobile robot the goal is a point, i.e. point-to-point navigation.
Localization is the basic link in the autonomous navigation of a mobile robot, and a problem the robot must solve to complete its tasks. The robot's localization mode depends mainly on the sensors used; common localization sensors for mobile robots include lidar, ultrasonic sensors, cameras, odometers, infrared sensors, and gyroscopes.
Robot positioning can be divided into absolute positioning and relative positioning
Correspondingly, robot localization technology falls into two broad classes: absolute positioning and relative positioning. Absolute positioning mainly uses navigation beacons, active or passive landmarks, map matching, or satellite navigation (GPS), and achieves higher accuracy. However, beacons and landmarks are costly to build and maintain, map matching is slow to process, and GPS can only be used outdoors, where its accuracy is still limited.
Computation methods for absolute positioning include the two-view angle (bearing) method, the two-view distance (range) method, and model-matching algorithms. Relative positioning determines the robot's current position by measuring the distance and direction travelled relative to an initial position, which is also known as dead reckoning. Commonly used sensors include the odometer and inertial navigation systems (rate gyros, accelerometers, etc.). The advantage of dead reckoning is that the robot's pose is computed from its own motion, without perceiving the external environment; the disadvantage is that drift error accumulates over time, so it is unsuitable for precise long-term positioning.
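As a concrete illustration, dead reckoning for a differential-drive robot can be sketched as below. This is a minimal model assuming two wheel encoders and a known wheel base; the function and parameter names are hypothetical, not from any particular robot framework.

```python
import math

def dead_reckon(pose, d_left, d_right, wheel_base):
    """Update an (x, y, theta) pose from incremental wheel travel.

    Simplified differential-drive model: encoder noise and wheel slip are
    ignored, which is exactly why drift accumulates over time in practice.
    """
    x, y, theta = pose
    d_center = (d_left + d_right) / 2.0          # distance moved by the robot center
    d_theta = (d_right - d_left) / wheel_base    # change in heading
    # integrate along the average heading over the step
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return (x, y, theta)
```

Each update depends only on the previous estimate, so any error in one step is carried into all later steps, illustrating the drift problem noted above.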
For indoor mobile robots with high autonomy requirements, model matching is the main localization method: the robot relies on its own sensors to obtain environmental information for localization and navigation. Such a robot involves five parts: environment perception, map matching, pose estimation, trajectory planning, and motion execution.
To move autonomously, the robot first performs global path planning, fitting the motion path with line segments, arcs, or spline curves to form a series of motion-curve segments. Then, following the planned path, local trajectory planning is carried out on each curve segment to generate a reference trajectory in real time. When the robot encounters an obstacle or an unexpected collision, the path and trajectory are replanned from the current position, and the regenerated reference motion state is fed to the robot's motion controller. Meanwhile, the sensors measure and identify environmental features; after feature extraction and matching against a prior environment map, the result is combined with odometer measurements through real-time data processing and the necessary information fusion. An effective pose-estimation algorithm then produces an accurate estimate of the current pose, which is compared with the reference input to form closed-loop control and localize the robot.
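The fusion step above can be sketched as blending the dead-reckoned prediction with a map-matched observation. The scalar `gain` below is a stand-in for a proper Kalman gain, and the function name is illustrative; this is a simplification of real pose-estimation pipelines, not a specific implementation.

```python
import math

def fuse_pose(pred, obs, gain):
    """Blend a dead-reckoned pose prediction with a map-matched observation.

    `gain` in [0, 1] weights the observation; 0 trusts odometry only,
    1 trusts the map match only. A Kalman filter would compute this
    weight from the uncertainties instead of fixing it.
    """
    x = (1.0 - gain) * pred[0] + gain * obs[0]
    y = (1.0 - gain) * pred[1] + gain * obs[1]
    # blend headings on the unit circle to avoid wrap-around artifacts
    s = (1.0 - gain) * math.sin(pred[2]) + gain * math.sin(obs[2])
    c = (1.0 - gain) * math.cos(pred[2]) + gain * math.cos(obs[2])
    return (x, y, math.atan2(s, c))
```

Running this at every control cycle and feeding the fused pose back into the controller is what closes the localization loop described above.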
Laser SLAM has become a hot topic for robot localization and navigation
In realizing autonomous robot movement, laser SLAM has become an unavoidable topic. Laser SLAM uses lidar as the core sensor to scan the surrounding environment in real time, producing scattered point cloud data with accurate angle and distance for each observed object. SLAM then matches and compares point clouds from two different times to compute the lidar's relative displacement and change of attitude, thereby localizing the robot.
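As a simplified sketch of that matching step, the rigid transform between two scans can be recovered in closed form when point correspondences are already known; this is the inner step of ICP, which in full would iterate it after re-matching nearest neighbours. The function name is illustrative.

```python
import numpy as np

def estimate_motion(prev_scan, curr_scan):
    """Estimate the 2-D rigid transform (R, t) mapping curr_scan onto
    prev_scan, assuming row i of each (N, 2) array is the same physical
    point (known correspondences -- a strong simplification of real ICP).
    """
    mu_p = prev_scan.mean(axis=0)
    mu_c = curr_scan.mean(axis=0)
    # cross-covariance of the centred point sets (Kabsch method)
    H = (curr_scan - mu_c).T @ (prev_scan - mu_p)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_p - R @ mu_c
    return R, t
```

The rotation angle and translation recovered here are exactly the "relative movement and change of attitude" of the lidar between the two scans.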