LiDAR Robot Navigation

LiDAR robot navigation is a complex combination of mapping, localization and path planning. This article outlines these concepts and explains how they work together, using a simple example in which a robot navigates to a goal within a row of crop plants.

LiDAR sensors are low-power devices that can prolong the battery life of robots and reduce the amount of raw data needed to run localization algorithms, allowing more iterations of SLAM without overheating the GPU.

LiDAR Sensors

At the core of a lidar system is a sensor that emits laser light into its surroundings. The light waves bounce off surrounding objects at different angles depending on their composition. The sensor measures the time it takes for each return, which is then used to determine distance. Sensors are mounted on rotating platforms, allowing them to scan the surroundings rapidly (on the order of 10,000 samples per second).
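
As a minimal sketch of that time-of-flight calculation (the 0.5 factor accounts for the pulse travelling to the target and back), here is a small Python example with an illustrative timing value:

# Minimal sketch: converting a round-trip time-of-flight measurement to a range.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def time_of_flight_to_range(round_trip_seconds: float) -> float:
    """Return the distance in metres for one lidar return."""
    return 0.5 * SPEED_OF_LIGHT * round_trip_seconds

# A return arriving after about 66.7 nanoseconds corresponds to roughly 10 m.
print(time_of_flight_to_range(66.7e-9))  # ~10.0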

LiDAR sensors are classified by their intended application on land or in the air. Airborne lidar systems are usually mounted on aircraft, helicopters or unmanned aerial vehicles (UAVs), while terrestrial LiDAR systems are typically mounted on a static robot platform.

To measure distances accurately, the sensor must always know its own exact location. This information is recorded by a combination of an inertial measurement unit (IMU), GPS and time-keeping electronics. LiDAR systems use these sensors to determine the sensor's precise position in space and time, and that information is used to create a 3D model of the surroundings.

LiDAR scanners can also be used to identify different surface types, which is particularly useful for mapping environments with dense vegetation. When a pulse passes through a forest canopy, it usually produces multiple returns. The first return comes from the top of the trees, while the final return comes from the ground surface. If the sensor records each of these as a distinct return, it is called discrete-return LiDAR.

Discrete-return scans can be used to study surface structure. For example, a forested region may yield a series of first and second returns, with the final strong pulse representing bare ground. The ability to separate and record these returns as a point cloud allows for detailed models of the terrain.
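
As an illustration, separating first and last returns gives a rough canopy-height estimate. The field names below (return_number, num_returns) are hypothetical but mirror the per-pulse attributes commonly stored in discrete-return datasets:

# Illustrative sketch: splitting discrete lidar returns into canopy and ground points.
pulses = [
    {"x": 1.0, "y": 2.0, "z": 14.8, "return_number": 1, "num_returns": 3},  # treetop
    {"x": 1.0, "y": 2.0, "z": 6.1,  "return_number": 2, "num_returns": 3},  # branch
    {"x": 1.0, "y": 2.0, "z": 0.2,  "return_number": 3, "num_returns": 3},  # ground
]

first_returns = [p for p in pulses if p["return_number"] == 1]
last_returns  = [p for p in pulses if p["return_number"] == p["num_returns"]]

# First returns approximate the canopy surface, last returns the bare ground,
# so their height difference gives a rough canopy-height estimate.
canopy_height = first_returns[0]["z"] - last_returns[0]["z"]
print(canopy_height)  # roughly 14.6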

Once a 3D model of the surrounding area has been created, the robot can navigate using this data. The process involves localization, planning a path that takes the robot to a specified navigation "goal", and dynamic obstacle detection: detecting new obstacles that are not in the original version of the map and updating the planned route accordingly.
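
The replanning step can be sketched on a small occupancy grid (0 = free, 1 = blocked). Breadth-first search stands in here for whatever planner a real robot would use, and the grid values are made up for illustration:

# Hedged sketch: replanning around a newly detected obstacle on an occupancy grid.
from collections import deque

def plan(grid, start, goal):
    """Return a list of grid cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nxt not in came_from:
                came_from[nxt] = cell
                frontier.append(nxt)
    return None

grid = [[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]]
print(plan(grid, (0, 0), (2, 2)))   # an unobstructed route

grid[1][1] = 1                      # a new obstacle is detected mid-route
print(plan(grid, (0, 0), (2, 2)))   # the route is replanned around the blocked cell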

SLAM Algorithms

SLAM (simultaneous localization and mapping) is an algorithm that allows your robot to build a map of its surroundings and determine its own position in relation to that map. Engineers use this information to perform a variety of tasks, such as route planning and obstacle detection.

To use SLAM, the robot needs a sensor that can provide range data (e.g. a laser scanner or camera), a computer with the right software for processing that data, and an inertial measurement unit (IMU) to provide basic information about its motion. With these components, the system can accurately determine the robot's location in an unknown environment.

The SLAM system is complex and there are many different back-end options. Whichever solution you choose, a successful SLAM system requires constant interaction between the range-measurement device, the software that extracts data from it, and the vehicle or robot itself. This is a highly dynamic process with an almost unlimited amount of variation.

As the robot moves around the area, it adds new scans to its map. The SLAM algorithm compares each new scan with previous ones using a method called scan matching, which also makes it possible to detect loop closures. When a loop closure is found, the SLAM algorithm uses this information to correct its estimated robot trajectory.
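
The core of scan matching is estimating the rigid transform that best aligns a new scan with an earlier one. The sketch below shows a single alignment step of the kind used inside ICP, assuming point correspondences are already known; a real scan matcher also has to establish those correspondences and iterate:

# Hedged sketch: aligning a new 2D scan with a previous one (one ICP-style step).
import numpy as np

def align_scans(prev_pts, new_pts):
    """Return (R, t) such that R @ new_point + t approximates the matching prev_point."""
    mu_prev, mu_new = prev_pts.mean(axis=0), new_pts.mean(axis=0)
    H = (new_pts - mu_new).T @ (prev_pts - mu_prev)   # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_prev - R @ mu_new
    return R, t

# Toy example: the "new" scan is the previous scan rotated by 10 degrees and shifted.
rng = np.random.default_rng(0)
prev_scan = rng.uniform(-5, 5, size=(50, 2))
theta = np.radians(10)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
new_scan = prev_scan @ R_true.T + np.array([0.3, -0.2])

R_est, t_est = align_scans(prev_scan, new_scan)
aligned = new_scan @ R_est.T + t_est
print(np.max(np.abs(aligned - prev_scan)))  # ~0: the scans match, so a loop closure is plausible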

The fact that the environment can change over time further complicates SLAM. For instance, if your robot passes through an aisle that is empty at one point in time and then comes across a stack of pallets there later, it may have difficulty connecting the two observations on its map. Handling such dynamics is crucial in these cases and is a feature of many modern lidar SLAM algorithms.

Despite these issues, a properly designed SLAM system can be extremely effective for navigation and 3D scanning. It is particularly useful in environments where the robot cannot depend on GNSS for positioning, such as an indoor factory floor. However, even a properly configured SLAM system can make errors; to correct them, it is crucial to be able to detect them and understand their impact on the SLAM process.

Mapping

The mapping function creates a map of the robot's environment, covering the robot itself, its wheels and actuators, and everything else within its field of view. This map is used for localization, route planning and obstacle detection. This is an area where lidar is extremely useful, because even a scanner with a single scan plane can be treated as the equivalent of one slice of a 3D camera's depth data.

Building a map takes a bit of time, but the results pay off. An accurate, complete map of the robot's surroundings allows it to carry out high-precision navigation and to steer around obstacles.

As a general rule of thumb, the higher the sensor's resolution, the more precise the map will be. However, not every application needs a high-resolution map: a floor sweeper, for example, may not require the same level of detail as an industrial robot navigating a large factory facility.
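
To make the role of resolution concrete, here is a toy occupancy-grid builder in which the resolution parameter (cell size in metres) directly controls how much detail the map can hold. The scan values are made up, and a real mapper would also trace the free space along each beam:

# Toy occupancy grid: mark the cell hit by each range/bearing return.
import math

def scan_to_grid(ranges, angles, robot_xy, resolution=0.1, size=100):
    """Return a size x size grid with 1 where a lidar return landed."""
    grid = [[0] * size for _ in range(size)]
    rx, ry = robot_xy
    for r, a in zip(ranges, angles):
        hx, hy = rx + r * math.cos(a), ry + r * math.sin(a)
        col, row = int(hx / resolution), int(hy / resolution)
        if 0 <= row < size and 0 <= col < size:
            grid[row][col] = 1
    return grid

# Three fake returns taken from the centre of a 10 m x 10 m area.
grid = scan_to_grid(ranges=[2.0, 3.5, 1.2],
                    angles=[0.0, math.pi / 2, math.pi],
                    robot_xy=(5.0, 5.0))
print(sum(map(sum, grid)))  # 3 occupied cells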

For this reason, there are many different mapping algorithms that can be used with LiDAR sensors. Cartographer is a popular algorithm that employs a two-phase pose-graph optimization technique: it corrects for drift while maintaining a consistent global map. It is particularly beneficial when used in conjunction with odometry data.

GraphSLAM is another option, which represents the constraints between poses and landmarks as a graph and encodes them in a set of linear equations: an information matrix (often written Ω) and an information vector (ξ). Each new measurement simply adds to or subtracts from the relevant matrix and vector entries, and solving the resulting system updates the whole state estimate to accommodate the new robot observations.
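
A toy one-dimensional example shows the mechanics: two robot poses and one landmark, with each constraint written into the information matrix and vector by simple additions, and the state recovered by solving the resulting linear system. The measurements are made up for illustration:

# Toy 1-D GraphSLAM-style update over the state [x0, x1, L].
import numpy as np

dim = 3
Omega = np.zeros((dim, dim))   # information matrix
xi = np.zeros(dim)             # information vector

def add_constraint(i, j, measured, weight=1.0):
    """Encode the relative constraint 'state[j] - state[i] = measured'."""
    Omega[i, i] += weight;  Omega[j, j] += weight
    Omega[i, j] -= weight;  Omega[j, i] -= weight
    xi[i] -= weight * measured
    xi[j] += weight * measured

Omega[0, 0] += 1.0             # anchor the first pose at the origin
add_constraint(0, 1, 5.0)      # odometry: the robot moved 5 m between poses
add_constraint(0, 2, 9.0)      # the landmark was seen 9 m ahead from pose 0
add_constraint(1, 2, 4.2)      # the landmark was seen 4.2 m ahead from pose 1

mu = np.linalg.solve(Omega, xi)
print(mu)                      # approx [0.0, 4.93, 9.07]: the slight conflict is averaged out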

Another useful mapping approach combines odometry and mapping using an extended Kalman filter (EKF). The EKF updates not only the uncertainty in the robot's current pose but also the uncertainty of the features observed by the sensor. The mapping function can then use this information to refine the robot's position estimate and update the underlying map.
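
A heavily simplified one-dimensional sketch of that idea, with a state containing just the robot position and one landmark position (all numbers are illustrative):

# Hedged 1-D EKF sketch: one prediction step and one landmark-range update.
import numpy as np

x = np.array([0.0, 10.0])             # state estimate [robot, landmark]
P = np.diag([1.0, 4.0])               # covariance (the landmark is poorly known)

# Prediction: the robot drives forward 2 m, with motion noise on the robot only.
u, Q = 2.0, np.diag([0.25, 0.0])
x = x + np.array([u, 0.0])
P = P + Q

# Update: the lidar measures the range (landmark - robot) as 8.3 m.
z, R = 8.3, 0.04                      # measurement and its noise variance
H = np.array([[-1.0, 1.0]])           # Jacobian of the range with respect to the state
y = z - (H @ x)[0]                    # innovation
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
x = x + K.flatten() * y
P = (np.eye(2) - K @ H) @ P

print(x)                               # corrected robot and landmark positions
print(np.diag(P))                      # both variances shrink after the update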

Obstacle Detection

A robot needs to be able to perceive its environment in order to avoid obstacles and reach its destination. It employs sensors such as digital cameras, infrared scanners, sonar and laser radar (lidar) to sense its surroundings, and it uses inertial sensors to monitor its speed, position and heading. Together, these sensors allow it to navigate safely and avoid collisions.

A key element of this process is obstacle detection, which involves using a range sensor to determine the distance between the robot and any obstacles. The sensor can be attached to the vehicle, the robot, or a pole.
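
A simple form of this is range thresholding: any return closer than a safety distance is flagged as an obstacle, along with the bearing it came from. The scan values below are made up for illustration:

# Simple sketch: flag lidar returns that fall inside a safety distance.
import math

def detect_obstacles(ranges, angle_min, angle_increment, safety_distance=0.5):
    """Return (bearing, range) pairs for returns inside the safety distance."""
    obstacles = []
    for i, r in enumerate(ranges):
        if 0.0 < r < safety_distance:        # ignore invalid zero returns
            bearing = angle_min + i * angle_increment
            obstacles.append((bearing, r))
    return obstacles

# Fake 5-beam scan sweeping from -30 to +30 degrees.
hits = detect_obstacles(ranges=[1.2, 0.4, 0.9, 2.0, 0.3],
                        angle_min=math.radians(-30),
                        angle_increment=math.radians(15))
for bearing, r in hits:
    print(f"obstacle at {r:.1f} m, bearing {math.degrees(bearing):.0f} deg")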
