Fusion strategies can be based on probabilistic approaches, such as factor graphs, Extended Kalman Filters, Bayesian methods, and Particle Filters, or on artificial intelligence, such as Neural Networks (NN) or Fuzzy Logic. Some of the most common approaches are Kalman Filters and NNs, with a wide range of implementation configurations. Extended Kalman Filters were used by Dobrev, Gulden, and Vossiek to improve an indoor positioning system using multi-modal sensor fusion for service robot applications. That implementation requires a fixed, in-place sensor infrastructure, and its laser scanner measurement errors were up to 30 cm in the presence of glass doors. Another study implemented a multi-sensor data fusion method for fault diagnosis of a planetary gearbox: vibration, acoustic, electric current, and instantaneous angular speed signals were fused, and features were learned at different fusion levels using Deep Convolutional Neural Networks (DCNN), Back Propagation Neural Networks (BPNN), and a support vector machine decision-making algorithm. While that comprehensive report shows better results for DCNN, BPNN is not far behind at the data fusion level. A data fusion contest for processing high-resolution LiDAR and RGB data of outdoor aerial photography involving water, trees, cars, and boats showed that complementing RGB with LiDAR using Deep Neural Networks (DNN) was the best approach. One group used a Laser Range Finder (LRF) and a digital camera as data generators for an Artificial Neural Network (ANN) with a standard backpropagation algorithm that outputs the speed and steering values for the navigation of an intelligent wheelchair; their results are consistent with previous state-of-the-art related work. The main cause of measurement error in mobile navigation and localization is the inability to detect certain obstacles due to their physical nature.
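The probabilistic family mentioned above can be illustrated with a minimal scalar Kalman measurement update (an EKF reduces to this form for a linear, one-dimensional range measurement). This is only a sketch; the sensor values and variances below are made-up numbers, not results from any of the cited systems.

```python
def kf_update(x, P, z, R):
    """Scalar Kalman measurement update: fuse a prior distance estimate x
    (variance P) with a new range measurement z (variance R)."""
    K = P / (P + R)        # Kalman gain: how much to trust the measurement
    x = x + K * (z - x)    # corrected distance estimate
    P = (1.0 - K) * P      # posterior variance always shrinks
    return x, P

# Hypothetical numbers: a LiDAR prior of 2.10 m (variance 0.04 m^2)
# corrected by an ultrasonic reading of 1.95 m (variance 0.01 m^2).
x, P = kf_update(2.10, 0.04, 1.95, 0.01)   # -> x = 1.98, P = 0.008
```

Because the posterior variance is always smaller than either input variance, chaining this update over sensors and time steps is what lets Kalman-style fusion outperform any single sensor.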
Mobile robots must be capable of obtaining an accurate map of their surroundings in order to move within them. To detect materials that may be undetectable to one sensor but not to others, it is necessary to construct at least a two-sensor fusion scheme. An artificial neural network is used to fuse data from a tri-sensor setup (RealSense stereo camera, 2D 360° LiDAR, and ultrasonic sensors) capable of detecting glass and other materials typically found in indoor environments that may or may not be visible to traditional 2D LiDAR sensors, hence the expression "improved LiDAR". With this, it is possible to generate a 2D occupancy map in which glass obstacles are identified. A preprocessing scheme is implemented to filter outliers, project the 3D point cloud onto a 2D plane, and adjust the distance data. With a neural network as the data fusion algorithm, we integrate all the information into a single, more accurate distance-to-obstacle reading and finally generate a 2D Occupancy Grid Map (OGM) that takes all sensor information into account. The Robotis TurtleBot3 Waffle Pi robot is used as the experimental platform to conduct experiments with the different fusion strategies. Test results show that with such a fusion algorithm it is possible to detect glass and other obstacles with an estimated root-mean-square error (RMSE) of 3 cm across multiple fusion strategies. Sensor data fusion for mobile applications is varied, but in most cases the approach used is the determining factor in the outcome quality of the system. Outcome quality, which can be compared in terms of real-time capability, RMSE of the data output, physical implementation, and reproducibility of the experimental platform, is also the most differentiating factor between fusion strategies.
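The pipeline described above (preprocess each sensor's data, fuse the readings into one distance, then write the fused range into an occupancy grid) could be sketched roughly as follows. All function names are illustrative assumptions, and for brevity the fusion step uses an inverse-variance weighted average as a stand-in for the trained neural network; it is not the authors' implementation.

```python
import numpy as np

def project_to_plane(points_3d):
    """Project a 3D point cloud (N x 3) onto the 2D ground plane by dropping z."""
    return points_3d[:, :2]

def reject_outliers(ranges, k=2.0):
    """Drop range readings more than k standard deviations from the median."""
    ranges = np.asarray(ranges, dtype=float)
    med, std = np.median(ranges), np.std(ranges)
    if std == 0:
        return ranges
    return ranges[np.abs(ranges - med) <= k * std]

def fuse_ranges(readings, variances):
    """Inverse-variance weighted fusion of per-sensor distance readings
    (stand-in for the neural-network fusion described in the text)."""
    w = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(w * np.asarray(readings, dtype=float)) / np.sum(w))

def update_ogm(grid, robot_xy, bearing_rad, fused_range, cell_size=0.05):
    """Mark the grid cell hit by the fused range reading as occupied (1)."""
    hit_x = robot_xy[0] + fused_range * np.cos(bearing_rad)
    hit_y = robot_xy[1] + fused_range * np.sin(bearing_rad)
    i = int(round(hit_y / cell_size))
    j = int(round(hit_x / cell_size))
    if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
        grid[i, j] = 1
    return grid

# Hypothetical readings of the same obstacle from camera, LiDAR, ultrasonic:
fused = fuse_ranges([1.02, 1.05, 0.98], [0.01, 0.0025, 0.04])
grid = update_ogm(np.zeros((40, 40)), (0.0, 0.0), 0.0, fused)
```

The sensor that misses an obstacle entirely (e.g. LiDAR on glass) would simply be excluded from the fused reading for that bearing, which is how the map can still mark the obstacle as occupied.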