SLAM Technology in Autonomous Driving

Authors

  • Juncheng Xu, School of Mechanical and Automotive Engineering, Xiamen University of Technology, Xiamen 361024, China

DOI:

https://doi.org/10.63313/Engineering.8002

Keywords:

Autonomous Driving, SLAM, Environmental Perception, Multi-Sensor Fusion

Abstract

With the rapid development of transportation systems, autonomous driving technology has emerged, and Simultaneous Localization and Mapping (SLAM) plays a crucial role in it. To gain an in-depth understanding of SLAM technology in autonomous driving, this paper provides a comprehensive review of its research progress and applications. First, the fundamental principles of SLAM are introduced and SLAM techniques are categorized into different types. Next, the practical applications of SLAM in autonomous driving are analyzed, along with a detailed examination of the advantages and disadvantages of existing SLAM methods. Finally, the importance of SLAM in autonomous driving is summarized, and potential directions for future research are proposed.
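As brief context for the fundamental principles the review introduces, SLAM is commonly posed as a probabilistic estimation problem: the vehicle trajectory and the environment map are inferred jointly from control inputs and sensor observations. The formulation below is the standard textbook one (e.g., Thrun et al., Probabilistic Robotics) and is shown for illustration; it is not necessarily the exact notation used in the paper.

Full SLAM (entire trajectory and map):
\[ p(x_{1:t}, m \mid z_{1:t}, u_{1:t}) \]

Online SLAM (current pose and map, with past poses marginalized out):
\[ p(x_t, m \mid z_{1:t}, u_{1:t}) = \int \!\cdots\! \int p(x_{1:t}, m \mid z_{1:t}, u_{1:t}) \, dx_1 \cdots dx_{t-1} \]

where \(x_{1:t}\) denotes the vehicle poses, \(m\) the map, \(z_{1:t}\) the sensor measurements, and \(u_{1:t}\) the odometry or control inputs. Filtering-based, optimization-based, and learning-based SLAM methods can all be viewed as different ways of approximating these posteriors.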




Published

2025-06-10

How to Cite

SLAM Technology in Autonomous Driving. (2025). 工程学辑要, 1(1), 11-18. https://doi.org/10.63313/Engineering.8002