Xingguang Zhong

PhD Student
Contact:
Email: zhong@igg.uni-bonn.de
Tel: +49 – 228 – 73 – 29 01
Fax: +49 – 228 – 73 – 27 12
Office: Nussallee 15, EG, room 0.013
Address:
University of Bonn
Photogrammetry, IGG
Nussallee 15
53115 Bonn

Profiles: Google Scholar | GitHub

Research Interests

  • SLAM
  • Computer Vision
  • Robot Navigation

Short CV

Xingguang Zhong has been a PhD student at the Photogrammetry & Robotics Lab of the University of Bonn since November 2021. He holds a Master’s degree in mechatronics engineering and a Bachelor’s degree in mechanical engineering.

Publications

2025

  • Y. Pan, X. Zhong, L. Jin, L. Wiesmann, M. Popović, J. Behley, and C. Stachniss, “PINGS: Gaussian Splatting Meets Distance Fields within a Point-Based Implicit Neural Map,” arXiv Preprint, vol. arXiv:2502.05752, 2025.
    [BibTeX] [PDF]

    Robots require high-fidelity reconstructions of their environment for effective operation. Such scene representations should be both, geometrically accurate and photorealistic to support downstream tasks. While this can be achieved by building distance fields from range sensors and radiance fields from cameras, the scalable incremental mapping of both fields consistently and at the same time with high quality remains challenging. In this paper, we propose a novel map representation that unifies a continuous signed distance field and a Gaussian splatting radiance field within an elastic and compact point-based implicit neural map. By enforcing geometric consistency between these fields, we achieve mutual improvements by exploiting both modalities. We devise a LiDAR-visual SLAM system called PINGS using the proposed map representation and evaluate it on several challenging large-scale datasets. Experimental results demonstrate that PINGS can incrementally build globally consistent distance and radiance fields encoded with a compact set of neural points. Compared to the state-of-the-art methods, PINGS achieves superior photometric and geometric rendering at novel views by leveraging the constraints from the distance field. Furthermore, by utilizing dense photometric cues and multi-view consistency from the radiance field, PINGS produces more accurate distance fields, leading to improved odometry estimation and mesh reconstruction.

    @article{pan2025arxiv,
    author = {Y. Pan and X. Zhong and L. Jin and L. Wiesmann and M. Popovi\'c and J. Behley and C. Stachniss},
    title = {{PINGS: Gaussian Splatting Meets Distance Fields within a Point-Based Implicit Neural Map}},
    journal = arxiv,
    year = 2025,
    volume = {arXiv:2502.05752},
    url = {https://arxiv.org/pdf/2502.05752},
    abstract = {Robots require high-fidelity reconstructions of their environment for effective operation. Such scene representations should be both, geometrically accurate and photorealistic to support downstream tasks. While this can be achieved by building distance fields from range sensors and radiance fields from cameras, the scalable incremental mapping of both fields consistently and at the same time with high quality remains challenging. In this paper, we propose a novel map representation that unifies a continuous signed distance field and a Gaussian splatting radiance field within an elastic and compact point-based implicit neural map. By enforcing geometric consistency between these fields, we achieve mutual improvements by exploiting both modalities. We devise a LiDAR-visual SLAM system called PINGS using the proposed map representation and evaluate it on several challenging large-scale datasets. Experimental results demonstrate that PINGS can incrementally build globally consistent distance and radiance fields encoded with a compact set of neural points. Compared to the state-of-the-art methods, PINGS achieves superior photometric and geometric rendering at novel views by leveraging the constraints from the distance field. Furthermore, by utilizing dense photometric cues and multi-view consistency from the radiance field, PINGS produces more accurate distance fields, leading to improved odometry estimation and mesh reconstruction.}
    }
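
    To make the idea of the point-based implicit neural map more concrete, the following minimal sketch shows how a signed distance value can be decoded from latent features attached to neural points. It is an illustrative, assumption-laden example (PyTorch, k-nearest-neighbor inverse-distance interpolation, hypothetical class and parameter names), not the authors' implementation; the actual code is released in the PIN_SLAM repository linked under the TRO paper below.

    import torch
    import torch.nn as nn

    class NeuralPointSDF(nn.Module):
        """Toy point-based implicit neural map: latent features live on neural
        points, and a small MLP decodes the signed distance at a query position."""

        def __init__(self, num_points=10000, feat_dim=8, k=6):
            super().__init__()
            self.k = k
            # neural point positions (kept fixed here) and optimizable latent features
            self.register_buffer("positions", torch.rand(num_points, 3))
            self.features = nn.Parameter(torch.zeros(num_points, feat_dim))
            # decoder maps an interpolated feature plus a query offset to one SDF value
            self.decoder = nn.Sequential(
                nn.Linear(feat_dim + 3, 64), nn.ReLU(),
                nn.Linear(64, 1),
            )

        def forward(self, query):                        # query: (N, 3) world coordinates
            dist, idx = torch.cdist(query, self.positions).topk(self.k, largest=False)
            w = 1.0 / (dist + 1e-6)
            w = w / w.sum(dim=1, keepdim=True)           # inverse-distance weights, (N, k)
            feat = (self.features[idx] * w.unsqueeze(-1)).sum(dim=1)               # (N, feat_dim)
            offset = ((query.unsqueeze(1) - self.positions[idx]) * w.unsqueeze(-1)).sum(dim=1)
            return self.decoder(torch.cat([feat, offset], dim=-1))                 # SDF, (N, 1)

    # usage: sdf_values = NeuralPointSDF()(torch.rand(128, 3))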

  • H. Kuang, Y. Pan, X. Zhong, L. Wiesmann, J. Behley, and C. Stachniss, “Improving Indoor Localization Accuracy by Using an Efficient Implicit Neural Map Representation,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2025.
    [BibTeX]
    @inproceedings{huang2025icra,
    author = {H. Kuang and Y. Pan and X. Zhong and L. Wiesmann and J. Behley and C. Stachniss},
    title = {{Improving Indoor Localization Accuracy by Using an Efficient Implicit Neural Map Representation}},
    booktitle = icra,
    year = {2025},
    note = {Accepted},
    }

2024

  • L. Jin, X. Zhong, Y. Pan, J. Behley, C. Stachniss, and M. Popović, “ActiveGS: Active Scene Reconstruction using Gaussian Splatting,” arXiv Preprint, vol. arXiv:2412.17769, 2024.
    [BibTeX] [PDF]
    @article{jin2024arxiv,
    author = {L. Jin and X. Zhong and Y. Pan and J. Behley and C. Stachniss and M. Popovi\'c},
    title = {{ActiveGS: Active Scene Reconstruction using Gaussian Splatting}},
    journal = arxiv,
    year = 2024,
    volume = {arXiv:2412.17769},
    url = {https://arxiv.org/pdf/2412.17769},
    }

  • Y. Pan, X. Zhong, L. Wiesmann, T. Posewsky, J. Behley, and C. Stachniss, “PIN-SLAM: LiDAR SLAM Using a Point-Based Implicit Neural Representation for Achieving Global Map Consistency,” IEEE Trans. on Robotics (TRO), vol. 40, pp. 4045–4064, 2024. doi:10.1109/TRO.2024.3422055
    [BibTeX] [PDF] [Code]
    @article{pan2024tro,
    author = {Y. Pan and X. Zhong and L. Wiesmann and T. Posewsky and J. Behley and C. Stachniss},
    title = {{PIN-SLAM: LiDAR SLAM Using a Point-Based Implicit Neural Representation for Achieving Global Map Consistency}},
    journal = tro,
    year = {2024},
    pages = {4045--4064},
    volume = {40},
    doi = {10.1109/TRO.2024.3422055},
    codeurl = {https://github.com/PRBonn/PIN_SLAM},
    }

  • X. Zhong, Y. Pan, C. Stachniss, and J. Behley, “3D LiDAR Mapping in Dynamic Environments using a 4D Implicit Neural Representation,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), 2024. doi:10.1109/CVPR52733.2024.01460
    [BibTeX] [PDF] [Code] [Video]
    @inproceedings{zhong2024cvpr,
    author = {X. Zhong and Y. Pan and C. Stachniss and J. Behley},
    title = {{3D LiDAR Mapping in Dynamic Environments using a 4D Implicit Neural Representation}},
    booktitle = cvpr,
    year = 2024,
    doi = {10.1109/CVPR52733.2024.01460},
    codeurl = {https://github.com/PRBonn/4dNDF},
    videourl = {https://youtu.be/pRNKRcTkxjs}
    }

2023

  • X. Zhong, Y. Pan, J. Behley, and C. Stachniss, “SHINE-Mapping: Large-Scale 3D Mapping Using Sparse Hierarchical Implicit Neural Representations,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2023.
    [BibTeX] [PDF] [Code] [Video]
    @inproceedings{zhong2023icra,
    author = {Zhong, Xingguang and Pan, Yue and Behley, Jens and Stachniss, Cyrill},
    title = {{SHINE-Mapping: Large-Scale 3D Mapping Using Sparse Hierarchical Implicit Neural Representations}},
    booktitle = icra,
    year = 2023,
    codeurl = {https://github.com/PRBonn/SHINE_mapping},
    videourl = {https://youtu.be/jRqIupJgQZE},
    }