PD Dr. Jens Behley
Lecturer (Privatdozent)
Contact:
Email: jens.behley@igg.uni-bonn.de
Tel: +49 – 228 – 73 – 60 190
Fax: +49 – 228 – 73 – 27 12
Office: Nussallee 15, 1st floor, room 1.008
Address:
University of Bonn
Photogrammetry, IGG
Nussallee 15
53115 Bonn
Research Interests
- LiDAR-based perception in urban environments
- Perception in the agricultural domain
- Machine learning for robotic applications
Short CV
Jens Behley has been a postdoctoral researcher at the Department of Photogrammetry since February 2016. He completed his habilitation, titled “Towards LiDAR-based Spatio-temporal Scene Understanding for Autonomous Vehicles”, at the University of Bonn in 2023. From September 2008 to July 2015, he worked at the Department of Computer Science III, University of Bonn, where he successfully defended his PhD thesis “Three-dimensional Laser-based Classification in Outdoor Environments”, supervised by Prof. Dr. Armin B. Cremers, in January 2014. He served as an Associate Editor for IEEE Robotics and Automation Letters (RA-L) from 2020 to 2023.
Awards
- Outstanding Reviewer at European Conference on Computer Vision (ECCV), 2024.
- Outstanding Reviewer at IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
- Finalist of the Best Paper Award in Service Robotics for the paper “Efficient and Accurate Transformer-Based 3D Shape Completion and Reconstruction of Fruits for Agricultural Robots” at the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2024.
- Best Paper Award of the IEEE Robotics and Automation Letters (RA-L) for the paper “KISS-ICP: In Defense of Point-to-Point ICP — Simple, Accurate, and Robust Registration If Done the Right Way”, 2023.
- Honorable Mention from the IEEE Robotics and Automation Letters (RA-L) for the paper “High Precision Leaf Instance Segmentation in Point Clouds Obtained Under Real Field Conditions”, 2023.
- Outstanding Reviewer at IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
- Outstanding Reviewer at European Conference on Computer Vision (ECCV), 2022.
- Finalist for IROS Best Paper Award on Agri-Robotics at IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2022.
- Outstanding Reviewer at IEEE Robotics and Automation Letters (RA-L), 2022.
- Outstanding Reviewer at IEEE International Conference on Robotics and Automation (ICRA), 2022.
- Outstanding Reviewer at IEEE/CVF International Conference on Computer Vision (ICCV), 2021.
- Faculty Award for Geodesy from the Agricultural Faculty of the University of Bonn for the co-authored paper “Adaptive Robust Kernels for Non-Linear Least Squares Problems” by Chebrolu et al., IEEE RA-L, 2021.
- Outstanding Reviewer at IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
- Finalist for Best System Paper at Robotics: Science and Systems (RSS), 2020.
- Diploma thesis award (Diplomarbeitspreis) of the Bonner Informatik Gesellschaft e.V., 2009.
Publications
2024
- M. Zeller, D. Casado Herraez, B. Ayan, J. Behley, M. Heidingsfeld, and C. Stachniss, “SemRaFiner: Panoptic Segmentation in Sparse and Noisy Radar Point Clouds,” IEEE Robotics and Automation Letters (RA-L), 2024. doi:10.1109/LRA.2024.3502058
[BibTeX] [PDF]@article{zeller2024ral, author = {M. Zeller and Casado Herraez, D. and B. Ayan and J. Behley and M. Heidingsfeld and C. Stachniss}, title = {{SemRaFiner: Panoptic Segmentation in Sparse and Noisy Radar Point Clouds}}, journal = ral, year = {2024}, volume = {}, number = {}, pages = {}, issn = {2377-3766}, doi = {10.1109/LRA.2024.3502058}, }
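Note that the BibTeX entries on this page rely on @string abbreviations (journal = ral, booktitle = icra, etc.) that are not defined in the snippets themselves. A minimal macro file along these lines is needed to compile them; the expansions below are reconstructed from the formatted citations on this page and are an assumption, not the site's actual definitions:

```bibtex
% Assumed @string definitions for the abbreviations used in the entries below.
@string{ral   = {IEEE Robotics and Automation Letters (RA-L)}}
@string{tro   = {IEEE Trans. on Robotics (TRO)}}
@string{tpami = {IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI)}}
@string{cea   = {Computers and Electronics in Agriculture}}
@string{arxiv = {arXiv Preprint}}
@string{icra  = {Proc. of the IEEE Intl. Conf. on Robotics \& Automation (ICRA)}}
@string{iros  = {Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS)}}
@string{cvpr  = {Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR)}}
@string{rss   = {Proc. of Robotics: Science and Systems (RSS)}}
```

With these definitions placed in a .bib file (or a file loaded before it), any of the entries below can be copied and compiled as-is.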
- L. Wiesmann, T. Läbe, L. Nunes, J. Behley, and C. Stachniss, “Joint Intrinsic and Extrinsic Calibration of Perception Systems Utilizing a Calibration Environment,” IEEE Robotics and Automation Letters (RA-L), vol. 9, iss. 10, pp. 9103-9110, 2024. doi:10.1109/LRA.2024.3457385
[BibTeX] [PDF]@article{wiesmann2024ral, author = {L. Wiesmann and T. L\"abe and L. Nunes and J. Behley and C. Stachniss}, title = {{Joint Intrinsic and Extrinsic Calibration of Perception Systems Utilizing a Calibration Environment}}, journal = ral, year = {2024}, volume = {9}, number = {10}, pages = {9103-9110}, issn = {2377-3766}, doi = {10.1109/LRA.2024.3457385}, }
- C. Witte, J. Behley, C. Stachniss, and M. Raaijmakers, “Epipolar Attention Field Transformers for Bird’s Eye View Semantic Segmentation,” arXiv Preprint, vol. arXiv:2412.01595, 2024.
[BibTeX] [PDF]Spatial understanding of the semantics of the surroundings is a key capability needed by autonomous cars to enable safe driving decisions. Recently, purely vision-based solutions have gained increasing research interest. In particular, approaches extracting a bird’s eye view (BEV) from multiple cameras have demonstrated great performance for spatial understanding. This paper addresses the dependency on learned positional encodings to correlate image and BEV feature map elements for transformer-based methods. We propose leveraging epipolar geometric constraints to model the relationship between cameras and the BEV by Epipolar Attention Fields. They are incorporated into the attention mechanism as a novel attribution term, serving as an alternative to learned positional encodings. Experiments show that our method EAFormer outperforms previous BEV approaches by 2% mIoU for map semantic segmentation and exhibits superior generalization capabilities compared to implicitly learning the camera configuration.
@article{witte2024arxiv, author = {C. Witte and J. Behley and C. Stachniss and M. Raaijmakers}, title = {{Epipolar Attention Field Transformers for Bird's Eye View Semantic Segmentation}}, journal = arxiv, year = 2024, volume = {arXiv:2412.01595}, url = {http://arxiv.org/pdf/2412.01595v1}, abstract = {Spatial understanding of the semantics of the surroundings is a key capability needed by autonomous cars to enable safe driving decisions. Recently, purely vision-based solutions have gained increasing research interest. In particular, approaches extracting a bird's eye view (BEV) from multiple cameras have demonstrated great performance for spatial understanding. This paper addresses the dependency on learned positional encodings to correlate image and BEV feature map elements for transformer-based methods. We propose leveraging epipolar geometric constraints to model the relationship between cameras and the BEV by Epipolar Attention Fields. They are incorporated into the attention mechanism as a novel attribution term, serving as an alternative to learned positional encodings. Experiments show that our method EAFormer outperforms previous BEV approaches by 2\% mIoU for map semantic segmentation and exhibits superior generalization capabilities compared to implicitly learning the camera configuration.} }
- F. Magistri, T. Läbe, E. Marks, S. Nagulavancha, Y. Pan, C. Smitt, L. Klingbeil, M. Halstead, H. Kuhlmann, C. McCool, J. Behley, and C. Stachniss, “A Dataset and Benchmark for Shape Completion of Fruits for Agricultural Robotics,” arXiv Preprint, 2024.
[BibTeX] [PDF]@article{magistri2024arxiv, title={{A Dataset and Benchmark for Shape Completion of Fruits for Agricultural Robotics}}, author={F. Magistri and T. L\"abe and E. Marks and S. Nagulavancha and Y. Pan and C. Smitt and L. Klingbeil and M. Halstead and H. Kuhlmann and C. McCool and J. Behley and C. Stachniss}, journal = arxiv, year=2024, eprint={2407.13304}, }
- M. Sodano, F. Magistri, J. Behley, and C. Stachniss, “Open-World Panoptic Segmentation,” arXiv Preprint, vol. arXiv:2412.12740, 2024.
[BibTeX] [PDF]@article{sodano2024arxiv, author = {M. Sodano and F. Magistri and J. Behley and C. Stachniss}, title = {{Open-World Panoptic Segmentation}}, journal = arxiv, year = 2024, volume = {arXiv:2412.12740}, url = {http://arxiv.org/pdf/2412.12740.pdf}, }
- Y. Pan, X. Zhong, L. Wiesmann, T. Posewsky, J. Behley, and C. Stachniss, “PIN-SLAM: LiDAR SLAM Using a Point-Based Implicit Neural Representation for Achieving Global Map Consistency,” IEEE Trans. on Robotics (TRO), vol. 40, pp. 4045-4064, 2024. doi:10.1109/TRO.2024.3422055
[BibTeX] [PDF] [Code]@article{pan2024tro, author = {Y. Pan and X. Zhong and L. Wiesmann and T. Posewsky and J. Behley and C. Stachniss}, title = {{PIN-SLAM: LiDAR SLAM Using a Point-Based Implicit Neural Representation for Achieving Global Map Consistency}}, journal = tro, year = {2024}, pages = {4045-4064}, volume = {40}, doi = {10.1109/TRO.2024.3422055}, codeurl = {https://github.com/PRBonn/PIN_SLAM}, }
- J. Weyler, F. Magistri, E. Marks, Y. L. Chong, M. Sodano, G. Roggiolani, N. Chebrolu, C. Stachniss, and J. Behley, “PhenoBench: A Large Dataset and Benchmarks for Semantic Image Interpretation in the Agricultural Domain,” IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 2024. doi:10.1109/TPAMI.2024.3419548
[BibTeX] [PDF] [Code]@article{weyler2024tpami, author = {J. Weyler and F. Magistri and E. Marks and Y.L. Chong and M. Sodano and G. Roggiolani and N. Chebrolu and C. Stachniss and J. Behley}, title = {{PhenoBench: A Large Dataset and Benchmarks for Semantic Image Interpretation in the Agricultural Domain}}, journal = tpami, year = {2024}, volume = {}, number = {}, pages = {}, doi = {10.1109/TPAMI.2024.3419548}, codeurl = {https://github.com/PRBonn/phenobench}, }
- D. Casado Herraez, L. Chang, M. Zeller, L. Wiesmann, J. Behley, M. Heidingsfeld, and C. Stachniss, “SPR: Single-Scan Radar Place Recognition,” IEEE Robotics and Automation Letters (RA-L), vol. 9, iss. 10, pp. 9079-9086, 2024.
[BibTeX] [PDF]@article{casado-herraez2024ral, author = {Casado Herraez, D. and L. Chang and M. Zeller and L. Wiesmann and J. Behley and M. Heidingsfeld and C. Stachniss}, title = {{SPR: Single-Scan Radar Place Recognition}}, journal = ral, year = {2024}, volume = {9}, number = {10}, pages = {9079-9086}, }
- F. Magistri, Y. Pan, J. Bartels, J. Behley, C. Stachniss, and C. Lehnert, “Improving Robotic Fruit Harvesting Within Cluttered Environments Through 3D Shape Completion,” IEEE Robotics and Automation Letters (RA-L), vol. 9, iss. 8, pp. 7357–7364, 2024. doi:10.1109/LRA.2024.3421788
[BibTeX] [PDF]@article{magistri2024ral, author = {F. Magistri and Y. Pan and J. Bartels and J. Behley and C. Stachniss and C. Lehnert}, title = {{Improving Robotic Fruit Harvesting Within Cluttered Environments Through 3D Shape Completion}}, journal = ral, volume = {9}, number = {8}, pages = {7357--7364}, year = 2024, doi = {10.1109/LRA.2024.3421788}, }
- E. A. Marks, J. Bömer, F. Magistri, A. Sah, J. Behley, and C. Stachniss, “BonnBeetClouds3D: A Dataset Towards Point Cloud-Based Organ-Level Phenotyping of Sugar Beet Plants Under Real Field Conditions,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2024.
[BibTeX] [PDF]@inproceedings{marks2024iros, author = {E.A. Marks and J. B\"omer and F. Magistri and A. Sah and J. Behley and C. Stachniss}, title = {{BonnBeetClouds3D: A Dataset Towards Point Cloud-Based Organ-Level Phenotyping of Sugar Beet Plants Under Real Field Conditions}}, booktitle = iros, year = 2024, }
- H. Lim, S. Jang, B. Mersch, J. Behley, H. Myung, and C. Stachniss, “HeLiMOS: A Dataset for Moving Object Segmentation in 3D Point Clouds From Heterogeneous LiDAR Sensors,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2024.
[BibTeX] [PDF]@inproceedings{lim2024iros, author = {H. Lim and S. Jang and B. Mersch and J. Behley and H. Myung and C. Stachniss}, title = {{HeLiMOS: A Dataset for Moving Object Segmentation in 3D Point Clouds From Heterogeneous LiDAR Sensors}}, booktitle = iros, year = 2024, }
- A. Narenthiran Sivakumar, M. Magistri, M. Valverde Gasparino, J. Behley, C. Stachniss, and G. Chowdhary, “AdaCropFollow: Self-Supervised Online Adaptation for Visual Under-Canopy Navigation,” in Proc. of the IROS 2024 Workshop on AI and Robotics For Future Farming, 2024.
[BibTeX] [PDF]@inproceedings{narenthiran-sivakumar2024irosws, author = {Narenthiran Sivakumar, A. and Magistri, M. and Valverde Gasparino, M. and Behley, J. and Stachniss, C. and Chowdhary, G.}, title = {{AdaCropFollow: Self-Supervised Online Adaptation for Visual Under-Canopy Navigation}}, booktitle = {Proc.~of the IROS 2024 Workshop on AI and Robotics For Future Farming}, year = 2024, url = {https://arxiv.org/pdf/2410.12411}, }
- M. Sodano, F. Magistri, L. Nunes, J. Behley, and C. Stachniss, “Open-World Semantic Segmentation Including Class Similarity,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), 2024.
[BibTeX] [PDF] [Code] [Video]@inproceedings{sodano2024cvpr, author = {M. Sodano and F. Magistri and L. Nunes and J. Behley and C. Stachniss}, title = {{Open-World Semantic Segmentation Including Class Similarity}}, booktitle = cvpr, year = 2024, codeurl = {https://github.com/PRBonn/ContMAV}, videourl = {https://youtu.be/ei2cbyPQgag?si=_KabYyfjzzJZi1Zy}, }
- L. Nunes, R. Marcuzzi, B. Mersch, J. Behley, and C. Stachniss, “Scaling Diffusion Models to Real-World 3D LiDAR Scene Completion,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), 2024.
[BibTeX] [PDF] [Code] [Video]@inproceedings{nunes2024cvpr, author = {L. Nunes and R. Marcuzzi and B. Mersch and J. Behley and C. Stachniss}, title = {{Scaling Diffusion Models to Real-World 3D LiDAR Scene Completion}}, booktitle = cvpr, year = 2024, codeurl = {https://github.com/PRBonn/LiDiff}, videourl = {https://youtu.be/XWu8svlMKUo}, }
- X. Zhong, Y. Pan, C. Stachniss, and J. Behley, “3D LiDAR Mapping in Dynamic Environments using a 4D Implicit Neural Representation,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), 2024.
[BibTeX] [PDF] [Code] [Video]@inproceedings{zhong2024cvpr, author = {X. Zhong and Y. Pan and C. Stachniss and J. Behley}, title = {{3D LiDAR Mapping in Dynamic Environments using a 4D Implicit Neural Representation}}, booktitle = cvpr, year = 2024, codeurl = {https://github.com/PRBonn/4dNDF}, videourl ={https://youtu.be/pRNKRcTkxjs} }
- M. Zeller, D. Casado Herraez, J. Behley, M. Heidingsfeld, and C. Stachniss, “Radar Tracker: Moving Instance Tracking in Sparse and Noisy Radar Point Clouds,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2024.
[BibTeX] [PDF] [Video]@inproceedings{zeller2024icra, author = {M. Zeller and Casado Herraez, Daniel and J. Behley and M. Heidingsfeld and C. Stachniss}, title = {{Radar Tracker: Moving Instance Tracking in Sparse and Noisy Radar Point Clouds}}, booktitle = icra, year = 2024, videourl = {https://youtu.be/PixfkN8cMig}, }
- M. V. R. Malladi, T. Guadagnino, L. Lobefaro, M. Mattamala, H. Griess, J. Schweier, N. Chebrolu, M. Fallon, J. Behley, and C. Stachniss, “Tree Instance Segmentation and Traits Estimation for Forestry Environments Exploiting LiDAR Data,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2024.
[BibTeX] [PDF] [Code] [Video]@inproceedings{malladi2024icra, author = {M.V.R. Malladi and T. Guadagnino and L. Lobefaro and M. Mattamala and H. Griess and J. Schweier and N. Chebrolu and M. Fallon and J. Behley and C. Stachniss}, title = {{Tree Instance Segmentation and Traits Estimation for Forestry Environments Exploiting LiDAR Data }}, booktitle = icra, year = 2024, videourl = {https://youtu.be/14uuCxmfGco}, codeurl = {https://github.com/PRBonn/forest_inventory_pipeline}, }
- F. Magistri, R. Marcuzzi, E. A. Marks, M. Sodano, J. Behley, and C. Stachniss, “Efficient and Accurate Transformer-Based 3D Shape Completion and Reconstruction of Fruits for Agricultural Robots,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2024.
[BibTeX] [PDF] [Code] [Video]@inproceedings{magistri2024icra, author = {F. Magistri and R. Marcuzzi and E.A. Marks and M. Sodano and J. Behley and C. Stachniss}, title = {{Efficient and Accurate Transformer-Based 3D Shape Completion and Reconstruction of Fruits for Agricultural Robots}}, booktitle = icra, year = 2024, videourl = {https://youtu.be/U1xxnUGrVL4}, codeurl = {https://github.com/PRBonn/TCoRe}, }
- M. Zeller, V. S. Sandhu, B. Mersch, J. Behley, M. Heidingsfeld, and C. Stachniss, “Radar Instance Transformer: Reliable Moving Instance Segmentation in Sparse Radar Point Clouds,” IEEE Trans. on Robotics (TRO), vol. 40, pp. 2357-2372, 2024. doi:10.1109/TRO.2023.3338972
[BibTeX] [PDF] [Video]@article{zeller2024tro, author = {M. Zeller and Sandhu, V.S. and B. Mersch and J. Behley and M. Heidingsfeld and C. Stachniss}, title = {{Radar Instance Transformer: Reliable Moving Instance Segmentation in Sparse Radar Point Clouds}}, journal = tro, year = {2024}, volume = {40}, doi = {10.1109/TRO.2023.3338972}, pages = {2357-2372}, videourl = {https://www.youtube.com/watch?v=v-iXbJEcqPM} }
- J. Weyler, T. Läbe, J. Behley, and C. Stachniss, “Panoptic Segmentation with Partial Annotations for Agricultural Robots,” IEEE Robotics and Automation Letters (RA-L), vol. 9, iss. 2, pp. 1660-1667, 2024. doi:10.1109/LRA.2023.3346760
[BibTeX] [PDF] [Code]@article{weyler2024ral, author = {J. Weyler and T. L\"abe and J. Behley and C. Stachniss}, title = {{Panoptic Segmentation with Partial Annotations for Agricultural Robots}}, journal = ral, year = {2024}, volume = {9}, number = {2}, pages = {1660-1667}, issn = {2377-3766}, doi = {10.1109/LRA.2023.3346760}, codeurl = {https://github.com/PRBonn/PSPA} }
2023
- R. Marcuzzi, L. Nunes, L. Wiesmann, E. Marks, J. Behley, and C. Stachniss, “Mask4D: End-to-End Mask-Based 4D Panoptic Segmentation for LiDAR Sequences,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 11, pp. 7487-7494, 2023. doi:10.1109/LRA.2023.3320020
[BibTeX] [PDF] [Code] [Video]@article{marcuzzi2023ral-meem, author = {R. Marcuzzi and L. Nunes and L. Wiesmann and E. Marks and J. Behley and C. Stachniss}, title = {{Mask4D: End-to-End Mask-Based 4D Panoptic Segmentation for LiDAR Sequences}}, journal = ral, year = {2023}, volume = {8}, number = {11}, pages = {7487-7494}, issn = {2377-3766}, doi = {10.1109/LRA.2023.3320020}, codeurl = {https://github.com/PRBonn/Mask4D}, videourl = {https://youtu.be/4WqK_gZlpfA}, }
- G. Roggiolani, F. Magistri, T. Guadagnino, J. Behley, and C. Stachniss, “Unsupervised Pre-Training for 3D Leaf Instance Segmentation,” IEEE Robotics and Automation Letters (RA-L), vol. 8, pp. 7448-7455, 2023. doi:10.1109/LRA.2023.3320018
[BibTeX] [PDF] [Code] [Video]@article{roggiolani2023ral, author = {G. Roggiolani and F. Magistri and T. Guadagnino and J. Behley and C. Stachniss}, title = {{Unsupervised Pre-Training for 3D Leaf Instance Segmentation}}, journal = ral, year = {2023}, volume = {8}, issue = {11}, codeurl = {https://github.com/PRBonn/Unsupervised-Pre-Training-for-3D-Leaf-Instance-Segmentation}, pages = {7448-7455}, doi = {10.1109/LRA.2023.3320018}, issn = {2377-3766}, videourl = {https://youtu.be/PbYVPPwVeKg}, }
- F. Magistri, J. Weyler, D. Gogoll, P. Lottes, J. Behley, N. Petrinic, and C. Stachniss, “From one Field to Another – Unsupervised Domain Adaptation for Semantic Segmentation in Agricultural Robotics,” Computers and Electronics in Agriculture, vol. 212, p. 108114, 2023. doi:10.1016/j.compag.2023.108114
[BibTeX] [PDF]@article{magistri2023cea, author = {F. Magistri and J. Weyler and D. Gogoll and P. Lottes and J. Behley and N. Petrinic and C. Stachniss}, title = {From one Field to Another – Unsupervised Domain Adaptation for Semantic Segmentation in Agricultural Robotics}, journal = cea, year = {2023}, volume = {212}, pages = {108114}, doi = {https://doi.org/10.1016/j.compag.2023.108114}, }
- B. Mersch, T. Guadagnino, X. Chen, I. Vizzo, J. Behley, and C. Stachniss, “Building Volumetric Beliefs for Dynamic Environments Exploiting Map-Based Moving Object Segmentation,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 8, pp. 5180-5187, 2023. doi:10.1109/LRA.2023.3292583
[BibTeX] [PDF] [Code] [Video]@article{mersch2023ral, author = {B. Mersch and T. Guadagnino and X. Chen and I. Vizzo and J. Behley and C. Stachniss}, title = {{Building Volumetric Beliefs for Dynamic Environments Exploiting Map-Based Moving Object Segmentation}}, journal = ral, volume = {8}, number = {8}, pages = {5180-5187}, year = 2023, issn = {2377-3766}, doi = {10.1109/LRA.2023.3292583}, videourl = {https://youtu.be/aeXhvkwtDbI}, codeurl = {https://github.com/PRBonn/MapMOS}, }
- Y. L. Chong, J. Weyler, P. Lottes, J. Behley, and C. Stachniss, “Unsupervised Generation of Labeled Training Images for Crop-Weed Segmentation in New Fields and on Different Robotic Platforms,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 8, pp. 5259–5266, 2023. doi:10.1109/LRA.2023.3293356
[BibTeX] [PDF] [Code] [Video]@article{chong2023ral, author = {Y.L. Chong and J. Weyler and P. Lottes and J. Behley and C. Stachniss}, title = {{Unsupervised Generation of Labeled Training Images for Crop-Weed Segmentation in New Fields and on Different Robotic Platforms}}, journal = ral, volume = {8}, number = {8}, pages = {5259--5266}, year = 2023, issn = {2377-3766}, doi = {10.1109/LRA.2023.3293356}, videourl = {https://youtu.be/SpvrR9sgf2k}, codeurl = {https://github.com/PRBonn/StyleGenForLabels} }
- Y. Pan, F. Magistri, T. Läbe, E. Marks, C. Smitt, C. S. McCool, J. Behley, and C. Stachniss, “Panoptic Mapping with Fruit Completion and Pose Estimation for Horticultural Robots,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2023.
[BibTeX] [PDF] [Code] [Video]@inproceedings{pan2023iros, author = {Y. Pan and F. Magistri and T. L\"abe and E. Marks and C. Smitt and C.S. McCool and J. Behley and C. Stachniss}, title = {{Panoptic Mapping with Fruit Completion and Pose Estimation for Horticultural Robots}}, booktitle = iros, year = 2023, codeurl = {https://github.com/PRBonn/HortiMapping}, videourl = {https://youtu.be/fSyHBhskjqA} }
- N. Zimmerman, M. Sodano, E. Marks, J. Behley, and C. Stachniss, “Constructing Metric-Semantic Maps using Floor Plan Priors for Long-Term Indoor Localization,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2023.
[BibTeX] [PDF] [Code] [Video]@inproceedings{zimmerman2023iros, author = {N. Zimmerman and M. Sodano and E. Marks and J. Behley and C. Stachniss}, title = {{Constructing Metric-Semantic Maps using Floor Plan Priors for Long-Term Indoor Localization}}, booktitle = iros, year = 2023, codeurl = {https://github.com/PRBonn/SIMP}, videourl = {https://youtu.be/9ZGd5lJbG4s} }
- J. Weyler, F. Magistri, E. Marks, Y. L. Chong, M. Sodano, G. Roggiolani, N. Chebrolu, C. Stachniss, and J. Behley, “PhenoBench – A Large Dataset and Benchmarks for Semantic Image Interpretation in the Agricultural Domain,” arXiv Preprint, vol. arXiv:2306.04557, 2023.
[BibTeX] [PDF] [Code]@article{weyler2023arxiv, author = {Jan Weyler and Federico Magistri and Elias Marks and Yue Linn Chong and Matteo Sodano and Gianmarco Roggiolani and Nived Chebrolu and Cyrill Stachniss and Jens Behley}, title = {{PhenoBench --- A Large Dataset and Benchmarks for Semantic Image Interpretation in the Agricultural Domain}}, journal = {arXiv preprint}, volume = {arXiv:2306.04557}, year = {2023}, codeurl = {https://github.com/PRBonn/phenobench} }
- L. Wiesmann, T. Guadagnino, I. Vizzo, N. Zimmerman, Y. Pan, H. Kuang, J. Behley, and C. Stachniss, “LocNDF: Neural Distance Field Mapping for Robot Localization,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 8, pp. 4999–5006, 2023. doi:10.1109/LRA.2023.3291274
[BibTeX] [PDF] [Code] [Video]@article{wiesmann2023ral-icra, author = {L. Wiesmann and T. Guadagnino and I. Vizzo and N. Zimmerman and Y. Pan and H. Kuang and J. Behley and C. Stachniss}, title = {{LocNDF: Neural Distance Field Mapping for Robot Localization}}, journal = ral, volume = {8}, number = {8}, pages = {4999--5006}, year = 2023, url = {https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/wiesmann2023ral-icra.pdf}, issn = {2377-3766}, doi = {10.1109/LRA.2023.3291274}, codeurl = {https://github.com/PRBonn/LocNDF}, videourl = {https://youtu.be/-0idH21BpMI}, }
- E. Marks, M. Sodano, F. Magistri, L. Wiesmann, D. Desai, R. Marcuzzi, J. Behley, and C. Stachniss, “High Precision Leaf Instance Segmentation in Point Clouds Obtained Under Real Field Conditions,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 8, pp. 4791-4798, 2023. doi:10.1109/LRA.2023.3288383
[BibTeX] [PDF] [Code] [Video]@article{marks2023ral, author = {E. Marks and M. Sodano and F. Magistri and L. Wiesmann and D. Desai and R. Marcuzzi and J. Behley and C. Stachniss}, title = {{High Precision Leaf Instance Segmentation in Point Clouds Obtained Under Real Field Conditions}}, journal = ral, pages = {4791-4798}, volume = {8}, number = {8}, issn = {2377-3766}, year = {2023}, doi = {10.1109/LRA.2023.3288383}, codeurl = {https://github.com/PRBonn/plant_pcd_segmenter}, videourl = {https://youtu.be/dvA1SvQ4iEY} }
- H. Lim, L. Nunes, B. Mersch, X. Chen, J. Behley, H. Myung, and C. Stachniss, “ERASOR2: Instance-Aware Robust 3D Mapping of the Static World in Dynamic Scenes,” in Proc. of Robotics: Science and Systems (RSS), 2023.
[BibTeX] [PDF]@inproceedings{lim2023rss, author = {H. Lim and L. Nunes and B. Mersch and X. Chen and J. Behley and H. Myung and C. Stachniss}, title = {{ERASOR2: Instance-Aware Robust 3D Mapping of the Static World in Dynamic Scenes}}, booktitle = rss, year = 2023, }
- J. Weyler, T. Läbe, F. Magistri, J. Behley, and C. Stachniss, “Towards Domain Generalization in Crop and Weed Segmentation for Precision Farming Robots,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 6, pp. 3310-3317, 2023. doi:10.1109/LRA.2023.3262417
[BibTeX] [PDF] [Code]@article{weyler2023ral, author = {J. Weyler and T. L\"abe and F. Magistri and J. Behley and C. Stachniss}, title = {{Towards Domain Generalization in Crop and Weed Segmentation for Precision Farming Robots}}, journal = ral, pages = {3310-3317}, volume = 8, number = 6, issn = {2377-3766}, year = {2023}, doi = {10.1109/LRA.2023.3262417}, codeurl = {https://github.com/PRBonn/DG-CWS}, }
- L. Nunes, L. Wiesmann, R. Marcuzzi, X. Chen, J. Behley, and C. Stachniss, “Temporal Consistent 3D LiDAR Representation Learning for Semantic Perception in Autonomous Driving,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), 2023.
[BibTeX] [PDF] [Code] [Video]@inproceedings{nunes2023cvpr, author = {L. Nunes and L. Wiesmann and R. Marcuzzi and X. Chen and J. Behley and C. Stachniss}, title = {{Temporal Consistent 3D LiDAR Representation Learning for Semantic Perception in Autonomous Driving}}, booktitle = cvpr, year = 2023, codeurl = {https://github.com/PRBonn/TARL}, videourl = {https://youtu.be/0CtDbwRYLeo}, }
- H. Kuang, X. Chen, T. Guadagnino, N. Zimmerman, J. Behley, and C. Stachniss, “IR-MCL: Implicit Representation-Based Online Global Localization,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 3, pp. 1627–1634, 2023. doi:10.1109/LRA.2023.3239318
[BibTeX] [PDF] [Code]@article{kuang2023ral, author = {Kuang, Haofei and Chen, Xieyuanli and Guadagnino, Tiziano and Zimmerman, Nicky and Behley, Jens and Stachniss, Cyrill}, title = {{IR-MCL: Implicit Representation-Based Online Global Localization}}, journal = ral, volume = {8}, number = {3}, pages = {1627--1634}, doi = {10.1109/LRA.2023.3239318}, year = {2023}, codeurl = {https://github.com/PRBonn/ir-mcl}, }
- X. Zhong, Y. Pan, J. Behley, and C. Stachniss, “SHINE-Mapping: Large-Scale 3D Mapping Using Sparse Hierarchical Implicit Neural Representations,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2023.
[BibTeX] [PDF] [Code] [Video]@inproceedings{zhong2023icra, author = {Zhong, Xingguang and Pan, Yue and Behley, Jens and Stachniss, Cyrill}, title = {{SHINE-Mapping: Large-Scale 3D Mapping Using Sparse Hierarchical Implicit Neural Representations}}, booktitle = icra, year = 2023, codeurl = {https://github.com/PRBonn/SHINE_mapping}, videourl = {https://youtu.be/jRqIupJgQZE}, }
- M. Sodano, F. Magistri, T. Guadagnino, J. Behley, and C. Stachniss, “Robust Double-Encoder Network for RGB-D Panoptic Segmentation,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2023.
[BibTeX] [PDF] [Code] [Video]@inproceedings{sodano2023icra, author = {Matteo Sodano and Federico Magistri and Tiziano Guadagnino and Jens Behley and Cyrill Stachniss}, title = {{Robust Double-Encoder Network for RGB-D Panoptic Segmentation}}, booktitle = icra, year = 2023, codeurl = {https://github.com/PRBonn/PS-res-excite}, videourl = {https://youtu.be/r1pabV3sQYk} }
- A. Riccardi, S. Kelly, E. Marks, F. Magistri, T. Guadagnino, J. Behley, M. Bennewitz, and C. Stachniss, “Fruit Tracking Over Time Using High-Precision Point Clouds,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2023.
[BibTeX] [PDF] [Video]@inproceedings{riccardi2023icra, author = {Alessandro Riccardi and Shane Kelly and Elias Marks and Federico Magistri and Tiziano Guadagnino and Jens Behley and Maren Bennewitz and Cyrill Stachniss}, title = {{Fruit Tracking Over Time Using High-Precision Point Clouds}}, booktitle = icra, year = 2023, videourl = {https://youtu.be/fBGSd0--PXY} }
- G. Roggiolani, M. Sodano, F. Magistri, T. Guadagnino, J. Behley, and C. Stachniss, “Hierarchical Approach for Joint Semantic, Plant Instance, and Leaf Instance Segmentation in the Agricultural Domain,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2023.
[BibTeX] [PDF] [Code] [Video]@inproceedings{roggiolani2023icra-hajs, author = {G. Roggiolani and M. Sodano and F. Magistri and T. Guadagnino and J. Behley and C. Stachniss}, title = {{Hierarchical Approach for Joint Semantic, Plant Instance, and Leaf Instance Segmentation in the Agricultural Domain}}, booktitle = icra, year = {2023}, codeurl = {https://github.com/PRBonn/HAPT}, videourl = {https://youtu.be/miuOJjxlJic} }
- G. Roggiolani, F. Magistri, T. Guadagnino, J. Weyler, G. Grisetti, C. Stachniss, and J. Behley, “On Domain-Specific Pre-Training for Effective Semantic Perception in Agricultural Robotics,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2023.
[BibTeX] [PDF] [Code] [Video]@inproceedings{roggiolani2023icra-odsp, author = {G. Roggiolani and F. Magistri and T. Guadagnino and J. Weyler and G. Grisetti and C. Stachniss and J. Behley}, title = {{On Domain-Specific Pre-Training for Effective Semantic Perception in Agricultural Robotics}}, booktitle = icra, year = 2023, codeurl= {https://github.com/PRBonn/agri-pretraining}, videourl = {https://youtu.be/FDWY_UnfsBs} }
- M. Zeller, V. S. Sandhu, B. Mersch, J. Behley, M. Heidingsfeld, and C. Stachniss, “Radar Velocity Transformer: Single-scan Moving Object Segmentation in Noisy Radar Point Clouds,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2023.
[BibTeX] [PDF] [Video]@inproceedings{zeller2023icra, author = {M. Zeller and V.S. Sandhu and B. Mersch and J. Behley and M. Heidingsfeld and C. Stachniss}, title = {{Radar Velocity Transformer: Single-scan Moving Object Segmentation in Noisy Radar Point Clouds}}, booktitle = icra, year = 2023, videourl = {https://youtu.be/dTDgzWIBgpE} }
- I. Vizzo, T. Guadagnino, B. Mersch, L. Wiesmann, J. Behley, and C. Stachniss, “KISS-ICP: In Defense of Point-to-Point ICP – Simple, Accurate, and Robust Registration If Done the Right Way,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 2, pp. 1-8, 2023. doi:10.1109/LRA.2023.3236571
[BibTeX] [PDF] [Code] [Video]@article{vizzo2023ral, author = {Vizzo, Ignacio and Guadagnino, Tiziano and Mersch, Benedikt and Wiesmann, Louis and Behley, Jens and Stachniss, Cyrill}, title = {{KISS-ICP: In Defense of Point-to-Point ICP -- Simple, Accurate, and Robust Registration If Done the Right Way}}, journal = ral, pages = {1-8}, doi = {10.1109/LRA.2023.3236571}, volume = {8}, number = {2}, year = {2023}, codeurl = {https://github.com/PRBonn/kiss-icp}, videourl = {https://youtu.be/h71aGiD-uxU} }
- R. Marcuzzi, L. Nunes, L. Wiesmann, J. Behley, and C. Stachniss, “Mask-Based Panoptic LiDAR Segmentation for Autonomous Driving,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 2, pp. 1141–1148, 2023. doi:10.1109/LRA.2023.3236568
[BibTeX] [PDF] [Code] [Video]@article{marcuzzi2023ral, author = {R. Marcuzzi and L. Nunes and L. Wiesmann and J. Behley and C. Stachniss}, title = {{Mask-Based Panoptic LiDAR Segmentation for Autonomous Driving}}, journal = ral, volume = {8}, number = {2}, pages = {1141--1148}, year = 2023, doi = {10.1109/LRA.2023.3236568}, videourl = {https://youtu.be/I8G9VKpZux8}, codeurl = {https://github.com/PRBonn/MaskPLS}, }
- L. Wiesmann, L. Nunes, J. Behley, and C. Stachniss, “KPPR: Exploiting Momentum Contrast for Point Cloud-Based Place Recognition,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 2, pp. 592-599, 2023. doi:10.1109/LRA.2022.3228174
[BibTeX] [PDF] [Code] [Video]@article{wiesmann2023ral, author = {L. Wiesmann and L. Nunes and J. Behley and C. Stachniss}, title = {{KPPR: Exploiting Momentum Contrast for Point Cloud-Based Place Recognition}}, journal = ral, volume = {8}, number = {2}, pages = {592-599}, year = 2023, issn = {2377-3766}, doi = {10.1109/LRA.2022.3228174}, codeurl = {https://github.com/PRBonn/kppr}, videourl = {https://youtu.be/bICz1sqd8Xs} }
- Y. Wu, J. Kuang, X. Niu, J. Behley, L. Klingbeil, and H. Kuhlmann, “Wheel-SLAM: Simultaneous Localization and Terrain Mapping Using One Wheel-mounted IMU,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 1, pp. 280-287, 2023. doi:10.1109/LRA.2022.3226071
[BibTeX] [PDF] [Code]@article{wu2023ral, author = {Y. Wu and J. Kuang and X. Niu and J. Behley and L. Klingbeil and H. Kuhlmann}, title = {{Wheel-SLAM: Simultaneous Localization and Terrain Mapping Using One Wheel-mounted IMU}}, journal = ral, volume = {8}, number = {1}, pages = {280--287}, year = 2023, doi = {10.1109/LRA.2022.3226071}, codeurl = {https://github.com/i2Nav-WHU/Wheel-SLAM} }
- M. Zeller, J. Behley, M. Heidingsfeld, and C. Stachniss, “Gaussian Radar Transformer for Semantic Segmentation in Noisy Radar Data,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 1, pp. 344-351, 2023. doi:10.1109/LRA.2022.3226030
[BibTeX] [PDF] [Video]@article{zeller2023ral, author = {M. Zeller and J. Behley and M. Heidingsfeld and C. Stachniss}, title = {{Gaussian Radar Transformer for Semantic Segmentation in Noisy Radar Data}}, journal = ral, volume = {8}, number = {1}, pages = {344--351}, year = 2023, doi = {10.1109/LRA.2022.3226030}, videourl = {https://youtu.be/uNlNkYoG-tA} }
- N. Zimmerman, T. Guadagnino, X. Chen, J. Behley, and C. Stachniss, “Long-Term Localization using Semantic Cues in Floor Plan Maps,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 1, pp. 176-183, 2023. doi:10.1109/LRA.2022.3223556
[BibTeX] [PDF] [Code]@article{zimmerman2023ral, author = {N. Zimmerman and T. Guadagnino and X. Chen and J. Behley and C. Stachniss}, title = {{Long-Term Localization using Semantic Cues in Floor Plan Maps}}, journal = ral, year = {2023}, volume = {8}, number = {1}, pages = {176-183}, issn = {2377-3766}, doi = {10.1109/LRA.2022.3223556}, codeurl = {https://github.com/PRBonn/hsmcl} }
- H. Müller, N. Zimmerman, T. Polonelli, M. Magno, J. Behley, C. Stachniss, and L. Benini, “Fully On-board Low-Power Localization with Multizone Time-of-Flight Sensors on Nano-UAVs,” in Proc. of Design, Automation & Test in Europe Conference & Exhibition (DATE), 2023.
[BibTeX] [PDF]@inproceedings{mueller2023date, title = {{Fully On-board Low-Power Localization with Multizone Time-of-Flight Sensors on Nano-UAVs}}, author = {H. M{\"u}ller and N. Zimmerman and T. Polonelli and M. Magno and J. Behley and C. Stachniss and L. Benini}, booktitle = {Proc. of Design, Automation \& Test in Europe Conference \& Exhibition (DATE)}, year = {2023}, }
2022
- N. Zimmerman, L. Wiesmann, T. Guadagnino, T. Läbe, J. Behley, and C. Stachniss, “Robust Onboard Localization in Changing Environments Exploiting Text Spotting,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2022.
[BibTeX] [PDF] [Code]@inproceedings{zimmerman2022iros, title = {{Robust Onboard Localization in Changing Environments Exploiting Text Spotting}}, author = {N. Zimmerman and L. Wiesmann and T. Guadagnino and T. Läbe and J. Behley and C. Stachniss}, booktitle = iros, year = {2022}, codeurl = {https://github.com/PRBonn/tmcl}, }
- F. Magistri, E. Marks, S. Nagulavancha, I. Vizzo, T. Läbe, J. Behley, M. Halstead, C. McCool, and C. Stachniss, “Contrastive 3D Shape Completion and Reconstruction for Agricultural Robots using RGB-D Frames,” IEEE Robotics and Automation Letters (RA-L), vol. 7, iss. 4, pp. 10120-10127, 2022.
[BibTeX] [PDF] [Video]@article{magistri2022ral-iros, author = {Federico Magistri and Elias Marks and Sumanth Nagulavancha and Ignacio Vizzo and Thomas L{\"a}be and Jens Behley and Michael Halstead and Chris McCool and Cyrill Stachniss}, title = {Contrastive 3D Shape Completion and Reconstruction for Agricultural Robots using RGB-D Frames}, journal = ral, url = {https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/magistri2022ral-iros.pdf}, year = {2022}, volume={7}, number={4}, pages={10120-10127}, videourl = {https://www.youtube.com/watch?v=2ErUf9q7YOI}, }
- I. Vizzo, B. Mersch, R. Marcuzzi, L. Wiesmann, J. Behley, and C. Stachniss, “Make it Dense: Self-Supervised Geometric Scan Completion of Sparse 3D LiDAR Scans in Large Outdoor Environments,” IEEE Robotics and Automation Letters (RA-L), vol. 7, iss. 3, pp. 8534-8541, 2022. doi:10.1109/LRA.2022.3187255
[BibTeX] [PDF] [Code] [Video]@article{vizzo2022ral, author = {I. Vizzo and B. Mersch and R. Marcuzzi and L. Wiesmann and J. Behley and C. Stachniss}, title = {Make it Dense: Self-Supervised Geometric Scan Completion of Sparse 3D LiDAR Scans in Large Outdoor Environments}, journal = ral, url = {https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/vizzo2022ral-iros.pdf}, codeurl = {https://github.com/PRBonn/make_it_dense}, year = {2022}, volume = {7}, number = {3}, pages = {8534-8541}, doi = {10.1109/LRA.2022.3187255}, videourl = {https://youtu.be/NVjURcArHn8}, }
- L. Nunes, X. Chen, R. Marcuzzi, A. Osep, L. Leal-Taixé, C. Stachniss, and J. Behley, “Unsupervised Class-Agnostic Instance Segmentation of 3D LiDAR Data for Autonomous Vehicles,” IEEE Robotics and Automation Letters (RA-L), 2022. doi:10.1109/LRA.2022.3187872
[BibTeX] [PDF] [Code] [Video]@article{nunes2022ral-3duis, author = {Lucas Nunes and Xieyuanli Chen and Rodrigo Marcuzzi and Aljosa Osep and Laura Leal-Taixé and Cyrill Stachniss and Jens Behley}, title = {{Unsupervised Class-Agnostic Instance Segmentation of 3D LiDAR Data for Autonomous Vehicles}}, journal = ral, url = {https://www.ipb.uni-bonn.de/pdfs/nunes2022ral-iros.pdf}, codeurl = {https://github.com/PRBonn/3DUIS}, videourl= {https://youtu.be/cgv0wUaqLAE}, doi = {10.1109/LRA.2022.3187872}, year = 2022 }
- B. Mersch, X. Chen, I. Vizzo, L. Nunes, J. Behley, and C. Stachniss, “Receding Moving Object Segmentation in 3D LiDAR Data Using Sparse 4D Convolutions,” IEEE Robotics and Automation Letters (RA-L), vol. 7, iss. 3, pp. 7503-7510, 2022. doi:10.1109/LRA.2022.3183245
[BibTeX] [PDF] [Code] [Video]@article{mersch2022ral, author = {B. Mersch and X. Chen and I. Vizzo and L. Nunes and J. Behley and C. Stachniss}, title = {{Receding Moving Object Segmentation in 3D LiDAR Data Using Sparse 4D Convolutions}}, journal = ral, year = 2022, url = {https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/mersch2022ral.pdf}, volume = {7}, number = {3}, pages = {7503--7510}, doi = {10.1109/LRA.2022.3183245}, codeurl = {https://github.com/PRBonn/4DMOS}, videourl = {https://youtu.be/5aWew6caPNQ}, }
- T. Guadagnino, X. Chen, M. Sodano, J. Behley, G. Grisetti, and C. Stachniss, “Fast Sparse LiDAR Odometry Using Self-Supervised Feature Selection on Intensity Images,” IEEE Robotics and Automation Letters (RA-L), vol. 7, iss. 3, pp. 7597-7604, 2022. doi:10.1109/LRA.2022.3184454
[BibTeX] [PDF]@article{guadagnino2022ral, author = {T. Guadagnino and X. Chen and M. Sodano and J. Behley and G. Grisetti and C. Stachniss}, title = {{Fast Sparse LiDAR Odometry Using Self-Supervised Feature Selection on Intensity Images}}, journal = ral, year = 2022, volume = {7}, number = {3}, pages = {7597-7604}, url = {https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/guadagnino2022ral-iros.pdf}, issn = {2377-3766}, doi = {10.1109/LRA.2022.3184454} }
- L. Wiesmann, T. Guadagnino, I. Vizzo, G. Grisetti, J. Behley, and C. Stachniss, “DCPCR: Deep Compressed Point Cloud Registration in Large-Scale Outdoor Environments,” IEEE Robotics and Automation Letters (RA-L), vol. 7, iss. 3, pp. 6327-6334, 2022. doi:10.1109/LRA.2022.3171068
[BibTeX] [PDF] [Code] [Video]@article{wiesmann2022ral-iros, author = {L. Wiesmann and T. Guadagnino and I. Vizzo and G. Grisetti and J. Behley and C. Stachniss}, title = {{DCPCR: Deep Compressed Point Cloud Registration in Large-Scale Outdoor Environments}}, journal = ral, year = 2022, volume = 7, number = 3, pages = {6327-6334}, issn = {2377-3766}, doi = {10.1109/LRA.2022.3171068}, codeurl = {https://github.com/PRBonn/DCPCR}, videourl = {https://youtu.be/RqLr2RTGy1s}, }
- X. Chen, B. Mersch, L. Nunes, R. Marcuzzi, I. Vizzo, J. Behley, and C. Stachniss, “Automatic Labeling to Generate Training Data for Online LiDAR-Based Moving Object Segmentation,” IEEE Robotics and Automation Letters (RA-L), vol. 7, iss. 3, pp. 6107-6114, 2022. doi:10.1109/LRA.2022.3166544
[BibTeX] [PDF] [Code] [Video]@article{chen2022ral, author = {X. Chen and B. Mersch and L. Nunes and R. Marcuzzi and I. Vizzo and J. Behley and C. Stachniss}, title = {{Automatic Labeling to Generate Training Data for Online LiDAR-Based Moving Object Segmentation}}, journal = ral, year = 2022, volume = 7, number = 3, pages = {6107-6114}, url = {https://arxiv.org/pdf/2201.04501}, issn = {2377-3766}, doi = {10.1109/LRA.2022.3166544}, codeurl = {https://github.com/PRBonn/auto-mos}, videourl = {https://youtu.be/3V5RA1udL4c}, }
- I. Vizzo, T. Guadagnino, J. Behley, and C. Stachniss, “VDBFusion: Flexible and Efficient TSDF Integration of Range Sensor Data,” Sensors, vol. 22, iss. 3, 2022. doi:10.3390/s22031296
[BibTeX] [PDF] [Code]@article{vizzo2022sensors, author = {Vizzo, I. and Guadagnino, T. and Behley, J. and Stachniss, C.}, title = {VDBFusion: Flexible and Efficient TSDF Integration of Range Sensor Data}, journal = {Sensors}, volume = {22}, year = {2022}, number = {3}, article-number = {1296}, url = {https://www.mdpi.com/1424-8220/22/3/1296}, issn = {1424-8220}, doi = {10.3390/s22031296}, codeurl = {https://github.com/PRBonn/vdbfusion}, }
- L. Wiesmann, R. Marcuzzi, C. Stachniss, and J. Behley, “Retriever: Point Cloud Retrieval in Compressed 3D Maps,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2022.
[BibTeX] [PDF]@inproceedings{wiesmann2022icra, author = {L. Wiesmann and R. Marcuzzi and C. Stachniss and J. Behley}, title = {{Retriever: Point Cloud Retrieval in Compressed 3D Maps}}, booktitle = icra, year = 2022, }
- J. Weyler, J. Quakernack, P. Lottes, J. Behley, and C. Stachniss, “Joint Plant and Leaf Instance Segmentation on Field-Scale UAV Imagery,” IEEE Robotics and Automation Letters (RA-L), vol. 7, iss. 2, pp. 3787-3794, 2022. doi:10.1109/LRA.2022.3147462
[BibTeX] [PDF]@article{weyler2022ral, author = {J. Weyler and J. Quakernack and P. Lottes and J. Behley and C. Stachniss}, title = {{Joint Plant and Leaf Instance Segmentation on Field-Scale UAV Imagery}}, journal = ral, year = 2022, doi = {10.1109/LRA.2022.3147462}, issn = {2377-3766}, volume = {7}, number = {2}, pages = {3787-3794}, }
- L. Nunes, R. Marcuzzi, X. Chen, J. Behley, and C. Stachniss, “SegContrast: 3D Point Cloud Feature Representation Learning through Self-supervised Segment Discrimination,” IEEE Robotics and Automation Letters (RA-L), vol. 7, iss. 2, pp. 2116-2123, 2022. doi:10.1109/LRA.2022.3142440
[BibTeX] [PDF] [Code] [Video]@article{nunes2022ral, author = {L. Nunes and R. Marcuzzi and X. Chen and J. Behley and C. Stachniss}, title = {{SegContrast: 3D Point Cloud Feature Representation Learning through Self-supervised Segment Discrimination}}, journal = ral, year = 2022, doi = {10.1109/LRA.2022.3142440}, issn = {2377-3766}, volume = {7}, number = {2}, pages = {2116-2123}, url = {https://www.ipb.uni-bonn.de/pdfs/nunes2022ral-icra.pdf}, codeurl = {https://github.com/PRBonn/segcontrast}, videourl = {https://youtu.be/kotRb_ySnIw}, }
- R. Marcuzzi, L. Nunes, L. Wiesmann, I. Vizzo, J. Behley, and C. Stachniss, “Contrastive Instance Association for 4D Panoptic Segmentation using Sequences of 3D LiDAR Scans,” IEEE Robotics and Automation Letters (RA-L), vol. 7, iss. 2, pp. 1550-1557, 2022. doi:10.1109/LRA.2022.3140439
[BibTeX] [PDF]@article{marcuzzi2022ral, author = {R. Marcuzzi and L. Nunes and L. Wiesmann and I. Vizzo and J. Behley and C. Stachniss}, title = {{Contrastive Instance Association for 4D Panoptic Segmentation using Sequences of 3D LiDAR Scans}}, journal = ral, year = 2022, doi = {10.1109/LRA.2022.3140439}, issn = {2377-3766}, volume = 7, number = 2, pages = {1550-1557}, }
- J. Weyler, F. Magistri, P. Seitz, J. Behley, and C. Stachniss, “In-Field Phenotyping Based on Crop Leaf and Plant Instance Segmentation,” in Proc. of the Winter Conf. on Applications of Computer Vision (WACV), 2022.
[BibTeX] [PDF]@inproceedings{weyler2022wacv, author = {J. Weyler and F. Magistri and P. Seitz and J. Behley and C. Stachniss}, title = {{In-Field Phenotyping Based on Crop Leaf and Plant Instance Segmentation}}, booktitle = wacv, year = 2022, }
2021
- B. Mersch, X. Chen, J. Behley, and C. Stachniss, “Self-supervised Point Cloud Prediction Using 3D Spatio-temporal Convolutional Networks,” in Proc. of the Conf. on Robot Learning (CoRL), 2021.
[BibTeX] [PDF] [Code] [Video]@InProceedings{mersch2021corl, author = {B. Mersch and X. Chen and J. Behley and C. Stachniss}, title = {{Self-supervised Point Cloud Prediction Using 3D Spatio-temporal Convolutional Networks}}, booktitle = corl, year = {2021}, url = {https://www.ipb.uni-bonn.de/pdfs/mersch2021corl.pdf}, codeurl = {https://github.com/PRBonn/point-cloud-prediction}, videourl = {https://youtu.be/-pSZpPgFAso}, }
- J. Behley, M. Garbade, A. Milioto, J. Quenzel, S. Behnke, J. Gall, and C. Stachniss, “Towards 3D LiDAR-based semantic scene understanding of 3D point cloud sequences: The SemanticKITTI Dataset,” Intl. Journal of Robotics Research (IJRR), vol. 40, iss. 8-9, pp. 959-967, 2021. doi:10.1177/02783649211006735
[BibTeX] [PDF]@article{behley2021ijrr, author = {J. Behley and M. Garbade and A. Milioto and J. Quenzel and S. Behnke and J. Gall and C. Stachniss}, title = {Towards 3D LiDAR-based semantic scene understanding of 3D point cloud sequences: The SemanticKITTI Dataset}, journal = ijrr, volume = {40}, number = {8-9}, pages = {959-967}, year = {2021}, doi = {10.1177/02783649211006735}, url = {https://www.ipb.uni-bonn.de/pdfs/behley2021ijrr.pdf} }
- X. Chen, T. Läbe, A. Milioto, T. Röhling, J. Behley, and C. Stachniss, “OverlapNet: A Siamese Network for Computing LiDAR Scan Similarity with Applications to Loop Closing and Localization,” Autonomous Robots, vol. 46, pp. 61-81, 2021. doi:10.1007/s10514-021-09999-0
[BibTeX] [PDF] [Code]@article{chen2021auro, author = {X. Chen and T. L\"abe and A. Milioto and T. R\"ohling and J. Behley and C. Stachniss}, title = {{OverlapNet: A Siamese Network for Computing LiDAR Scan Similarity with Applications to Loop Closing and Localization}}, journal = {Autonomous Robots}, year = {2021}, doi = {10.1007/s10514-021-09999-0}, issn = {1573-7527}, volume=46, pages={61--81}, codeurl = {https://github.com/PRBonn/OverlapNet}, url = {https://www.ipb.uni-bonn.de/pdfs/chen2021auro.pdf} }
- P. Rottmann, T. Posewsky, A. Milioto, C. Stachniss, and J. Behley, “Improving Monocular Depth Estimation by Semantic Pre-training,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2021.
[BibTeX] [PDF]@inproceedings{rottmann2021iros, title = {{Improving Monocular Depth Estimation by Semantic Pre-training}}, author = {P. Rottmann and T. Posewsky and A. Milioto and C. Stachniss and J. Behley}, booktitle = iros, year = {2021}, url = {https://www.ipb.uni-bonn.de/pdfs/rottmann2021iros.pdf} }
- X. Chen, S. Li, B. Mersch, L. Wiesmann, J. Gall, J. Behley, and C. Stachniss, “Moving Object Segmentation in 3D LiDAR Data: A Learning-based Approach Exploiting Sequential Data,” IEEE Robotics and Automation Letters (RA-L), vol. 6, pp. 6529-6536, 2021. doi:10.1109/LRA.2021.3093567
[BibTeX] [PDF] [Code] [Video]@article{chen2021ral, title={{Moving Object Segmentation in 3D LiDAR Data: A Learning-based Approach Exploiting Sequential Data}}, author={X. Chen and S. Li and B. Mersch and L. Wiesmann and J. Gall and J. Behley and C. Stachniss}, year={2021}, volume={6}, number={4}, pages={6529-6536}, journal=ral, url = {https://www.ipb.uni-bonn.de/pdfs/chen2021ral-iros.pdf}, codeurl = {https://github.com/PRBonn/LiDAR-MOS}, videourl = {https://youtu.be/NHvsYhk4dhw}, doi = {10.1109/LRA.2021.3093567}, issn = {2377-3766}, }
- M. Aygün, A. Osep, M. Weber, M. Maximov, C. Stachniss, J. Behley, and L. Leal-Taixé, “4D Panoptic Segmentation,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), 2021.
[BibTeX] [PDF]@inproceedings{ayguen2021cvpr, author = {M. Ayg\"un and A. Osep and M. Weber and M. Maximov and C. Stachniss and J. Behley and L. Leal-Taix{\'e}}, title = {{4D Panoptic Segmentation}}, booktitle = cvpr, year = 2021, url = {https://www.ipb.uni-bonn.de/pdfs/ayguen2021cvpr.pdf} }
- F. Magistri, N. Chebrolu, J. Behley, and C. Stachniss, “Towards In-Field Phenotyping Exploiting Differentiable Rendering with Self-Consistency Loss,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2021.
[BibTeX] [PDF] [Video]@inproceedings{magistri2021icra, author = {F. Magistri and N. Chebrolu and J. Behley and C. Stachniss}, title = {{Towards In-Field Phenotyping Exploiting Differentiable Rendering with Self-Consistency Loss}}, booktitle = icra, year = 2021, videourl = {https://youtu.be/MF2A4ihY2lE}, }
- I. Vizzo, X. Chen, N. Chebrolu, J. Behley, and C. Stachniss, “Poisson Surface Reconstruction for LiDAR Odometry and Mapping,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2021.
[BibTeX] [PDF] [Code] [Video]@inproceedings{vizzo2021icra, author = {I. Vizzo and X. Chen and N. Chebrolu and J. Behley and C. Stachniss}, title = {{Poisson Surface Reconstruction for LiDAR Odometry and Mapping}}, booktitle = icra, year = 2021, url = {https://www.ipb.uni-bonn.de/pdfs/vizzo2021icra.pdf}, codeurl = {https://github.com/PRBonn/puma}, videourl = {https://youtu.be/7yWtYWaO5Nk} }
- X. Chen, I. Vizzo, T. Läbe, J. Behley, and C. Stachniss, “Range Image-based LiDAR Localization for Autonomous Vehicles,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2021.
[BibTeX] [PDF] [Code] [Video]@inproceedings{chen2021icra, author = {X. Chen and I. Vizzo and T. L{\"a}be and J. Behley and C. Stachniss}, title = {{Range Image-based LiDAR Localization for Autonomous Vehicles}}, booktitle = icra, year = 2021, url = {https://www.ipb.uni-bonn.de/pdfs/chen2021icra.pdf}, codeurl = {https://github.com/PRBonn/range-mcl}, videourl = {https://youtu.be/hpOPXX9oPqI}, }
- J. Behley, A. Milioto, and C. Stachniss, “A Benchmark for LiDAR-based Panoptic Segmentation based on KITTI,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2021.
[BibTeX] [PDF]@inproceedings{behley2021icra, author = {J. Behley and A. Milioto and C. Stachniss}, title = {{A Benchmark for LiDAR-based Panoptic Segmentation based on KITTI}}, booktitle = icra, year = 2021, }
- N. Chebrolu, T. Läbe, O. Vysotska, J. Behley, and C. Stachniss, “Adaptive Robust Kernels for Non-Linear Least Squares Problems,” IEEE Robotics and Automation Letters (RA-L), vol. 6, pp. 2240-2247, 2021. doi:10.1109/LRA.2021.3061331
[BibTeX] [PDF] [Video]@article{chebrolu2021ral, author = {N. Chebrolu and T. L\"{a}be and O. Vysotska and J. Behley and C. Stachniss}, title = {{Adaptive Robust Kernels for Non-Linear Least Squares Problems}}, journal = ral, volume = {6}, number = {2}, pages = {2240-2247}, doi = {10.1109/LRA.2021.3061331}, year = 2021, videourl = {https://youtu.be/34Zp3ZX0Bnk} }
- J. Weyler, A. Milioto, T. Falck, J. Behley, and C. Stachniss, “Joint Plant Instance Detection and Leaf Count Estimation for In-Field Plant Phenotyping,” IEEE Robotics and Automation Letters (RA-L), vol. 6, pp. 3599-3606, 2021. doi:10.1109/LRA.2021.3060712
[BibTeX] [PDF] [Video]@article{weyler2021ral, author = {J. Weyler and A. Milioto and T. Falck and J. Behley and C. Stachniss}, title = {{Joint Plant Instance Detection and Leaf Count Estimation for In-Field Plant Phenotyping}}, journal = ral, volume = {6}, number = {2}, pages = {3599-3606}, doi = {10.1109/LRA.2021.3060712}, year = 2021, videourl = {https://youtu.be/Is18Rey625I}, }
- L. Wiesmann, A. Milioto, X. Chen, C. Stachniss, and J. Behley, “Deep Compression for Dense Point Cloud Maps,” IEEE Robotics and Automation Letters (RA-L), vol. 6, pp. 2060-2067, 2021. doi:10.1109/LRA.2021.3059633
[BibTeX] [PDF] [Code] [Video]@article{wiesmann2021ral, author = {L. Wiesmann and A. Milioto and X. Chen and C. Stachniss and J. Behley}, title = {{Deep Compression for Dense Point Cloud Maps}}, journal = ral, volume = {6}, number = {2}, pages = {2060-2067}, doi = {10.1109/LRA.2021.3059633}, year = 2021, url = {https://www.ipb.uni-bonn.de/pdfs/wiesmann2021ral.pdf}, codeurl = {https://github.com/PRBonn/deep-point-map-compression}, videourl = {https://youtu.be/fLl9lTlZrI0} }
2020
- A. Milioto, J. Behley, C. McCool, and C. Stachniss, “LiDAR Panoptic Segmentation for Autonomous Driving,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2020.
[BibTeX] [PDF] [Video]@inproceedings{milioto2020iros, author = {A. Milioto and J. Behley and C. McCool and C. Stachniss}, title = {{LiDAR Panoptic Segmentation for Autonomous Driving}}, booktitle = iros, year = {2020}, videourl = {https://www.youtube.com/watch?v=C9CTQSosr9I}, }
- X. Chen, T. Läbe, L. Nardi, J. Behley, and C. Stachniss, “Learning an Overlap-based Observation Model for 3D LiDAR Localization,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2020.
[BibTeX] [PDF] [Code] [Video]@inproceedings{chen2020iros, author = {X. Chen and T. L\"abe and L. Nardi and J. Behley and C. Stachniss}, title = {{Learning an Overlap-based Observation Model for 3D LiDAR Localization}}, booktitle = iros, year = {2020}, codeurl = {https://github.com/PRBonn/overlap_localization}, url={https://www.ipb.uni-bonn.de/pdfs/chen2020iros.pdf}, videourl = {https://www.youtube.com/watch?v=BozPqy_6YcE}, }
- F. Langer, A. Milioto, A. Haag, J. Behley, and C. Stachniss, “Domain Transfer for Semantic Segmentation of LiDAR Data using Deep Neural Networks,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2020.
[BibTeX] [PDF] [Code] [Video]@inproceedings{langer2020iros, author = {F. Langer and A. Milioto and A. Haag and J. Behley and C. Stachniss}, title = {{Domain Transfer for Semantic Segmentation of LiDAR Data using Deep Neural Networks}}, booktitle = iros, year = {2020}, url = {https://www.ipb.uni-bonn.de/pdfs/langer2020iros.pdf}, videourl = {https://youtu.be/6FNGF4hKBD0}, codeurl = {https://github.com/PRBonn/lidar_transfer}, }
- X. Chen, T. Läbe, A. Milioto, T. Röhling, O. Vysotska, A. Haag, J. Behley, and C. Stachniss, “OverlapNet: Loop Closing for LiDAR-based SLAM,” in Proc. of Robotics: Science and Systems (RSS), 2020.
[BibTeX] [PDF] [Code] [Video]@inproceedings{chen2020rss, author = {X. Chen and T. L\"abe and A. Milioto and T. R\"ohling and O. Vysotska and A. Haag and J. Behley and C. Stachniss}, title = {{OverlapNet: Loop Closing for LiDAR-based SLAM}}, booktitle = rss, year = {2020}, codeurl = {https://github.com/PRBonn/OverlapNet/}, videourl = {https://youtu.be/YTfliBco6aw}, }
- N. Chebrolu, T. Läbe, O. Vysotska, J. Behley, and C. Stachniss, “Adaptive Robust Kernels for Non-Linear Least Squares Problems,” arXiv Preprint, 2020.
[BibTeX] [PDF]@article{chebrolu2020arxiv, title={Adaptive Robust Kernels for Non-Linear Least Squares Problems}, author={N. Chebrolu and T. L\"abe and O. Vysotska and J. Behley and C. Stachniss}, journal = arxiv, year=2020, eprint={2004.14938}, keywords={cs.RO}, url={https://arxiv.org/pdf/2004.14938v2} }
- J. Behley, A. Milioto, and C. Stachniss, “A Benchmark for LiDAR-based Panoptic Segmentation based on KITTI,” arXiv Preprint, 2020.
[BibTeX] [PDF]Panoptic segmentation is the recently introduced task that tackles semantic segmentation and instance segmentation jointly. In this paper, we present an extension of SemanticKITTI, which is a large-scale dataset providing dense point-wise semantic labels for all sequences of the KITTI Odometry Benchmark, for training and evaluation of laser-based panoptic segmentation. We provide the data and discuss the processing steps needed to enrich a given semantic annotation with temporally consistent instance information, i.e., instance information that supplements the semantic labels and identifies the same instance over sequences of LiDAR point clouds. Additionally, we present two strong baselines that combine state-of-the-art LiDAR-based semantic segmentation approaches with a state-of-the-art detector enriching the segmentation with instance information and that allow other researchers to compare their approaches against. We hope that our extension of SemanticKITTI with strong baselines enables the creation of novel algorithms for LiDAR-based panoptic segmentation as much as it has for the original semantic segmentation and semantic scene completion tasks. Data, code, and an online evaluation using a hidden test set will be published on https://semantic-kitti.org.
@article{behley2020arxiv, author = {J. Behley and A. Milioto and C. Stachniss}, title = {{A Benchmark for LiDAR-based Panoptic Segmentation based on KITTI}}, journal = arxiv, year = 2020, eprint = {2003.02371v1}, url = {https://arxiv.org/pdf/2003.02371v1}, keywords = {cs.CV}, abstract = {Panoptic segmentation is the recently introduced task that tackles semantic segmentation and instance segmentation jointly. In this paper, we present an extension of SemanticKITTI, which is a large-scale dataset providing dense point-wise semantic labels for all sequences of the KITTI Odometry Benchmark, for training and evaluation of laser-based panoptic segmentation. We provide the data and discuss the processing steps needed to enrich a given semantic annotation with temporally consistent instance information, i.e., instance information that supplements the semantic labels and identifies the same instance over sequences of LiDAR point clouds. Additionally, we present two strong baselines that combine state-of-the-art LiDAR-based semantic segmentation approaches with a state-of-the-art detector enriching the segmentation with instance information and that allow other researchers to compare their approaches against. We hope that our extension of SemanticKITTI with strong baselines enables the creation of novel algorithms for LiDAR-based panoptic segmentation as much as it has for the original semantic segmentation and semantic scene completion tasks. Data, code, and an online evaluation using a hidden test set will be published on https://semantic-kitti.org.} }
- P. Lottes, J. Behley, N. Chebrolu, A. Milioto, and C. Stachniss, “Robust joint stem detection and crop-weed classification using image sequences for plant-specific treatment in precision farming,” Journal of Field Robotics (JFR), vol. 37, pp. 20-34, 2020. doi:10.1002/rob.21901
[BibTeX] [PDF]@Article{lottes2020jfr, title = {Robust joint stem detection and crop-weed classification using image sequences for plant-specific treatment in precision farming}, author = {Lottes, P. and Behley, J. and Chebrolu, N. and Milioto, A. and Stachniss, C.}, journal = jfr, volume = {37}, number = {1}, pages = {20-34}, year = {2020}, doi = {10.1002/rob.21901}, url = {https://www.ipb.uni-bonn.de/pdfs/lottes2019jfr.pdf}, }
2019
- J. Behley, M. Garbade, A. Milioto, J. Quenzel, S. Behnke, C. Stachniss, and J. Gall, “SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences,” in Proc. of the IEEE/CVF Intl. Conf. on Computer Vision (ICCV), 2019.
[BibTeX] [PDF] [Video]@InProceedings{behley2019iccv, author = {J. Behley and M. Garbade and A. Milioto and J. Quenzel and S. Behnke and C. Stachniss and J. Gall}, title = {{SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences}}, booktitle = iccv, year = {2019}, videourl = {https://www.ipb.uni-bonn.de/html/projects/semantic_kitti/videos/teaser.mp4}, }
- E. Palazzolo, J. Behley, P. Lottes, P. Giguère, and C. Stachniss, “ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2019.
[BibTeX] [PDF] [Code] [Video]@InProceedings{palazzolo2019iros, author = {E. Palazzolo and J. Behley and P. Lottes and P. Gigu\`ere and C. Stachniss}, title = {{ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals}}, booktitle = iros, year = {2019}, url = {https://www.ipb.uni-bonn.de/pdfs/palazzolo2019iros.pdf}, codeurl = {https://github.com/PRBonn/refusion}, videourl = {https://youtu.be/1P9ZfIS5-p4}, }
- X. Chen, A. Milioto, E. Palazzolo, P. Giguère, J. Behley, and C. Stachniss, “SuMa++: Efficient LiDAR-based Semantic SLAM,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2019.
[BibTeX] [PDF] [Code] [Video]@inproceedings{chen2019iros, author = {X. Chen and A. Milioto and E. Palazzolo and P. Gigu{\`e}re and J. Behley and C. Stachniss}, title = {{SuMa++: Efficient LiDAR-based Semantic SLAM}}, booktitle = iros, year = 2019, codeurl = {https://github.com/PRBonn/semantic_suma/}, videourl = {https://youtu.be/uo3ZuLuFAzk}, }
- A. Milioto, I. Vizzo, J. Behley, and C. Stachniss, “RangeNet++: Fast and Accurate LiDAR Semantic Segmentation,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2019.
[BibTeX] [PDF] [Code] [Video]@inproceedings{milioto2019iros, author = {A. Milioto and I. Vizzo and J. Behley and C. Stachniss}, title = {{RangeNet++: Fast and Accurate LiDAR Semantic Segmentation}}, booktitle = iros, year = 2019, codeurl = {https://github.com/PRBonn/lidar-bonnetal}, videourl = {https://youtu.be/wuokg7MFZyU}, }
2018
- P. Lottes, J. Behley, A. Milioto, and C. Stachniss, “Fully Convolutional Networks with Sequential Information for Robust Crop and Weed Detection in Precision Farming,” IEEE Robotics and Automation Letters (RA-L), vol. 3, pp. 3097-3104, 2018. doi:10.1109/LRA.2018.2846289
[BibTeX] [PDF] [Video]@Article{lottes2018ral, author = {P. Lottes and J. Behley and A. Milioto and C. Stachniss}, title = {Fully Convolutional Networks with Sequential Information for Robust Crop and Weed Detection in Precision Farming}, journal = ral, year = {2018}, volume = {3}, number = {4}, pages = {3097-3104}, doi = {10.1109/LRA.2018.2846289}, url = {https://www.ipb.uni-bonn.de/pdfs/lottes2018ral.pdf}, videourl = {https://www.youtube.com/watch?v=vTepw9HRLh8}, }
- P. Lottes, J. Behley, N. Chebrolu, A. Milioto, and C. Stachniss, “Joint Stem Detection and Crop-Weed Classification for Plant-specific Treatment in Precision Farming,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2018.
[BibTeX] [PDF] [Video]Applying agrochemicals is the default procedure for conventional weed control in crop production, but has negative impacts on the environment. Robots have the potential to treat every plant in the field individually and thus can reduce the required use of such chemicals. To achieve that, robots need the ability to identify crops and weeds in the field and must additionally select effective treatments. While certain types of weed can be treated mechanically, other types need to be treated by (selective) spraying. In this paper, we present an approach that provides the necessary information for effective plant-specific treatment. It outputs the stem location for weeds, which allows for mechanical treatments, and the covered area of the weed for selective spraying. Our approach uses an end-to-end trainable fully convolutional network that simultaneously estimates stem positions as well as the covered area of crops and weeds. It jointly learns the class-wise stem detection and the pixel-wise semantic segmentation. Experimental evaluations on different real-world datasets show that our approach is able to reliably solve this problem. Compared to state-of-the-art approaches, our approach not only substantially improves the stem detection accuracy, i.e., distinguishing crop and weed stems, but also provides an improvement in the semantic segmentation performance.
@InProceedings{lottes2018iros,
  author    = {P. Lottes and J. Behley and N. Chebrolu and A. Milioto and C. Stachniss},
  title     = {Joint Stem Detection and Crop-Weed Classification for Plant-specific Treatment in Precision Farming},
  booktitle = iros,
  year      = 2018,
  url       = {https://www.ipb.uni-bonn.de/pdfs/lottes18iros.pdf},
  videourl  = {https://www.youtube.com/watch?v=C9mjZxE_Sxg},
  abstract  = {Applying agrochemicals is the default procedure for conventional weed control in crop production, but has negative impacts on the environment. Robots have the potential to treat every plant in the field individually and thus can reduce the required use of such chemicals. To achieve that, robots need the ability to identify crops and weeds in the field and must additionally select effective treatments. While certain types of weed can be treated mechanically, other types need to be treated by (selective) spraying. In this paper, we present an approach that provides the necessary information for effective plant-specific treatment. It outputs the stem location for weeds, which allows for mechanical treatments, and the covered area of the weed for selective spraying. Our approach uses an end-to-end trainable fully convolutional network that simultaneously estimates stem positions as well as the covered area of crops and weeds. It jointly learns the class-wise stem detection and the pixel-wise semantic segmentation. Experimental evaluations on different real-world datasets show that our approach is able to reliably solve this problem. Compared to state-of-the-art approaches, our approach not only substantially improves the stem detection accuracy, i.e., distinguishing crop and weed stems, but also provides an improvement in the semantic segmentation performance.}
}
- J. Behley and C. Stachniss, “Efficient Surfel-Based SLAM using 3D Laser Range Data in Urban Environments,” in Proc. of Robotics: Science and Systems (RSS), 2018.
[BibTeX] [PDF] [Video]
@InProceedings{behley2018rss,
  author    = {J. Behley and C. Stachniss},
  title     = {Efficient Surfel-Based SLAM using 3D Laser Range Data in Urban Environments},
  booktitle = rss,
  year      = 2018,
  videourl  = {https://www.youtube.com/watch?v=-AEX203rXkE},
  url       = {https://www.roboticsproceedings.org/rss14/p16.pdf},
}
2015
- J. Behley, V. Steinhage, and A. B. Cremers, “Efficient Radius Neighbor Search in Three-dimensional Point Clouds,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2015.
[BibTeX]
@InProceedings{behley2015icra,
  author    = {J. Behley and V. Steinhage and A.B. Cremers},
  title     = {{Efficient Radius Neighbor Search in Three-dimensional Point Clouds}},
  booktitle = icra,
  year      = 2015,
}
2013
- J. Behley, V. Steinhage, and A. B. Cremers, “Laser-based Segment Classification Using a Mixture of Bag-of-Words,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2013.
[BibTeX]
@InProceedings{behley2013iros,
  author    = {J. Behley and V. Steinhage and A.B. Cremers},
  title     = {{Laser-based Segment Classification Using a Mixture of Bag-of-Words}},
  booktitle = iros,
  year      = 2013,
}
- V. Steinhage, J. Behley, S. Meisel, and A. B. Cremers, “Reconstruction by components for automated updating of 3D city models,” Applied Geomatics, vol. 5, pp. 285–298, 2013.
[BibTeX]
@Article{steinhage2013ag,
  author  = {V. Steinhage and J. Behley and S. Meisel and A.B. Cremers},
  title   = {{Reconstruction by components for automated updating of 3D city models}},
  journal = {Applied Geomatics},
  volume  = {5},
  pages   = {285--298},
  year    = {2013},
}
2012
- J. Behley, V. Steinhage, and A. B. Cremers, “Performance of Histogram Descriptors for the Classification of 3D Laser Range Data in Urban Environments,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2012.
[BibTeX]
@InProceedings{behley2012icra,
  author    = {J. Behley and V. Steinhage and A.B. Cremers},
  title     = {{Performance of Histogram Descriptors for the Classification of 3D Laser Range Data in Urban Environments}},
  booktitle = icra,
  year      = 2012,
}
2011
- F. Schöler, J. Behley, V. Steinhage, D. Schulz, and A. B. Cremers, “Person Tracking in Three-Dimensional Laser Range Data with Explicit Occlusion Adaption,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2011.
[BibTeX]
@InProceedings{schoeler2011icra,
  author    = {F. Sch{\"o}ler and J. Behley and V. Steinhage and D. Schulz and A.B. Cremers},
  title     = {{Person Tracking in Three-Dimensional Laser Range Data with Explicit Occlusion Adaption}},
  booktitle = icra,
  year      = 2011,
}
2010
- J. Behley, K. Kersting, D. Schulz, V. Steinhage, and A. B. Cremers, “Learning to Hash Logistic Regression for Fast 3D Scan Point Classification,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2010.
[BibTeX]
@InProceedings{behley2010iros,
  author    = {J. Behley and K. Kersting and D. Schulz and V. Steinhage and A.B. Cremers},
  title     = {{Learning to Hash Logistic Regression for Fast 3D Scan Point Classification}},
  booktitle = iros,
  year      = 2010,
}
- V. Steinhage, J. Behley, S. Meisel, and A. B. Cremers, “Automated Updating and Maintenance of 3D City Models,” in Proc. of the ISPRS-Workshop on Core Spatial Databases – Updating, Maintenance and Services, 2010.
[BibTeX]
@InProceedings{steinhage2010isprs,
  author    = {V. Steinhage and J. Behley and S. Meisel and A. B. Cremers},
  title     = {{Automated Updating and Maintenance of 3D City Models}},
  booktitle = {Proc. of the ISPRS-Workshop on Core Spatial Databases - Updating, Maintenance and Services},
  year      = 2010,
}
2009
- J. Behley and V. Steinhage, “Generation of 3D City Models Using Domain-Specific Information Fusion,” in Proc. of the International Conference on Computer Vision Systems (ICVS), 2009.
[BibTeX]
@InProceedings{behley2009icvs,
  author    = {J. Behley and V. Steinhage},
  title     = {{Generation of 3D City Models Using Domain-Specific Information Fusion}},
  booktitle = {Proc. of the International Conference on Computer Vision Systems (ICVS)},
  year      = 2009,
}
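Note: the BibTeX entries above reference venue names via @string abbreviations (ral, icra, iros, rss) that are not defined on this page. A minimal sketch of the missing definitions, with expansions assumed from the venue names used in the citations above:

```bibtex
% String abbreviations assumed by the entries above; the exact
% expansions are inferred from the venue names in the prose citations.
@string{ral  = {IEEE Robotics and Automation Letters (RA-L)}}
@string{icra = {Proc. of the IEEE Intl. Conf. on Robotics \& Automation (ICRA)}}
@string{iros = {Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS)}}
@string{rss  = {Proc. of Robotics: Science and Systems (RSS)}}
```

Placing these @string definitions before the entries in a .bib file lets the copied entries compile standalone.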