Dr. Igor Bogoslavskyi
Scientific assistant (graduated 2018)
Contact:
Email: igor.bogoslavskyi@uni-bonn.de
Tel: +49 228 73 –
Fax: +49 228 73 2712
Office: Nussallee 15, 1st floor
Address:
University of Bonn
Photogrammetry, IGG
Nussallee 15
53115 Bonn
Short CV
Igor Bogoslavskyi is a researcher at the photogrammetry lab of the University of Bonn, led by Cyrill Stachniss. Before moving to Bonn, he completed a Master of Science in Applied Computer Science at the University of Freiburg, Germany, in 2011 and a Bachelor of Science in Applied Mathematics in Ukraine in 2007. During his master's studies, he worked as an assistant on the ROVINA project in the Autonomous Intelligent Systems (AIS) laboratory led by Wolfram Burgard. His current interests lie in scene interpretation, outdoor perception, and navigation.
Research Interests
- Probabilistic robotics
- Localization, Mapping, SLAM
- Autonomous Navigation and Exploration
- Dynamic Object Detection from Laser Data
Projects
- ROVINA – Robots for Exploration, Digital Preservation and Visualization of Archeological Sites.
I was responsible for implementing the traversability analysis, robust homing, and parts of the navigation and exploration stack that ran on the robot exploring real Roman catacombs. See the project website for details and my publication list for related publications.
- Depth clustering – a library for fast and robust segmentation of Velodyne-generated 3D scans.
- Catkin fetch – a new verb for catkin_tools to download project dependencies automatically.
- EasyClangComplete – an easy-to-set-up C/C++ completion plugin for Sublime Text.
- IPB homework checker – a versatile tool for checking arbitrary homework assignments defined as a YAML recipe.
- MPR – an easy-to-use multi-cue ICP that works by registering range images.
Teaching
- Exercises for Photogrammetry & Remote Sensing, 2014/2015
- C++ for Image Processing, 2015
- C++ for Image Processing, 2016
- 3D Mapping, 2016/2017
- C++ for Image Processing, 2017
- Modern C++ for Image Processing, 2018
Awards
- MINT Excellence Network Member
Publications
2018
- B. Della Corte, I. Bogoslavskyi, C. Stachniss, and G. Grisetti, “A general framework for flexible multi-cue photometric point cloud registration,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2018.
[BibTeX] [PDF] [Code] [Video]
@InProceedings{della-corte2018icra,
  author    = {Della Corte, B. and I. Bogoslavskyi and C. Stachniss and G. Grisetti},
  title     = {A General Framework for Flexible Multi-Cue Photometric Point Cloud Registration},
  year      = 2018,
  booktitle = icra,
  codeurl   = {https://gitlab.com/srrg-software/srrg_mpr},
  videourl  = {https://www.youtube.com/watch?v=_z98guJTqfk},
  url       = {https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/della-corte2018icra.pdf},
}
- I. Bogoslavskyi, “Robot mapping and navigation in real-world environments,” PhD Thesis, 2018.
[BibTeX] [PDF]
@PhDThesis{bogoslavskyi2018phd,
  author = {I. Bogoslavskyi},
  title  = {Robot Mapping and Navigation in Real-World Environments},
  school = {Rheinische Friedrich-Wilhelms University of Bonn},
  year   = 2018,
  url    = {https://www.ipb.uni-bonn.de/pdfs/bogoslavskyi2018phd.pdf},
}
2017
- I. Bogoslavskyi and C. Stachniss, “Analyzing the quality of matched 3D point clouds of objects,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2017.
[BibTeX] [PDF]
@InProceedings{bogoslavskyi2017iros,
  title     = {Analyzing the Quality of Matched 3D Point Clouds of Objects},
  author    = {I. Bogoslavskyi and C. Stachniss},
  booktitle = iros,
  year      = {2017},
  url       = {https://www.ipb.uni-bonn.de/pdfs/bogoslavskyi17iros.pdf},
}
- I. Bogoslavskyi and C. Stachniss, “Efficient online segmentation for sparse 3D laser scans,” Journal of Photogrammetry, Remote Sensing and Geoinformation Science (PFG), vol. 85, no. 1, pp. 41–52, 2017.
[BibTeX] [PDF] [Code] [Video]
The ability to extract individual objects in the scene is key for a large number of autonomous navigation systems such as mobile robots or autonomous cars. Such systems navigating in dynamic environments need to be aware of objects that may change or move. In most perception cues, a pre-segmentation of the current image or laser scan into individual objects is the first processing step before a further analysis is performed. In this paper, we present an effective method that first removes the ground from the scan and then segments the 3D data in a range image representation into different objects. A key focus of our work is a fast execution with several hundred Hertz. Our implementation has small computational demands so that it can run online on most mobile systems. We explicitly avoid the computation of the 3D point cloud and operate directly on a 2.5D range image, which enables a fast segmentation for each 3D scan. This approach can furthermore handle sparse 3D data well, which is important for scanners such as the new Velodyne VLP-16 scanner. We implemented our approach in C++ and ROS, thoroughly tested it using different 3D scanners, and will release the source code of our implementation. Our method can operate at frame rates that are substantially higher than those of the sensors while using only a single core of a mobile CPU and producing high-quality segmentation results.
@Article{bogoslavskyi2017pfg,
  title    = {Efficient Online Segmentation for Sparse 3D Laser Scans},
  author   = {Bogoslavskyi, Igor and Stachniss, Cyrill},
  journal  = pfg,
  year     = {2017},
  volume   = {85},
  number   = {1},
  pages    = {41--52},
  abstract = {The ability to extract individual objects in the scene is key for a large number of autonomous navigation systems such as mobile robots or autonomous cars. Such systems navigating in dynamic environments need to be aware of objects that may change or move. In most perception cues, a pre-segmentation of the current image or laser scan into individual objects is the first processing step before a further analysis is performed. In this paper, we present an effective method that first removes the ground from the scan and then segments the 3D data in a range image representation into different objects. A key focus of our work is a fast execution with several hundred Hertz. Our implementation has small computational demands so that it can run online on most mobile systems. We explicitly avoid the computation of the 3D point cloud and operate directly on a 2.5D range image, which enables a fast segmentation for each 3D scan. This approach can furthermore handle sparse 3D data well, which is important for scanners such as the new Velodyne VLP-16 scanner. We implemented our approach in C++ and ROS, thoroughly tested it using different 3D scanners, and will release the source code of our implementation. Our method can operate at frame rates that are substantially higher than those of the sensors while using only a single core of a mobile CPU and producing high-quality segmentation results.},
  url      = {https://www.ipb.uni-bonn.de/pdfs/bogoslavskyi16pfg.pdf},
  codeurl  = {https://github.com/Photogrammetry-Robotics-Bonn/depth_clustering},
  videourl = {https://www.youtube.com/watch?v=6WqsOlHGTLA},
}
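The core idea from the abstract (segmenting a 2.5D range image into objects by comparing neighboring range measurements) can be sketched as follows. This is a simplified, hypothetical re-implementation of the idea, not the library's actual code: the angle criterion follows the paper's description, but the threshold value, the 4-neighborhood, and the BFS bookkeeping are illustrative choices.

```python
import math
from collections import deque

def segment_range_image(ranges, alpha, theta=math.radians(10)):
    """Label connected components in a 2D range image.

    ranges: rows of range values (0 == no return)
    alpha:  angular resolution between neighboring beams (radians)
    theta:  threshold on the angle beta; larger beta means the two
            points likely lie on the same surface and get merged
    """
    rows, cols = len(ranges), len(ranges[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for r0 in range(rows):
        for c0 in range(cols):
            if ranges[r0][c0] > 0 and labels[r0][c0] == 0:
                # Start a new segment and grow it breadth-first.
                next_label += 1
                labels[r0][c0] = next_label
                queue = deque([(r0, c0)])
                while queue:
                    r, c = queue.popleft()
                    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                        if (0 <= nr < rows and 0 <= nc < cols
                                and ranges[nr][nc] > 0 and labels[nr][nc] == 0):
                            d1 = max(ranges[r][c], ranges[nr][nc])
                            d2 = min(ranges[r][c], ranges[nr][nc])
                            # Angle between the laser beam and the line
                            # connecting the two measured points.
                            beta = math.atan2(d2 * math.sin(alpha),
                                              d1 - d2 * math.cos(alpha))
                            if beta > theta:  # small beta -> depth jump, keep separate
                                labels[nr][nc] = next_label
                                queue.append((nr, nc))
    return labels
```

For instance, on the single-row image `[[5.0, 5.1, 0.0, 10.0, 1.0]]` with a 1° beam spacing, the first two points merge into one segment, while the depth jump from 10.0 to 1.0 yields two separate segments.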
2016
- I. Bogoslavskyi, M. Mazuran, and C. Stachniss, “Robust homing for autonomous robots,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2016.
[BibTeX] [PDF] [Video]
@InProceedings{bogoslavskyi16icra,
  title     = {Robust Homing for Autonomous Robots},
  author    = {I. Bogoslavskyi and M. Mazuran and C. Stachniss},
  booktitle = icra,
  year      = {2016},
  url       = {https://www.ipb.uni-bonn.de/pdfs/bogoslavskyi16icra.pdf},
  videourl  = {https://www.youtube.com/watch?v=sUvDvq91Vpw},
}
- I. Bogoslavskyi and C. Stachniss, “Fast range image-based segmentation of sparse 3D laser scans for online operation,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2016.
[BibTeX] [PDF] [Code] [Video]
@InProceedings{bogoslavskyi16iros,
  title     = {Fast Range Image-Based Segmentation of Sparse 3D Laser Scans for Online Operation},
  author    = {I. Bogoslavskyi and C. Stachniss},
  booktitle = iros,
  year      = {2016},
  url       = {https://www.ipb.uni-bonn.de/pdfs/bogoslavskyi16iros.pdf},
  codeurl   = {https://github.com/Photogrammetry-Robotics-Bonn/depth_clustering},
  videourl  = {https://www.youtube.com/watch?v=6WqsOlHGTLA},
}
- D. Perea-Ström, I. Bogoslavskyi, and C. Stachniss, “Robust exploration and homing for autonomous robots,” Robotics and Autonomous Systems, 2016.
[BibTeX] [PDF]
@Article{perea16jras,
  title   = {Robust Exploration and Homing for Autonomous Robots},
  author  = {D. Perea-Str{\"o}m and I. Bogoslavskyi and C. Stachniss},
  journal = jras,
  year    = {2016},
  url     = {https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/perea16jras.pdf},
}
2015
- I. Bogoslavskyi, L. Spinello, W. Burgard, and C. Stachniss, “Where to park? Minimizing the expected time to find a parking space,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2015, pp. 2147–2152. doi:10.1109/ICRA.2015.7139482
[BibTeX] [PDF]
Quickly finding a free parking spot that is close to a desired target location can be a difficult task. This holds for human drivers and autonomous cars alike. In this paper, we investigate the problem of predicting the occupancy of parking spaces and exploiting this information during route planning. We propose an MDP-based planner that considers route information as well as the occupancy probabilities of parking spaces to compute the path that minimizes the expected total time for finding an unoccupied parking space and for walking from the parking location to the target destination. We evaluated our system on real world data gathered over several days in a real parking lot. We furthermore compare our approach to three parking strategies and show that our method outperforms the alternative behaviors.
@InProceedings{bogoslavskyi15icra,
  title     = {Where to Park? Minimizing the Expected Time to Find a Parking Space},
  author    = {I. Bogoslavskyi and L. Spinello and W. Burgard and C. Stachniss},
  booktitle = icra,
  year      = {2015},
  pages     = {2147--2152},
  abstract  = {Quickly finding a free parking spot that is close to a desired target location can be a difficult task. This holds for human drivers and autonomous cars alike. In this paper, we investigate the problem of predicting the occupancy of parking spaces and exploiting this information during route planning. We propose an MDP-based planner that considers route information as well as the occupancy probabilities of parking spaces to compute the path that minimizes the expected total time for finding an unoccupied parking space and for walking from the parking location to the target destination. We evaluated our system on real world data gathered over several days in a real parking lot. We furthermore compare our approach to three parking strategies and show that our method outperforms the alternative behaviors.},
  doi       = {10.1109/ICRA.2015.7139482},
  url       = {https://www.ipb.uni-bonn.de/pdfs/bogoslavskyi15icra.pdf},
}
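The objective described in the abstract (expected total time of driving plus walking, weighted by occupancy probabilities) can be illustrated with a toy computation. This is not the paper's MDP planner, which optimizes over a route graph; it merely evaluates the expected-time objective for one fixed visiting order, and all probabilities and times below are made-up example numbers.

```python
def expected_search_time(spots, fail_cost):
    """Expected total time when driving past parking spots in a fixed
    order and taking the first free one.

    spots:     list of (p_occupied, drive_time_to_spot, walk_time_to_target),
               with cumulative drive times, in visiting order
    fail_cost: time penalty if every spot turns out to be occupied
    """
    expected, p_all_occupied = 0.0, 1.0
    for p_occ, drive, walk in spots:
        # We reach this spot only if all previous spots were occupied,
        # and park here with probability (1 - p_occ).
        expected += p_all_occupied * (1.0 - p_occ) * (drive + walk)
        p_all_occupied *= p_occ
    return expected + p_all_occupied * fail_cost
```

With two spots, each free with probability 0.5, cumulative drive times of 10 s and 20 s, walk times of 60 s and 30 s, and a 200 s fallback: `expected_search_time([(0.5, 10, 60), (0.5, 20, 30)], 200)` returns 97.5. Weighing the close-but-often-occupied spot against the farther-but-likely-free one is exactly the trade-off the planner optimizes.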
2013
- I. Bogoslavskyi, O. Vysotska, J. Serafin, G. Grisetti, and C. Stachniss, “Efficient traversability analysis for mobile robots using the Kinect sensor,” in Proc. of the European Conf. on Mobile Robots (ECMR), Barcelona, Spain, 2013.
[BibTeX] [PDF]
@InProceedings{bogoslavskyi2013,
  title     = {Efficient Traversability Analysis for Mobile Robots using the Kinect Sensor},
  author    = {I. Bogoslavskyi and O. Vysotska and J. Serafin and G. Grisetti and C. Stachniss},
  booktitle = ecmr,
  year      = {2013},
  address   = {Barcelona, Spain},
  url       = {https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/bogoslavskyi13ecmr.pdf},
}