Thomas Läbe
Software Developer
Contact:
Email: laebe@ipb.uni-bonn.de
Tel: +49 228 73 3019
Fax: +49 228 73 2712
Office: Nussallee 15, Ground floor, room 0.014
Address:
University of Bonn
Photogrammetry, IGG
Nussallee 15
53115 Bonn
Research Interests
- Image Processing and Orientation
- Bundle Adjustment
- Visual Odometry
- Camera Calibration
Short CV
Thomas Läbe is a technical associate at the professorship of Photogrammetry of the Institute of Geodesy and Geoinformation. He finished his diploma in computer science (“Dipl.-Inform. (FH)”) in 1994 at the Cologne University of Applied Sciences. From 1994 to 2000 he was employed at the Institute of Photogrammetry in the joint project “Passpunktmodelldatenbank für die automatische Orientierung von Luftbildern” (a control-point model database for the automatic orientation of aerial images), a cooperation with the land survey department (“Landesvermessungsamt”) of North Rhine-Westphalia. Since 2001 he has been a software developer and administrator at the professorship of Photogrammetry.
Awards
- Hansa Luftbild Prize 2001, together with Eberhard Gülch and Hardo Müller (best paper award for PFG papers published in 2001)
Publications
2024
- L. Wiesmann, T. Läbe, L. Nunes, J. Behley, and C. Stachniss, “Joint Intrinsic and Extrinsic Calibration of Perception Systems Utilizing a Calibration Environment,” IEEE Robotics and Automation Letters (RA-L), vol. 9, iss. 10, pp. 9103-9110, 2024. doi:10.1109/LRA.2024.3457385
[BibTeX] [PDF]
@article{wiesmann2024ral, author = {L. Wiesmann and T. L\"abe and L. Nunes and J. Behley and C. Stachniss}, title = {{Joint Intrinsic and Extrinsic Calibration of Perception Systems Utilizing a Calibration Environment}}, journal = ral, year = {2024}, volume = {9}, number = {10}, pages = {9103-9110}, issn = {2377-3766}, doi = {10.1109/LRA.2024.3457385}, }
- F. Magistri, T. Läbe, E. Marks, S. Nagulavancha, Y. Pan, C. Smitt, L. Klingbeil, M. Halstead, H. Kuhlmann, C. McCool, J. Behley, and C. Stachniss, “A Dataset and Benchmark for Shape Completion of Fruits for Agricultural Robotics,” arXiv Preprint, 2024.
[BibTeX] [PDF]
@article{magistri2024arxiv, title={{A Dataset and Benchmark for Shape Completion of Fruits for Agricultural Robotics}}, author={F. Magistri and T. L\"abe and E. Marks and S. Nagulavancha and Y. Pan and C. Smitt and L. Klingbeil and M. Halstead and H. Kuhlmann and C. McCool and J. Behley and C. Stachniss}, journal = arxiv, year=2024, eprint={2407.13304}, }
- J. Weyler, T. Läbe, J. Behley, and C. Stachniss, “Panoptic Segmentation with Partial Annotations for Agricultural Robots,” IEEE Robotics and Automation Letters (RA-L), vol. 9, iss. 2, pp. 1660-1667, 2024. doi:10.1109/LRA.2023.3346760
[BibTeX] [PDF] [Code]
@article{weyler2024ral, author = {J. Weyler and T. L\"abe and J. Behley and C. Stachniss}, title = {{Panoptic Segmentation with Partial Annotations for Agricultural Robots}}, journal = ral, year = {2024}, volume = {9}, number = {2}, pages = {1660-1667}, issn = {2377-3766}, doi = {10.1109/LRA.2023.3346760}, codeurl = {https://github.com/PRBonn/PSPA} }
- C. Smitt, M. A. Halstead, P. Zimmer, T. Läbe, E. Guclu, C. Stachniss, and C. S. McCool, “PAg-NeRF: Towards fast and efficient end-to-end panoptic 3D representations for agricultural robotics,” IEEE Robotics and Automation Letters (RA-L), vol. 9, iss. 1, pp. 907-914, 2024. doi:10.1109/LRA.2023.3338515
[BibTeX] [PDF] [Code]
@article{smitt2024ral-pagn, author = {C. Smitt and M.A. Halstead and P. Zimmer and T. L\"abe and E. Guclu and C. Stachniss and C.S. McCool}, title = {{PAg-NeRF: Towards fast and efficient end-to-end panoptic 3D representations for agricultural robotics}}, journal = ral, year = {2024}, volume = {9}, number = {1}, pages = {907-914}, issn = {2377-3766}, doi = {10.1109/LRA.2023.3338515}, codeurl = {https://github.com/Agricultural-Robotics-Bonn/pagnerf} }
2023
- Y. Pan, F. Magistri, T. Läbe, E. Marks, C. Smitt, C. S. McCool, J. Behley, and C. Stachniss, “Panoptic Mapping with Fruit Completion and Pose Estimation for Horticultural Robots,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2023.
[BibTeX] [PDF] [Code] [Video]
@inproceedings{pan2023iros, author = {Y. Pan and F. Magistri and T. L\"abe and E. Marks and C. Smitt and C.S. McCool and J. Behley and C. Stachniss}, title = {{Panoptic Mapping with Fruit Completion and Pose Estimation for Horticultural Robots}}, booktitle = iros, year = 2023, codeurl = {https://github.com/PRBonn/HortiMapping}, videourl = {https://youtu.be/fSyHBhskjqA} }
- J. Weyler, T. Läbe, F. Magistri, J. Behley, and C. Stachniss, “Towards Domain Generalization in Crop and Weed Segmentation for Precision Farming Robots,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 6, pp. 3310-3317, 2023. doi:10.1109/LRA.2023.3262417
[BibTeX] [PDF] [Code]
@article{weyler2023ral, author = {J. Weyler and T. L\"abe and F. Magistri and J. Behley and C. Stachniss}, title = {{Towards Domain Generalization in Crop and Weed Segmentation for Precision Farming Robots}}, journal = ral, pages = {3310-3317}, volume = 8, number = 6, issn = {2377-3766}, year = {2023}, doi = {10.1109/LRA.2023.3262417}, codeurl = {https://github.com/PRBonn/DG-CWS}, }
2022
- N. Zimmerman, L. Wiesmann, T. Guadagnino, T. Läbe, J. Behley, and C. Stachniss, “Robust Onboard Localization in Changing Environments Exploiting Text Spotting,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2022.
[BibTeX] [PDF] [Code]
@inproceedings{zimmerman2022iros, title = {{Robust Onboard Localization in Changing Environments Exploiting Text Spotting}}, author = {N. Zimmerman and L. Wiesmann and T. Guadagnino and T. Läbe and J. Behley and C. Stachniss}, booktitle = iros, year = {2022}, codeurl = {https://github.com/PRBonn/tmcl}, }
- F. Magistri, E. Marks, S. Nagulavancha, I. Vizzo, T. Läbe, J. Behley, M. Halstead, C. McCool, and C. Stachniss, “Contrastive 3D Shape Completion and Reconstruction for Agricultural Robots using RGB-D Frames,” IEEE Robotics and Automation Letters (RA-L), vol. 7, iss. 4, pp. 10120-10127, 2022.
[BibTeX] [PDF] [Video]
@article{magistri2022ral-iros, author = {Federico Magistri and Elias Marks and Sumanth Nagulavancha and Ignacio Vizzo and Thomas L{\"a}be and Jens Behley and Michael Halstead and Chris McCool and Cyrill Stachniss}, title = {Contrastive 3D Shape Completion and Reconstruction for Agricultural Robots using RGB-D Frames}, journal = ral, url = {https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/magistri2022ral-iros.pdf}, year = {2022}, volume={7}, number={4}, pages={10120-10127}, videourl = {https://www.youtube.com/watch?v=2ErUf9q7YOI}, }
2021
- X. Chen, T. Läbe, A. Milioto, T. Röhling, J. Behley, and C. Stachniss, “OverlapNet: A Siamese Network for Computing LiDAR Scan Similarity with Applications to Loop Closing and Localization,” Autonomous Robots, vol. 46, pp. 61–81, 2021. doi:10.1007/s10514-021-09999-0
[BibTeX] [PDF] [Code]
@article{chen2021auro, author = {X. Chen and T. L\"abe and A. Milioto and T. R\"ohling and J. Behley and C. Stachniss}, title = {{OverlapNet: A Siamese Network for Computing LiDAR Scan Similarity with Applications to Loop Closing and Localization}}, journal = {Autonomous Robots}, year = {2021}, doi = {10.1007/s10514-021-09999-0}, issn = {1573-7527}, volume=46, pages={61--81}, codeurl = {https://github.com/PRBonn/OverlapNet}, url = {https://www.ipb.uni-bonn.de/pdfs/chen2021auro.pdf} }
- X. Chen, I. Vizzo, T. Läbe, J. Behley, and C. Stachniss, “Range Image-based LiDAR Localization for Autonomous Vehicles,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2021.
[BibTeX] [PDF] [Code] [Video]
@inproceedings{chen2021icra, author = {X. Chen and I. Vizzo and T. L{\"a}be and J. Behley and C. Stachniss}, title = {{Range Image-based LiDAR Localization for Autonomous Vehicles}}, booktitle = icra, year = 2021, url = {https://www.ipb.uni-bonn.de/pdfs/chen2021icra.pdf}, codeurl = {https://github.com/PRBonn/range-mcl}, videourl = {https://youtu.be/hpOPXX9oPqI}, }
- N. Chebrolu, T. Läbe, O. Vysotska, J. Behley, and C. Stachniss, “Adaptive Robust Kernels for Non-Linear Least Squares Problems,” IEEE Robotics and Automation Letters (RA-L), vol. 6, iss. 2, pp. 2240-2247, 2021. doi:10.1109/LRA.2021.3061331
[BibTeX] [PDF] [Video]
@article{chebrolu2021ral, author = {N. Chebrolu and T. L\"{a}be and O. Vysotska and J. Behley and C. Stachniss}, title = {{Adaptive Robust Kernels for Non-Linear Least Squares Problems}}, journal = ral, volume = 6, issue = 2, pages = {2240-2247}, doi = {10.1109/LRA.2021.3061331}, year = 2021, videourl = {https://youtu.be/34Zp3ZX0Bnk} }
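A compact illustration of the idea behind adaptive robust kernels (a sketch under assumptions, not the paper's code): a widely used general robust loss family with shape parameter alpha and scale c, popularized by Barron, which adaptive-kernel approaches of this kind tune during the optimization, together with the weight that enters an iteratively reweighted least-squares step. In Python:

import numpy as np

def adaptive_robust_loss(r, alpha, c):
    # General robust loss rho(r; alpha, c): alpha = 2 is the quadratic
    # (least-squares) loss, alpha = 0 a Cauchy-like loss, and
    # alpha -> -inf the Welsch kernel; c scales the residual r.
    x2 = (r / c) ** 2
    if alpha == 2.0:
        return 0.5 * x2
    if alpha == 0.0:
        return np.log1p(0.5 * x2)
    if np.isneginf(alpha):
        return 1.0 - np.exp(-0.5 * x2)
    b = abs(alpha - 2.0)
    return (b / alpha) * ((x2 / b + 1.0) ** (alpha / 2.0) - 1.0)

def irls_weight(r, alpha, c):
    # Weight w = rho'(r) / r used in iteratively reweighted least squares;
    # large residuals (outliers) receive small weights for alpha < 2.
    x2 = (r / c) ** 2
    if alpha == 2.0:
        return np.ones_like(np.asarray(r, dtype=float)) / c ** 2
    if alpha == 0.0:
        return 1.0 / (c ** 2 * (0.5 * x2 + 1.0))
    if np.isneginf(alpha):
        return np.exp(-0.5 * x2) / c ** 2
    b = abs(alpha - 2.0)
    return (x2 / b + 1.0) ** (alpha / 2.0 - 1.0) / c ** 2

Treating alpha as a quantity to be adapted per problem or per iteration, rather than fixed in advance, is what makes such a kernel adaptive.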
- N. Chebrolu, F. Magistri, T. Läbe, and C. Stachniss, “Registration of Spatio-Temporal Point Clouds of Plants for Phenotyping,” PLoS ONE, vol. 16, iss. 2, 2021.
[BibTeX] [PDF] [Video]
@article{chebrolu2021plosone, author = {N. Chebrolu and F. Magistri and T. L{\"a}be and C. Stachniss}, title = {{Registration of Spatio-Temporal Point Clouds of Plants for Phenotyping}}, journal = plosone, year = 2021, volume = 16, number = 2, videourl = {https://youtu.be/OV39kb5Nqg8}, }
2020
- X. Chen, T. Läbe, L. Nardi, J. Behley, and C. Stachniss, “Learning an Overlap-based Observation Model for 3D LiDAR Localization,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2020.
[BibTeX] [PDF] [Code] [Video]
@inproceedings{chen2020iros, author = {X. Chen and T. L\"abe and L. Nardi and J. Behley and C. Stachniss}, title = {{Learning an Overlap-based Observation Model for 3D LiDAR Localization}}, booktitle = iros, year = {2020}, codeurl = {https://github.com/PRBonn/overlap_localization}, url={https://www.ipb.uni-bonn.de/pdfs/chen2020iros.pdf}, videourl = {https://www.youtube.com/watch?v=BozPqy_6YcE}, }
- X. Chen, T. Läbe, A. Milioto, T. Röhling, O. Vysotska, A. Haag, J. Behley, and C. Stachniss, “OverlapNet: Loop Closing for LiDAR-based SLAM,” in Proc. of Robotics: Science and Systems (RSS), 2020.
[BibTeX] [PDF] [Code] [Video]
@inproceedings{chen2020rss, author = {X. Chen and T. L\"abe and A. Milioto and T. R\"ohling and O. Vysotska and A. Haag and J. Behley and C. Stachniss}, title = {{OverlapNet: Loop Closing for LiDAR-based SLAM}}, booktitle = rss, year = {2020}, codeurl = {https://github.com/PRBonn/OverlapNet/}, videourl = {https://youtu.be/YTfliBco6aw}, }
- N. Chebrolu, T. Laebe, O. Vysotska, J. Behley, and C. Stachniss, “Adaptive Robust Kernels for Non-Linear Least Squares Problems,” arXiv Preprint, 2020.
[BibTeX] [PDF]
@article{chebrolu2020arxiv, title={Adaptive Robust Kernels for Non-Linear Least Squares Problems}, author={N. Chebrolu and T. Laebe and O. Vysotska and J. Behley and C. Stachniss}, journal = arxiv, year=2020, eprint={2004.14938}, keywords={cs.RO}, url={https://arxiv.org/pdf/2004.14938v2} }
- N. Chebrolu, T. Laebe, and C. Stachniss, “Spatio-Temporal Non-Rigid Registration of 3D Point Clouds of Plants,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2020.
[BibTeX] [PDF] [Video]
@InProceedings{chebrolu2020icra, title = {Spatio-Temporal Non-Rigid Registration of 3D Point Clouds of Plants}, author = {N. Chebrolu and T. Laebe and C. Stachniss}, booktitle = icra, year = {2020}, url = {https://www.ipb.uni-bonn.de/pdfs/chebrolu2020icra.pdf}, videourl = {https://www.youtube.com/watch?v=uGkep_aelBc}, }
- J. Quenzel, R. A. Rosu, T. Laebe, C. Stachniss, and S. Behnke, “Beyond Photometric Consistency: Gradient-based Dissimilarity for Improving Visual Odometry and Stereo Matching,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2020.
[BibTeX] [PDF] [Video]
@InProceedings{quenzel020icra, title = {Beyond Photometric Consistency: Gradient-based Dissimilarity for Improving Visual Odometry and Stereo Matching}, author = {J. Quenzel and R.A. Rosu and T. Laebe and C. Stachniss and S. Behnke}, booktitle = icra, year = {2020}, url = {https://www.ipb.uni-bonn.de/pdfs/quenzel2020icra.pdf}, videourl = {https://www.youtube.com/watch?v=cqv7k-BK0g0}, }
2019
- N. Chebrolu, P. Lottes, T. Laebe, and C. Stachniss, “Robot Localization Based on Aerial Images for Precision Agriculture Tasks in Crop Fields,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2019.
[BibTeX] [PDF] [Video]
@InProceedings{chebrolu2019icra, author = {N. Chebrolu and P. Lottes and T. Laebe and C. Stachniss}, title = {{Robot Localization Based on Aerial Images for Precision Agriculture Tasks in Crop Fields}}, booktitle = icra, year = 2019, url = {https://www.ipb.uni-bonn.de/pdfs/chebrolu2019icra.pdf}, videourl = {https://youtu.be/TlijLgoRLbc}, }
- L. Klingbeil, E. Heinz, M. Wieland, J. Eichel, T. Läbe, and H. Kuhlmann, “On the UAV based Analysis of Slow Geomorphological Processes: A Case Study at a Solifluction Lobe in the Turtmann Valley,” in Proc. of the 4th Joint International Symposium on Deformation Monitoring (JISDM), 2019.
[BibTeX] [PDF]
@InProceedings{klingbeil19jisdm, author = {L. Klingbeil and E. Heinz and M. Wieland and J. Eichel and T. L\"abe and H. Kuhlmann}, title = {On the UAV based Analysis of Slow Geomorphological Processes: A Case Study at a Solifluction Lobe in the Turtmann Valley}, booktitle = {Proc. of the 4th Joint International Symposium on Deformation Monitoring (JISDM)}, year = 2019, url = {https://www.ipb.uni-bonn.de/pdfs/klingbeil19jisdm.pdf}, }
2018
- N. Chebrolu, T. Läbe, and C. Stachniss, “Robust Long-Term Registration of UAV Images of Crop Fields for Precision Agriculture,” IEEE Robotics and Automation Letters (RA-L), vol. 3, iss. 4, pp. 3097-3104, 2018. doi:10.1109/LRA.2018.2849603
[BibTeX] [PDF]
@Article{chebrolu2018ral, author={N. Chebrolu and T. L\"abe and C. Stachniss}, journal=ral, title={Robust Long-Term Registration of UAV Images of Crop Fields for Precision Agriculture}, year={2018}, volume={3}, number={4}, pages={3097-3104}, keywords={Agriculture;Cameras;Geometry;Monitoring;Robustness;Three-dimensional displays;Visualization;Robotics in agriculture and forestry;SLAM}, doi={10.1109/LRA.2018.2849603}, url={https://www.ipb.uni-bonn.de/pdfs/chebrolu2018ral.pdf} }
2017
- C. Beekmans, J. Schneider, T. Laebe, M. Lennefer, C. Stachniss, and C. Simmer, “3D-Cloud Morphology and Motion from Dense Stereo for Fisheye Cameras,” in Proc. of the European Geosciences Union General Assembly (EGU), 2017.
[BibTeX] [PDF]
@InProceedings{beekmans2017egu, title = {3D-Cloud Morphology and Motion from Dense Stereo for Fisheye Cameras}, author = {Ch. Beekmans and J. Schneider and T. Laebe and M. Lennefer and C. Stachniss and C. Simmer}, booktitle = {In Proc. of the European Geosciences Union General Assembly (EGU)}, year = {2017}, }
- A. Kicherer, K. Herzog, N. Bendel, H. Klück, A. Backhaus, M. Wieland, J. C. Rose, L. Klingbeil, T. Läbe, C. Hohl, W. Petry, H. Kuhlmann, U. Seiffert, and R. Töpfer, “Phenoliner: A New Field Phenotyping Platform for Grapevine Research,” Sensors, vol. 17, iss. 7, 2017. doi:10.3390/s17071625
[BibTeX] [PDF]
In grapevine research the acquisition of phenotypic data is largely restricted to the field due to its perennial nature and size. The methodologies used to assess morphological traits and phenology are mainly limited to visual scoring. Some measurements for biotic and abiotic stress, as well as for quality assessments, are done by invasive measures. The new evolving sensor technologies provide the opportunity to perform non-destructive evaluations of phenotypic traits using different field phenotyping platforms. One of the biggest technical challenges for field phenotyping of grapevines is the varying light conditions and the background. In the present study the Phenoliner is presented, which represents a novel type of a robust field phenotyping platform. The vehicle is based on a grape harvester following the concept of a moveable tunnel. The tunnel is equipped with different sensor systems (RGB and NIR camera system, hyperspectral camera, RTK-GPS, orientation sensor) and an artificial broadband light source. It is independent from external light conditions and in combination with artificial background, the Phenoliner enables standardised acquisition of high-quality, geo-referenced sensor data.
@Article{kicherer2017phenoliner, author = {Kicherer, Anna and Herzog, Katja and Bendel, Nele and Klück, Hans-Christian and Backhaus, Andreas and Wieland, Markus and Rose, Johann Christian and Klingbeil, Lasse and Läbe, Thomas and Hohl, Christian and Petry, Willi and Kuhlmann, Heiner and Seiffert, Udo and Töpfer, Reinhard}, title = {Phenoliner: A New Field Phenotyping Platform for Grapevine Research}, journal = {Sensors}, volume = {17}, year = {2017}, number = {7}, url = {https://www.mdpi.com/1424-8220/17/7/1625/pdf}, issn = {1424-8220}, abstract = {In grapevine research the acquisition of phenotypic data is largely restricted to the field due to its perennial nature and size. The methodologies used to assess morphological traits and phenology are mainly limited to visual scoring. Some measurements for biotic and abiotic stress, as well as for quality assessments, are done by invasive measures. The new evolving sensor technologies provide the opportunity to perform non-destructive evaluations of phenotypic traits using different field phenotyping platforms. One of the biggest technical challenges for field phenotyping of grapevines are the varying light conditions and the background. In the present study the Phenoliner is presented, which represents a novel type of a robust field phenotyping platform. The vehicle is based on a grape harvester following the concept of a moveable tunnel. The tunnel it is equipped with different sensor systems (RGB and NIR camera system, hyperspectral camera, RTK-GPS, orientation sensor) and an artificial broadband light source. It is independent from external light conditions and in combination with artificial background, the Phenoliner enables standardised acquisition of high-quality, geo-referenced sensor data.}, doi = {10.3390/s17071625}, }
2016
- C. Beekmans, J. Schneider, T. Läbe, M. Lennefer, C. Stachniss, and C. Simmer, “Cloud Photogrammetry with Dense Stereo for Fisheye Cameras,” Atmospheric Chemistry and Physics (ACP), vol. 16, iss. 22, pp. 14231-14248, 2016. doi:10.5194/acp-16-14231-2016
[BibTeX] [PDF]
We present a novel approach for dense 3-D cloud reconstruction above an area of 10 × 10 km2 using two hemispheric sky imagers with fisheye lenses in a stereo setup. We examine an epipolar rectification model designed for fisheye cameras, which allows the use of efficient out-of-the-box dense matching algorithms designed for classical pinhole-type cameras to search for correspondence information at every pixel. The resulting dense point cloud allows to recover a detailed and more complete cloud morphology compared to previous approaches that employed sparse feature-based stereo or assumed geometric constraints on the cloud field. Our approach is very efficient and can be fully automated. From the obtained 3-D shapes, cloud dynamics, size, motion, type and spacing can be derived, and used for radiation closure under cloudy conditions, for example. Fisheye lenses follow a different projection function than classical pinhole-type cameras and provide a large field of view with a single image. However, the computation of dense 3-D information is more complicated and standard implementations for dense 3-D stereo reconstruction cannot be easily applied. Together with an appropriate camera calibration, which includes internal camera geometry, global position and orientation of the stereo camera pair, we use the correspondence information from the stereo matching for dense 3-D stereo reconstruction of clouds located around the cameras. We implement and evaluate the proposed approach using real world data and present two case studies. In the first case, we validate the quality and accuracy of the method by comparing the stereo reconstruction of a stratocumulus layer with reflectivity observations measured by a cloud radar and the cloud-base height estimated from a Lidar-ceilometer. The second case analyzes a rapid cumulus evolution in the presence of strong wind shear.
@Article{beekmans16acp, title = {Cloud Photogrammetry with Dense Stereo for Fisheye Cameras}, author = {C. Beekmans and J. Schneider and T. L\"abe and M. Lennefer and C. Stachniss and C. Simmer}, journal = {Atmospheric Chemistry and Physics (ACP)}, year = {2016}, number = {22}, pages = {14231-14248}, volume = {16}, abstract = {We present a novel approach for dense 3-D cloud reconstruction above an area of 10 × 10 km2 using two hemispheric sky imagers with fisheye lenses in a stereo setup. We examine an epipolar rectification model designed for fisheye cameras, which allows the use of efficient out-of-the-box dense matching algorithms designed for classical pinhole-type cameras to search for correspondence information at every pixel. The resulting dense point cloud allows to recover a detailed and more complete cloud morphology compared to previous approaches that employed sparse feature-based stereo or assumed geometric constraints on the cloud field. Our approach is very efficient and can be fully automated. From the obtained 3-D shapes, cloud dynamics, size, motion, type and spacing can be derived, and used for radiation closure under cloudy conditions, for example. Fisheye lenses follow a different projection function than classical pinhole-type cameras and provide a large field of view with a single image. However, the computation of dense 3-D information is more complicated and standard implementations for dense 3-D stereo reconstruction cannot be easily applied. Together with an appropriate camera calibration, which includes internal camera geometry, global position and orientation of the stereo camera pair, we use the correspondence information from the stereo matching for dense 3-D stereo reconstruction of clouds located around the cameras. We implement and evaluate the proposed approach using real world data and present two case studies. In the first case, we validate the quality and accuracy of the method by comparing the stereo reconstruction of a stratocumulus layer with reflectivity observations measured by a cloud radar and the cloud-base height estimated from a Lidar-ceilometer. The second case analyzes a rapid cumulus evolution in the presence of strong wind shear.}, doi = {10.5194/acp-16-14231-2016}, url = {https://www.ipb.uni-bonn.de/pdfs/beekmans16acp.pdf}, }
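As the abstract stresses, fisheye lenses follow a different projection function than classical pinhole cameras, which is why off-the-shelf rectification cannot be applied directly. A minimal sketch of the equidistant model r = f * theta, a common idealization of fisheye projection (the paper's actual calibrated model, including distortion terms, is not reproduced here):

import numpy as np

def project_equidistant_fisheye(X, f, cx, cy):
    # Equidistant fisheye model: image radius r = f * theta, where theta is
    # the angle between the viewing ray and the optical axis (z forward).
    X = np.atleast_2d(np.asarray(X, dtype=float))
    norm = np.linalg.norm(X, axis=1)
    theta = np.arccos(np.clip(X[:, 2] / norm, -1.0, 1.0))  # off-axis angle
    phi = np.arctan2(X[:, 1], X[:, 0])                     # azimuth
    r = f * theta
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=1)

# A ray 90 degrees off the optical axis still maps into the image
# (r = f * pi / 2), which a pinhole model cannot represent at all.
print(project_equidistant_fisheye([[1.0, 0.0, 0.0]], f=400.0, cx=800.0, cy=600.0))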
2014
- K. Herzog, R. Roscher, M. Wieland, A. Kicherer, T. Läbe, W. Förstner, H. Kuhlmann, and R. Töpfer, “Initial steps for high-throughput phenotyping in vineyards,” VITIS – Journal of Grapevine Research, vol. 53, iss. 1, pp. 1–8, 2014.
[BibTeX]
The evaluation of phenotypic characters of grapevines is required directly in the vineyard and is strongly limited by time, costs and the subjectivity of the person in charge. Sensor-based techniques are a prerequisite to allow non-invasive phenotyping of individual plant traits, to increase the quantity of object records and to reduce error variation. Thus, a Prototype-Image-Acquisition-System (PIAS) was developed for semi-automated capture of geo-referenced RGB images in an experimental vineyard. Different strategies were tested for image interpretation using Matlab. The interpretation of images from the vineyard with the real background is more practice-oriented but requires the calculation of depth maps. Images were utilised to verify the phenotyping results of two semi-automated and one automated prototype image interpretation framework. The semi-automated procedures enable contactless and non-invasive detection of bud burst and quantification of shoots at an early developmental stage (BBCH 10) and enable fast and accurate determination of the grapevine berry size at BBCH 89. Depending on the time of image acquisition at BBCH 10 up to 94 % of green shoots were visible in images. The mean berry size (BBCH 89) was recorded non-invasively with a precision of 1 mm.
@Article{herzog2014initial, title = {Initial steps for high-throughput phenotyping in vineyards}, author = {Herzog, Katja and Roscher, Ribana and Wieland, Markus and Kicherer,Anna and L\"abe, Thomas and F\"orstner, Wolfgang and Kuhlmann, Heiner and T\"opfer, Reinhard}, journal = {VITIS - Journal of Grapevine Research}, year = {2014}, month = jan, number = {1}, pages = {1--8}, volume = {53}, abstract = {The evaluation of phenotypic characters of grape- vines is required directly in the vineyard and is strongly limited by time, costs and the subjectivity of person in charge. Sensor-based techniques are prerequisite to al- low non-invasive phenotyping of individual plant traits, to increase the quantity of object records and to reduce error variation. Thus, a Prototype-Image-Acquisition- System (PIAS) was developed for semi-automated cap- ture of geo-referenced RGB images in an experimental vineyard. Different strategies were tested for image in- terpretation using Matlab. The interpretation of imag- es from the vineyard with the real background is more practice-oriented but requires the calculation of depth maps. Images were utilised to verify the phenotyping results of two semi-automated and one automated pro- totype image interpretation framework. The semi-auto- mated procedures enable contactless and non-invasive detection of bud burst and quantification of shoots at an early developmental stage (BBCH 10) and enable fast and accurate determination of the grapevine berry size at BBCH 89. Depending on the time of image ac- quisition at BBCH 10 up to 94 \% of green shoots were visible in images. The mean berry size (BBCH 89) was recorded non-invasively with a precision of 1 mm.}, }
- L. Klingbeil, M. Nieuwenhuisen, J. Schneider, C. Eling, D. Droeschel, D. Holz, T. Läbe, W. Förstner, S. Behnke, and H. Kuhlmann, “Towards Autonomous Navigation of an UAV-based Mobile Mapping System,” in 4th International Conf. on Machine Control & Guidance, 2014, pp. 136–147.
[BibTeX] [PDF]
For situations, where mapping is neither possible from high altitudes nor from the ground, we are developing an autonomous micro aerial vehicle able to fly at low altitudes in close vicinity of obstacles. This vehicle is based on a MikroKopterTM octocopter platform (maximum total weight: 5kg), and contains a dual frequency GPS board, an IMU, a compass, two stereo camera pairs with fisheye lenses, a rotating 3D laser scanner, 8 ultrasound sensors, a real-time processing unit, and a compact PC for on-board ego-motion estimation and obstacle detection for autonomous navigation. A high-resolution camera is used for the actual mapping task, where the environment is reconstructed in three dimensions from images, using a highly accurate bundle adjustment. In this contribution, we describe the sensor system setup and present results from the evaluation of several aspects of the different subsystems as well as initial results from flight tests.
@InProceedings{klingbeil14mcg, title = {Towards Autonomous Navigation of an UAV-based Mobile Mapping System}, author = {Klingbeil, Lasse and Nieuwenhuisen, Matthias and Schneider, Johannes and Eling, Christian and Droeschel, David and Holz, Dirk and L\"abe, Thomas and F\"orstner, Wolfgang and Behnke, Sven and Kuhlmann, Heiner}, booktitle = {4th International Conf. on Machine Control \& Guidance}, year = {2014}, pages = {136--147}, abstract = {For situations, where mapping is neither possible from high altitudes nor from the ground, we are developing an autonomous micro aerial vehicle able to fly at low altitudes in close vicinity of obstacles. This vehicle is based on a MikroKopterTM octocopter platform (maximum total weight: 5kg), and contains a dual frequency GPS board, an IMU, a compass, two stereo camera pairs with fisheye lenses, a rotating 3D laser scanner, 8 ultrasound sensors, a real-time processing unit, and a compact PC for on-board ego-motion estimation and obstacle detection for autonomous navigation. A high-resolution camera is used for the actual mapping task, where the environment is reconstructed in three dimensions from images, using a highly accurate bundle adjustment. In this contribution, we describe the sensor system setup and present results from the evaluation of several aspects of the different subsystems as well as initial results from flight tests.}, url = {https://www.ipb.uni-bonn.de/pdfs/klingbeil14mcg.pdf}, }
- J. Schneider, T. Läbe, and W. Förstner, “Real-Time Bundle Adjustment with an Omnidirectional Multi-Camera System and GPS,” in Proc. of the 4th International Conf. on Machine Control & Guidance, 2014, pp. 98–103.
[BibTeX] [PDF]
In this paper we present our system for visual odometry that performs a fast incremental bundle adjustment for real-time structure and motion estimation in an unknown scene. It is applicable to image streams of a calibrated multi-camera system with omnidirectional cameras. In this paper we use an autonomously flying octocopter that is equipped for visual odometry and obstacle detection with four fisheye cameras, which provide a large field of view. For real-time ego-motion estimation the platform is equipped, besides the cameras, with a dual frequency GPS board, an IMU and a compass. In this paper we show how we apply our system for visual odometry using the synchronized video streams of the four fisheye cameras. The position and orientation information from the GPS-unit and the inertial sensors can optionally be integrated into our system. We will show the obtained accuracy of pure odometry and compare it with the solution from GPS/INS.
@InProceedings{schneider14mcg, title = {Real-Time Bundle Adjustment with an Omnidirectional Multi-Camera System and GPS}, author = {J. Schneider and T. L\"abe and W. F\"orstner}, booktitle = {Proc. of the 4th International Conf. on Machine Control \& Guidance}, year = {2014}, pages = {98--103}, abstract = {In this paper we present our system for visual odometry that performs a fast incremental bundle adjustment for real-time structure and motion estimation in an unknown scene. It is applicable to image streams of a calibrated multi-camera system with omnidirectional cameras. In this paper we use an autonomously flying octocopter that is equipped for visual odometry and obstacle detection with four fisheye cameras, which provide a large field of view. For real-time ego-motion estimation the platform is equipped, besides the cameras, with a dual frequency GPS board, an IMU and a compass. In this paper we show how we apply our system for visual odometry using the synchronized video streams of the four fisheye cameras. The position and orientation information from the GPS-unit and the inertial sensors can optionally be integrated into our system. We will show the obtained accuracy of pure odometry and compare it with the solution from GPS/INS.}, city = {Braunschweig}, url = {https://www.ipb.uni-bonn.de/pdfs/schneider14mcg.pdf}, }
2013
- M. Nieuwenhuisen, D. Droeschel, J. Schneider, D. Holz, T. Läbe, and S. Behnke, “Multimodal Obstacle Detection and Collision Avoidance for Micro Aerial Vehicles,” in Proc. of the 6th European Conf. on Mobile Robots (ECMR), 2013. doi:10.1109/ECMR.2013.6698812
[BibTeX] [PDF]
Reliably perceiving obstacles and avoiding collisions is key for the fully autonomous application of micro aerial vehicles (MAVs). Limiting factors for increasing autonomy and complexity of MAVs (without external sensing and control) are limited onboard sensing and limited onboard processing power. In this paper, we propose a complete system with a multimodal sensor setup for omnidirectional obstacle perception. We developed a lightweight 3D laser scanner setup and visual obstacle detection using wide-angle stereo cameras. Together with our fast reactive collision avoidance approach based on local egocentric grid maps of the environment we aim at safe operation in the vicinity of structures like buildings or vegetation.
@InProceedings{nieuwenhuisen13ecmr, title = {Multimodal Obstacle Detection and Collision Avoidance for Micro Aerial Vehicles}, author = {Nieuwenhuisen, Matthias and Droeschel, David and Schneider, Johannes and Holz, Dirk and L\"abe, Thomas and Behnke, Sven}, booktitle = {Proc. of the 6th European Conf. on Mobile Robots (ECMR)}, year = {2013}, abstract = {Reliably perceiving obstacles and avoiding collisions is key for the fully autonomous application of micro aerial vehicles (MAVs). Limiting factors for increasing autonomy and complexity of MAVs (without external sensing and control) are limited onboard sensing and limited onboard processing power. In this paper, we propose a complete system with a multimodal sensor setup for omnidirectional obstacle perception. We developed a lightweight 3D laser scanner setup and visual obstacle detection using wide-angle stereo cameras. Together with our fast reactive collision avoidance approach based on local egocentric grid maps of the environment we aim at safe operation in the vicinity of structures like buildings or vegetation.}, city = {Barcelona}, doi = {10.1109/ECMR.2013.6698812}, url = {https://www.ais.uni-bonn.de/papers/ECMR_2013_Nieuwenhuisen_Multimodal_Obstacle_Avoidance.pdf}, }
- J. Schneider, T. Läbe, and W. Förstner, “Incremental Real-time Bundle Adjustment for Multi-camera Systems with Points at Infinity,” in ISPRS Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 2013, pp. 355-360. doi:10.5194/isprsarchives-XL-1-W2-355-2013
[BibTeX] [PDF]
This paper presents a concept and first experiments on a keyframe-based incremental bundle adjustment for real-time structure and motion estimation in an unknown scene. In order to avoid periodic batch steps, we use the software iSAM2 for sparse nonlinear incremental optimization, which is highly efficient through incremental variable reordering and fluid relinearization. We adapted the software to allow for (1) multi-view cameras by taking the rigid transformation between the cameras into account, (2) omni-directional cameras as it can handle arbitrary bundles of rays and (3) scene points at infinity, which improve the estimation of the camera orientation as points at the horizon can be observed over long periods of time. The real-time bundle adjustment refers to sets of keyframes, consisting of frames, one per camera, taken in a synchronized way, that are initiated if a minimal geometric distance to the last keyframe set is exceeded. It uses interest points in the keyframes as observations, which are tracked in the synchronized video streams of the individual cameras and matched across the cameras, if possible. First experiments show the potential of the incremental bundle adjustment with respect to time requirements. Our experiments are based on a multi-camera system with four fisheye cameras, which are mounted on a UAV as two stereo pairs, one looking ahead and one looking backwards, providing a large field of view.
@InProceedings{schneider13isprs, title = {Incremental Real-time Bundle Adjustment for Multi-camera Systems with Points at Infinity}, author = {J. Schneider and T. L\"abe and W. F\"orstner}, booktitle = {ISPRS Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences}, year = {2013}, pages = {355-360}, volume = {XL-1/W2}, abstract = {This paper presents a concept and first experiments on a keyframe-based incremental bundle adjustment for real-time structure and motion estimation in an unknown scene. In order to avoid periodic batch steps, we use the software iSAM2 for sparse nonlinear incremental optimization, which is highly efficient through incremental variable reordering and fluid relinearization. We adapted the software to allow for (1) multi-view cameras by taking the rigid transformation between the cameras into account, (2) omni-directional cameras as it can handle arbitrary bundles of rays and (3) scene points at infinity, which improve the estimation of the camera orientation as points at the horizon can be observed over long periods of time. The real-time bundle adjustment refers to sets of keyframes, consisting of frames, one per camera, taken in a synchronized way, that are initiated if a minimal geometric distance to the last keyframe set is exceeded. It uses interest points in the keyframes as observations, which are tracked in the synchronized video streams of the individual cameras and matched across the cameras, if possible. First experiments show the potential of the incremental bundle adjustment \wrt time requirements. Our experiments are based on a multi-camera system with four fisheye cameras, which are mounted on a UAV as two stereo pairs, one looking ahead and one looking backwards, providing a large field of view.}, doi = {10.5194/isprsarchives-XL-1-W2-355-2013}, url = {https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XL-1-W2/355/2013/isprsarchives-XL-1-W2-355-2013.pdf}, }
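The keyframe rule described in the abstract, initiating a new synchronized keyframe set once a minimal geometric distance to the last set is exceeded, reduces to a simple gate; a sketch with an illustrative threshold (the paper does not state a value here):

import numpy as np

def should_create_keyframe(t_current, t_last_keyframe, min_distance=0.5):
    # New keyframe set once the platform has moved farther than
    # min_distance (metres, illustrative) from the last keyframe set.
    delta = np.asarray(t_current, dtype=float) - np.asarray(t_last_keyframe, dtype=float)
    return np.linalg.norm(delta) > min_distance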
2012
- J. Schneider, F. Schindler, T. Läbe, and W. Förstner, “Bundle Adjustment for Multi-camera Systems with Points at Infinity,” in ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, 2012, pp. 75–80. doi:10.5194/isprsannals-I-3-75-2012
[BibTeX] [PDF]
We present a novel approach for a rigorous bundle adjustment for omnidirectional and multi-view cameras, which enables an efficient maximum-likelihood estimation with image and scene points at infinity. Multi-camera systems are used to increase the resolution, to combine cameras with different spectral sensitivities (Z/I DMC, Vexcel Ultracam) or – like omnidirectional cameras – to augment the effective aperture angle (Blom Pictometry, Rollei Panoscan Mark III). Additionally multi-camera systems gain in importance for the acquisition of complex 3D structures. For stabilizing camera orientations – especially rotations – one should generally use points at the horizon over long periods of time within the bundle adjustment that classical bundle adjustment programs are not capable of. We use a minimal representation of homogeneous coordinates for image and scene points. Instead of eliminating the scale factor of the homogeneous vectors by Euclidean normalization, we normalize the homogeneous coordinates spherically. This way we can use images of omnidirectional cameras with single-view point like fisheye cameras and scene points, which are far away or at infinity. We demonstrate the feasibility and the potential of our approach on real data taken with a single camera, the stereo camera FinePix Real 3D W3 from Fujifilm and the multi-camera system Ladybug3 from Point Grey.
@InProceedings{schneider12isprs, title = {Bundle Adjustment for Multi-camera Systems with Points at Infinity}, author = {J. Schneider and F. Schindler and T. L\"abe and W. F\"orstner}, booktitle = {ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences}, year = {2012}, pages = {75--80}, volume = {I-3}, abstract = {We present a novel approach for a rigorous bundle adjustment for omnidirectional and multi-view cameras, which enables an efficient maximum-likelihood estimation with image and scene points at infinity. Multi-camera systems are used to increase the resolution, to combine cameras with different spectral sensitivities (Z/I DMC, Vexcel Ultracam) or - like omnidirectional cameras - to augment the effective aperture angle (Blom Pictometry, Rollei Panoscan Mark III). Additionally multi-camera systems gain in importance for the acquisition of complex 3D structures. For stabilizing camera orientations - especially rotations - one should generally use points at the horizon over long periods of time within the bundle adjustment that classical bundle adjustment programs are not capable of. We use a minimal representation of homogeneous coordinates for image and scene points. Instead of eliminating the scale factor of the homogeneous vectors by Euclidean normalization, we normalize the homogeneous coordinates spherically. This way we can use images of omnidirectional cameras with single-view point like fisheye cameras and scene points, which are far away or at infinity. We demonstrate the feasibility and the potential of our approach on real data taken with a single camera, the stereo camera FinePix Real 3D W3 from Fujifilm and the multi-camera system Ladybug3 from Point Grey.}, city = {Melbourne}, doi = {10.5194/isprsannals-I-3-75-2012}, url = {https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/I-3/75/2012/isprsannals-I-3-75-2012.pdf}, }
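The central device of the paper, normalizing homogeneous coordinates spherically instead of Euclideanly, is easy to state in code; a minimal sketch:

import numpy as np

def normalize_euclidean(Xh):
    # Classical normalization: divide by the last coordinate.
    # Undefined for points at infinity, where Xh[-1] == 0.
    return Xh / Xh[-1]

def normalize_spherical(Xh):
    # Spherical normalization: scale the homogeneous vector to unit length.
    # Well-defined for finite points and points at infinity alike.
    return Xh / np.linalg.norm(Xh)

# A scene point at infinity in direction (1, 0, 0): Euclidean normalization
# would divide by zero, spherical normalization keeps the unit direction.
print(normalize_spherical(np.array([1.0, 0.0, 0.0, 0.0])))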
2011
- B. Schmeing, T. Läbe, and W. Förstner, “Trajectory Reconstruction Using Long Sequences of Digital Images From an Omnidirectional Camera,” in Proc. of the 31st DGPF Conf. (Jahrestagung), Mainz, 2011, pp. 443–452.
[BibTeX] [PDF]
We present a method to perform bundle adjustment using long sequences of digital images from an omnidirectional camera. We use the Ladybug3 camera from PointGrey, which consists of six individual cameras pointing in different directions. There is large overlap between successive images but only a few loop closures provide connections between distant camera positions. We face two challenges: (1) to perform a bundle adjustment with images of an omnidirectional camera and (2) implement outlier detection and estimation of initial parameters for the geometry described above. Our program combines the Ladybug's individual cameras to a single virtual camera and uses a spherical imaging model within the bundle adjustment, solving problem (1). Outlier detection (2) is done using bundle adjustments with small subsets of images followed by a robust adjustment of all images. Approximate values in our context are taken from an on-board inertial navigation system.
@InProceedings{schmeing2011trajectory, title = {Trajectory Reconstruction Using Long Sequences of Digital Images From an Omnidirectional Camera}, author = {Schmeing, Benno and L\"abe, Thomas and F\"orstner, Wolfgang}, booktitle = {Proc. of the 31th DGPF Conf. (Jahrestagung)}, year = {2011}, address = {Mainz}, pages = {443--452}, abstract = {We present a method to perform bundle adjustment using long sequences of digital images from an omnidirectional camera. We use the Ladybug3 camera from PointGrey, which consists of six individual cameras pointing in different directions. There is large overlap between successive images but only a few loop closures provide connections between distant camera positions. We face two challenges: (1) to perform a bundle adjustment with images of an omnidirectional camera and (2) implement outlier detection and estimation of initial parameters for the geometry described above. Our program combines the Ladybug?s individual cameras to a single virtual camera and uses a spherical imaging model within the bundle adjustment, solving problem (1). Outlier detection (2) is done using bundle adjustments with small subsets of images followed by a robust adjustment of all images. Approximate values in our context are taken from an on-board inertial navigation system.}, city = {Mainz}, proceeding = {Proc. of the 31th DGPF Conf. (Jahrestagung)}, url = {https://www.ipb.uni-bonn.de/pdfs/Schmeing2011Trajectory.pdf}, }
2009
- M. Drauschke, R. Roscher, T. Läbe, and W. Förstner, “Improving Image Segmentation using Multiple View Analysis,” in Object Extraction for 3D City Models, Road Databases and Traffic Monitoring – Concepts, Algorithms and Evaluation (CMRT09), 2009, pp. 211-216.
[BibTeX] [PDF]
In our contribution, we improve image segmentation by integrating depth information from multi-view analysis. We assume the object surface in each region can be represented by a low order polynomial, and estimate the best fitting parameters of a plane using those points of the point cloud, which are mapped to the specific region. We can merge adjacent image regions, which cannot be distinguished geometrically. We demonstrate the approach for finding spatially planar regions on aerial images. Furthermore, we discuss the possibilities of extending of our approach towards segmenting terrestrial facade images.
@InProceedings{drauschke2009improving, title = {Improving Image Segmentation using Multiple View Analysis}, author = {Drauschke, Martin and Roscher, Ribana and L\"abe, Thomas and F\"orstner, Wolfgang}, booktitle = {Object Extraction for 3D City Models, Road Databases and Traffic Monitoring - Concepts, Algorithms and Evaluatin (CMRT09)}, year = {2009}, pages = {211-216}, abstract = {In our contribution, we improve image segmentation by integrating depth information from multi-view analysis. We assume the object surface in each region can be represented by a low order polynomial, and estimate the best fitting parameters of a plane using those points of the point cloud, which are mapped to the specific region. We can merge adjacent image regions, which cannot be distinguished geometrically. We demonstrate the approach for finding spatially planar regions on aerial images. Furthermore, we discuss the possibilities of extending of our approach towards segmenting terrestrial facade images.}, city = {Paris}, url = {https://www.ipb.uni-bonn.de/pdfs/Drauschke2009Improving.pdf}, }
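One way to read the geometric merging step (a sketch under assumptions, not the paper's implementation): fit a plane to the 3D points mapped to each region and merge adjacent regions whose union is still explained well by a single plane. A standard SVD plane fit with an illustrative threshold:

import numpy as np

def fit_plane(points):
    # Least-squares plane through 3D points: the unit normal n is the right
    # singular vector of the centred point matrix with the smallest singular
    # value; d completes the plane equation n . x + d = 0.
    P = np.asarray(points, dtype=float)
    centroid = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - centroid)
    n = Vt[-1]
    return n, -n @ centroid

def regions_coplanar(points_a, points_b, tol=0.05):
    # Merge test: fit one plane to the union of both regions and accept if
    # the largest point-to-plane residual stays below tol (scene units,
    # illustrative value).
    P = np.vstack([points_a, points_b])
    n, d = fit_plane(P)
    return np.max(np.abs(P @ n + d)) < tol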
2008
- T. Dickscheid, T. Läbe, and W. Förstner, “Benchmarking Automatic Bundle Adjustment Results,” in 21st Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS), Beijing, China, 2008, pp. 7–12, Part B3a.
[BibTeX] [PDF]
In classical photogrammetry, point observations are manually determined by an operator for performing the bundle adjustment of a sequence of images. In such cases, a comparison of different estimates is usually carried out with respect to the estimated 3D object points. Today, a broad range of automatic methods are available for extracting and matching point features across images, even in the case of widely separated views and under strong deformations. This allows for fully automatic solutions to the relative orientation problem, and even to the bundle triangulation in case that manually measured control points are available. However, such systems often contain random subprocedures like RANSAC for eliminating wrong correspondences, yielding different 3D points but hopefully similar orientation parameters. This causes two problems for the evaluation: First, the randomness of the algorithm has an influence on its stability, and second, we are constrained to compare the orientation parameters instead of the 3D points. We propose a method for benchmarking automatic bundle adjustments which takes these constraints into account and uses the orientation parameters directly. Given sets of corresponding orientation parameters, we require our benchmark test to address their consistency of the form deviation and the internal precision and their precision level related to the precision of a reference data set. Besides comparing different bundle adjustment methods, the approach may be used to safely evaluate effects of feature operators, matching strategies, control parameters and other design decisions for a particular method. The goal of this paper is to derive appropriate measures to cover these aspects, describe a coherent benchmarking scheme and show the feasibility of the approach using real data.
@InProceedings{dickscheid2008benchmarking, title = {Benchmarking Automatic Bundle Adjustment Results}, author = {Dickscheid, Timo and L\"abe, Thomas and F\"orstner, Wolfgang}, booktitle = {21st Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS)}, year = {2008}, address = {Beijing, China}, pages = {7--12, Part B3a}, abstract = {In classical photogrammetry, point observations are manually determined by an operator for performing the bundle adjustment of a sequence of images. In such cases, a comparison of different estimates is usually carried out with respect to the estimated 3D object points. Today, a broad range of automatic methods are available for extracting and matching point features across images, even in the case of widely separated views and under strong deformations. This allows for fully automatic solutions to the relative orientation problem, and even to the bundle triangulation in case that manually measured control points are available. However, such systems often contain random subprocedures like RANSAC for eliminating wrong correspondences, yielding different 3D points but hopefully similar orientation parameters. This causes two problems for the evaluation: First, the randomness of the algorithm has an influence on its stability, and second, we are constrained to compare the orientation parameters instead of the 3D points. We propose a method for benchmarking automatic bundle adjustments which takes these constraints into account and uses the orientation parameters directly. Given sets of corresponding orientation parameters, we require our benchmark test to address their consistency of the form deviation and the internal precision and their precision level related to the precision of a reference data set. Besides comparing different bundle adjustment methods, the approach may be used to safely evaluate effects of feature operators, matching strategies, control parameters and other design decisions for a particular method. The goal of this paper is to derive appropriate measures to cover these aspects, describe a coherent benchmarking scheme and show the feasibility of the approach using real data.}, url = {https://www.ipb.uni-bonn.de/pdfs/Dickscheid2008Benchmarking.pdf}, }
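Comparing orientation parameters directly, as this benchmark proposes, rests on distances between estimated rotations; the elementary building block is the angle of the relative rotation (the paper's full consistency measure additionally accounts for the estimated covariances). A minimal sketch:

import numpy as np

def rotation_angle_deg(R1, R2):
    # Angle of the relative rotation R1^T R2: zero iff both orientation
    # estimates agree; invariant to a common rotation of the world frame.
    R = np.asarray(R1).T @ np.asarray(R2)
    cos_angle = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))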
- T. Läbe, T. Dickscheid, and W. Förstner, “On the Quality of Automatic Relative Orientation Procedures,” in 21st Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS), Beijing, China, 2008, pp. 37–42, Part B3b-1.
[BibTeX] [PDF]
This paper presents an empirical investigation into the quality of automatic relative orientation procedures. The results of an in-house developed automatic orientation software called aurelo (Laebe and Foerstner, 2006) are evaluated. For this evaluation a recently proposed consistency measure for two sets of orientation parameters (Dickscheid et al., 2008) and the ratio of two covariance matrices is used. Thus we evaluate the consistency of bundle block adjustments and the precision level achievable. We use different sets of orientation results related to the same set of images but computed under differing conditions. As reference datasets results on a much higher image resolution and ground truth data from artificial images rendered with computer graphics software are used. Six different effects are analysed: varying results due to random procedures in aurelo, computations on different image pyramid levels and with or without points with only two or three observations, the effect of replacing the used SIFT operator with an approximation of SIFT features, called SURF, repetitive patterns in the scene and remaining non-linear distortions. These experiments show under which conditions the bundle adjustment results reflect the true errors and thus give valuable hints for the use of automatic relative orientation procedures and possible improvements of the software.
@InProceedings{labe2008quality, title = {On the Quality of Automatic Relative Orientation Procedures}, author = {L\"abe, Thomas and Dickscheid, Timo and F\"orstner, Wolfgang}, booktitle = {21st Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS)}, year = {2008}, address = {Beijing, China}, pages = {37--42 Part B3b-1}, abstract = {This paper presents an empirical investigation into the quality of automatic relative orientation procedures. The results of an in-house developed automatic orientation software called aurelo (Laebe and Foerstner, 2006) are evaluated. For this evaluation a recently proposed consistency measure for two sets of orientation parameters (Dickscheid et. al., 2008) and the ratio of two covariances matrices is used. Thus we evaluate the consistency of bundle block adjustments and the precision level achievable. We use different sets of orientation results related to the same set of images but computed under differing conditions. As reference datasets results on a much higher image resolution and ground truth data from artificial images rendered with computer graphics software are used. Six different effects are analysed: varying results due to random procedures in aurelo, computations on different image pyramid levels and with or without points with only two or three observations, the effect of replacing the used SIFT operator with an approximation of SIFT features, called SURF, repetitive patterns in the scene and remaining non-linear distortions. These experiments show under which conditions the bundle adjustment results reflect the true errors and thus give valuable hints for the use of automatic relative orientation procedures and possible improvements of the software.}, url = {https://www.ipb.uni-bonn.de/pdfs/Labe2008Quality.pdf}, }
2006
- T. Läbe and W. Förstner, “Automatic Relative Orientation of Images,” in Proc. of the 5th Turkish-German Joint Geodetic Days, Berlin, 2006.
[BibTeX] [PDF]
This paper presents a new fully automatic approach for the relative orientation of several digital images taken with a calibrated camera. This approach uses new algorithms for feature extraction and relative orientation developed in the last few years. There is no need for special markers in the scene nor for approximate values of the orientation data. We use the point operator developed by D. G. Lowe (2004), which extracts points with scale- and rotation-invariant descriptors (SIFT-features). These descriptors allow a successful matching of image points even when dealing with highly convergent or rotated images. The approach consists of the following steps: After extracting image points on all images a matching between every image pair is calculated using the SIFT parameters only. No prior information about the pose of the images or the overlapping parts of the images is used. For every image pair a relative orientation is computed with the help of a RANSAC procedure. Here we use the new 5-point algorithm from D. Nister (2004). Out of this set of orientations approximate values for the orientation parameters and the object coordinates are calculated by computing the relative scales and transforming the models into a common coordinate system. Several tests are made in order to get a reliable input for the currently final step: a bundle block adjustment. The paper discusses the practical impacts of the used algorithms. Examples of different indoor- and outdoor-scenes including a data set of oblique images taken from a helicopter are presented and the results of the approach applied to these data sets are evaluated. These results show that the approach can be used for a wide range of scenes with different types of the image geometry and taken with different types of cameras including inexpensive consumer cameras. In particular we investigate the robustness of the algorithms, e. g. in geometric tests on image triplets. Further developments like the use of image pyramids with a modified matching are discussed in the outlook. Literature: David G. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, 60, 2 (2004), pp. 91-110. D. Nister, An efficient solution to the five-point relative pose problem, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 26(6):756-770, June 2004.
@InProceedings{labe2006automatic, title = {Automatic Relative Orientation of Images}, author = {L\"abe, Thomas and F\"orstner, Wolfgang}, booktitle = {Proc. of the 5th Turkish-German Joint Geodetic Days}, year = {2006}, address = {Berlin}, abstract = {This paper presents a new full automatic approach for the relative orientation of several digital images taken with a calibrated camera. This approach uses new algorithms for feature extraction and relative orientation developed in the last few years. There is no need for special markers in the scene nor for approximate values of the orientation data. We use the point operator developed by D. G. Lowe (2004), which extracts points with scale- and rotation-invariant descriptors (SIFT-features). These descriptors allow a successful matching of image points even when dealing with highly convergent or rotated images. The approach consists of the following steps: After extracting image points on all images a matching between every image pair is calculated using the SIFT parameters only. No prior information about the pose of the images or the overlapping parts of the images is used. For every image pair a relative orientation is computed with the help of a RANSAC procedure. Here we use the new 5-point algorithm from D. Nister (2004). Out of this set of orientations approximate values for the orientation parameters and the object coordinates are calculated by computing the relative scales and transforming the models into a common coordinate system. Several tests are made in order to get a reliable input for the currently final step: a bundle block adjustment. The paper discusses the practical impacts of the used algorithms. Examples of different indoor- and outdoor-scenes including a data set of oblique images taken from a helicopter are presented and the results of the approach applied to these data sets are evaluated. These results show that the approach can be used for a wide range of scenes with different types of the image geometry and taken with different types of cameras including inexpensive consumer cameras. In particular we investigate in the robustness of the algorithms, e. g. in geometric tests on image triplets. Further developments like the use of image pyramids with a modified matching are discussed in the outlook. Literature: David G. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, 60, 2 (2004), pp. 91-110. D. Nister, An efficient solution to the five-point relative pose problem, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 26(6):756-770, June 2004.}, city = {Bonn}, proceeding = {Proc. of the 5th Turkish-German Joint Geodetic Days}, url = {https://www.ipb.uni-bonn.de/pdfs/Labe2006Automatic.pdf}, }
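The pipeline in the abstract (SIFT matching without any pose prior, followed by pairwise relative orientation with Nister's 5-point algorithm inside a RANSAC, then bundle adjustment) can be sketched with today's OpenCV. The file names and calibration matrix below are placeholders, and this is of course not the aurelo software itself:

import cv2
import numpy as np

# Placeholder images and calibration (the approach assumes a calibrated camera).
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[1500.0, 0.0, 960.0], [0.0, 1500.0, 540.0], [0.0, 0.0, 1.0]])

# SIFT points with scale- and rotation-invariant descriptors.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Descriptor-only matching (no prior pose information), with Lowe's ratio test.
good = []
for pair in cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.8 * pair[1].distance:
        good.append(pair[0])
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Relative orientation: essential matrix via the 5-point algorithm inside a
# RANSAC, then decomposition into rotation R and translation direction t.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)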
2005
- T. Läbe and W. Förstner, “Erfahrungen mit einem neuen vollautomatischen Verfahren zur Orientierung digitaler Bilder,” in Proc. of DGPF Conf., Rostock, Germany, 2005.
[BibTeX] [PDF]
The paper presents a new fully automatic method for the relative orientation of several digital images from calibrated cameras. It uses the algorithms recently developed in the fields of feature extraction and image geometry and requires neither artificial targets in the scene nor approximate values. It is based on automatically extracted points computed with the method for extracting scale-invariant image features proposed by D. Lowe (2004), which enables point matching even for highly convergent images. To determine approximate values for the final bundle adjustment, the direct solution of D. Nister (2004) is used for the relative orientation of the image pairs. The paper discusses practical experience with these algorithms on example data sets of both indoor and outdoor scenes.
@InProceedings{labe2005erfahrungen, title = {Erfahrungen mit einem neuen vollautomatischen Verfahren zur Orientierung digitaler Bilder}, author = {L\"abe, Thomas and F\"orstner, Wolfgang}, booktitle = {Proc. of DGPF Conf.}, year = {2005}, address = {Rostock, Germany}, abstract = {Der Aufsatz pr\"asentiert ein neues vollautomatisches Verfahren zur relativen Orientierung mehrerer digitaler Bilder kalibrierter Kameras. Es nutzt die in den letzten Jahren neu entwickelten Algorithmen im Bereich der Merkmalsextraktion und der Bildgeometrie und erfordert weder das Anbringen von k\"unstlichen Zielmarken noch die Angabe von N\"aherungswerten. Es basiert auf automatisch extrahierten Punkten, die mit dem von D. Lowe (2004) vorgeschlagenen Verfahren zur Extraktion skaleninvarianter Bildmerkmale berechnet werden. Diese erm\"oglichen eine Punktzuordnung auch bei stark konvergenten Aufnahmen. F\"ur die Bestimmung von N\"aherungswerten der abschlie{\ss}enden B\"undelausgleichung wird bei der relativen Orientierung der Bildpaare das direkte L\"osungsverfahren von D. Nister (2004) verwendet. Der Aufsatz diskutiert die praktischen Erfahrungen mit den verwendeten Algorithmen anhand von Beispieldatens\"atzen sowohl von Innenraum- als auch von Aussnaufnahmen.}, city = {Bonn}, proceeding = {Proc. of DGPF Conf.}, url = {https://www.ipb.uni-bonn.de/pdfs/Labe2005Erfahrungen.pdf}, }
2004
- T. Läbe and W. Förstner, “Geometric Stability of Low-Cost Digital Consumer Cameras,” in Proc. 20th ISPRS Congress, Istanbul, Turkey, 2004, pp. 528–535.
[BibTeX] [PDF]
In recent years the number of available low-cost digital consumer cameras has increased significantly while their prices have dropped. For many applications without high-end accuracy requirements, it is therefore an important consideration whether low-cost cameras can be used. This paper investigates the use of consumer cameras for photogrammetric measurements and vision systems. An important aspect of the suitability of these cameras is their geometric stability. Two aspects are considered: the change of the calibration parameters when using camera features such as zoom or autofocus, and the time invariance of the calibration parameters. For this purpose, laboratory calibrations of different cameras have been carried out at different times. The resulting calibration parameters, especially the principal distance and the principal point, and their accuracies are given. The usefulness of the information given in the image header, especially the focal length, is compared with the results of the calibration.
@InProceedings{labe2004geometric, title = {Geometric Stability of Low-Cost Digital Consumer Cameras}, author = {L\"abe, Thomas and F\"orstner, Wolfgang}, booktitle = {Proc. 20th ISPRS Congress}, year = {2004}, address = {Istanbul, Turkey}, pages = {528--535}, abstract = {During the last years the number of available low-cost digital consumer cameras has significantly increased while their prices decrease. Therefore for many applications with no high-end accuracy requirements it is an important consideration whether to use low-cost cameras. This paper investigates in the use of consumer cameras for photogrammetric measurements and vision systems. An important aspect of the suitability of these cameras is their geometric stability. Two aspects should be considered: The change of calibration parameters when using the camera's features such as zoom or auto focus and the time invariance of the calibration parameters. Therefore laboratory calibrations of different cameras have been carried out at different times. The resulting calibration parameters, especially the principal distance and the principal point, and their accuracies are given. The usefulness of the information given in the image header, especially the focal length, is compared to the results of the calibration.}, city = {Bonn}, proceeding = {Proc. of XXth ISPRS Congress 2004}, url = {https://www.ipb.uni-bonn.de/pdfs/Labe2004Geometric.pdf}, }
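As a back-of-the-envelope illustration of the header-versus-calibration comparison mentioned in the abstract, an EXIF focal length can be converted to a principal distance in pixels and set against a lab-calibrated value. All numbers below are invented placeholders, not results from the paper.

# Comparing an image-header focal length with a calibrated principal
# distance. Sensor size, image width, and both values are assumptions.
SENSOR_WIDTH_MM = 7.18      # assumed sensor width of a consumer camera
IMAGE_WIDTH_PX = 2272       # assumed image width

def focal_mm_to_px(focal_mm: float) -> float:
    """Convert an EXIF focal length [mm] to a principal distance [pixel]."""
    return focal_mm * IMAGE_WIDTH_PX / SENSOR_WIDTH_MM

exif_focal_mm = 7.0         # focal length as written into the image header
calibrated_c_px = 2230.5    # principal distance from a lab calibration

c_from_exif = focal_mm_to_px(exif_focal_mm)
print(f"header: {c_from_exif:.1f} px, calibration: {calibrated_c_px:.1f} px, "
      f"difference: {100 * abs(c_from_exif - calibrated_c_px) / calibrated_c_px:.2f} %")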
2003
- W. Förstner and T. Läbe, “Learning Optimal Parameters for Self-diagnosis in a System for Automatic Exterior Orientation,” in Vision Systems (ICVS) 2003, Graz, 2003, pp. 236–246. doi:10.1007/3-540-36592-3_23
[BibTeX] [PDF]
The paper describes the automatic learning of parameters for self-diagnosis of a system for automatic orientation of single aerial images used by the State Survey Department of North-Rhine-Westfalia. The orientation is based on 3D lines as ground control features and uses a sequence of probabilistic clustering, search and ML-estimation to robustly estimate the 6 parameters of the exterior orientation of an aerial image. The system is interpreted as a classifier making an internal evaluation of its success. The classification is based on a number of parameters possibly relevant for self-diagnosis. A hand-designed classifier reached 11% false negatives and 2% false positives on approx. 17,000 images. A first version of a new classifier using support vector machines is evaluated. Based on approx. 650 images, the classifier reaches 2% false negatives and 4% false positives, indicating an increase in performance.
@InProceedings{forstner2003learning, title = {Learning Optimal Parameters for Self-diagnosis in a System for Automatic Exterior Orientation}, author = {F\"orstner, Wolfgang and L\"abe, Thomas}, booktitle = {Vision Systems (ICVS) 2003}, year = {2003}, address = {Graz}, editor = {Crowley, James L. and Piater, Justus H. and Vincze, M. and Paletta, L.}, pages = {236--246}, abstract = {The paper describes the automatic learning of parameters for self-diagnosis of a system for automatic orientation of single aerial images used by the State Survey Department of Northrhine--Westfalia. The orientation is based on 3D lines as ground control features, and uses a sequence of probabilistic clustering, search and ML-estimation for robustly estimating the 6 parameters of the exterior orientation of an aerial image. The system is interpreted as a classifier, making an internal evaluation of its success. The classification is based on a number of parameters possibly relevant for self-diagnosis. A hand designed classifier reached 11% false negatives and 2% false positives on appr. 17000 images. A first version of a new classifier using support vector machines is evaluated. Based on appr. 650 images the classifier reaches 2 % false negatives and 4% false positives, indicating an increase in performance.}, city = {Bonn}, doi = {10.1007/3-540-36592-3_23}, proceeding = {Computer Vision Systems (ICVS) 2003}, url = {https://www.ipb.uni-bonn.de/pdfs/Forstner2003Learning.pdf}, }
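The self-diagnosis step described above can be pictured as a small binary classifier over per-image quality indicators. The sketch below trains an SVM on synthetic stand-in features and reports the two error rates the paper uses; the feature set and data are invented, and scikit-learn stands in for whatever SVM implementation was actually used.

# Toy self-diagnosis classifier: SVM over invented quality indicators.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Invented stand-ins for "parameters possibly relevant for self-diagnosis",
# e.g. number of matched control features, residual RMS, redundancy.
X = rng.normal(size=(650, 3))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=650) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
pred = clf.predict(X_te)

fn = np.mean(pred[y_te == 1] == 0)   # orientation was good but flagged as bad
fp = np.mean(pred[y_te == 0] == 1)   # orientation was bad but accepted as good
print(f"false negatives: {fn:.1%}, false positives: {fp:.1%}")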
2002
- T. Läbe and M. Henze, “Automatische äussere Orientierung in der Orthophotoproduktion – ein Erfahrungsbericht,” in Proc. of DGPF Conf., Neubrandenburg, Germany, 2002, pp. 245–252.
[BibTeX] [PDF]
One prerequisite for producing an orthophoto is knowledge of the exterior orientation of the image to be processed. For this purpose, a fully automatic method based on the search for projected 3D edges in the image was developed at the Institute of Photogrammetry of the University of Bonn within a cooperation project with the land survey department (Landesvermessungsamt) of North-Rhine-Westfalia. The program is called “AMOR” (Automatische Modellgestützte ORientierung, automatic model-based orientation) and comprises both image processing (edge extraction) and robust estimation methods for determining the orientation parameters. Unlike the conventional manual procedure, the exterior orientation is determined not from control points but from so-called “control point models”: sets of georeferenced 3D edges, for which the building edges of a digital building model are particularly suitable. For state-wide orthophoto production, buildings were captured as control point models throughout North-Rhine-Westfalia. AMOR has been integrated into the orthophoto production workflow of the land survey department and, thanks to the control point model database, can be applied to a large part of the state’s area. The paper gives an overview of the automatic orientation method and its integration, with particular emphasis on its practical use at the land survey department of NRW.
@InProceedings{labe2002automatische, title = {Automatische \"aussere Orientierung in der Orthophotoproduktion - ein Erfahrungsbericht}, author = {L\"abe, Thomas and Henze, Manfred}, year = {2002}, address = {Neubrandenburg, Germany}, pages = {245--252}, abstract = {Eine der notwendigen Voraussetzungen zur Erstellung eines Orthophotos ist die Kenntnis der \"au{\ss}eren Orientierung des zu bearbeitenden Bildes. Hierf\"ur wurde am Institut f\"ur Photogrammetrie der Universit\"at Bonn innerhalb eines Kooperationsprojektes mit dem Landesvermessungsamt Nordrhein-Westfalen ein vollautomatisches Verfahren entwickelt, das auf der Suche von projizierten 3D-Kanten im Bild basiert. Das Programm tr\"agt den Namen "AMOR" (Automatische Modellgest\"utzte ORientierung) und beinhaltet sowohl Bildverarbeitung (Kantenextraktion) als auch robuste Sch\"atzverfahren f\"ur die Bestimmung der Orientierungselemente. Als Datenbasis zur Bestimmung der \"au{\ss}eren Orientierung werden anders als beim konventionellen manuellen Vorgehen keine Passpunkte sondern sogenannte "Passpunktmodelle" verwendet. Dies sind Mengen georeferenzierter 3D-Kanten, wof\"ur sich insbesondere Geb\"audekanten eines digitalen Geb\"audemodells eignen. Zur f\"achendeckenden Orthophotoproduktion wurden in Nordrhein-Westfalen landesweit Geb\"aude als Passpunktmodelle erfasst. AMOR ist in den Produktionsablauf der Orthophotoherstellung beim Landesvermessungsamt integriert worden und kann aufgrund der Passpunktmodelldatenbank auf einem Gro{\ss}teil der Landesfl\"ache angewendet werden. Der Aufsatz gibt einen \"Uberblick \"uber das Verfahren zur automatischen Orientierungsbestimmung und dessen Integration mit besonderem Schwerpunkt auf den praktischen Einsatz beim Landesvermessungsamt NRW.}, city = {Bonn}, proceeding = {Proc. of DGPF Conf.}, url = {https://www.ipb.uni-bonn.de/pdfs/Labe2002Automatische.pdf}, }
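The core of the AMOR idea described above, projecting georeferenced 3D control edges into the image under the current exterior orientation, can be sketched in a few lines. The matching against extracted image edges and the robust estimation are omitted; K, R, t, and the edge coordinates are placeholder assumptions.

# Projecting one georeferenced 3D building edge into an aerial image.
# K, R, t, and the edge endpoints are invented for illustration.
import numpy as np

K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                       # assumed rotation, world -> camera
t = np.array([0.0, 0.0, 50.0])      # assumed translation, world -> camera

def project(X: np.ndarray) -> np.ndarray:
    """Project one 3D point (world frame) to pixel coordinates."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

# One control edge (two 3D endpoints) from a control point model database.
edge_3d = (np.array([10.0, 5.0, 0.0]), np.array([10.0, 5.0, 8.0]))
edge_2d = [project(X) for X in edge_3d]
print("projected control edge:", edge_2d)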
1999
- E. Gülch, H. Müller, and T. Läbe, “Integration of Automatic Processes Into Semi-Automatic Building Extraction,” in Proc. of ISPRS Conf. “Automatic Extraction Of GIS Objects From Digital Imagery”, 1999.
[BibTeX] [PDF]
The modeling of three-dimensional objects is a current topic in digital photogrammetric research. The modeling of buildings in digital imagery or digital surface models with automated processes has reached a level where it can compete with classical photogrammetric stereo measurements. There are many different ways to integrate automation. We describe our system and the automated features that support the operator in adapting parametric models to multiple overlapping images. Tools exist to automate the measurement of heights, the estimation of form parameters, and the handling of building aggregates. With these tools, modeling a volumetric primitive takes about 20 seconds, which is fully comparable to the photogrammetric methods currently in use.
@InProceedings{gulch1999integration, title = {Integration of Automatic Processes Into Semi-Automatic Building Extraction}, author = {G\"ulch, Eberhard and M\"uller, Hardo and L\"abe, Thomas}, booktitle = {Proc. of ISPRS Conf. "Automatic Extraction Of GIS Objects From Digital Imagery"}, year = {1999}, abstract = {The modeling of three-dimensional objects is a current topic in digital photogrammetric research. The modeling of buildings in digital imagery or digital surface models involving automation processes has reached a level where it can compete with classical photogrammetric stereo measurements. There are many different ways on how to integrate automation. We describe our system and its automated features that support the operator in the adaption of parametric models to multiple overlapping images. There do exist tools to automate the measurement of heights, to automate the estimation of the form parameters or for the handling of building aggregates. With such tools we can reach about 20 seconds for the modeling of a volumetric primitive which is fully comparable to the currently used photogrammetric methods.}, city = {Bonn}, proceeding = {Proc. of ISPRS Conf. "Automatic Extraction Of GIS Objects From Digital Imagery"}, url = {https://www.ipb.uni-bonn.de/pdfs/Gulch1999Integration.pdf}, }
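A parametric primitive of the kind adapted by the system above can be pictured as a handful of form parameters from which the 3D corner points follow. The parameterization below is a toy illustration, not the system's actual model.

# Toy parametric primitive: a symmetric saddleback-roof building.
from dataclasses import dataclass
import numpy as np

@dataclass
class SaddlebackRoof:
    length: float        # gutter length
    width: float         # building width
    eaves_height: float  # height of the gutter line
    ridge_height: float  # height of the roof ridge

    def corners(self) -> np.ndarray:
        """Return the 3D corner points of the primitive (local frame)."""
        l, w = self.length, self.width
        he, hr = self.eaves_height, self.ridge_height
        ground = [(0, 0, 0), (l, 0, 0), (l, w, 0), (0, w, 0)]
        eaves = [(x, y, he) for (x, y, _) in ground]
        ridge = [(0, w / 2, hr), (l, w / 2, hr)]
        return np.array(ground + eaves + ridge, dtype=float)

print(SaddlebackRoof(length=12.0, width=8.0,
                     eaves_height=6.0, ridge_height=9.0).corners())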
- T. Läbe, “Contribution to the OEEPE-Test on Automatic Orientation of Aerial Images, Task A – Experiences with AMOR,” in OEEPE Seminar on Automatic Orientation of Aerial Images on Database Information, Aalborg, Denmark, 1999.
[BibTeX] [PDF]
This paper describes the contribution of the University of Bonn to the OEEPE test on Automatic Orientation of Aerial Images (Task A). A program for automatic exterior orientation called AMOR was developed by Wolfgang Schickler at the Institute of Photogrammetry, Bonn. The methods and ideas of this approach are summarized. The program was used to successfully compute the exterior orientation parameters of the two given test images. Results and newly solved problems are reported.
@InProceedings{labe1999contribution, title = {Contribution to the OEEPE-Test on Automatic Orientation of Aerial Images, Task A - Experiences with AMOR}, author = {L\"abe, Thomas}, booktitle = {OEEPE Seminar on Automatic Orientation of Aerial Images on Database Information}, year = {1999}, address = {Aalborg, Denmark}, abstract = {This paper describes the contribution of the University of Bonn to the OEEPE test on Automatic Orientation of Aerial Images (Task A). A program for the automatic exterior orientation called AMOR was developed by Wolfgang Schickler at the Institute of Photogrammetry, Bonn. The methods and ideas of this approach are summarized. This program was used to compute the exterior orientation parameters of the given two test images successfully. Results and new solved problems are reported.}, city = {Bonn}, proceeding = {OEEPE Seminar on Automatic Orientation of Aerial Images on Database Information}, url = {https://www.ipb.uni-bonn.de/pdfs/Labe1999Contribution.pdf}, }
1998
- E. Gülch, H. Müller, T. Läbe, and L. Ragia, “On the performance of semi-automatic building extraction,” in Proc. of ISPRS Commission III Symposium, Columbus, Ohio, 1998.
[BibTeX] [PDF]
A semi-automatic building extraction system using two or more digitized overlapping aerial images has been enhanced by increased automation for the measurement of saddleback-roof buildings, hip-roof buildings and boxes. All newly developed modules have been incorporated in the object-oriented design of the system. The new methods consist of a ground-point and roof-top matching tool and a robust determination of shape parameters, such as gutter length and width. The current performance of building extraction is evaluated quantitatively and qualitatively. We examine the efficiency gained with the automated tools, the success rate of individual modules, and the overall success rate using a combination of methods. A methodology for quantitative comparison is tested on footprints of buildings from classical stereo measurements and from semi-automatic measurements. A qualitative comparison in 3D of multiple measurements of complete buildings is performed on three different datasets.
@InProceedings{gulch1998performance, title = {On the performance of semi-automatic building extraction}, author = {G\"ulch, Eberhard and M\"uller, Hardo and L\"abe, Thomas and Ragia, Lemonia}, booktitle = {Proc. of ISPRS Commission III Symposium}, year = {1998}, address = {Columbus, Ohio}, abstract = {A Semi-Automatic Building Extraction system using two or more digitized overlapping aerial images has been enhanced by increased automation for the measurement of saddle-back-roof buildings, hip-roof buildings and boxes. All newly developed modules have been incorporated in the object oriented design of the system. The new methods consist of a ground-point and roof-top matching tool and a robust determination of shape parameters, like e.g. gutter length and width. The current performance of building extraction is quantitatively and qualitatively evaluated. We examine the increased efficiency using the automated tools, the success rate of individual modules and the overall success rate using a combination of methods. A methodology for quantitative comparison is tested on footprints of buildings from classical stereo measurements and from semi-automatic measurements. A qualitative comparison in 3D of multiple measurements of complete buildings is performed on three different datasets.}, city = {Bonn}, proceeding = {Proc. of ISPRS Commission III Symposium}, url = {https://www.ipb.uni-bonn.de/pdfs/Gulch1998performance.pdf}, }
- T. Läbe and E. Gülch, “Robust Techniques for Estimating Parameters of 3D Building Primitives,” in Proc. of ISPRS Commission II Symposium, Cambridge, UK, 1998.
[BibTeX] [PDF]
A semi-automatic building extraction system using two or more digitized overlapping aerial images has been enhanced by increased automation for the measurement of saddleback-roof buildings (lopsided and symmetric), hip-roof buildings and flat-roof buildings (boxes). The goal is to minimize the interaction an operator needs to measure the form and pose parameters of 3D building models of the types mentioned above. The automated tasks are computed on-line and are fully integrated in the workflow, so accepting or correcting the results or adapting the automated calculation is possible. The methods used are grey-value correlation for absolute heights and the robust estimation techniques RANSAC and clustering, applied to automatically extracted line segments, for determining heights and the other form parameters of the building primitives. The automated modules have been evaluated empirically on more than 250 buildings in two datasets with different image quality and different densities of built-up areas. The results of these tests show a success rate of up to 88% for a form parameter estimation module and for the height measurement.
@InProceedings{labe1998robust, title = {Robust Techniques for Estimating Parameters of 3D Building Primitives}, author = {L\"abe, Thomas and G\"ulch, Eberhard}, booktitle = {Proc. of ISPRS Commission II Symposium}, year = {1998}, address = {Cambridge, UK}, abstract = {A semi-automatic building extraction system using two or more digitized overlapping aerial images has been enhanced by increased automation for the measurement of saddleback-roof (lopsided and symmetric) buildings, hip-roof buildings and flat-roof building (boxes). The goal is to minimize the interaction an operator has to do for measuring the form and pose parameters of 3D building models of the above mentioned types. The automated tasks are computed on-line and fully integrated in the work flow. Thus accepting or correcting the results or adapting the automated calculation is possible. The used methods are grey value correlation for absolute heights and the robust estimation techniques RANSAC and Clustering for the determination of heights and the other form parameters of the building primitives. These methods work on automatically extracted line segments. The automated modules have been empirically evaluated on more than 250 buildings in two datasets with different image quality and different densities of built-up areas. The results of these tests show a success rate of up to 88% for a form parameter estimation module and the height measurement.}, city = {Bonn}, proceeding = {Proc. of ISPRS Commission II Symposium}, url = {https://www.ipb.uni-bonn.de/pdfs/Labe1998robust.pdf}, }
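The robust one-parameter estimation described above can be illustrated with a tiny RANSAC loop: candidate heights derived from extracted line segments vote for a hypothesis, and the best-supported one is refined. Data and thresholds are invented for illustration.

# RANSAC sketch for one form parameter (an eaves height) from noisy,
# outlier-contaminated candidate heights. All values are made up.
import numpy as np

rng = np.random.default_rng(1)
heights = np.concatenate([rng.normal(6.0, 0.05, 20),   # inlier segments
                          rng.uniform(0.0, 12.0, 8)])  # clutter / outliers

def ransac_height(samples: np.ndarray, tol: float = 0.15,
                  iters: int = 100) -> float:
    """Pick the hypothesis supported by the most samples, then refine."""
    best_inliers = np.zeros(0)
    for h in rng.choice(samples, size=iters):
        inliers = samples[np.abs(samples - h) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return float(best_inliers.mean())

print(f"robust eaves height: {ransac_height(heights):.2f} m")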
1997
- T. Läbe, “Automatic Exterior Orientation in Practice,” GIM International, Geomatics Info Magazine, vol. 11, pp. 63–67, 1997.
[BibTeX]
A bottleneck of today’s automation of image orientation is the identification of control points for the exterior orientation. A solution for this problem is presented. It is based on 3D-wireframe models of buildings as ground control points. The article describes the setup of a database of such control points and the use of the data for an automatic exterior orientation.
@Article{laebe1997automatic, title = {Automatic Exterior Orientation in Practice}, author = {L\"abe, Thomas}, journal = {GIM International, Geomatics Info Magazine}, year = {1997}, pages = {63--67}, volume = {11}, abstract = {A bottleneck of today's automation of image orientation is the identification of control points for the exterior orientation. A solution for this problem is presented. It is based on 3D-wireframe models of buildings as ground control points. The article describes the setup of a database of such control points and the use of the data for an automatic exterior orientation.}, }
1996
- T. Läbe and K. H. Ellenbeck, “3D-Wireframe Models As Ground Control Points For The Automatic Exterior Orientation,” in Proc. of 18th ISPRS Congress, Vienna, 1996.
[BibTeX] [PDF]
The bottleneck of today’s automation of image orientation is the identification of control points for exterior orientation. A solution for this problem is presented. It is based on 3D-wireframe models of buildings as ground control points. The paper describes the setup of a database of such control points.
@InProceedings{labe19963d, title = {3D-Wireframe Models As Ground Control Points For The Automatic Exterior Orientation}, author = {L\"abe, Thomas and Ellenbeck, Karl Heiko}, booktitle = {Proc. of 18th ISPRS Congress}, year = {1996}, address = {Vienna}, abstract = {The bottleneck of today's automation of image orientation is the identification of control points for exterior orientation. A solution for this problem is presented. It is based on 3D-wireframe models of buildings as ground control points. The paper describes the setup of a database of such control points.}, city = {Bonn}, proceeding = {Proc. of 18th ISPRS Congress}, url = {https://www.ipb.uni-bonn.de/pdfs/Labe19963D.pdf}, }