Louis Wiesmann
Ph.D. Student

Contact:
Email: louis.wiesmann@igg.uni-bonn.de
Tel: +49 – 228 – 73 – 29 06
Fax: +49 – 228 – 73 – 27 12
Office: Nussallee 15, 1st floor, room 1.006
Address:
University of Bonn
Photogrammetry, IGG
Nussallee 15
53115 Bonn

Short CV
Louis Wiesmann has been a PhD student at the Photogrammetry Lab of the University of Bonn since November 2019. He received his master’s degree from the Institute of Geodesy and Geoinformation in 2019.

Research Interests
- SLAM
- Computer Vision
- Machine Learning
Awards
- Turbo-Preis 2019 of the DVW
Publications
2025
- Y. Pan, X. Zhong, L. Jin, L. Wiesmann, M. Popović, J. Behley, and C. Stachniss, “PINGS: Gaussian Splatting Meets Distance Fields within a Point-Based Implicit Neural Map,” arXiv preprint arXiv:2502.05752, 2025.
[BibTeX] [PDF]
Robots require high-fidelity reconstructions of their environment for effective operation. Such scene representations should be both geometrically accurate and photorealistic to support downstream tasks. While this can be achieved by building distance fields from range sensors and radiance fields from cameras, the scalable incremental mapping of both fields consistently and at the same time with high quality remains challenging. In this paper, we propose a novel map representation that unifies a continuous signed distance field and a Gaussian splatting radiance field within an elastic and compact point-based implicit neural map. By enforcing geometric consistency between these fields, we achieve mutual improvements by exploiting both modalities. We devise a LiDAR-visual SLAM system called PINGS using the proposed map representation and evaluate it on several challenging large-scale datasets. Experimental results demonstrate that PINGS can incrementally build globally consistent distance and radiance fields encoded with a compact set of neural points. Compared to state-of-the-art methods, PINGS achieves superior photometric and geometric rendering at novel views by leveraging the constraints from the distance field. Furthermore, by utilizing dense photometric cues and multi-view consistency from the radiance field, PINGS produces more accurate distance fields, leading to improved odometry estimation and mesh reconstruction.
@article{pan2025arxiv, author = {Y. Pan and X. Zhong and L. Jin and L. Wiesmann and M. Popovi\'c and J. Behley and C. Stachniss}, title = {{PINGS: Gaussian Splatting Meets Distance Fields within a Point-Based Implicit Neural Map}}, journal = arxiv, year = 2025, volume = {arXiv:2502.05752}, url = {https://arxiv.org/pdf/2502.05752}, abstract = {Robots require high-fidelity reconstructions of their environment for effective operation. Such scene representations should be both, geometrically accurate and photorealistic to support downstream tasks. While this can be achieved by building distance fields from range sensors and radiance fields from cameras, the scalable incremental mapping of both fields consistently and at the same time with high quality remains challenging. In this paper, we propose a novel map representation that unifies a continuous signed distance field and a Gaussian splatting radiance field within an elastic and compact point-based implicit neural map. By enforcing geometric consistency between these fields, we achieve mutual improvements by exploiting both modalities. We devise a LiDAR-visual SLAM system called PINGS using the proposed map representation and evaluate it on several challenging large-scale datasets. Experimental results demonstrate that PINGS can incrementally build globally consistent distance and radiance fields encoded with a compact set of neural points. Compared to the state-of-the-art methods, PINGS achieves superior photometric and geometric rendering at novel views by leveraging the constraints from the distance field. Furthermore, by utilizing dense photometric cues and multi-view consistency from the radiance field, PINGS produces more accurate distance fields, leading to improved odometry estimation and mesh reconstruction.} }
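PIN-SLAM and PINGS encode the map as a compact set of neural points, each carrying a latent feature that a small MLP decodes into a signed distance value at a query position. The following minimal Python sketch illustrates such a neural-point SDF query, assuming inverse-distance interpolation over the k nearest neural points; it is not the authors’ code, and names such as NeuralPointSDF are invented for illustration:

import torch

class NeuralPointSDF(torch.nn.Module):
    # Toy point-based implicit neural map: neural points with latent
    # features, decoded into an SDF value at arbitrary query positions.
    def __init__(self, num_points=1000, feat_dim=32, k=8):
        super().__init__()
        self.positions = torch.nn.Parameter(torch.rand(num_points, 3))
        self.features = torch.nn.Parameter(torch.randn(num_points, feat_dim))
        self.k = k
        self.decoder = torch.nn.Sequential(
            torch.nn.Linear(feat_dim + 3, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 1))

    def forward(self, queries):
        # k nearest neural points for every query position (B, 3)
        dist = torch.cdist(queries, self.positions)        # (B, N)
        knn_d, knn_i = dist.topk(self.k, largest=False)    # (B, k)
        w = 1.0 / (knn_d + 1e-6)
        w = w / w.sum(dim=1, keepdim=True)                 # inverse-distance weights
        feats = self.features[knn_i]                       # (B, k, F)
        rel = queries[:, None, :] - self.positions[knn_i]  # (B, k, 3)
        sdf_k = self.decoder(torch.cat([feats, rel], -1)).squeeze(-1)
        return (w * sdf_k).sum(dim=1)                      # (B,)

sdf = NeuralPointSDF()
print(sdf(torch.rand(4, 3)))  # one signed distance per query

In the actual systems, the neural point set grows incrementally during SLAM and is optimized online together with the decoder; the sketch covers only the query path.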
- H. Kuang, Y. Pan, X. Zhong, L. Wiesmann, J. Behley, and C. Stachniss, “Improving Indoor Localization Accuracy by Using an Efficient Implicit Neural Map Representation,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2025.
[BibTeX]
@inproceedings{huang2025icra, author = {H. Kuang and Y. Pan and X. Zhong and L. Wiesmann and J. Behley and Stachniss, C.}, title = {{Improving Indoor Localization Accuracy by Using an Efficient Implicit Neural Map Representation}}, booktitle = icra, year = {2025}, note = {Accepted}, }
2024
- L. Wiesmann, E. Marks, S. Gupta, T. Guadagnino, J. Behley, and C. Stachniss, “Efficient LiDAR Bundle Adjustment for Multi-Scan Alignment Utilizing Continuous-Time Trajectories,” arXiv preprint arXiv:2412.11760, 2024.
[BibTeX] [PDF]
@article{wiesmann2024arxiv, author = {L. Wiesmann and E. Marks and S. Gupta and T. Guadagnino and J. Behley and C. Stachniss}, title = {{Efficient LiDAR Bundle Adjustment for Multi-Scan Alignment Utilizing Continuous-Time Trajectories}}, journal = arxiv, year = 2024, volume = {arXiv:2412.11760}, url = {https://arxiv.org/pdf/2412.11760}, }
- L. Wiesmann, T. Läbe, L. Nunes, J. Behley, and C. Stachniss, “Joint Intrinsic and Extrinsic Calibration of Perception Systems Utilizing a Calibration Environment,” IEEE Robotics and Automation Letters (RA-L), vol. 9, iss. 10, pp. 9103–9110, 2024. doi:10.1109/LRA.2024.3457385
[BibTeX] [PDF]
@article{wiesmann2024ral, author = {L. Wiesmann and T. L\"abe and L. Nunes and J. Behley and C. Stachniss}, title = {{Joint Intrinsic and Extrinsic Calibration of Perception Systems Utilizing a Calibration Environment}}, journal = ral, year = {2024}, volume = {9}, number = {10}, pages = {9103--9110}, issn = {2377-3766}, doi = {10.1109/LRA.2024.3457385}, }
- Y. Pan, X. Zhong, L. Wiesmann, T. Posewsky, J. Behley, and C. Stachniss, “PIN-SLAM: LiDAR SLAM Using a Point-Based Implicit Neural Representation for Achieving Global Map Consistency,” IEEE Trans. on Robotics (TRO), vol. 40, pp. 4045–4064, 2024. doi:10.1109/TRO.2024.3422055
[BibTeX] [PDF] [Code]
@article{pan2024tro, author = {Y. Pan and X. Zhong and L. Wiesmann and T. Posewsky and J. Behley and C. Stachniss}, title = {{PIN-SLAM: LiDAR SLAM Using a Point-Based Implicit Neural Representation for Achieving Global Map Consistency}}, journal = tro, year = {2024}, pages = {4045--4064}, volume = {40}, doi = {10.1109/TRO.2024.3422055}, codeurl = {https://github.com/PRBonn/PIN_SLAM}, }
- D. Casado Herraez, L. Chang, M. Zeller, L. Wiesmann, J. Behley, M. Heidingsfeld, and C. Stachniss, “SPR: Single-Scan Radar Place Recognition,” IEEE Robotics and Automation Letters (RA-L), vol. 9, iss. 10, pp. 9079–9086, 2024.
[BibTeX] [PDF]
@article{casado-herraez2024ral, author = {Casado Herraez, D. and L. Chang and M. Zeller and L. Wiesmann and J. Behley and M. Heidingsfeld and C. Stachniss}, title = {{SPR: Single-Scan Radar Place Recognition}}, journal = ral, year = {2024}, volume = {9}, number = {10}, pages = {9079-9086}, }
- Y. Wu, T. Guadagnino, L. Wiesmann, L. Klingbeil, C. Stachniss, and H. Kuhlmann, “LIO-EKF: High Frequency LiDAR-Inertial Odometry using Extended Kalman Filters,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2024. doi:10.1109/ICRA57147.2024.10610667
[BibTeX] [PDF] [Code] [Video]
@inproceedings{wu2024icra, author = {Y. Wu and T. Guadagnino and L. Wiesmann and L. Klingbeil and C. Stachniss and H. Kuhlmann}, title = {{LIO-EKF: High Frequency LiDAR-Inertial Odometry using Extended Kalman Filters}}, booktitle = icra, year = 2024, doi = {10.1109/ICRA57147.2024.10610667}, codeurl = {https://github.com/YibinWu/LIO-EKF}, videourl = {https://youtu.be/MoJTqEYl1ME}, }
2023
- R. Marcuzzi, L. Nunes, L. Wiesmann, E. Marks, J. Behley, and C. Stachniss, “Mask4D: End-to-End Mask-Based 4D Panoptic Segmentation for LiDAR Sequences,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 11, pp. 7487–7494, 2023. doi:10.1109/LRA.2023.3320020
[BibTeX] [PDF] [Code] [Video]
@article{marcuzzi2023ral-meem, author = {R. Marcuzzi and L. Nunes and L. Wiesmann and E. Marks and J. Behley and C. Stachniss}, title = {{Mask4D: End-to-End Mask-Based 4D Panoptic Segmentation for LiDAR Sequences}}, journal = ral, year = {2023}, volume = {8}, number = {11}, pages = {7487-7494}, issn = {2377-3766}, doi = {10.1109/LRA.2023.3320020}, codeurl = {https://github.com/PRBonn/Mask4D}, videourl = {https://youtu.be/4WqK_gZlpfA}, }
- I. Vizzo, B. Mersch, L. Nunes, L. Wiesmann, T. Guadagnino, and C. Stachniss, “Toward Reproducible Version-Controlled Perception Platforms: Embracing Simplicity in Autonomous Vehicle Dataset Acquisition,” in Proc. of the Intl. Conf. on Intelligent Transportation Systems Workshops, 2023.
[BibTeX] [PDF] [Code]
@inproceedings{vizzo2023itcsws, author = {I. Vizzo and B. Mersch and L. Nunes and L. Wiesmann and T. Guadagnino and C. Stachniss}, title = {{Toward Reproducible Version-Controlled Perception Platforms: Embracing Simplicity in Autonomous Vehicle Dataset Acquisition}}, booktitle = {Proc. of the Intl. Conf. on Intelligent Transportation Systems Workshops}, year = 2023, codeurl = {https://github.com/ipb-car/meta-workspace}, note = {accepted} }
- L. Wiesmann, T. Guadagnino, I. Vizzo, N. Zimmerman, Y. Pan, H. Kuang, J. Behley, and C. Stachniss, “LocNDF: Neural Distance Field Mapping for Robot Localization,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 8, pp. 4999–5006, 2023. doi:10.1109/LRA.2023.3291274
[BibTeX] [PDF] [Code] [Video]
@article{wiesmann2023ral-icra, author = {L. Wiesmann and T. Guadagnino and I. Vizzo and N. Zimmerman and Y. Pan and H. Kuang and J. Behley and C. Stachniss}, title = {{LocNDF: Neural Distance Field Mapping for Robot Localization}}, journal = ral, volume = {8}, number = {8}, pages = {4999--5006}, year = 2023, url = {https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/wiesmann2023ral-icra.pdf}, issn = {2377-3766}, doi = {10.1109/LRA.2023.3291274}, codeurl = {https://github.com/PRBonn/LocNDF}, videourl = {https://youtu.be/-0idH21BpMI}, }
- E. Marks, M. Sodano, F. Magistri, L. Wiesmann, D. Desai, R. Marcuzzi, J. Behley, and C. Stachniss, “High Precision Leaf Instance Segmentation in Point Clouds Obtained Under Real Field Conditions,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 8, pp. 4791–4798, 2023. doi:10.1109/LRA.2023.3288383
[BibTeX] [PDF] [Code] [Video]
@article{marks2023ral, author = {E. Marks and M. Sodano and F. Magistri and L. Wiesmann and D. Desai and R. Marcuzzi and J. Behley and C. Stachniss}, title = {{High Precision Leaf Instance Segmentation in Point Clouds Obtained Under Real Field Conditions}}, journal = ral, pages = {4791-4798}, volume = {8}, number = {8}, issn = {2377-3766}, year = {2023}, doi = {10.1109/LRA.2023.3288383}, codeurl = {https://github.com/PRBonn/plant_pcd_segmenter}, videourl = {https://youtu.be/dvA1SvQ4iEY} }
- L. Nunes, L. Wiesmann, R. Marcuzzi, X. Chen, J. Behley, and C. Stachniss, “Temporal Consistent 3D LiDAR Representation Learning for Semantic Perception in Autonomous Driving,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), 2023.
[BibTeX] [PDF] [Code] [Video]
@inproceedings{nunes2023cvpr, author = {L. Nunes and L. Wiesmann and R. Marcuzzi and X. Chen and J. Behley and C. Stachniss}, title = {{Temporal Consistent 3D LiDAR Representation Learning for Semantic Perception in Autonomous Driving}}, booktitle = cvpr, year = 2023, codeurl = {https://github.com/PRBonn/TARL}, videourl = {https://youtu.be/0CtDbwRYLeo}, }
- I. Vizzo, T. Guadagnino, B. Mersch, L. Wiesmann, J. Behley, and C. Stachniss, “KISS-ICP: In Defense of Point-to-Point ICP – Simple, Accurate, and Robust Registration If Done the Right Way,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 2, pp. 1–8, 2023. doi:10.1109/LRA.2023.3236571
[BibTeX] [PDF] [Code] [Video]
@article{vizzo2023ral, author = {Vizzo, Ignacio and Guadagnino, Tiziano and Mersch, Benedikt and Wiesmann, Louis and Behley, Jens and Stachniss, Cyrill}, title = {{KISS-ICP: In Defense of Point-to-Point ICP -- Simple, Accurate, and Robust Registration If Done the Right Way}}, journal = ral, pages = {1-8}, doi = {10.1109/LRA.2023.3236571}, volume = {8}, number = {2}, year = {2023}, codeurl = {https://github.com/PRBonn/kiss-icp}, videourl = {https://youtu.be/h71aGiD-uxU} }
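KISS-ICP builds on classical point-to-point ICP. As background, the following minimal NumPy sketch shows one textbook ICP iteration, i.e., nearest-neighbor correspondences followed by the closed-form SVD alignment of Arun et al.; it is generic ICP for illustration, not the KISS-ICP implementation, which adds elements such as scan deskewing, adaptive-threshold data association, and a robust kernel:

import numpy as np

def icp_step(source, target):
    # Match each source point to its nearest target point, then solve
    # for the rigid transform in closed form via the SVD of the
    # cross-covariance (Arun et al., 1987).
    dist = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=-1)
    matched = target[dist.argmin(axis=1)]      # nearest-neighbor matches
    mu_s, mu_t = source.mean(0), matched.mean(0)
    H = (source - mu_s).T @ (matched - mu_t)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return R, t

# usage: estimate a small known motion (iterate until convergence in practice)
rng = np.random.default_rng(0)
src = rng.random((100, 3))
c, s = np.cos(0.05), np.sin(0.05)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
tgt = src @ R_true.T + np.array([0.1, 0.0, 0.0])
R, t = icp_step(src, tgt)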
- R. Marcuzzi, L. Nunes, L. Wiesmann, J. Behley, and C. Stachniss, “Mask-Based Panoptic LiDAR Segmentation for Autonomous Driving,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 2, pp. 1141–1148, 2023. doi:10.1109/LRA.2023.3236568
[BibTeX] [PDF] [Code] [Video]
@article{marcuzzi2023ral, author = {R. Marcuzzi and L. Nunes and L. Wiesmann and J. Behley and C. Stachniss}, title = {{Mask-Based Panoptic LiDAR Segmentation for Autonomous Driving}}, journal = ral, volume = {8}, number = {2}, pages = {1141--1148}, year = 2023, doi = {10.1109/LRA.2023.3236568}, videourl = {https://youtu.be/I8G9VKpZux8}, codeurl = {https://github.com/PRBonn/MaskPLS}, }
- L. Wiesmann, L. Nunes, J. Behley, and C. Stachniss, “KPPR: Exploiting Momentum Contrast for Point Cloud-Based Place Recognition,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 2, pp. 592–599, 2023. doi:10.1109/LRA.2022.3228174
[BibTeX] [PDF] [Code] [Video]
@article{wiesmann2023ral, author = {L. Wiesmann and L. Nunes and J. Behley and C. Stachniss}, title = {{KPPR: Exploiting Momentum Contrast for Point Cloud-Based Place Recognition}}, journal = ral, volume = {8}, number = {2}, pages = {592-599}, year = 2023, issn = {2377-3766}, doi = {10.1109/LRA.2022.3228174}, codeurl = {https://github.com/PRBonn/kppr}, videourl = {https://youtu.be/bICz1sqd8Xs} }
- M. Arora, L. Wiesmann, X. Chen, and C. Stachniss, “Static Map Generation from 3D LiDAR Point Clouds Exploiting Ground Segmentation,” Journal on Robotics and Autonomous Systems (RAS), vol. 159, p. 104287, 2023. doi:10.1016/j.robot.2022.104287
[BibTeX] [PDF] [Code]
@article{arora2023jras, author = {M. Arora and L. Wiesmann and X. Chen and C. Stachniss}, title = {{Static Map Generation from 3D LiDAR Point Clouds Exploiting Ground Segmentation}}, journal = jras, volume = {159}, pages = {104287}, year = {2023}, issn = {0921-8890}, doi = {10.1016/j.robot.2022.104287}, codeurl = {https://github.com/PRBonn/dynamic-point-removal}, }
2022
- N. Zimmerman, L. Wiesmann, T. Guadagnino, T. Läbe, J. Behley, and C. Stachniss, “Robust Onboard Localization in Changing Environments Exploiting Text Spotting,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2022.
[BibTeX] [PDF] [Code]
@inproceedings{zimmerman2022iros, title = {{Robust Onboard Localization in Changing Environments Exploiting Text Spotting}}, author = {N. Zimmerman and L. Wiesmann and T. Guadagnino and T. Läbe and J. Behley and C. Stachniss}, booktitle = iros, year = {2022}, codeurl = {https://github.com/PRBonn/tmcl}, }
- I. Vizzo, B. Mersch, R. Marcuzzi, L. Wiesmann, J. Behley, and C. Stachniss, “Make it Dense: Self-Supervised Geometric Scan Completion of Sparse 3D LiDAR Scans in Large Outdoor Environments,” IEEE Robotics and Automation Letters (RA-L), vol. 7, iss. 3, pp. 8534–8541, 2022. doi:10.1109/LRA.2022.3187255
[BibTeX] [PDF] [Code] [Video]
@article{vizzo2022ral, author = {I. Vizzo and B. Mersch and R. Marcuzzi and L. Wiesmann and J. Behley and C. Stachniss}, title = {Make it Dense: Self-Supervised Geometric Scan Completion of Sparse 3D LiDAR Scans in Large Outdoor Environments}, journal = ral, url = {https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/vizzo2022ral-iros.pdf}, codeurl = {https://github.com/PRBonn/make_it_dense}, year = {2022}, volume = {7}, number = {3}, pages = {8534-8541}, doi = {10.1109/LRA.2022.3187255}, videourl = {https://youtu.be/NVjURcArHn8}, }
- L. Wiesmann, T. Guadagnino, I. Vizzo, G. Grisetti, J. Behley, and C. Stachniss, “DCPCR: Deep Compressed Point Cloud Registration in Large-Scale Outdoor Environments,” IEEE Robotics and Automation Letters (RA-L), vol. 7, iss. 3, pp. 6327–6334, 2022. doi:10.1109/LRA.2022.3171068
[BibTeX] [PDF] [Code] [Video]
@article{wiesmann2022ral-iros, author = {L. Wiesmann and T. Guadagnino and I. Vizzo and G. Grisetti and J. Behley and C. Stachniss}, title = {{DCPCR: Deep Compressed Point Cloud Registration in Large-Scale Outdoor Environments}}, journal = ral, year = 2022, volume = 7, number = 3, pages = {6327-6334}, issn = {2377-3766}, doi = {10.1109/LRA.2022.3171068}, codeurl = {https://github.com/PRBonn/DCPCR}, videourl = {https://youtu.be/RqLr2RTGy1s}, }
- L. Wiesmann, R. Marcuzzi, C. Stachniss, and J. Behley, “Retriever: Point Cloud Retrieval in Compressed 3D Maps,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2022.
[BibTeX] [PDF]
@inproceedings{wiesmann2022icra, author = {L. Wiesmann and R. Marcuzzi and C. Stachniss and J. Behley}, title = {{Retriever: Point Cloud Retrieval in Compressed 3D Maps}}, booktitle = icra, year = 2022, }
- R. Marcuzzi, L. Nunes, L. Wiesmann, I. Vizzo, J. Behley, and C. Stachniss, “Contrastive Instance Association for 4D Panoptic Segmentation using Sequences of 3D LiDAR Scans,” IEEE Robotics and Automation Letters (RA-L), vol. 7, iss. 2, pp. 1550–1557, 2022. doi:10.1109/LRA.2022.3140439
[BibTeX] [PDF]
@article{marcuzzi2022ral, author = {R. Marcuzzi and L. Nunes and L. Wiesmann and I. Vizzo and J. Behley and C. Stachniss}, title = {{Contrastive Instance Association for 4D Panoptic Segmentation using Sequences of 3D LiDAR Scans}}, journal = ral, year = 2022, doi = {10.1109/LRA.2022.3140439}, issn = {2377-3766}, volume = 7, number = 2, pages = {1550-1557}, }
2021
- M. Arora, L. Wiesmann, X. Chen, and C. Stachniss, “Mapping the Static Parts of Dynamic Scenes from 3D LiDAR Point Clouds Exploiting Ground Segmentation,” in Proc. of the European Conf. on Mobile Robots (ECMR), 2021.
[BibTeX] [PDF] [Code]
@InProceedings{arora2021ecmr, author = {M. Arora and L. Wiesmann and X. Chen and C. Stachniss}, title = {{Mapping the Static Parts of Dynamic Scenes from 3D LiDAR Point Clouds Exploiting Ground Segmentation}}, booktitle = ecmr, codeurl = {https://github.com/humbletechy/Dynamic-Point-Removal}, year = {2021}, }
- X. Chen, S. Li, B. Mersch, L. Wiesmann, J. Gall, J. Behley, and C. Stachniss, “Moving Object Segmentation in 3D LiDAR Data: A Learning-based Approach Exploiting Sequential Data,” IEEE Robotics and Automation Letters (RA-L), vol. 6, pp. 6529–6536, 2021. doi:10.1109/LRA.2021.3093567
[BibTeX] [PDF] [Code] [Video]
@article{chen2021ral, title={{Moving Object Segmentation in 3D LiDAR Data: A Learning-based Approach Exploiting Sequential Data}}, author={X. Chen and S. Li and B. Mersch and L. Wiesmann and J. Gall and J. Behley and C. Stachniss}, year={2021}, volume=6, issue=4, pages={6529-6536}, journal=ral, url = {https://www.ipb.uni-bonn.de/pdfs/chen2021ral-iros.pdf}, codeurl = {https://github.com/PRBonn/LiDAR-MOS}, videourl = {https://youtu.be/NHvsYhk4dhw}, doi = {10.1109/LRA.2021.3093567}, issn = {2377-3766}, }
- L. Wiesmann, A. Milioto, X. Chen, C. Stachniss, and J. Behley, “Deep Compression for Dense Point Cloud Maps,” IEEE Robotics and Automation Letters (RA-L), vol. 6, pp. 2060–2067, 2021. doi:10.1109/LRA.2021.3059633
[BibTeX] [PDF] [Code] [Video]
@article{wiesmann2021ral, author = {L. Wiesmann and A. Milioto and X. Chen and C. Stachniss and J. Behley}, title = {{Deep Compression for Dense Point Cloud Maps}}, journal = ral, volume = 6, issue = 2, pages = {2060-2067}, doi = {10.1109/LRA.2021.3059633}, year = 2021, url = {https://www.ipb.uni-bonn.de/pdfs/wiesmann2021ral.pdf}, codeurl = {https://github.com/PRBonn/deep-point-map-compression}, videourl = {https://youtu.be/fLl9lTlZrI0} }
2020
- C. Stachniss, I. Vizzo, L. Wiesmann, and N. Berning, How To Setup and Run a 100% Digital Conf.: DIGICROP 2020, 2020.
[BibTeX] [PDF]
The purpose of this record is to document the setup and execution of DIGICROP 2020 and to simplify conducting future online events of that kind. DIGICROP 2020 was a 100% virtual conference run via Zoom with around 900 registered people in November 2020. It consisted of video presentations available via our website and a single-day live event for Q&A. Around 450 people attended the Q&A session overall; for most of the time, 200–250 people were online simultaneously. This document is a collection of notes, instructions, and todo lists. It is not a polished manual; however, we believe these notes will be useful for other conference organizers and for us in the future.
@misc{stachniss2020digitalconf, author = {C. Stachniss and I. Vizzo and L. Wiesmann and N. Berning}, title = {{How To Setup and Run a 100\% Digital Conf.: DIGICROP 2020}}, year = {2020}, url = {https://www.ipb.uni-bonn.de/pdfs/stachniss2020digitalconf.pdf}, abstract = {The purpose of this record is to document the setup and execution of DIGICROP 2020 and to simplify conducting future online events of that kind. DIGICROP 2020 was a 100\% virtual conference run via Zoom with around 900 registered people in November 2020. It consisted of video presentations available via our website and a single-day live event for Q&A. We had around 450 people attending the Q&A session overall, most of the time 200-250 people have been online at the same time. This document is a collection of notes, instructions, and todo lists. It is not a polished manual, however, we believe these notes will be useful for other conference organizers and for us in the future.}, }