We gratefully acknowledge the support by NVIDIA providing us with the GPU grant to support our research on semantic segmentation and object instance detection for scene understanding and agricultural robotics.
Author: stachnis
2018-06-29: Hearing of the Cluster of Excellence proposal “PhenoRob” in Bonn
On June 29, 2018, we conducted the hearing for our Cluster of Excellence proposal PhenoRob. The funding decision will be announced on September 28.
2018-03: Code Available: Bonnet – Tensorflow Convolutional Semantic Segmentation Pipeline by Andres Milioto and Cyrill Stachniss
Bonnet: Tensorflow Convolutional Semantic Segmentation pipeline by Andres Milioto and Cyrill Stachniss
Bonnet provides a framework to easily add architectures and datasets in order to train and deploy CNNs on a robot. It contains a full training pipeline in Python using Tensorflow and OpenCV, as well as C++ apps to deploy a frozen protobuf in ROS and standalone. The C++ library is designed so that other backends (such as TensorRT and MvNCS) can be added, but only Tensorflow and TensorRT are implemented so far. We will keep it this way for now because we are mostly interested in deployment on the Jetson and Drive platforms, but if you have a specific need, we accept pull requests!
The included networks are based on many other architectures (see below), but are not an exact copy of any of them. As seen in the videos, they run very fast on both GPU and CPU, since they are designed with performance in mind, at the cost of a slight accuracy loss. Feel free to use them as a template to implement your own architecture.
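The deployment side of such a pipeline ultimately reduces the network output to a per-pixel label mask. A minimal sketch of that post-processing step, assuming the CNN has already produced an (H, W, num_classes) probability map (the array names and the color palette are illustrative, not Bonnet's actual API):

```python
import numpy as np

# Toy "network output": per-pixel class probabilities for 3 classes
# on a 2x2 image (in the real pipeline this comes from the frozen graph).
probs = np.array([[[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]],
                  [[0.2, 0.2, 0.6], [0.9, 0.05, 0.05]]])

# Per-pixel argmax turns probabilities into a label mask.
labels = np.argmax(probs, axis=-1)    # shape (2, 2)

# Map labels to display colors via a palette lookup (colors are made up).
palette = np.array([[0, 0, 0],        # class 0: background
                    [0, 255, 0],      # class 1: crop
                    [255, 0, 0]])     # class 2: weed
color_mask = palette[labels]          # shape (2, 2, 3), ready to display

print(labels.tolist())                # [[0, 1], [2, 0]]
```

The palette lookup `palette[labels]` uses NumPy integer indexing, which colorizes the whole mask in one vectorized step rather than looping over pixels.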
All scripts have been tested on the following configurations:
- x86 Ubuntu 16.04 with an NVIDIA GeForce 940MX GPU (nvidia-384, CUDA8, CUDNN6, TF 1.4.1, TensorRT3)
- x86 Ubuntu 16.04 with an NVIDIA GTX1080Ti GPU (nvidia-375, CUDA8, CUDNN6, TF 1.4.1, TensorRT3)
- x86 Ubuntu 16.04 and 14.04 with no GPU (TF 1.4.1, running on CPU in NHWC mode, no TensorRT support)
- Jetson TX2 (full Jetpack 3.2)
We also provide a Dockerfile, based on the official nvidia/cuda image with CUDA 9 and cuDNN 7, to make it easy to run Bonnet without worrying about the dependencies.
This code is related to the following publications:
2018-02: Code Available: Fast Change Detection by Emanuele Palazzolo and Cyrill Stachniss
Fast Change Detection by Emanuele Palazzolo and Cyrill Stachniss
Fast Change Detection is available on GitHub
The program identifies, in real time, changes on a 3D model from a sequence of images. The idea is to first detect inconsistencies between pairs of images by reprojecting one image onto another through the 3D model. Ambiguities about possible inconsistencies resulting from this process are then resolved by combining multiple images. Finally, the 3D location of each change is estimated by projecting the inconsistencies into 3D.
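The reprojection step relies on the standard pinhole projection of a model point into two calibrated cameras; if the images agree with the model, both cameras should observe the same appearance at the projected pixels. A minimal sketch of that geometric core (the camera parameters and the 3D point are made-up example values, not from the paper):

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D point X into a camera with intrinsics K and pose (R, t)."""
    x = K @ (R @ X + t)      # homogeneous image coordinates
    return x[:2] / x[2]      # pixel coordinates (u, v)

# Illustrative setup: two cameras with identity rotation,
# the second shifted 0.5 m along the x-axis.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t1 = np.zeros(3)
t2 = np.array([-0.5, 0.0, 0.0])

X = np.array([0.0, 0.0, 5.0])   # a point on the 3D model, 5 m in front

u1 = project(K, R, t1, X)       # where the model point appears in image 1
u2 = project(K, R, t2, X)       # where the same point appears in image 2

# If the images are consistent with the model, the intensity sampled at u1
# in image 1 should match the intensity at u2 in image 2; a persistent
# mismatch across multiple image pairs marks a candidate change.
# Here the horizontal disparity is f * baseline / depth = 500 * 0.5 / 5 = 50 px.
print(u1, u2)
```

Sampling and comparing the actual pixel intensities at `u1` and `u2`, and accumulating the evidence over many image pairs, is where the method's ambiguity resolution comes in.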
This code is related to the following publications:
E. Palazzolo and C. Stachniss, “Fast Image-Based Geometric Change Detection Given a 3D Model”, in Proceedings of the IEEE Int. Conf. on Robotics and Automation (ICRA), 2018.
2017-12: We will be organizing the ICRA’18 workshop “Robotic Vision and Action in Agriculture” in Brisbane jointly with ETH Zürich and the Australian Centre of Excellence for Robotic Vision
Robotic Vision and Action in Agriculture: the future of agri-food systems and its deployment to the real-world
This workshop will bring together researchers and practitioners to discuss advances in robotics applications and the intersection of these advances with agricultural practices. As such, the workshop will focus not only on recent advancements in vision systems and action, but will also explore what this means for agricultural practices and how robotics can be used to better manage and understand crops and the environment.
Motivation and Objectives
Agricultural robotics faces a number of unique challenges and operates at the intersection of applied robotic vision, manipulation and crop science. Robotics will play a key role in improving productivity, increasing crop quality and even enabling individualised weed and crop treatment. All of these advancements are integral to feeding a growing population expected to reach 9 billion by 2050, which will require agricultural production to double to meet food demands.
This workshop brings together researchers and industry working on novel approaches for long-term operation across changing agricultural environments, including broad-acre crops, orchard crops, nurseries and greenhouses, and horticulture. It will also host Prof. Achim Walter, an internationally renowned crop scientist, who will provide a unique perspective on the far future of how robotics can further revolutionise agriculture.
The goal of the workshop is to discuss the future of agricultural robotics and how thinking and acting with a robot in the field enables a range of different applications and approaches. A particular emphasis will be placed on vision and action that work in the field by coping with changes in the appearance and geometry of the environment. Learning how to interact within this complicated environment will also be of special interest to the workshop, as will the alternative applications enabled by better understanding and exploiting the link between robotics and crop science.
List of Topics
Topics of interest to this workshop include, but are not necessarily limited to:
- Novel perception for agricultural robots including passive and active methods
- Manipulators for harvesting, soil preparation and crop protection
- Long-term autonomy and navigation in unstructured environments
- Data analytics and real-time decision making with robots-in-the-loop
- Low-cost sensing and algorithms for day/night operation
- User interfaces for end-users
Invited Presenters
The workshop will feature the following distinguished experts for invited talks:
Prof. Achim Walter (ETHZ Department of Environmental Systems Science)
Prof. Qin Zhang (Washington State University)
Organisers
Chris McCool
Australian Centre of Excellence for Robotic Vision
Queensland University of Technology
c.mccool@qut.edu.au
Chris Lehnert
Australian Centre of Excellence for Robotic Vision
Queensland University of Technology
c.lehnert@qut.edu.au
Inkyu Sa
ETH Zurich
Autonomous Systems Laboratory
inkyu.sa@mavt.ethz.ch
Juan Nieto
ETH Zurich
Autonomous Systems Laboratory
jnieto@ethz.ch
Cyrill Stachniss
University of Bonn
Photogrammetry, IGG
cyrill.stachniss@igg.uni-bonn.de
2017-11: GA Coverage of Bonn's Success so far in the Excellence Initiative
2017-11: Booth at Agritechnica
Members of the Photogrammetry and Robotics team participated in Agritechnica 2017, presenting recent results of the EU project Flourish.
2017-11: Code Available: Extended Version of Visual Place Recognition using Hashing by Olga Vysotska
Visual Place Recognition using Hashing by Olga Vysotska and Cyrill Stachniss
The localization system is available on GitHub
Given two sequences of images represented by descriptors, the code constructs a data association graph and performs a search within this graph, so that for every query image it computes a matching hypothesis to an image in the database sequence, as well as matching hypotheses for the previous images. The matching procedure can be performed in two modes: feature-based and cost-matrix-based. The new version uses our own hashing approach to quickly relocalize.
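The speedup from hashing comes from retrieving relocalization candidates by bucket lookup instead of scanning the whole database sequence. A minimal sketch of that idea, using a generic random-projection (LSH-style) binarization rather than the paper's specific hashing scheme, with made-up descriptor dimensions:

```python
import numpy as np
from collections import defaultdict

def hash_key(descriptor, proj):
    """Binarize a descriptor with random projections and pack the sign
    bits into an integer hash key (a generic LSH-style scheme, not the
    exact hashing function used in the paper)."""
    bits = (proj @ descriptor > 0).astype(int)
    return int("".join(map(str, bits)), 2)

rng = np.random.default_rng(0)
proj = rng.standard_normal((8, 16))     # 8 hash bits for 16-D descriptors

# Build the hash table over the database sequence of image descriptors.
database = rng.standard_normal((100, 16))
table = defaultdict(list)
for idx, d in enumerate(database):
    table[hash_key(d, proj)].append(idx)

# Relocalization: hash the query descriptor and retrieve only the images
# in the same bucket as candidates, instead of scanning all 100 images.
query = database[42]                    # a revisit of image 42 in this toy example
candidates = table[hash_key(query, proj)]
print(candidates)
```

Similar descriptors tend to share sign bits under the same projections, so a revisited place lands in the same bucket with high probability, and the candidate list then seeds the search in the data association graph.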
This code is related to the following publications:
O. Vysotska and C. Stachniss, “Relocalization under Substantial Appearance Changes using Hashing,” in Proc. of the 9th Workshop on Planning, Perception, and Navigation for Intelligent Vehicles at the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2017.
2017-11: KDnuggets Reports on the Work by Andres Milioto and Philipp Lottes
KDnuggets reports on our work on plant classification using neural networks.
2017-10: Successful Flourish P2 Review Meeting at Campus Klein Altendorf
The Flourish P2 review meeting was successfully conducted on October 11 and 12 at the Campus Klein Altendorf of the University of Bonn, including demonstrations with the BoniRob ground vehicle and unmanned aerial vehicles, and received an excellent evaluation.