Christian Witte

External PhD Student
Contact:
Email: christian.witte@cariad.technology
Company: CARIAD

Research Interests

  • AI Fusion for Computer Vision
  • Machine Learning
  • Autonomous Driving

Publications

2024

  • C. Witte, J. Behley, C. Stachniss, and M. Raaijmakers, “Epipolar Attention Field Transformers for Bird’s Eye View Semantic Segmentation,” arXiv preprint arXiv:2412.01595, 2024.

    Spatial understanding of the semantics of the surroundings is a key capability needed by autonomous cars to enable safe driving decisions. Recently, purely vision-based solutions have gained increasing research interest. In particular, approaches extracting a bird’s eye view (BEV) from multiple cameras have demonstrated great performance for spatial understanding. This paper addresses the dependency on learned positional encodings to correlate image and BEV feature map elements for transformer-based methods. We propose leveraging epipolar geometric constraints to model the relationship between cameras and the BEV by Epipolar Attention Fields. They are incorporated into the attention mechanism as a novel attribution term, serving as an alternative to learned positional encodings. Experiments show that our method EAFormer outperforms previous BEV approaches by 2% mIoU for map semantic segmentation and exhibits superior generalization capabilities compared to implicitly learning the camera configuration.

    @article{witte2024arxiv,
      author   = {C. Witte and J. Behley and C. Stachniss and M. Raaijmakers},
      title    = {{Epipolar Attention Field Transformers for Bird's Eye View Semantic Segmentation}},
      journal  = {arXiv preprint},
      year     = {2024},
      volume   = {arXiv:2412.01595},
      url      = {http://arxiv.org/pdf/2412.01595v1},
      abstract = {Spatial understanding of the semantics of the surroundings is a key capability needed by autonomous cars to enable safe driving decisions. Recently, purely vision-based solutions have gained increasing research interest. In particular, approaches extracting a bird's eye view (BEV) from multiple cameras have demonstrated great performance for spatial understanding. This paper addresses the dependency on learned positional encodings to correlate image and BEV feature map elements for transformer-based methods. We propose leveraging epipolar geometric constraints to model the relationship between cameras and the BEV by Epipolar Attention Fields. They are incorporated into the attention mechanism as a novel attribution term, serving as an alternative to learned positional encodings. Experiments show that our method EAFormer outperforms previous BEV approaches by 2\% mIoU for map semantic segmentation and exhibits superior generalization capabilities compared to implicitly learning the camera configuration.}
    }
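
    As a rough illustration of the idea described in the abstract above, the sketch below adds a geometry-derived bias to cross-attention logits instead of relying on learned positional encodings. This is a minimal NumPy sketch, not the authors' implementation: the vertical-ray projection per BEV cell, the Gaussian-style penalty, and all names (epipolar_bias, sigma, biased_cross_attention) are illustrative assumptions, and the paper's exact attribution term may differ.

    # Hedged sketch: a geometry-derived additive bias in cross-attention.
    # Only the overall pattern (geometric bias added to attention logits
    # before the softmax) follows the abstract; details are assumptions.
    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def epipolar_bias(bev_xy, pix_uv, K, R, t, sigma=8.0):
        """Bias favoring image pixels near the projected vertical ray of
        each BEV cell (a stand-in for the paper's Epipolar Attention Fields).

        bev_xy: (Q, 2) ground-plane positions of BEV query cells [m]
        pix_uv: (N, 2) pixel coordinates of image key tokens
        K, R, t: camera intrinsics (3x3), rotation (3x3), translation (3,)
        Returns a (Q, N) additive bias for the attention logits.
        """
        # Project two points of each vertical ray (z = 0 m and z = 2 m)
        # into the image to obtain one 2D line per BEV cell.
        lows = np.c_[bev_xy, np.zeros(len(bev_xy))]            # (Q, 3)
        highs = np.c_[bev_xy, 2.0 * np.ones(len(bev_xy))]      # (Q, 3)

        def project(pts):
            cam = R @ pts.T + t[:, None]                       # (3, Q)
            img = K @ cam
            return (img[:2] / img[2:]).T                       # (Q, 2)

        p0, p1 = project(lows), project(highs)
        d = p1 - p0
        n = np.stack([-d[:, 1], d[:, 0]], axis=1)              # line normals
        n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-9

        # Point-to-line distance of every pixel token to every BEV ray
        # line, turned into a Gaussian-style penalty.
        dist = np.abs(((pix_uv[None, :, :] - p0[:, None, :]) * n[:, None, :]).sum(-1))
        return -(dist / sigma) ** 2

    def biased_cross_attention(Q, Kf, V, bias):
        # Standard scaled dot-product attention with the geometric bias
        # added to the logits before the softmax.
        logits = Q @ Kf.T / np.sqrt(Q.shape[-1]) + bias        # (num_q, num_k)
        return softmax(logits, axis=-1) @ V

    Because such a bias depends only on camera intrinsics and extrinsics, it transfers to unseen camera configurations without retraining, which matches the generalization property the abstract highlights.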