Simultaneous Localization And Mapping (SLAM) is one of the fundamental challenges in mobile robotics. For decades, reconstruction and tracking have been treated as purely geometric problems, with occasional assistance from perception techniques such as object detection/segmentation or relocalization. In the recent past, however, end-to-end deep learning methods have not only produced strong results for perception tasks, but their use in multi-view/stereo reconstruction and ego-motion estimation has begun to show promise as well. This poses an interesting open question: to what extent can SLAM be posed as a pure machine learning problem?

This workshop aims to look at the intersection of SLAM and deep learning and to debate whether an end-to-end learnable, large-scale SLAM solution is realistic. We aim to discuss which components of existing traditional SLAM frameworks are essential for robustness and accuracy, and where a learning-based solution is more appropriate. A fair comparison of traditional and learning-based approaches is essential to answering these questions. The workshop therefore also aims to discuss scaling learning-based SLAM pipelines, and to rethink the datasets and evaluation techniques for robust SLAM systems given that deep learning requires large amounts of data.

Topics of interest include, but are not limited to:

  • Deep learning for single-, two-, and multi-view SfM
  • Learnable drop-in replacements for existing components of large-scale SLAM frameworks
  • Efficient static and dynamic scene representations suitable for deep learning
  • Evaluation methodologies and datasets for comparing learning-based and traditional robust SLAM solutions
  • Integrating semantics with SLAM
  • Learning image-based scene priors
  • Learning-based methods for place recognition under challenging conditions
  • Using deep learning to adapt current SLAM methods for lifelong operation

Organizers

Program Committee