Is Visual SLAM Tackling the Right Challenges?

While fields like image segmentation and reinforcement learning have thrived by leveraging vast amounts of unstructured data, visual SLAM remains constrained by a handful of curated benchmarks. This limitation hinders the development of scalable, robust systems capable of operating in a wide range of complex, real-world environments.
Fragmentation across datasets, pipelines, and evaluation metrics remains a core obstacle. Each dataset follows its own structure, making reproducibility and benchmarking difficult. Researchers frequently spend time adapting to dataset-specific requirements instead of driving fundamental advancements. Additionally, compiling baseline systems is a challenge due to the lack of standardized pipelines, undocumented dataset-specific issues, and inconsistent failure cases. This fragmentation significantly slows down progress in the field.
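To make the adaptation burden concrete: in practice every dataset ships its own directory layout, timestamp convention, and calibration files, and each SLAM pipeline re-implements a loader for it. Below is a minimal, purely hypothetical Python sketch of the kind of dataset-agnostic frame interface an adapter could target; the `Frame` and `FrameStream` names and fields are our own assumptions, not an existing standard.

```python
from dataclasses import dataclass
from typing import Iterator, Optional, Protocol

import numpy as np


@dataclass
class Frame:
    """One time-stamped sensor sample in a dataset-agnostic layout."""
    timestamp: float                      # seconds since sequence start
    image: np.ndarray                     # H x W x 3 RGB image
    depth: Optional[np.ndarray] = None    # H x W depth in metres, if the dataset provides it
    gt_pose: Optional[np.ndarray] = None  # 4 x 4 camera-to-world pose, if ground truth exists


class FrameStream(Protocol):
    """The surface a dataset adapter would expose, regardless of on-disk format."""

    def intrinsics(self) -> np.ndarray:
        """Return the 3 x 3 pinhole intrinsics matrix K."""
        ...

    def __iter__(self) -> Iterator[Frame]:
        """Yield frames in time order."""
        ...
```

With a handful of such adapters, a baseline system could run unchanged across datasets instead of carrying per-dataset loading code.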
To move forward, SLAM needs unified dataset formats, common pipelines for baselines, and evaluation metrics that go beyond the traditional Absolute Trajectory Error (ATE). Standardization will enable scalable benchmarking and the development of more generalizable SLAM systems that function reliably in real-world scenarios.
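For context, ATE is usually reported as the RMSE of per-pose translational error after rigidly aligning the estimated trajectory to ground truth. A minimal sketch of that conventional metric (positions only, Kabsch/Umeyama alignment without scale; the function name is ours):

```python
import numpy as np


def ate_rmse(gt: np.ndarray, est: np.ndarray) -> float:
    """RMSE of the Absolute Trajectory Error between time-associated positions.

    gt, est: (N, 3) arrays of corresponding ground-truth and estimated positions.
    """
    # Centre both trajectories so only a rotation remains to be estimated.
    gt_c = gt - gt.mean(axis=0)
    est_c = est - est.mean(axis=0)

    # Kabsch/Umeyama (no scale): best rotation mapping the estimate onto ground truth.
    H = est_c.T @ gt_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

    # Translational error of each aligned pose, reduced to a single RMSE number.
    err = gt_c - (R @ est_c.T).T
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))
```

A single number like `ate_rmse(gt_xyz, est_xyz)` hides failure cases, drift structure, and map quality, which is exactly why metrics beyond ATE are needed.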
Workshop Goal: This workshop aims to bring researchers together to discuss best practices, establish dataset and pipeline standards, and streamline SLAM development. We have invited experts who have created benchmarks that drive SLAM challenges, helped unify SLAM practices, and built tools that support not only SLAM but broader robotic applications.
One key output of the workshop will be a curated “Unifying Visual SLAM” list of development tools, datasets, pipelines, and benchmarks (compiled by organizers, speakers, and attendees) to serve as a future reference for the research community. By reducing implementation overhead, improving reproducibility, increasing the number and diversity of benchmarks, and fostering collaboration, this workshop seeks to advance SLAM’s ability to process large-scale data and build more scalable, real-world solutions for robotics and computer vision.
We invite short papers presenting novel or recently published research relevant to the workshop topics. Submissions may address, but are not limited to, datasets, benchmarks, evaluation metrics, pipelines, and development tools for visual SLAM.
A submission portal will open via CMT on April 7.
Date | Milestone |
---|---|
April 7 | Call for submissions |
May 5 | Submissions due |
June 2 | Notification of acceptance |
June 21 | Workshop at RSS! |
Time | Planned Event | Comments |
---|---|---|
08:00 | Opening Remarks | Organizing Committee |
08:05 | pySLAM and slamplay | Luigi Freda |
08:30 | ROS2WASM: Bringing the Robot Operating System to the Web | Tobias Fischer |
09:00 | Isaac ROS Visual SLAM | Tomasz Bednarz |
09:30 | TartanAir and SubT-MRS: datasets to push the limits of visual SLAM | Wenshan Wang |
10:00 | Poster Session/Coffee Break | |
10:30 | ScanNet++: A high-fidelity dataset of 3D indoor scenes | Angela Dai |
11:00 | Simplifying visual SLAM for large-scale and multi-device solutions: Do we really need maps? | Hermann Blum |
11:30 | Present and future of SLAM in extreme environments | Shehryar Khattak |
12:00 | Unifying Visual SLAM: From Fragmented Datasets to Scalable, Real-World Solutions | |