Over the course of its rich history, object tracking has been tackled under many guises: multi-object tracking, single-object tracking, video object segmentation, video instance segmentation, and more.
Most such tasks are evaluated on benchmarks limited to a small number of common classes.
Practical applications require trackers that go beyond these common classes, detecting and tracking rare and even never-before-seen objects.
Our workshop features challenges and talks focused on bringing tracking to the open world.
Challenges: We have opened two challenges towards this end: (1) Open-World Tracking, which requires building trackers that generalize to never-before-seen objects, and (2) Long-Tail Tracking, which requires building trackers that work for rare objects, which may have only a few examples in the training set. See below for more details.
Date | June 18, 2023 |
Venue | CVPR 2023 Vancouver, Canada |
Location | East 11, Vancouver Convention Center |
Time | Speaker | Topic |
---|---|---|
9:00-9:15 AM PST | Organizers | Workshop Introduction |
9:15-9:45 AM PST | Laura Leal-Taixé | Generalization in Dynamic Scene Understanding |
9:45-10:15 AM PST | Adam Harley | Tracking Any Pixel in a Video |
10:15-10:45 AM PST | Coffee Break | |
10:45-11:15 AM PST | Fisher Yu | Tracking Every Thing without Pains |
11:15-11:26 AM PST | Jenny Seidenschwarz | Simple Cues Lead to a Strong Multi-Object Tracker |
11:26-11:37 AM PST | Martin Danelljan | OVTrack: Open-Vocabulary Multiple Object Tracking |
11:37-11:48 AM PST | Orcun Cetintas | Unifying Short and Long-Term Tracking with Graph Hierarchies |
11:48-11:59 AM PST | Tony Huang | OpenVIS: Open-vocabulary Video Instance Segmentation |
Time | Speaker | Topic |
---|---|---|
1:30-2:00 PM PST | Jiri Matas | The Visual Object Tracking Challenge - the Benchmark, the Evolution, the Open Problems |
2:00-2:30 PM PST | Du Tran | Can Machines Understand Long-form Videos with Complex Tasks? |
2:30-2:45 PM PST | Challenge Session | |
2:45-3:15 PM PST | Coffee Break | |
3:15-4:00 PM PST | All Speakers | Round Table: Quo Vadis, Tracking? |
4:00-4:30 PM PST | Alexander Kirillov | Segment Anything |
4:30-5:00 PM PST | Zeynep Akata | Explainability in Deep Learning Through Communication |
5:00-5:05 PM PST | Organizers | Concluding Remarks |
We are excited to announce two Multi-Object Tracking (MOT) competitions: the Long-Tail Challenge and the Open-World Challenge. With these challenges, we aim to advance multi-object tracking and segmentation research in challenging few-shot and open-world conditions.
We base our challenges on the TAO (Tracking Any Object) dataset and the BURST (Benchmark for Unifying Object Recognition, Segmentation, and Tracking) video segmentation labels. Together, these provide 2,914 videos with pixel-precise labels for 16,089 unique object tracks (600,000 per-frame masks) spanning 482 object classes!
In the Long-Tail Challenge, we focus on tracking and classifying all objects within the TAO/BURST object class vocabulary. In the Open-World Challenge, we investigate multi-object tracking and segmentation in a setting where labels for only a subset of target classes are available during model training. All objects need to be tracked but not classified.
In summary, the Long-Tail and Open-World Challenges offer a unique opportunity for researchers to investigate how far we can get with object tracking in long-tailed and open-world regimes and advance the field.
The submission deadline for both challenges is June 5th, 2023. Participants can submit their results through the MOTChallenge platform. Winners will be invited to present their work at our workshop.
In the Long-Tail Tracking Challenge, we ask participants to track and classify all objects specified in the TAO/BURST object class vocabulary. Models can leverage labeled data for all 482 semantic classes during training. The challenge emphasizes the long-tail distribution of object classes: a few classes occur frequently, while the majority occur rarely. Participants are expected to develop methods that can handle this long-tail distribution and are robust to highly imbalanced datasets.
The challenge’s goal is to advance the state-of-the-art in multi-object tracking and segmentation. Participants are encouraged to use creative and innovative approaches to achieve the highest possible performance on this challenging dataset.
The Open-World Challenge focuses on multi-object tracking and segmentation in a setting where only a limited number of labeled classes are available during training (see the Opening up Open-World Tracking paper). This is a challenging problem, as methods need to track all objects, including those for which no labeled instances were available during model training.
Unlike the Long-Tail Challenge, in the Open-World Challenge we (i) limit the number of labeled classes used for model training (i.e., only labels for classes within the COCO class vocabulary can be used), and (ii) do not require classifying tracked object instances.
We are excited to announce that our 2nd Workshop on Tracking & Its Many Guises at CVPR 2023 is now accepting paper submissions!
We are accepting papers from the CVPR 2023 proceedings that are directly relevant to the workshop topic, as well as peer-reviewed contributions recently accepted to related venues (e.g., ECCV, ICCV, NeurIPS). The aim of this call is to discuss recent developments in this exciting field of research at our workshop.
A selection of papers, reviewed by our committee, will be featured and highlighted on the workshop webpage, and a selection of outstanding, relevant contributions will be chosen for invited talks. Don’t miss this opportunity to showcase and discuss your work at our workshop!
The submission deadline is June 1, 2023. Please send your submissions to aosep@andrew.cmu.edu.
The following papers will be featured at our workshop and presented in the paper session: