2nd Workshop on Tracking and Its Many Guises

Over the course of its rich history, object tracking has been tackled under many guises: multi-object tracking, single-object tracking, video object segmentation, video instance segmentation, and more. Most such tasks are evaluated on benchmarks limited to a small number of common classes. Practical applications, however, require trackers that go beyond these common classes, detecting and tracking rare and even never-before-seen objects. Our workshop features challenges and talks focused on bringing tracking to the open world.
Challenges: Towards this end, we have opened two challenges: (1) Open-World Tracking, which requires building trackers that generalize to never-before-seen objects, and (2) Long-Tail Tracking, which requires building trackers that work for rare objects that may have only a few examples in the training set. See below for more details.

Time: June 18, 2023
Venue: CVPR 2023, Vancouver, Canada

Schedule

The workshop will take place on June 18, 2023, and will contain two sessions.

Session 1

Time | Speaker | Topic
09:00-09:20 PST | Organizers | Introduction, Challenge Description
09:20-09:50 PST | Fisher Yu | TBA
09:50-10:00 PST | TAO Long-Tail Challenge 2nd Place Winner | TBA
10:00-10:30 PST | Adam Harley | TBA
10:30-10:40 PST | TAO Open-World Challenge 2nd Place Winner | TBA
10:40-11:00 PST | Coffee Break
11:00-11:30 PST | Zeynep Akata | TBA
11:30-11:40 PST | TAO Long-Tail Challenge 1st Place Winner | TBA
11:40-12:10 PST | Laura Leal-Taixé | TBA
12:10-12:20 PST | TAO Open-World Challenge 1st Place Winner | TBA

Session 2

Time | Speaker | Topic
13:40-14:10 PST | Jiri Matas | TBA
14:10-14:20 PST | TAO Long-Tail Challenge Submission Track Best Paper | TBA
14:20-14:50 PST | Du Tran | TBA
14:50-15:00 PST | TAO Open-World Challenge Submission Track Best Paper | TBA
15:00-15:20 PST | Organizers | Closing Remarks
15:20-16:00 PST | All Speakers | Round Table: Quo Vadis, Tracking?

Competition

We are excited to announce two Multi-Object Tracking (MOT) competitions: the Long-Tail Challenge and the Open-World Challenge. With these challenges, we aim to advance multi-object tracking and segmentation research in challenging few-shot and open-world conditions.

We base our challenges on the TAO (Tracking Any Object) dataset and the BURST (Benchmark for Unifying Object Recognition, Segmentation, and Tracking) video segmentation labels. We provide 2,914 videos with pixel-precise labels for 16,089 unique object tracks (600,000 per-frame masks) spanning 482 object classes!

In the Long-Tail Challenge, we focus on tracking and classifying all objects within the TAO/BURST object class vocabulary. In the Open-World Challenge, we investigate multi-object tracking and segmentation in a setting where labels for only a subset of target classes are available during model training; all objects need to be tracked, but they do not need to be classified.

In summary, the Long-Tail and Open-World Challenges offer researchers a unique opportunity to investigate how far object tracking can be pushed in long-tailed and open-world regimes, and to advance the field.

The submission deadline for both challenges is June 5th, 2023. Participants can submit their results through the MOTChallenge platform. Winners will be invited to present their work at our workshop.

Important Information

  • Challenge closes: June 5th, 2023
  • Abstract submission deadline: June 6th, 2023
  • Technical report deadline: June 10th, 2023
  • Each challenge participant should send an abstract (max. 1,200 characters) to Idil Esen Zulfikar (zulfikar@vision.rwth-aachen.de) by June 6th in order to publish a short paper or present their method at the workshop.

Challenge 1: Long-Tail Tracking Challenge (Long-Tail)

In the Long-Tail Tracking Challenge, we ask participants to track and classify all objects specified in the TAO/BURST object class vocabulary. Models can leverage labeled data for all 482 semantic classes during training. The challenge emphasizes the long-tail distribution of object classes, with a few classes occurring frequently and the majority occurring rarely. Participants are expected to develop methods that can handle this long-tail distribution and are robust to highly imbalanced datasets.
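
One common, simple way to counter such imbalance is LVIS-style repeat-factor sampling, which oversamples images that contain rare classes. The sketch below is only illustrative and not an official baseline; the toy label sets and the threshold t are assumptions, and f(c) denotes the fraction of training images containing class c.

```python
# Minimal sketch of repeat-factor sampling (Gupta et al., LVIS) for
# long-tailed training data. Not part of the challenge toolkit; the
# example label sets and threshold `t` below are purely illustrative.
import math
from collections import Counter

def repeat_factors(image_labels, t=0.001):
    """image_labels: one set of class ids per training image.
    Returns r(I) = max_{c in I} max(1, sqrt(t / f(c))), where f(c)
    is the fraction of images containing class c."""
    num_images = len(image_labels)
    freq = Counter(c for labels in image_labels for c in labels)
    f = {c: n / num_images for c, n in freq.items()}
    # Rare classes (f(c) < t) receive a repeat factor above 1.
    r_class = {c: max(1.0, math.sqrt(t / f[c])) for c in f}
    # Each image is repeated according to its rarest class.
    return [max(r_class[c] for c in labels) if labels else 1.0
            for labels in image_labels]

# Toy example: class 7 is rare, so the image containing it gets a
# higher repeat factor (~1.22) than the others (1.0).
print(repeat_factors([{1}, {1}, {1, 7}], t=0.5))
```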

The challenge’s goal is to advance the state-of-the-art in multi-object tracking and segmentation. Participants are encouraged to use creative and innovative approaches to achieve the highest possible performance on this challenging dataset.

Important:

  • In addition to the benchmark provided on the MOTChallenge platform, the challenge organizers will evaluate submissions to identify the best-performing method and the most innovative approach. Winners will be invited to present their work at the workshop.
  • To ensure fairness and prevent overfitting, participants must submit their code for the challenge organizers to review. The code will be used to verify that no training or tuning was performed on the test set, and that the results truly represent the performance of the submitted methods.
  • A benchmark with the current results is available here.

Challenge 2: Open-World Tracking Challenge (Open-World)

The Open-World Challenge focuses on multi-object tracking and segmentation in a setting where only a limited number of labeled classes are available during training (see the Opening Up Open-World Tracking paper). This is a challenging problem, as methods need to track all objects, including those that were not presented as labeled instances during model training.

Unlike the Long-Tail Challenge, in the Open-World Challenge we (i) limit the labeled classes used for model training (i.e., only labels for classes within the COCO class vocabulary may be used), and (ii) do not require classifying tracked object instances.
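
As an illustration of this setup, one might restrict an annotation file to the COCO vocabulary and drop class labels before training. The snippet below is a minimal sketch assuming COCO-style JSON annotations; the function name, file paths, and the coco_class_names set are placeholders, not part of the official challenge toolkit.

```python
# Sketch: keep only annotations whose class is in the COCO vocabulary and
# relabel them with a single class-agnostic "object" category. Assumes a
# COCO-style JSON annotation file; names and paths are illustrative only.
import json

def coco_only_class_agnostic(ann_path, coco_class_names, out_path):
    with open(ann_path) as f:
        data = json.load(f)
    # Categories allowed for training: those whose name is in the COCO vocabulary.
    keep_ids = {c["id"] for c in data["categories"] if c["name"] in coco_class_names}
    # Collapse the label space to a single class-agnostic category.
    data["categories"] = [{"id": 1, "name": "object"}]
    data["annotations"] = [
        {**a, "category_id": 1}              # every kept box/mask becomes "object"
        for a in data["annotations"]
        if a["category_id"] in keep_ids
    ]
    with open(out_path, "w") as f:
        json.dump(data, f)
```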

Important:

  • This challenge’s rules limit the use of labeled data to encourage participants to develop methods that can learn from few examples and generalize to unseen classes. Specifically, participants may use only labels for classes within the COCO class vocabulary. Participants are encouraged to use unsupervised or self-supervised learning methods to augment the labeled data and improve their models’ performance.
  • Participants may train methods on the COCO dataset (without LVIS labels).
  • Participants may use our pre-computed object proposals, available here.
  • As with the Long-Tail Challenge, the challenge organizers will evaluate submissions to identify the best-performing method and the most innovative approach. Winners will be invited to present their work at the workshop.
  • Participants are required to submit their code and allow the challenge organizers to review their work to ensure fairness and prevent overfitting. The code will be used to verify that no training or tuning was performed on the test set or the held-out classes, and that the results represent the performance of the submitted methods.
  • A benchmark with the current results is available here.

Call for Papers

We are excited to announce that our 2nd Workshop on Tracking & Its Many Guises at CVPR 2023 is now accepting paper submissions!

We are accepting papers from the CVPR 2023 proceedings that are directly relevant to the workshop topic, as well as peer-reviewed contributions recently accepted to related venues (e.g., ECCV, ICCV, NeurIPS). The aim of this call is to discuss recent developments in this exciting field of research at our workshop.

A selection of papers, reviewed by our committee, will be featured and highlighted on the workshop webpage, and a selection of outstanding and relevant contributions will be chosen for invited talks. Don’t miss this opportunity to showcase and discuss your work at our workshop!

The submission deadline is June 1, 2023. Please send your submissions to aosep@andrew.cmu.edu.