Details

In this work, we introduce SoccerNet, a benchmark for action spotting in soccer videos. The dataset is composed of 500 complete soccer games from six main European leagues, covering three seasons from 2014 to 2017 and a total duration of 764 hours. A total of 6,637 temporal annotations are automatically parsed from online match reports at a one-minute resolution for three main classes of events (Goal, Yellow/Red Card, and Substitution). As such, the dataset is easily scalable. These annotations are manually refined to a one-second resolution by anchoring them at a single timestamp following well-defined soccer rules. With an average of one event every 6.9 minutes, this dataset focuses on the problem of localizing very sparse events within long videos. We define the task of spotting as finding the anchors of soccer events in a video. Making use of recent developments in the realm of generic action recognition and detection in video, we provide strong baselines for detecting soccer events. We show that our best model for classifying temporal segments of length one minute reaches a mean Average Precision (mAP) of 67.8%. For the spotting task, our baseline reaches an Average-mAP of 49.7% for tolerances δ ranging from 5 to 60 seconds.
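To make the spotting metric concrete, the minimal Python sketch below shows how a predicted timestamp can be counted as a true positive when it falls within a tolerance δ of an unmatched ground-truth anchor, and how an Average-mAP-style score is obtained by averaging over tolerances from 5 to 60 seconds. This is not the official SoccerNet evaluation code: the function names, the greedy matching, and the single-class setup are simplifying assumptions made for illustration only.

    # Hedged sketch (not the official SoccerNet evaluation kit).
    # A prediction is a true positive if it lies within +/- delta seconds
    # of a ground-truth anchor that has not been matched yet; the AP is
    # then averaged over tolerances delta = 5, 10, ..., 60 seconds.
    from typing import List, Tuple

    def average_precision(preds: List[Tuple[float, float]],
                          gt_anchors: List[float],
                          delta: float) -> float:
        """AP for one class at one tolerance.
        preds      : (timestamp_seconds, confidence) predictions.
        gt_anchors : ground-truth event timestamps in seconds.
        delta      : tolerance in seconds.
        """
        if not gt_anchors:
            return 0.0
        matched = [False] * len(gt_anchors)
        hits = []  # 1 = true positive, 0 = false positive, in confidence order
        for t, _ in sorted(preds, key=lambda p: p[1], reverse=True):
            # Greedily match to the closest unmatched anchor within the tolerance.
            best_i, best_dist = None, delta
            for i, anchor in enumerate(gt_anchors):
                if not matched[i] and abs(t - anchor) <= best_dist:
                    best_i, best_dist = i, abs(t - anchor)
            if best_i is not None:
                matched[best_i] = True
                hits.append(1)
            else:
                hits.append(0)
        # Sum precision at each recall step (simple, interpolation-free AP).
        ap, tp_count = 0.0, 0
        for rank, hit in enumerate(hits, start=1):
            if hit:
                tp_count += 1
                ap += tp_count / rank
        return ap / len(gt_anchors)

    def average_map(preds, gt_anchors, deltas=range(5, 65, 5)) -> float:
        """Average the single-class AP over tolerances of 5 to 60 seconds."""
        aps = [average_precision(preds, gt_anchors, d) for d in deltas]
        return sum(aps) / len(aps)

    if __name__ == "__main__":
        # Toy example: a goal annotated at 1,532 s; one prediction 8 s off, one far away.
        gt = [1532.0]
        preds = [(1540.0, 0.9), (2000.0, 0.4)]
        print(f"Average-mAP (one class, toy example): {average_map(preds, gt):.3f}")

In this toy example, the 8-second error counts as a hit for every tolerance of 10 seconds or more but as a miss at 5 seconds, which is exactly the behavior the averaged metric is meant to capture.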

Please visit the SoccerNet website for general inquiries and check the GitHub page to download the dataset. Note that you need to fill out the NDA to access the videos.

Collaborators

Silvio Giancola, Mohieddine Amine, Tarek Dghaily, Bernard Ghanem

Publications

Silvio Giancola, Mohieddine Amine, Tarek Dghaily, Bernard Ghanem,

"SoccerNet: A Scalable Dataset for Action Spotting in Soccer Videos"

IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2018) [Oral]