The first, led by Rachel Bittner (MARL @ NYU), describes MedleyDB, a new dataset of multitrack recordings we have compiled and annotated, primarily for melody extraction evaluation. Unlike previous datasets, it contains over 100 songs, most of them full length (rather than excerpts), spanning a variety of musical genres and produced to professional standards (not only the recordings themselves, but also the musical content):
- R. Bittner, J. Salamon, M. Tierney, M. Mauch, C. Cannam and J. P. Bello. "MedleyDB: A Multitrack Dataset for Annotation-Intensive MIR Research", in Proc. 15th International Society for Music Information Retrieval Conference (ISMIR 2014), Taipei, Taiwan, October 2014.
We hope this new dataset will help shed light on the remaining challenges in melody extraction (we have identified a few ourselves in the paper), and allow researchers to evaluate their algorithms on a more realistic dataset. The dataset can also be used for research in musical instrument identification, source separation, multiple f0 tracking, and any other MIR task that benefits from the availability of multitrack audio data. Congratulations to my co-authors Rachel, Mike, Matthias, Chris and Juan!
The second paper, led by Eric Humphrey (MARL @ NYU), introduces JAMS, a new specification we've been working on for representing MIR annotations. JAMS stands for JSON Annotated Music Specification and, as the name suggests, it is JSON based:
- E. J. Humphrey, J. Salamon, O. Nieto, J. Forsyth, R. M. Bittner and J. P. Bello. "JAMS: A JSON Annotated Music Specification for Reproducible MIR Research", in Proc. 15th International Society for Music Information Retrieval Conference (ISMIR 2014), Taipei, Taiwan, October 2014.
The three main concepts behind JAMS are:
- Comprehensive annotation: unlike flat lab files, a JAMS file stores comprehensive annotation data together with annotation metadata in a structured format that can be easily loaded from and saved to disk.
- Multiple annotations: sometimes an annotation should be considered more of a reference than a ground truth, in that different annotators may produce different references (e.g. chord annotations). JAMS can store multiple annotations of the same recording in a single file.
- Multiple tasks: traditionally, the annotations for each MIR task (e.g. melody extraction, chord recognition, genre identification, etc.) are stored in separate files. JAMS can store the annotations for different tasks on the same recording in a single file which, in addition to keeping things tidy, facilitates the development and evaluation of algorithms that use or extract multiple musical facets at once (see the sketch after this list).
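To make these ideas concrete, here's a minimal sketch of what a JAMS-style file might look like, with two melody annotations and a genre annotation for the same recording. The field names here are illustrative only and shouldn't be taken as the exact JAMS schema (see the paper for that):

```json
{
  "file_metadata": {
    "title": "Example Song",
    "artist": "Example Artist",
    "duration": 215.3
  },
  "melody": [
    {
      "annotation_metadata": {"annotator": "annotator_1"},
      "data": [
        {"time": 0.00, "value": 220.0},
        {"time": 0.01, "value": 221.5}
      ]
    },
    {
      "annotation_metadata": {"annotator": "annotator_2"},
      "data": [
        {"time": 0.00, "value": 219.8},
        {"time": 0.01, "value": 220.9}
      ]
    }
  ],
  "genre": [
    {
      "annotation_metadata": {"annotator": "annotator_1"},
      "data": [{"value": "jazz"}]
    }
  ]
}
```

Note how the two melody annotations live side by side in one list, and how a second task (genre) sits in the same file, covering all three concepts above.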
As with all new specifications / protocols / conventions, the real success of JAMS depends on its adoption by the community. We are fully aware that this is but a proposal, a first step, and hope to develop / improve JAMS by actively discussing it with the MIR community. To ease adoption, we're providing a Python library for loading / saving / manipulating JAMS files, and have ported the annotations of several of the most commonly used corpora in MIR into JAMS. Congratulations to my co-authors Eric, Uri (Oriol), Jon, Rachel and Juan!
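One nice side effect of building on JSON: even without our library, a JAMS file can be inspected with nothing but Python's standard library (the dedicated library adds convenience and structure on top of this). A minimal sketch, assuming the illustrative file above is saved as example.jams:

```python
import json

# JAMS files are plain JSON, so the standard library suffices
# for basic loading and inspection.
with open("example.jams") as f:
    jam = json.load(f)

# Iterate over the multiple melody annotations stored in one file.
for i, annotation in enumerate(jam["melody"]):
    annotator = annotation["annotation_metadata"]["annotator"]
    n_frames = len(annotation["data"])
    print(f"Melody annotation {i} by {annotator}: {n_frames} frames")
```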
The third paper, led by Colin Raffel (LabROSA @ Columbia), describes mir_eval, an open-source Python library that implements the most common evaluation measures for a large selection of MIREX tasks, including melody extraction, chord recognition, beat detection, onset detection, structural segmentation and source separation:
- C. Raffel, B. McFee, E. J. Humphrey, J. Salamon, O. Nieto, D. Liang and D. P. W. Ellis. "mir_eval: A Transparent Implementation of Common MIR Metrics", in Proc. 15th International Society for Music Information Retrieval Conference (ISMIR 2014), Taipei, Taiwan, October 2014.
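To give a flavor of how the library is used, here's a minimal sketch of evaluating a melody extraction output against a reference annotation. The file names are placeholders; each file is assumed to contain a time/frequency series:

```python
import mir_eval

# Load reference and estimated melody time series (time, frequency pairs).
ref_time, ref_freq = mir_eval.io.load_time_series("reference_melody.txt")
est_time, est_freq = mir_eval.io.load_time_series("estimated_melody.txt")

# Compute all standard melody extraction metrics at once; returns a
# dictionary mapping metric names (e.g. overall accuracy) to scores.
scores = mir_eval.melody.evaluate(ref_time, ref_freq, est_time, est_freq)
for metric, score in scores.items():
    print(f"{metric}: {score:.3f}")
```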
Looking forward to discussing these papers and ideas with everyone at ISMIR 2014! See you in Taipei ^_^