Justin Salamon

Justin Salamon talks to Fox 5 News about SONYC

30/11/2016

Fox 5 News' Jessica Formoso interviewed me for their feature story about our SONYC project.
The full article is available here: http://www.fox5ny.com/news/new-york-city-noise-pollution-research

Look mum, I'm on TV! :)

Three New Datasets For Bioacoustic Machine Learning

23/11/2016

We're happy to announce the release of three new datasets for research on automatic bioacoustic bird species recognition. The datasets were compiled for our recently published study "Towards the Automatic Classification of Avian Flight Calls for Bioacoustic Monitoring", and are freely available on the Dryad Digital Repository:
  • CLO-43SD: 5,428 labeled audio clips of flight calls from 43 different species of North American wood-warblers (family Parulidae). The clips span a variety of recording conditions, including clean recordings made with highly directional shotgun microphones, noisier field recordings made with omnidirectional microphones, and recordings of birds in captivity.
Rosetta Stone For Warblers’ Migration Calls. Source: https://www.allaboutbirds.org/a-rosetta-stone-for-identifying-warblers-migration-calls/
  • CLO-WTSP: 16,703 labeled audio clips captured by remote acoustic sensors deployed in Ithaca, NY and NYC over the fall 2014 and spring 2015 migration seasons. Each clip is labeled to indicate whether it contains a flight call from the target species White-Throated Sparrow (WTSP), a flight call from a non-target species, or no flight call at all.
  • CLO-SWTH: 179,111 labeled audio clips captured by remote acoustic sensors deployed in Ithaca, NY and NYC over the fall 2014 and spring 2015 migration seasons. Each clip is labeled to indicate whether it contains a flight call from the target species Swainson's Thrush (SWTH), a flight call from a non-target species, or no flight call at all.
CLO-43SD is targeted at the closed-set N-class problem (identify which of these 43 known species produced the flight call in this clip), while CLO-WTSP and CLO-SWTH are targeted at the binary open-set problem (given a clip, determine whether or not it contains a flight call from the target species). The latter two come pre-sorted into two subsets: Fall 2014 and Spring 2015. In our study we used the fall subset for training and the spring subset for testing, simulating adversarial yet realistic conditions that require a high degree of model generalization.
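To make the season-based evaluation protocol concrete, here is a minimal synthetic sketch of the binary open-set setup: train on the Fall 2014 subset, test on the held-out Spring 2015 subset. The features, labels, and classifier below are random stand-ins for illustration only, not the actual dataset files or the model used in the study:

```python
# Sketch: binary open-set evaluation with a season-based split,
# mirroring the CLO-WTSP/CLO-SWTH protocol (train on Fall 2014,
# test on Spring 2015). Features and labels are random stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_season(n_clips, n_features=40):
    X = rng.normal(size=(n_clips, n_features))  # per-clip features
    y = rng.integers(0, 2, size=n_clips)        # 1 = target-species call
    return X, y

X_fall, y_fall = make_season(500)      # training season
X_spring, y_spring = make_season(200)  # held-out test season

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_fall, y_fall)
acc = clf.score(X_spring, y_spring)
print(f"spring accuracy: {acc:.2f}")
```

The key point is that the test clips come from a different season (and hence different noise conditions) than the training clips, which is what makes the setup a realistic test of generalization.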

For further details about the datasets see our article:

Towards the Automatic Classification of Avian Flight Calls for Bioacoustic Monitoring
J. Salamon, J. P. Bello, A. Farnsworth, M. Robbins, S. Keen, H. Klinck and S. Kelling
PLOS ONE 11(11): e0166866, 2016. doi: 10.1371/journal.pone.0166866. 
[PLOS ONE][PDF][BibTeX]

You can download all 3 datasets from the Dryad Digital Repository at this link.

Towards the Automatic Classification of Avian Flight Calls for Bioacoustic Monitoring

23/11/2016

A white-throated sparrow, one of the species targeted in the study. Image by Simon Pierre Barrette, license CC-BY-SA 3.0.
Automatic classification of animal vocalizations has great potential to enhance the monitoring of species movements and behaviors. This is particularly true for monitoring nocturnal bird migration, where automated classification of migrants’ flight calls could yield new biological insights and conservation applications for birds that vocalize during migration. In this paper we investigate the automatic classification of bird species from flight calls, and in particular the relationship between two different problem formulations commonly found in the literature: classifying a short clip containing one of a fixed set of known species (N-class problem) and the continuous monitoring problem, the latter of which is relevant to migration monitoring. We implemented a state-of-the-art audio classification model based on unsupervised feature learning and evaluated it on three novel datasets, one for studying the N-class problem including over 5000 flight calls from 43 different species, and two realistic datasets for studying the monitoring scenario comprising hundreds of thousands of audio clips that were compiled by means of remote acoustic sensors deployed in the field during two migration seasons. We show that the model achieves high accuracy when classifying a clip to one of N known species, even for a large number of species. In contrast, the model does not perform as well in the continuous monitoring case. Through a detailed error analysis (that included full expert review of false positives and negatives) we show the model is confounded by varying background noise conditions and previously unseen vocalizations. We also show that the model needs to be parameterized and benchmarked differently for the continuous monitoring scenario. Finally, we show that despite the reduced performance, given the right conditions the model can still characterize the migration pattern of a specific species. The paper concludes with directions for future research.
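For readers curious how a codebook-style unsupervised feature learning pipeline of this kind fits together, here is a minimal synthetic sketch. The patch dimensions, codebook size, and classifier are illustrative assumptions, not the exact configuration used in the paper:

```python
# Sketch: codebook-based unsupervised feature learning for clip
# classification, in the spirit of the model described above.
# All data is synthetic; sizes and parameters are illustrative only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_clips, patches_per_clip, patch_dim, k = 60, 20, 32, 8

# Stand-in for spectrogram patches extracted from each audio clip.
patches = rng.normal(size=(n_clips, patches_per_clip, patch_dim))
labels = rng.integers(0, 3, size=n_clips)  # three toy "species"

# 1) Learn a codebook from the pooled patches (the unsupervised step).
codebook = KMeans(n_clusters=k, n_init=5, random_state=0)
codebook.fit(patches.reshape(-1, patch_dim))

# 2) Encode each clip as a normalized histogram of codeword assignments.
def encode(clip_patches):
    assignments = codebook.predict(clip_patches)
    return np.bincount(assignments, minlength=k) / len(assignments)

X = np.array([encode(p) for p in patches])

# 3) Train an ordinary supervised classifier on the learned features.
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(f"train accuracy: {clf.score(X, labels):.2f}")
```

Because the codebook is learned without labels, the same encoding can be reused across tasks; only the final classifier needs labeled clips.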

The full article is available freely (open access) on PLOS ONE:


Towards the Automatic Classification of Avian Flight Calls for Bioacoustic Monitoring
J. Salamon, J. P. Bello, A. Farnsworth, M. Robbins, S. Keen, H. Klinck and S. Kelling
PLOS ONE 11(11): e0166866, 2016. doi: 10.1371/journal.pone.0166866. 
[PLOS ONE][PDF][BibTeX]

Along with the article, we have also published the three new datasets for bioacoustic machine learning that were compiled for the study.


SONYC featured in New York Times, NPR, Wired and more

7/11/2016

Today SONYC was featured in several major news outlets, including the New York Times, NPR and Wired! This follows NYU's press release about the official launch of the SONYC project.

Needless to say, I'm thrilled about the coverage the project's launch is receiving. Hopefully it's a sign of great things to come from this project, which, I should note, has already resulted in several scientific publications.

Here's the complete list of media articles (that I could find) covering SONYC. The WNYC radio segment includes a few words from yours truly :)

To Create a Quieter City, They’re Recording the Sounds of New York
BBC World Service - World Update (first minute, then from 36:21)
Mapping New York City's Excessively Loud Sounds
New York: how to use microphones for a quieter city (Italian)
Scientists Are Tracking New York Noisiness in Order to Quiet It Down
NYU Scientists are Trying to Reduce Noise Pollution in New York City
Researchers Are Recording New York to Make it Quieter
Sounds of New York City (German Public Radio)
NYC’s $5 Million Noise Pollution Project
Mapping the Sounds of New York City Streets
New UrbanEars project has NYU teaming up with Ohio State to battle noise pollution
NYU Launches Research Initiative to Combat NYC Noise Pollution
Smart microphones are recording city sounds to help create a quieter New York
NYU Moves Forward with Study of City Noise
How to Take on NYC’s Scary Noise Problem
Research Initiative Looks to Tame Urban Noise Pollution
If you're interested in learning more about the SONYC project, have a look at the SONYC website. You can also check out the SONYC intro video:

SONYC awarded major grant by the National Science Foundation

7/11/2016

I'm extremely excited to report that our Sounds of New York City (SONYC) project has been granted a Frontier award from the National Science Foundation (NSF) as part of its initiative to advance research in cyber-physical systems, as detailed in the NSF’s press release.

NYU has issued a press release providing further information about the SONYC project and the award. From the NYU press release:
The project – which involves large-scale noise monitoring – leverages the latest in machine learning technology, big data analysis, and citizen science reporting to more effectively monitor, analyze, and mitigate urban noise pollution. Known as Sounds of New York City (SONYC), this multi-year project has received a $4.6 million grant from the National Science Foundation and has the support of City health and environmental agencies.
Further information about the project can be found on the SONYC website. You can also check out the SONYC intro video:
