Acoustic fingerprint
An acoustic fingerprint is a condensed digital summary, a digital fingerprint, deterministically generated from an audio signal, that can be used to identify an audio sample or quickly locate similar items in a music database.[1]
Practical uses of acoustic fingerprinting include identifying songs, melodies, tunes, or advertisements; sound effect library management; and video file identification. Media identification using acoustic fingerprints can be used to monitor the use of specific musical works and performances on radio broadcast, records, CDs, streaming media, and peer-to-peer networks. This identification has been used in copyright compliance, licensing, and other monetization schemes.
Attributes
A robust acoustic fingerprint algorithm must take into account the perceptual characteristics of the audio. If two files sound alike to the human ear, their acoustic fingerprints should match, even if their binary representations are quite different. Acoustic fingerprints are therefore not conventional hash functions, which are sensitive to any small change in the data. They are more analogous to human fingerprints, where small variations that do not affect the features being compared are tolerated: just as a smeared fingerprint impression can still be accurately matched against a reference database, a degraded recording can still be matched by its acoustic fingerprint.
Perceptual characteristics often exploited by audio fingerprints include average zero crossing rate, estimated tempo, average spectrum, spectral flatness, prominent tones across a set of frequency bands, and bandwidth.
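As an illustration (not tied to any particular commercial system), two of these features, zero-crossing rate and spectral flatness, can be computed from a short frame of samples in a few lines of NumPy; the frame length and the test tone below are arbitrary choices:

```python
# Minimal sketch of two perceptual features used by some fingerprints.
import numpy as np

def zero_crossing_rate(frame: np.ndarray) -> float:
    """Fraction of consecutive samples whose signs differ."""
    signs = np.signbit(frame)
    return np.count_nonzero(signs[1:] != signs[:-1]) / (len(frame) - 1)

def spectral_flatness(frame: np.ndarray, eps: float = 1e-12) -> float:
    """Geometric mean over arithmetic mean of the power spectrum:
    near 1 for noise-like audio, near 0 for tonal audio."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + eps
    return np.exp(np.mean(np.log(power))) / np.mean(power)

# Example: one second of a pure 440 Hz tone at 8 kHz should have low flatness.
t = np.arange(8000) / 8000.0
tone = np.sin(2 * np.pi * 440 * t)
print(zero_crossing_rate(tone), spectral_flatness(tone))
```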
Most audio compression techniques will make radical changes to the binary encoding of an audio file, without radically affecting the way it is perceived by the human ear. A robust acoustic fingerprint will allow a recording to be identified after it has gone through such compression, even if the audio quality has been reduced significantly. For use in radio broadcast monitoring, acoustic fingerprints should also be insensitive to analog transmission artifacts.
Spectrogram
Generating a signature from the audio is essential for searching by sound. One common technique is creating a time-frequency graph called a spectrogram.
Any piece of audio can be translated into a spectrogram by splitting it into segments over time. In some cases adjacent segments share a common time boundary; in other cases they overlap. The result is a graph that plots three dimensions of the audio: time on one axis, frequency on the other, and amplitude (intensity) as the value at each point.
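A minimal sketch of this process, using overlapping segments and illustrative (not standardized) frame and hop sizes, might look as follows:

```python
# Sketch: turn a sample array into a magnitude spectrogram.
import numpy as np

def spectrogram(samples: np.ndarray, frame_size: int = 1024, hop: int = 512) -> np.ndarray:
    """Return a 2-D array of magnitudes: rows are time frames, columns are
    frequency bins. Adjacent frames overlap by frame_size - hop samples."""
    window = np.hanning(frame_size)
    frames = []
    for start in range(0, len(samples) - frame_size + 1, hop):
        segment = samples[start:start + frame_size] * window
        frames.append(np.abs(np.fft.rfft(segment)))
    return np.array(frames)

# Usage: two seconds of a 1 kHz tone sampled at 44.1 kHz.
sr = 44100
t = np.arange(2 * sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 1000 * t))
print(spec.shape)  # (number of time frames, frame_size // 2 + 1 frequency bins)
```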
Shazam
Shazam's algorithm picks out points where there are peaks in the spectrogram, which represent higher energy content.[2] Focusing on peaks greatly reduces the impact that background noise has on audio identification. Shazam stores its fingerprint catalog as a hash table. Rather than marking a single point in the spectrogram, it marks a pair of points: an anchor peak together with a second, nearby peak.[3] The database key is therefore not a single frequency but a hash formed from the frequencies of the two points and the time between them. This leads to fewer hash collisions, improving the performance of the hash table.[4]
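The following toy sketch illustrates the pairing idea in general terms; it is not Shazam's actual implementation, and the neighbourhood size, threshold, and fan-out values are arbitrary choices:

```python
# Toy illustration of peak picking and peak pairing on a spectrogram.
import numpy as np
from collections import defaultdict

def find_peaks(spec: np.ndarray, threshold: float = 10.0):
    """Return (time_index, freq_index) of bins that dominate their 3x3 neighbourhood."""
    peaks = []
    for t in range(1, spec.shape[0] - 1):
        for f in range(1, spec.shape[1] - 1):
            patch = spec[t - 1:t + 2, f - 1:f + 2]
            if spec[t, f] >= threshold and spec[t, f] == patch.max():
                peaks.append((t, f))
    return peaks

def hash_pairs(peaks, fan_out: int = 5):
    """Pair each anchor peak with the next few peaks and key on the pair:
    the two frequencies plus their time offset."""
    index = defaultdict(list)
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1:i + 1 + fan_out]:
            key = hash((f1, f2, t2 - t1))  # two frequencies + time delta
            index[key].append(t1)          # remember where the pair occurred
    return index
```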
Chromaprint, AcoustID, and MusicBrainz
When commercial acoustic-fingerprinting companies were creating uncertainty over proprietary algorithms in the late 2000s, Lukáš Lalinský, a contributor to the open data service MusicBrainz, developed the open-source Chromaprint algorithm and the AcoustID service that uses it.[5] MusicBrainz now uses this service.
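As a hedged usage sketch, assuming the Chromaprint fpcalc command-line tool is installed, a fingerprint can be obtained by running it on an audio file and parsing its key=value output (the exact field names may vary between versions):

```python
# Sketch: obtain a Chromaprint fingerprint via the fpcalc tool, if present.
import subprocess

def chromaprint_fingerprint(path: str) -> dict:
    """Run fpcalc on an audio file and parse its KEY=VALUE output lines."""
    out = subprocess.run(["fpcalc", path], capture_output=True, text=True, check=True)
    fields = {}
    for line in out.stdout.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            fields[key] = value
    return fields  # typically includes the track duration and fingerprint string

# print(chromaprint_fingerprint("song.mp3"))
```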
See also
- Automatic content recognition
- Digital video fingerprinting
- Feature extraction
- Parsons code
- Perceptual hashing
- Search by sound
- Sound recognition
References
- ^ ISO IEC TR 21000-11 (2004), Multimedia framework (MPEG-21) -- Part 11: Evaluation Tools for Persistent Association Technologies
- ^ Surdu, Nicolae (January 20, 2011). "How does Shazam work to recognize a song?". Archived from the original on October 24, 2016. Retrieved February 12, 2018.
- ^ Li-Chun Wang, Avery, An Industrial-Strength Audio Search Algorithm (PDF), Columbia University, retrieved April 2, 2018
- ^ "How Shazam Works". January 10, 2009. Retrieved April 2, 2018.
- ^ "Introducing Chromaprint – Lukáš Lalinský". Oxygene.sk. July 24, 2010. Archived from the original on October 10, 2018. Retrieved November 23, 2024.