Publications:Synergy of lip motion and acoustic features in biometric speech and speaker recognition

Title Synergy of lip motion and acoustic features in biometric speech and speaker recognition
Author Maycel Faraj and Josef Bigun
Year 2007
PublicationType Journal Paper
Journal IEEE Transactions on Computers (Print)
HostPublication
Conference
DOI http://dx.doi.org/10.1109/TC.2007.1074
Diva url http://hh.diva-portal.org/smash/record.jsf?searchId=1&pid=diva2:239276
Abstract This paper presents the scheme and evaluation of a robust audio-visual digit-and-speaker-recognition system using lip motion and speech biometrics. Moreover, a liveness verification barrier based on a person's lip movement is added to the system to guard against advanced spoofing attempts such as replayed videos. The acoustic and visual features are integrated at the feature level and evaluated first by a support vector machine for digit and speaker identification and, then, by a Gaussian mixture model for speaker verification. Based on approximately 300 different personal identities, this paper represents, to our knowledge, the first extensive study investigating the added value of lip motion features for speaker and speech-recognition applications. Digit recognition and person-identification and verification experiments are conducted on the publicly available XM2VTS database, showing favorable results (speaker verification is 98 percent, speaker identification is 100 percent, and digit identification is 83 percent to 100 percent).
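
The following is a minimal Python sketch of the pipeline outlined in the abstract: feature-level fusion of acoustic and lip-motion features, an SVM for identification, and a GMM for verification. It assumes scikit-learn and synthetic per-frame features; the feature dimensions, model settings, and acceptance threshold are illustrative assumptions and do not reflect the authors' implementation or the XM2VTS data.

import numpy as np
from sklearn.svm import SVC
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_speakers, frames = 4, 50                               # toy enrollment set (assumed sizes)
acoustic = rng.normal(size=(n_speakers * frames, 13))    # stand-in acoustic features per frame
lip_motion = rng.normal(size=(n_speakers * frames, 6))   # stand-in lip-motion features per frame
labels = np.repeat(np.arange(n_speakers), frames)

# Feature-level integration: concatenate both modalities frame by frame.
fused = np.hstack([acoustic, lip_motion])

# Identification: multi-class SVM over the fused feature vectors.
svm = SVC(kernel="rbf").fit(fused, labels)
print("identified as speaker", svm.predict(fused[:1])[0])

# Verification: a GMM trained on the claimed speaker's fused features;
# accept if the test frames score above a threshold (value is an assumption).
claimed = 0
gmm = GaussianMixture(n_components=2, random_state=0).fit(fused[labels == claimed])
print("accepted" if gmm.score(fused[:1]) > -30.0 else "rejected")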