Publications:Synergy of lip motion and acoustic features in biometric speech and speaker recognition
| Title | Synergy of lip motion and acoustic features in biometric speech and speaker recognition |
|---|---|
| Author | Maycel Faraj and Josef Bigun |
| Year | 2007 |
| PublicationType | Journal Paper |
| ContentType | Peer-reviewed |
| Language | English |
| Journal | IEEE Transactions on Computers (Print) |
| JournalISSN | 0018-9340 |
| Status | Published |
| HostPublication | |
| Conference | |
| DOI | http://dx.doi.org/10.1109/TC.2007.1074 |
| Diva url | http://hh.diva-portal.org/smash/record.jsf?searchId=1&pid=diva2:239276 |
| Abstract | This paper presents the scheme and evaluation of a robust audio-visual digit- and speaker-recognition system using lip motion and speech biometrics. Moreover, a liveness-verification barrier based on a person's lip movement is added to the system to guard against advanced spoofing attempts such as replayed videos. The acoustic and visual features are integrated at the feature level and evaluated first by a support vector machine for digit and speaker identification and then by a Gaussian mixture model for speaker verification. Based on approximately 300 different personal identities, this paper represents, to our knowledge, the first extensive study investigating the added value of lip-motion features for speaker- and speech-recognition applications. Digit-recognition and person-identification and -verification experiments are conducted on the publicly available XM2VTS database, showing favorable results (speaker verification is 98 percent, speaker identification is 100 percent, and digit identification is 83 percent to 100 percent). |
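
The abstract describes feature-level integration of acoustic and lip-motion features, a support vector machine (SVM) for digit and speaker identification, and a Gaussian mixture model (GMM) for speaker verification. The sketch below illustrates that pipeline shape using scikit-learn; the synthetic stand-in features, dimensions, component counts, and decision threshold are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-ins for per-utterance acoustic (e.g. MFCC-like) and lip-motion
# feature vectors; the paper's real feature extractors are not reproduced.
n_utts, n_acoustic, n_visual = 200, 13, 6
acoustic = rng.normal(size=(n_utts, n_acoustic))
visual = rng.normal(size=(n_utts, n_visual))
speaker = rng.integers(0, 10, size=n_utts)  # ten hypothetical identities

# Feature-level integration: concatenate the two modalities into a single
# vector per utterance before any classifier sees the data.
fused = np.concatenate([acoustic, visual], axis=1)

# Identification (closed set): an SVM over the fused feature vectors.
svm = SVC(kernel="rbf").fit(fused, speaker)
print("SVM identity prediction:", svm.predict(fused[:1]))

# Verification (open set): a client GMM against a background GMM; the
# claim is accepted when the log-likelihood ratio clears a threshold.
client = GaussianMixture(n_components=2, covariance_type="diag",
                         random_state=0).fit(fused[speaker == 0])
background = GaussianMixture(n_components=2, covariance_type="diag",
                             random_state=0).fit(fused[speaker != 0])
llr = client.score_samples(fused[:5]) - background.score_samples(fused[:5])
print("accept claim of speaker 0:", llr > 0.0)  # threshold 0.0 is assumed
```

In practice the verification threshold would be tuned on held-out data to trade off false acceptances against false rejections; the fixed value here only marks the decision point of the likelihood-ratio test.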