A copy of this work was preserved in the Wayback Machine; the capture dates from 2017.
This paper formulates an evidence-theoretic multimodal fusion approach using belief functions that accounts for variability in image characteristics. When processing non-ideal images, variation in the quality of features at different levels of abstraction may cause individual classifiers to generate conflicting genuine-impostor decisions. Existing fusion approaches are non-adaptive and do not always guarantee optimum performance improvements. We propose a contextual unification

doi:10.1109/tsmca.2008.2007981
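To make the belief-function fusion idea concrete, here is a minimal sketch of Dempster's rule of combination over the frame {genuine, impostor}, combining two classifiers whose decisions partially conflict. The mass values and classifier names (`m_face`, `m_finger`) are hypothetical illustrations, not taken from the paper, and the paper's actual framework is more elaborate than plain Dempster combination.

```python
# Sketch of Dempster's rule of combination for two biometric classifiers.
# Focal elements are frozensets over the frame {genuine, impostor};
# the numeric masses below are made-up for illustration only.

def combine(m1, m2):
    """Combine two mass functions with Dempster's rule,
    normalizing by 1 - conflict."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to disjoint hypotheses
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

G, I = frozenset({"genuine"}), frozenset({"impostor"})
theta = G | I  # frame of discernment (represents full ignorance)

# Two match-score classifiers that partially disagree:
m_face = {G: 0.7, I: 0.1, theta: 0.2}
m_finger = {G: 0.4, I: 0.4, theta: 0.2}

fused = combine(m_face, m_finger)
decision = max((G, I), key=lambda s: fused.get(s, 0.0))
print(decision, fused)
```

With these hypothetical masses, the conflict term is 0.32, and after normalization the fused mass favors the genuine hypothesis; the ignorance mass (theta) shrinks as the two sources reinforce each other.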