Improved robustness of signature-based near-replica detection via lexicon randomization
Proceedings of the 2004 ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '04
Detection of near-duplicate documents is an important problem in many data mining and information filtering applications. When faced with massive quantities of data, traditional duplicate detection techniques relying on direct inter-document similarity computation (e.g., using the cosine measure) are often not feasible given the time and memory performance constraints. On the other hand, fingerprint-based methods, such as I-Match, are very attractive computationally but may be brittle with respect to small changes to document content. We focus on approaches to near-replica detection that are based upon large-collection statistics and present a general technique of increasing their robustness via multiple lexicon randomization. In experiments with large web-page and spam-email datasets the proposed method is shown to consistently outperform traditional I-Match, with the relative improvement in duplicate-document recall reaching as high as 40-60%. The large gains in detection accuracy are offset by only small increases in computational requirements.

The problem of finding duplicate, albeit non-identical, documents has been the subject of research in the text-retrieval and web-search communities, with application focus ranging from plagiarism detection in web publishing to redundancy reduction in web search and database storage.
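The two ideas in the abstract can be sketched in code: I-Match reduces a document to a single hash over the terms it shares with a collection-wide lexicon (so one changed word can flip the signature), and lexicon randomization derives K perturbed lexicons so that a near-duplicate pair is likely to collide on at least one of the K+1 signatures. The sketch below is illustrative only; the function names, the drop fraction, the choice of SHA-1, and the toy lexicon are assumptions, not the paper's exact settings (the paper derives its lexicon from large-collection statistics such as mid-range idf).

```python
import hashlib
import random

def i_match_signature(doc_tokens, lexicon):
    # I-Match: the signature is a hash of the (sorted, deduplicated)
    # set of document terms that also appear in the lexicon.
    shared = sorted(set(doc_tokens) & lexicon)
    return hashlib.sha1(" ".join(shared).encode("utf-8")).hexdigest()

def randomized_lexicons(lexicon, k, drop_frac, seed=0):
    # Lexicon randomization: derive k extra lexicons, each omitting a
    # random fraction of the terms (drop_frac is an assumed parameter).
    rng = random.Random(seed)
    terms = sorted(lexicon)
    keep = int(len(terms) * (1 - drop_frac))
    return [set(rng.sample(terms, keep)) for _ in range(k)]

# Two near-duplicate documents differing in a single word.
doc_a = "the quick brown fox jumps over the lazy dog".split()
doc_b = "the quick brown fox leaps over the lazy dog".split()
lexicon = {"quick", "brown", "fox", "lazy", "dog", "jumps", "leaps"}

# Plain I-Match: the single signatures differ, so the pair is missed.
print(i_match_signature(doc_a, lexicon) == i_match_signature(doc_b, lexicon))

# Randomized I-Match: declare a match if ANY of the K+1 signatures agree.
lexicons = [lexicon] + randomized_lexicons(lexicon, k=10, drop_frac=0.3)
sigs_a = {i_match_signature(doc_a, L) for L in lexicons}
sigs_b = {i_match_signature(doc_b, L) for L in lexicons}
print(bool(sigs_a & sigs_b))
```

The extra cost is roughly K additional hash computations and signature lookups per document, which matches the abstract's claim that the recall gains come at only a small computational overhead.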