Web spam pages use various techniques to achieve
higher-than-deserved rankings in a search engine’s
results. While human experts can identify
spam, it is too expensive to manually evaluate a
large number of pages. Instead, we propose techniques
to semi-automatically separate reputable,
good pages from spam. We first select a small set
of seed pages to be evaluated by an expert. Once
we manually identify the reputable seed pages, we
use the link structure of the web to discover other
pages that are likely to be good. In this paper
we discuss possible ways to implement the seed
selection and the discovery of good pages. We
present results of experiments run on the World
Wide Web indexed by AltaVista and evaluate the
performance of our techniques. Our results show
that we can effectively filter out spam from a significant
fraction of the web, based on a good seed
set of fewer than 200 sites.
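Since the abstract only outlines the approach, the following is a minimal sketch of how the discovery step could spread trust from the hand-checked seed set along hyperlinks. It assumes a biased PageRank-style propagation; the damping factor, iteration count, toy graph, and the name propagate_trust are illustrative assumptions, not details taken from the paper.

# Minimal sketch of seed-based trust propagation over a link graph.
# Assumptions (not from the abstract): biased PageRank-style iteration,
# damping factor 0.85, fixed iteration count; all names are illustrative.

def propagate_trust(outlinks, good_seeds, damping=0.85, iterations=20):
    """Spread trust from manually vetted seed pages along hyperlinks.

    outlinks   -- dict mapping each page to the list of pages it links to
    good_seeds -- set of pages an expert judged reputable
    """
    pages = set(outlinks) | {p for links in outlinks.values() for p in links}
    # Initial trust: split uniformly among the good seeds, zero elsewhere.
    seed_share = 1.0 / len(good_seeds)
    trust = {p: (seed_share if p in good_seeds else 0.0) for p in pages}

    for _ in range(iterations):
        # Each round, a fraction of trust is re-injected at the seeds ...
        nxt = {p: (1.0 - damping) * (seed_share if p in good_seeds else 0.0)
               for p in pages}
        # ... and the rest flows along outgoing links, split evenly.
        for page, links in outlinks.items():
            if not links:
                continue
            share = damping * trust[page] / len(links)
            for target in links:
                nxt[target] += share
        trust = nxt
    return trust


if __name__ == "__main__":
    # Toy web graph: the spam pages receive no links from reputable pages,
    # so no trust ever reaches them.
    graph = {
        "seed": ["a", "b"],
        "a": ["b"],
        "b": ["a"],
        "spam": ["spam2"],
        "spam2": ["spam"],
    }
    scores = propagate_trust(graph, good_seeds={"seed"})
    print(sorted(scores.items(), key=lambda kv: -kv[1]))

Pages reachable from the seed accumulate positive trust, while pages linked only by other spam pages stay at zero, which is the filtering effect the abstract describes.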