Abstract

Retrieval models form the theoretical basis for computing the answer to a query. They differ not only in the syntax and expressiveness of the query language, but also in the representation of the documents. Following van Rijsbergen's approach of regarding IR as uncertain inference, we can distinguish models according to the expressiveness of the underlying logic and the way uncertainty is handled. Classical retrieval models are based on propositional logic. In the vector space model, documents and queries are represented as vectors in a vector space spanned by the index terms, and uncertainty is modelled by considering geometric similarity. Probabilistic models make assumptions about the distribution of terms in relevant and nonrelevant documents in order to estimate the probability of relevance of a document for a query. Language models compute the probability that the query is generated from a document. All these models can be interpreted within a framework based on a probabilistic concept space. For IR applications dealing not only with texts, but also with multimedia or factual data, propositional logic is not sufficient. Therefore, advanced IR models use restricted forms of predicate logic as their basis. Terminological/description logics are rooted in semantic networks and terminological languages such as KL-ONE. Datalog uses function-free Horn clauses. Probabilistic versions of both approaches are able to cope with the intrinsic uncertainty of IR.
