M. Richardson and P. Domingos. Department of Computer Science and Engineering, University of Washington, Seattle, WA, (October 2005)
Abstract
We propose a simple approach to combining first-order logic and probabilistic
graphical models in a single representation. A Markov logic network (MLN) is a first-order
knowledge base with a weight attached to each formula (or clause). Together with a set of
constants representing objects in the domain, it specifies a ground Markov network containing
one feature for each possible grounding of a first-order formula in the KB, with the corresponding
weight. Inference in MLNs is performed by MCMC over the minimal subset of
the ground network required for answering the query. Weights are efficiently learned from
relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional
clauses are learned using inductive logic programming techniques. Experiments with a
real-world database and knowledge base in a university domain illustrate the promise of this
approach.
comment
MLNs can work with partially inconsistent knowledge bases; the weights indicate how well the formulas fit the data.
A powerful model that subsumes Bayesian networks, Markov networks, and first-order logic.
A Markov network is a graph in which each node represents a literal. Each node and clique has a potential function, which represents the "probability distribution" over its possible values.
The MLN takes a list of first-order formulas, each with a weight indicating how strong a constraint the formula is. The MLN refines the weights by learning from data.
As expected, inference and weight learning are intractable in general.
For inference, they provide an approximation for queries involving two formulas F1, F2 that are conjunctions of ground literals, which seems to be a common query type in practice.
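The distribution described above — one feature per grounding of each weighted formula, with P(world) proportional to exp of the weighted count of true groundings — can be made concrete with a small brute-force sketch. This is an illustrative toy, not code from the paper: the formula Smokes(x) => Cancer(x), the weight 1.5, and the constants Anna and Bob are invented for the example, and real MLN systems use MCMC rather than enumerating all worlds.

```python
import itertools
import math

# Toy MLN: one formula, Smokes(x) => Cancer(x), with an illustrative weight.
constants = ["Anna", "Bob"]
atoms = [(pred, c) for pred in ("Smokes", "Cancer") for c in constants]
w = 1.5  # weight attached to the formula

def n_true_groundings(world):
    # Count the groundings of Smokes(c) => Cancer(c) that hold in this world.
    # world maps each ground atom (predicate, constant) to True/False.
    return sum((not world[("Smokes", c)]) or world[("Cancer", c)]
               for c in constants)

# Enumerate every possible world (truth assignment to the ground atoms).
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]

# Partition function: sum of exp(w * n_i(x)) over all worlds.
Z = sum(math.exp(w * n_true_groundings(wld)) for wld in worlds)

def prob(world):
    # P(world) = exp(w * n(world)) / Z
    return math.exp(w * n_true_groundings(world)) / Z
```

A world in which Anna smokes and has cancer (both groundings satisfied) is exp(1.5) times more probable than one in which Anna smokes without cancer (one grounding violated), which is exactly the "soft constraint" reading of the weights: violating a formula makes a world less likely, not impossible.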
%0 Report
%1 citeulike:379160
%A Richardson, Matthew
%A Domingos, Pedro
%C Seattle, WA
%D 2005
%K mln readinggroup socialnets
%T Markov Logic Networks
%U http://www.cs.washington.edu/homes/pedrod/papers/mlj05.pdf
%X We propose a simple approach to combining first-order logic and probabilistic
graphical models in a single representation. A Markov logic network (MLN) is a first-order
knowledge base with a weight attached to each formula (or clause). Together with a set of
constants representing objects in the domain, it specifies a ground Markov network containing
one feature for each possible grounding of a first-order formula in the KB, with the corresponding
weight. Inference in MLNs is performed by MCMC over the minimal subset of
the ground network required for answering the query. Weights are efficiently learned from
relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional
clauses are learned using inductive logic programming techniques. Experiments with a
real-world database and knowledge base in a university domain illustrate the promise of this
approach.
@techreport{citeulike:379160,
abstract = {We propose a simple approach to combining first-order logic and probabilistic
graphical models in a single representation. A Markov logic network (MLN) is a first-order
knowledge base with a weight attached to each formula (or clause). Together with a set of
constants representing objects in the domain, it specifies a ground Markov network containing
one feature for each possible grounding of a first-order formula in the KB, with the corresponding
weight. Inference in MLNs is performed by MCMC over the minimal subset of
the ground network required for answering the query. Weights are efficiently learned from
relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional
clauses are learned using inductive logic programming techniques. Experiments with a
real-world database and knowledge base in a university domain illustrate the promise of this
approach.},
added-at = {2006-06-16T10:34:37.000+0200},
address = {Seattle, WA},
author = {Richardson, Matthew and Domingos, Pedro},
biburl = {https://www.bibsonomy.org/bibtex/23d024d27f31396c9eb71738cc766e82f/ldietz},
citeulike-article-id = {379160},
comment = {MLNs can work with partially inconsistent knowledge bases; the weights indicate how well the formulas fit the data.
A powerful model that subsumes Bayesian networks, Markov networks, and first-order logic.
A Markov network is a graph in which each node represents a literal. Each node and clique has a potential function, which represents the "probability distribution" over its possible values.
The MLN takes a list of first-order formulas, each with a weight indicating how strong a constraint the formula is. The MLN refines the weights by learning from data.
As expected, inference and weight learning are intractable in general.
For inference, they provide an approximation for queries involving two formulas F1, F2 that are conjunctions of ground literals, which seems to be a common query type in practice.},
institution = {Department of Computer Science and Engineering, University of Washington},
interhash = {598d344ed52ebff057ffef68b5c0a7e5},
intrahash = {3d024d27f31396c9eb71738cc766e82f},
keywords = {mln readinggroup socialnets},
month = {October},
priority = {0},
timestamp = {2006-06-16T10:34:37.000+0200},
title = {Markov Logic Networks},
url = {http://www.cs.washington.edu/homes/pedrod/papers/mlj05.pdf},
year = 2005
}