Abstract
Trust is a judgement of unquestionable utility — as humans we use it every day of our lives. However, trust has suffered from an imperfect understanding, a plethora of definitions, and informal use both in the literature and in everyday life. It is common to say "I trust you," but what does that mean?
This thesis provides a clarification of trust. We present a formalism for trust which provides us with a tool for precise discussion. The formalism is implementable: it can be embedded in an artificial agent, enabling the agent to make trust-based decisions. Its applicability to the domain of Distributed Artificial Intelligence (DAI) is discussed. The thesis presents a testbed populated by simple trusting agents, which substantiates the utility of the formalism.
The formalism provides a step towards a proper understanding and definition of human trust. A further contribution of the thesis is its detailed exploration of possibilities for future work in the area.