Article

High-dimensional semantic space accounts of priming

Journal of Memory and Language (March 2006)

Abstract

A broad range of priming data has been used to explore the structure of semantic memory and to test between models of word representation. In this paper, we examine the computational mechanisms required to learn distributed semantic representations for words directly from unsupervised experience with language. To best account for the variety of priming data, we introduce a holographic model of the lexicon that learns word meaning and order information from experience with a large text corpus. Both context and order information are learned into the same composite representation by simple summation and convolution mechanisms (cf. Murdock, B.B. (1982). A theory for the storage and retrieval of item and associative information. Psychological Review, 89, 609–626). We compare the similarity structure of representations learned by the holographic model, Latent Semantic Analysis (LSA; Landauer, T.K., & Dumais, S.T. (1997). A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction and representation of knowledge. Psychological Review, 104, 211–240), and the Hyperspace Analogue to Language (HAL; Lund, K., & Burgess, C. (1996). Producing high-dimensional semantic spaces from lexical co-occurrence. Behavior Research Methods, Instruments, & Computers, 28, 203–208) at predicting human data in a variety of semantic, associative, and mediated priming experiments. We found that both word context and word order information are necessary to account for trends in the human data. The representations learned by the holographic system incorporate both types of structure, and are shown to account for priming phenomena across several tasks.
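The summation-and-convolution mechanism the abstract describes can be sketched in a few lines. The following is a loose illustration, not the published model's exact binding scheme: word names, the dimensionality, and the single convolution binding are illustrative assumptions. Context information is accumulated by summing the random "environment" vectors of co-occurring words; order information is bound into the same memory vector by circular convolution of neighboring words' vectors (computed here in the frequency domain).

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # dimensionality of the semantic space (illustrative choice)

def env_vector(dim, rng):
    # Random Gaussian "environment" vector for a word form,
    # variance 1/dim so the expected length is ~1 (cf. Murdock, 1982).
    return rng.normal(0.0, 1.0 / np.sqrt(dim), dim)

def cconv(a, b):
    # Circular convolution via the convolution theorem.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Environment vectors for the words of a toy sentence (hypothetical data).
words = ["a", "dog", "bit", "the", "mailman"]
env = {w: env_vector(D, rng) for w in words}

# Composite memory vector for "dog": context by summation,
# order by convolving the vectors of its sentence neighbors.
mem_dog = np.zeros(D)
for w in words:
    if w != "dog":
        mem_dog += env[w]              # context information (summation)
mem_dog += cconv(env["a"], env["bit"])  # order information (binding)

# Words from the sentence end up more similar to "dog"'s memory
# vector than an unrelated word does.
unrelated = env_vector(D, rng)
print(cosine(mem_dog, env["mailman"]), cosine(mem_dog, unrelated))
```

Because summation and convolution write into the same composite trace, a single vector per word can support both context-based (semantic) and order-based similarity, which is the property the paper uses to account for the different priming effects.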
