Abstract

This collection of 38 research papers is an extremely valuable resource for researchers, students, and teachers in the field of natural language processing (NLP). Its 664 pages provide extraordinary breadth and will be useful to old hands as well as newcomers. Although the readings span the period 1961 to 1985, only 8 of the 38 papers appeared before 1977; 19 were published from 1977 to 1981, and 11 from 1982 to 1985. The readings include 18 journal papers from 7 different journals, 9 conference proceedings papers from 5 different conferences, and 10 papers drawn from 9 other research collections.

The collection begins with an introduction that includes a theoretical and historical overview of the field of NLP and a discussion of the issues addressed in the six chapters that follow. The editors note that the chapter headings are broad categories and should not be taken to imply either that "we are adopting a particular position about the way processing ... should be done, or that problems and solutions assigned to one category have no relevance elsewhere." Each of the six chapters also begins with an introduction describing the historical background and computational issues that gave rise to the papers in the chapter. These introductory sections, while short (3 to 5 pages), are specific and detailed enough to provide a context for the reader to appreciate the papers. They also include substantial bibliographies of important related work.

Chapter I: Syntactic models. Five different grammatical models are presented in this chapter (context-free grammar, augmented transition networks, Marcus's deterministic parser, definite clause grammar, and functional unification grammar). A discussion by Perrault on the generative power and computational complexity of grammatical formalisms, a description by Jane Robinson of a broad-coverage English grammar, and a 1962 paper by Kuno and Oettinger describing their predictive analyzer complete the section, which alone is worth the price of the book.

Chapter II: Semantic interpretation. This chapter is a diverse collection of nine papers about meaning representation and the process of translating natural language into a representation of meaning. The contributions include Schank on conceptual dependency and MOPs; Wilks on a machine translation system using preference semantics; Hendrix on the translation of English sentences into semantic networks; and Schubert and Pelletier describing an approach to semantic translation based on predicate logic. This chapter also includes two well-known papers that could just as well have been placed in Chapter VI: Woods on the semantic component of the LUNAR question-answering system, and Winograd on the simulated blocks-world robot, SHRDLU.

Chapter III: Discourse interpretation. This chapter begins with a 1973 paper by Charniak discussing the need for knowledge about the events of everyday living and the ordinary motivations of people in understanding children's stories. Following this are four papers (by Hobbs, Grosz, Sidner, and Webber) that describe computational models for interpreting pronouns and definite noun phrases, based on formal representations of discourse entities and discourse focus.

Chapter IV: Language action and intention. This chapter focuses on models of language as purposeful action. A short paper by Bruce motivates this work by showing how language is used to accomplish goals of requesting, informing, and so on. Two papers follow (by Philip Cohen and Perrault, and by James Allen and Perrault) that develop a formal representation of speech act planning and show how it can be used to model the generation and interpretation of utterances. The last paper (by Wilensky) describes the use of knowledge about plans and goals in understanding stories.

Chapter V: Generation. The three papers in this chapter (by McKeown, Appelt, and McDonald) are very recent contributions, the first two directed toward planning what information to communicate in an utterance, and the third describing a technique for realizing the chosen information as a grammatical text string.

Chapter VI: Systems. The collection concludes with eight papers describing systems for understanding natural language. It includes papers by Burton and Brown on the use of semantic grammar in the SOPHIE computer-aided instruction system; by Cullingford on SAM (the best paper I have read on script-based NLP); by Hendrix et al. on the LADDER question-answering system; and by Parkison, Colby, and Faught on PARRY, a program that simulated paranoid thought processes. Taken together with the Woods and Winograd papers in Chapter II, the collection provides the reader with a detailed picture of the experimental side of NLP research.

The most striking characteristic of the papers in this collection is their uniformly high quality of exposition. Each one is important, interesting, and readable. Readability is achieved not by sacrificing technical detail and presenting a vague summary, but by illustrating the technical points with well-chosen examples. Thus, the papers in this collection are a pleasure to teach as well as a pleasure to read. Readings in Natural Language Processing represents an exercise of good literary judgement as well as good scholarship.
