Abstract
Children acquiring language infer the correct form of syntactic constructions for
which they appear to have little or no direct evidence, avoiding simple but incorrect
generalizations that would be consistent with the data they receive. These generalizations
must be guided by some inductive bias – some abstract knowledge – that leads them to
prefer the correct hypotheses even in the absence of directly supporting evidence. What
form do these inductive constraints take? It is often argued or assumed that they reflect
innately specified knowledge of language. A classic example of such an argument moves
from the phenomenon of auxiliary fronting in English interrogatives to the conclusion that
children must innately know that syntactic rules are defined over hierarchical phrase
structures rather than linear sequences of words (e.g., Chomsky, 1965, 1971, 1980; Crain &
Nakayama, 1987). Here we use a Bayesian framework for grammar induction to argue for a
different possibility. We show that, given typical child-directed speech and certain innate
domain-general capacities, an unbiased ideal learner could recognize the hierarchical phrase
structure of language without having this knowledge innately specified as part of the
language faculty. We discuss the implications of this analysis for accounts of human
language acquisition.