Bootstrapping (linguistics)

In linguistics, bootstrapping refers to the idea that humans are innately equipped with a mental faculty that forms the basis of language.

Similarly, in computer science, booting refers to the startup of an operating system by means of first initiating a smaller program.

Bootstrapping in linguistics was first introduced by Steven Pinker as a metaphor for the idea that children are innately equipped with mental processes that help initiate language acquisition.[2]

Bootstrapping has a strong link to connectionist theories, which model human cognition as a system of simple, interconnected networks.

For a child acquiring language, the challenge is to parse out discrete segments from a continuous speech stream.

Research demonstrates that, when exposed to streams of nonsense speech, children use statistical learning to determine word boundaries.
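This kind of statistical learning is commonly formalized in terms of transitional probabilities between syllables: within a word, one syllable strongly predicts the next, while across a word boundary the prediction dips. The sketch below illustrates the idea; the nonsense stream, its three invented "words" (bidaku, padoti, golabu), and the threshold are all made up for the example and are not drawn from the studies cited here.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """P(next syllable | current syllable) for each adjacent pair."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(syllables, threshold=0.75):
    """Posit a word boundary wherever the transitional probability dips
    below the threshold; within-word transitions stay high."""
    tp = transitional_probabilities(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tp[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Invented nonsense stream: bidaku, padoti, golabu repeated in varying order.
stream = ("bi da ku pa do ti go la bu pa do ti "
          "bi da ku go la bu bi da ku pa do ti").split()
print(segment(stream))
# ['bidaku', 'padoti', 'golabu', 'padoti', 'bidaku', 'golabu', 'bidaku', 'padoti']
```

In this toy stream, every within-word pair (e.g. bi→da) has a transitional probability of 1.0, while every cross-word pair falls at or below 2/3, so thresholding cleanly recovers the word boundaries.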

Utilizing these statistical abilities, children appear to form mental representations, or neural networks, of relevant pieces of information.[5]

Relevant pieces of information include word classes, which in connectionist theory are seen as each having an internal representation, with transitional links between concepts.[6]

Neighbouring words provide concepts and links for children to bootstrap new representations on the basis of their previous knowledge.
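As a rough stand-in for this picture, the sketch below treats word classes as nodes and learns transitional links between them from bigram counts, then guesses the class of a novel word from the links of its neighbours. The lexicon, class labels, and sentences are all invented for illustration, and a genuine connectionist model would learn these links as connection weights rather than raw counts.

```python
from collections import defaultdict

# Invented toy lexicon: known words and their word classes.
LEXICON = {"the": "DET", "a": "DET", "dog": "NOUN", "ball": "NOUN",
           "sees": "VERB", "chases": "VERB"}

def learn_links(sentences):
    """Count transitional links between class nodes from known-word bigrams."""
    links = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        classes = [LEXICON.get(w) for w in sent.split()]
        for left, right in zip(classes, classes[1:]):
            if left and right:
                links[left][right] += 1
    return links

def guess_class(links, left_word, right_word):
    """Score each candidate class for a novel word by the strength of the
    transitional links from its left neighbour and to its right neighbour."""
    left_cls, right_cls = LEXICON.get(left_word), LEXICON.get(right_word)
    def score(c):
        return ((links[left_cls][c] if left_cls else 0)
                + (links[c][right_cls] if right_cls else 0))
    return max(set(LEXICON.values()), key=score)

links = learn_links(["the dog sees a ball", "a dog chases the ball"])
# Novel word between "the" and "chases": DET on the left, VERB on the right.
print(guess_class(links, "the", "chases"))  # -> NOUN
```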

The innateness hypothesis was originally proposed by Noam Chomsky as a means to explain the universality of language acquisition.

It is also proposed that, despite their apparent variation, all languages fall into a very restricted subset of the infinitely many grammars that could be conceived.[7]

This intrinsic capability was hypothesized to be embedded in the brain, earning the title of language acquisition device (LAD).

In other words, the child must have some mental grasp of events, memory, and the general progression of time before attempting to encode them semantically.

This bootstrapping gives children a hierarchical, stepwise progression in which they build upon their previous knowledge to aid future learning.

The main challenge this theory tackles is the lack of specific information that extralinguistic context provides for mapping word meanings and making inferences.

It accounts for this problem by suggesting that children do not need to rely solely on environmental context to understand meaning, nor to have words explained to them.

This in-depth analysis of syntactic bootstrapping provides background on the research and evidence, describing how children acquire lexical and functional categories, challenges to the theory, and cross-linguistic applications.

Overall, prosodic bootstrapping concerns determining grammatical groupings in a speech stream rather than learning word meaning.[16]

The only way that an infant could be born with this ability is if the prosodic patterns of the target language are learned in utero.

Further evidence of young infants using prosodic cues is their ability to discriminate the acoustic property of pitch change by 1–2 months of age.[14]

This means that the linguistic input infants and children receive includes some prosodic bracketing around syntactically relevant chunks.

Typically, articles and other unbound morphemes are unstressed and relatively short in duration, in contrast to the pronunciation of nouns.

Prosodic bootstrapping states that these naturally occurring intonation packages help infants and children to bracket linguistic input into syntactic groupings.
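To make the bracketing idea concrete, the toy sketch below groups each short, unstressed token with the stressed content word that follows it, closing a prosodic unit at every stressed, long syllable. The words, stress marks, and duration values are invented for the example; real prosodic cues are of course far richer, including pitch contours, pauses, and vowel lengthening.

```python
# Each token: (word, stressed?, duration in ms). Values are invented.
stream = [("the", False, 90), ("dog", True, 310),
          ("chased", True, 280), ("a", False, 80), ("ball", True, 300)]

def bracket(tokens, min_duration=150):
    """Group each unstressed, short token with the following stressed token,
    mimicking prosodic bracketing of function words onto content words."""
    groups, current = [], []
    for word, stressed, duration in tokens:
        current.append(word)
        # A stressed, long syllable tends to close a prosodic unit.
        if stressed and duration >= min_duration:
            groups.append(current)
            current = []
    if current:
        groups.append(current)
    return groups

print(bracket(stream))  # [['the', 'dog'], ['chased'], ['a', 'ball']]
```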

This reveals that while infants do not yet understand word meaning, they are in the process of learning about their native language and its grammatical structure.

Such cues include hand gestures, eye movement, a speaker's focus of attention, intentionality, and linguistic context.

Similarly, the parsimonious model proposes that a child learns word meaning by relating language input to their immediate environment.
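One simple way to picture such environment-driven mapping is cross-situational counting: track which objects are present whenever a word is heard, and map each word to its most reliable companion. The sketch below is only an illustrative stand-in for whatever mechanism the parsimonious model actually posits; the episodes, words, and objects are invented.

```python
from collections import defaultdict

# Each learning episode pairs heard words with objects present in the scene.
# All words and objects are invented for illustration.
episodes = [
    ({"ball", "dog"}, {"BALL", "DOG"}),
    ({"ball", "cup"}, {"BALL", "CUP"}),
    ({"dog", "cup"},  {"DOG", "CUP"}),
]

def cross_situational_map(episodes):
    """Count word/object co-occurrences across episodes and map each word
    to the object it most reliably co-occurs with."""
    counts = defaultdict(lambda: defaultdict(int))
    for words, objects in episodes:
        for w in words:
            for o in objects:
                counts[w][o] += 1
    return {w: max(obj_counts, key=obj_counts.get)
            for w, obj_counts in counts.items()}

print(cross_situational_map(episodes))
# Each word maps to its most frequent companion: ball->BALL, dog->DOG, cup->CUP.
```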