Analogical modeling is related to connectionism and nearest neighbor approaches in that it is data-based rather than abstraction-based; but it is distinguished by its ability to cope with imperfect datasets (such as those caused by simulated short-term memory limits) and to base predictions on all relevant segments of the dataset, whether near or far.
In language modeling, AM has successfully predicted empirically valid forms for which no theoretical explanation was known (see the discussion of Finnish morphology in Skousen et al. 2002).
Within the dataset, each exemplar (a case to be reasoned from, or an informative past experience) appears as a feature vector: a row of values for the set of parameters that define the problem.
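For concreteness, a minimal Python sketch of this representation follows; the `Exemplar` type and its field names are illustrative assumptions, not part of AM's formal presentation.

```python
from typing import NamedTuple, Tuple

class Exemplar(NamedTuple):
    """One past case: a feature vector plus the outcome it exhibited."""
    features: Tuple[str, ...]   # one value per defining parameter (the feature vector)
    outcome: str                # the behavior observed for this case

# A dataset is simply a list of such rows, e.g.
# [Exemplar(("a", "b", "c"), "outcome1"), Exemplar(("a", "d", "c"), "outcome2"), ...]
```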
This multilevel search through the supracontexts exponentially magnifies the likelihood of a behavior being predicted when that behavior occurs reliably in settings that specifically resemble the given context. Viewed another way, each supracontext is a theory of the task or a proposed rule whose predictive power needs to be evaluated. The supracontexts are not equal peers of one another; they are arranged by their distance from the given context, forming a hierarchy.
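A supracontext can be obtained from the given context by ignoring some subset of its variables; a minimal sketch of this enumeration, ordered by distance (the number of variables ignored), might look as follows. The function and variable names are assumptions made for illustration.

```python
from itertools import combinations

def supracontexts(given):
    """Yield the supracontexts of `given`, nearest first: each keeps some
    of the variables and ignores the rest (None marks an ignored slot).
    With n variables there are 2**n supracontexts in total."""
    n = len(given)
    for distance in range(n + 1):                    # how many variables are ignored
        for ignored in combinations(range(n), distance):
            yield tuple(None if i in ignored else v for i, v in enumerate(given))

# For a three-variable given context ("a", "b", "c") the first few are
# ('a', 'b', 'c'), (None, 'b', 'c'), ('a', None, 'c'), ('a', 'b', None), ...
```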
Whereas the ambiguous behavior of a nondeterministic but homogeneous supracontext is accepted, a heterogeneous supracontext is rejected, because an intervening subcontext demonstrates that there is a better theory to be found closer to the given context.
This guarantees that we see an increase in meaningfully consistent behavior in the analogical set as we approach the given context.
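The acceptance test just described can be sketched as follows. This sketch covers only the three cases named so far (empty, deterministic, or fed by a single non-empty subcontext) and omits the rarer fourth case noted further below; the helper names are assumptions.

```python
def matches(supra, features):
    """An exemplar falls inside a supracontext if it agrees on every
    variable that the supracontext does not ignore (None = ignored)."""
    return all(s is None or s == f for s, f in zip(supra, features))

def subcontext(given, features):
    """The subcontext of an exemplar records exactly which variables it
    shares with the given context."""
    return tuple(f if f == g else None for f, g in zip(features, given))

def is_homogeneous(given, supra, dataset):
    """Accept a supracontext if it is empty, deterministic (only one
    outcome), or contains occurrences from a single non-empty subcontext;
    otherwise an intervening subcontext disagrees with it, so it is
    heterogeneous and rejected."""
    inside = [ex for ex in dataset if matches(supra, ex.features)]
    if not inside:
        return True
    outcomes = {ex.outcome for ex in inside}
    subs = {subcontext(given, ex.features) for ex in inside}
    return len(outcomes) == 1 or len(subs) == 1
```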
In the example used in the second chapter of Skousen (1989), each context consists of three variables with potential values 0-3. The two outcomes for the dataset are e and r. The exemplars, and the network of pointers defined over them, are illustrated in the sketch below.
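The exemplar table and pointer diagram of the original presentation are not reproduced in the text above; the sketch below supplies the five exemplars and the given context 3 1 2 as they are standardly given for Skousen's chapter-2 example, and these specific values should be treated as an assumption. It reuses `Exemplar`, `supracontexts`, `matches`, and `is_homogeneous` from the sketches above, and builds the network of pointers by letting every occurrence in a homogeneous supracontext point to every occurrence in that supracontext.

```python
from collections import Counter

dataset = [
    Exemplar(("3", "1", "0"), "e"),
    Exemplar(("0", "3", "2"), "r"),
    Exemplar(("2", "1", "0"), "r"),
    Exemplar(("2", "1", "2"), "r"),
    Exemplar(("3", "1", "1"), "r"),
]
given = ("3", "1", "2")                 # the context whose outcome is to be predicted

# Network of pointers: within each homogeneous supracontext, every
# occurrence points to every occurrence there, so each occurrence earns
# one pointer per occurrence (itself included) in that supracontext.
pointers = Counter()
for supra in supracontexts(given):
    occurrences = [ex for ex in dataset if matches(supra, ex.features)]
    if occurrences and is_homogeneous(given, supra, dataset):
        for ex in occurrences:
            pointers[ex] += len(occurrences)
```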
There is, in fact, a fourth type of homogeneous supracontext: one that contains more than one non-empty subcontext and is non-deterministic, but in which the frequency of outcomes in each subcontext is exactly the same.
We can create a more detailed account by listing the pointers for each of the occurrences in the homogeneous supracontexts, as in the sketch below; we can then see the analogical effect of each of the instances in the dataset.
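Continuing the sketch above, the analogical effect of an instance is its share of the pointers, and the predicted probability of each outcome is the summed share of the instances exhibiting it; with the exemplar values assumed above this works out to 4/13 for e and 9/13 for r.

```python
total = sum(pointers.values())          # 13 with the assumed dataset

# Analogical effect of each instance in the dataset
for ex, count in sorted(pointers.items(), key=lambda item: -item[1]):
    print("".join(ex.features), ex.outcome, f"{count}/{total}")

# Predicted outcome probabilities
by_outcome = Counter()
for ex, count in pointers.items():
    by_outcome[ex.outcome] += count
for outcome, count in by_outcome.items():
    print(outcome, f"{count}/{total}")  # e: 4/13 ≈ 30.8%, r: 9/13 ≈ 69.2%
```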
Noam Chomsky and others have more recently criticized analogy as too vague to be genuinely useful (Bańko 1991), amounting to an appeal to a deus ex machina.
Analogical modeling has been employed in experiments ranging from phonology and morphology to orthography and syntax.
In practice, however, researchers must still choose which variables to take into consideration; this is necessary because of the so-called "exponential explosion" in the processing power required by the computer software used to implement analogical modeling.
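As a back-of-the-envelope illustration (not drawn from the text): under the formulation sketched earlier, a given context with n variables generates 2^n supracontexts to examine, so the work grows very quickly as variables are added.

```python
# Number of supracontexts as a function of the number of variables n
for n in (10, 20, 30, 40):
    print(n, 2 ** n)   # 1024; ~1.0e6; ~1.1e9; ~1.1e12
```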
Recent research suggests that quantum computing could provide a solution to such performance bottlenecks (Skousen et al. 2002, pp. 45-47).