Optimality Theory is most widely used in phonology, the subfield to which it was originally applied, but has been extended to other areas of linguistics such as syntax[2] and semantics.
This architecture rests on Tensor Product Representations,[4] compositional embeddings of symbolic structures in vector spaces.
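A minimal sketch of the core idea, in Python with NumPy, may help: each symbol (filler) is bound to its structural position (role) by an outer product, the whole structure is the sum of the bindings, and with orthonormal role vectors each filler can be recovered exactly by unbinding. The particular vectors and the two-symbol string here are invented purely for illustration.

```python
# Illustrative sketch of a Tensor Product Representation (TPR):
# a symbolic string is encoded as a sum of outer products of
# filler vectors (symbols) and role vectors (positions).
import numpy as np

rng = np.random.default_rng(0)

# Filler vectors for the symbols 'a' and 'b' (random, for illustration).
fillers = {"a": rng.standard_normal(4), "b": rng.standard_normal(4)}

# Orthonormal role vectors for positions 1 and 2, so that unbinding
# by matrix-vector product recovers each filler exactly.
roles = {1: np.array([1.0, 0.0]), 2: np.array([0.0, 1.0])}

# Encode the string "ab": S = f_a (x) r_1 + f_b (x) r_2.
S = np.outer(fillers["a"], roles[1]) + np.outer(fillers["b"], roles[2])

# Unbind position 1: S @ r_1 returns f_a, since the roles are orthonormal.
recovered = S @ roles[1]
print(np.allclose(recovered, fillers["a"]))  # True
```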
In work with colleagues at Microsoft Research and Johns Hopkins, Smolensky has embedded Gradient Symbolic Computation in deep neural networks to address a range of problems in reasoning and natural language processing.
With Bruce Tesar (Rutgers University), Smolensky has also contributed significantly to the study of the learnability of Optimality Theoretic grammars (in the sense of computational learning theory).
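A central result of that work is Constraint Demotion, an algorithm that learns a constraint ranking from winner-loser pairs. The following is a minimal sketch of the recursive variant in Python; the constraint names and violation counts are invented for the demonstration, and real implementations handle richer data.

```python
# A minimal sketch of Recursive Constraint Demotion (Tesar & Smolensky),
# which ranks OT constraints into strata from winner-loser pairs.

def rcd(constraints, pairs):
    """constraints: list of names; pairs: list of (winner, loser) tuples,
    each a dict mapping a constraint name to its number of violations."""
    strata = []
    remaining_c = list(constraints)
    remaining_p = list(pairs)
    while remaining_c:
        # A constraint may be ranked now if it prefers no remaining loser,
        # i.e. it never assigns the loser fewer violations than the winner.
        stratum = [c for c in remaining_c
                   if all(w[c] <= l[c] for (w, l) in remaining_p)]
        if not stratum:
            raise ValueError("data are inconsistent: no rankable constraint")
        strata.append(stratum)
        # A pair is accounted for once some newly ranked constraint
        # prefers its winner (fewer winner violations than loser).
        remaining_p = [(w, l) for (w, l) in remaining_p
                       if not any(w[c] < l[c] for c in stratum)]
        remaining_c = [c for c in remaining_c if c not in stratum]
    return strata

# Toy data: one winner-loser pair in which the winner violates
# Faithfulness less and Markedness more than the loser.
pairs = [({"Markedness": 1, "Faith": 0}, {"Markedness": 0, "Faith": 1})]
print(rcd(["Markedness", "Faith"], pairs))
# [['Faith'], ['Markedness']] -- Faith must outrank Markedness
```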
Smolensky was a founding member of the Parallel Distributed Processing research group at the University of California, San Diego, and is currently a member of the Center for Language and Speech Processing at Johns Hopkins University and of the Deep Learning Group at Microsoft Research, Redmond, Washington.