Constrained conditional models (CCMs) have attracted much attention[citation needed] within the natural language processing (NLP) community.
The framework allows one to focus on modeling the problem by providing the opportunity to incorporate domain-specific knowledge as global constraints expressed in a first-order language. Using this declarative framework frees the developer from low-level feature engineering while capturing the problem's domain-specific properties and guaranteeing exact inference.
These settings apply not only to structured learning problems such as semantic role labeling, but also to tasks that require combining multiple pre-learned components, such as summarization, textual entailment and question answering.
Constrained conditional models form a learning and inference framework that augments the learning of conditional (probabilistic or discriminative) models with declarative constraints (written, for example, using a first-order representation) as a way to support decisions in an expressive output space while maintaining modularity and tractability of training and inference.
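The definition above can be illustrated with a minimal sketch (all names and the toy scoring function are hypothetical, not from any CCM implementation): inference selects the output structure that maximizes the conditional model's score minus a penalty for each violated declarative constraint.

```python
from itertools import product

def ccm_inference(tokens, labels, score, constraints):
    """Brute-force CCM-style inference: choose the label sequence that
    maximizes the local model score minus penalties for violated
    declarative constraints (each constraint is a (check, penalty) pair)."""
    best, best_val = None, float("-inf")
    for y in product(labels, repeat=len(tokens)):
        # Score from the conditional model: here, a sum of local scores.
        val = sum(score(t, l) for t, l in zip(tokens, y))
        # Subtract a penalty rho for every constraint the output violates.
        val -= sum(rho for check, rho in constraints if not check(y))
        if val > best_val:
            best, best_val = y, val
    return list(best)

# Toy example: BIO-style tagging where the local model prefers "I"
# everywhere, but a (soft, heavily weighted) constraint requires every
# "I" to follow a "B" or another "I".
score = lambda tok, lab: 1.0 if lab == "I" else 0.0
no_i_without_b = (
    lambda y: all(l != "I" or (i > 0 and y[i - 1] in ("B", "I"))
                  for i, l in enumerate(y)),
    100.0,
)
print(ccm_inference(["w1", "w2"], ["O", "B", "I"], score, [no_i_without_b]))
```

The exhaustive search over outputs is only for clarity; in practice the maximization is delegated to a combinatorial solver, as discussed below.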
This flexibility distinguishes CCM from other learning frameworks that combine statistical information with declarative constraints, such as Markov logic networks, which emphasize joint training.
CCM can help reduce supervision by using domain knowledge (expressed as constraints) to drive learning.
Identifying the correct (or optimal) learning representation is viewed as a structured prediction process and therefore modeled as a CCM.
In all cases, research has shown that explicitly modeling the interdependencies between representation decisions via constraints results in improved performance.
The advantages of the CCM declarative formulation and the availability of off-the-shelf solvers have led to a large variety of natural language processing tasks being formulated within the framework, including semantic role labeling,[7] syntactic parsing,[8] coreference resolution,[9] summarization,[10][11][12] transliteration,[13] natural language generation[14] and joint information extraction.[15][16] Most of these works use an integer linear programming (ILP) solver to solve the decision problem.
The key advantage of using an ILP solver for the optimization problem defined by a constrained conditional model is the declarative formulation it takes as input: a linear objective function together with a set of linear constraints.
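As a rough sketch of this formulation (an exhaustive toy solver, not a real ILP package), the decision problem can be written as maximizing a linear objective over binary indicator variables subject to linear inequality constraints:

```python
from itertools import product

def solve_binary_ilp(costs, constraints):
    """Tiny exhaustive 0/1 ILP solver for illustration only: maximize the
    linear objective c.x subject to constraints a.x <= b, with x binary.
    Real CCM systems hand this formulation to an off-the-shelf ILP solver."""
    best_x, best_val = None, float("-inf")
    for x in product((0, 1), repeat=len(costs)):
        # Keep only assignments satisfying every linear constraint a.x <= b.
        if all(sum(a_i * x_i for a_i, x_i in zip(a, x)) <= b
               for a, b in constraints):
            val = sum(c * x_i for c, x_i in zip(costs, x))
            if val > best_val:
                best_x, best_val = x, val
    return best_x, best_val

# Hypothetical example: x[0] and x[1] are indicators for two mutually
# exclusive label assignments, encoded by the constraint x0 + x1 <= 1.
x, v = solve_binary_ilp([3.0, 2.0, 1.0], [([1, 1, 0], 1)])
print(x, v)  # (1, 0, 1) 4.0
```

In a CCM, the objective coefficients come from the learned conditional model's scores, while the declarative constraints are compiled into linear inequalities over the indicator variables.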