Dialog manager

The only thing common to all DMs is that they are stateful, in contrast to other parts of the DS (such as the NLU and NLG components), which are just stateless functions.
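To make the contrast concrete, here is a minimal sketch (the names are illustrative, not from any particular toolkit): the NLU is a pure function of its input, while the DM keeps state across turns, so the same input can yield different replies.

```python
def nlu(utterance):
    """Stateless: the same input always yields the same parse."""
    return {"intent": "greet" if "hello" in utterance.lower() else "other"}

class DialogManager:
    """Stateful: the reply depends on the whole conversation so far."""
    def __init__(self):
        self.turn = 0

    def respond(self, parse):
        self.turn += 1
        if parse["intent"] == "greet":
            return "Hello!" if self.turn == 1 else "Hello again!"
        return "Go on."

dm = DialogManager()
print(dm.respond(nlu("hello")))  # -> Hello!
print(dm.respond(nlu("hello")))  # -> Hello again!  (state changed the reply)
```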

The roles of a DM can roughly be divided into several groups: input control, strategic flow control, tactical flow control, and output control. Input control is needed because the same human input can have different meanings depending on the context.

One recent DS for technical assistance uses machine-learned rules to select the best terms for describing items.

Many improvements allow authors to describe a dialog at a higher level of abstraction, while putting more of the burden on the DM.

The chat-bot now starts telling the story under the "CHILDHOOD" title, as long as the bot is in control of the conversation (the user passively responds by saying things like "OK" or "right").

A common compromise between the two methods is mixed initiative, where the system starts by asking questions, but users can barge in and change the dialog direction.

However, describing such a system manually as a state chart is very tedious, since the human may say the origin first and then the destination, or vice versa.
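A frame-based (slot-filling) DM avoids this explosion of orderings: instead of enumerating every ordering as separate state-chart paths, it keeps a frame of slots and asks only for whatever is still missing. The following is a minimal sketch of that idea; the slot names and prompts are hypothetical.

```python
FRAME = {"origin": None, "destination": None}

def update_frame(frame, nlu_result):
    """Fill any slots the NLU extracted, in whatever order they arrive."""
    for slot, value in nlu_result.items():
        if slot in frame and value is not None:
            frame[slot] = value

def next_prompt(frame):
    """Ask for the first missing slot, or confirm when the frame is complete."""
    for slot, value in frame.items():
        if value is None:
            return f"What is your {slot}?"
    return f"Booking from {frame['origin']} to {frame['destination']}, correct?"

# The user may give both slots at once, or one at a time, in any order:
update_frame(FRAME, {"destination": "Boston"})
print(next_prompt(FRAME))   # -> "What is your origin?"
update_frame(FRAME, {"origin": "Seattle"})
print(next_prompt(FRAME))   # -> confirmation prompt
```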

Such DSs were developed at MIT, for example Wheels (for searching used-car ads), Jupiter (for retrieving weather forecasts), and more.

The information state approach was developed later, in projects such as Siridus and the Dipper toolkit.
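The core idea can be illustrated with a toy information state and a set of update rules, each with an applicability test and an effect (this sketch does not reflect the actual Siridus or Dipper APIs):

```python
info_state = {"shared_questions": [], "last_move": None}

def integrate_question(state, move):
    """Update rule: a user question is pushed onto the stack of
    questions under discussion."""
    if move["type"] == "ask":
        state["shared_questions"].append(move["content"])
        state["last_move"] = move

def integrate_answer(state, move):
    """Update rule: an answer resolves the topmost question under discussion."""
    if move["type"] == "answer" and state["shared_questions"]:
        state["shared_questions"].pop()
        state["last_move"] = move

UPDATE_RULES = [integrate_question, integrate_answer]

def apply_rules(state, move):
    for rule in UPDATE_RULES:
        rule(state, move)

apply_rules(info_state, {"type": "ask", "content": "departure-time?"})
apply_rules(info_state, {"type": "answer", "content": "9am"})
print(info_state["shared_questions"])  # -> [] (question resolved)
```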

Each operation has preconditions and postconditions (effects); a sketch is given below. The conversation can then be navigated by a general planner, such as Soar (originally an acronym for State, Operator And Result).
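The following STRIPS-style sketch (an illustration in Python, not Soar's actual rule syntax; the operations are invented) shows how preconditions and effects let a planner order the conversation toward a goal:

```python
OPERATIONS = [
    {"name": "ask_name",
     "pre": set(),                      # always applicable
     "add": {"knows_name"}},
    {"name": "ask_problem",
     "pre": {"knows_name"},             # greet and get a name first
     "add": {"knows_problem"}},
    {"name": "propose_fix",
     "pre": {"knows_problem"},
     "add": {"fix_proposed"}},
]

def plan(state, goal):
    """Forward-chaining planner: apply any operation whose preconditions
    hold until all goal facts are in the state."""
    steps = []
    while not goal <= state:
        for op in OPERATIONS:
            if op["pre"] <= state and not op["add"] <= state:
                state = state | op["add"]
                steps.append(op["name"])
                break
        else:
            return None  # no applicable operation: goal unreachable
    return steps

print(plan(set(), {"fix_proposed"}))
# -> ['ask_name', 'ask_problem', 'propose_fix']
```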

Using Soar allows the incorporation of complex emotional and social models; for example, the agent can decide, based on the human's actions, whether it wants to cooperate with the human, avoid them, or even attack them.

The system interacts with the user to supply "missing axioms" to help complete the proof (this is called "backward chaining").
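A toy backward chainer illustrates the idea (the rules and facts here are invented for the example): when no rule or known fact establishes a premise, the premise is treated as a missing axiom and the user is asked about it.

```python
RULES = {
    # conclusion: list of premises that together establish it
    "replace_battery": ["device_off", "battery_old"],
    "device_off": ["power_button_unresponsive"],
}

FACTS = set()

def prove(goal):
    if goal in FACTS:
        return True
    if goal in RULES:
        return all(prove(premise) for premise in RULES[goal])
    # No rule concludes this goal: it is a missing axiom, so ask the user.
    answer = input(f"Is it true that {goal.replace('_', ' ')}? (y/n) ")
    if answer.strip().lower().startswith("y"):
        FACTS.add(goal)
        return True
    return False

if prove("replace_battery"):
    print("Conclusion: replace the battery.")
```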

This approach has been implemented in several systems. Separately, the dialog manager can be connected to an expert system, giving it the ability to respond with specific expertise.

The ASR and NLU modules are usually not certain that they understood the user; they typically return a confidence score reflecting the quality of understanding.

In such cases, the DM should decide whether to accept the interpretation with no confirmation, confirm it explicitly ("Did you say X?"), or confirm it implicitly (embedding the interpretation in the next prompt). Choosing "no-confirmation" may make the dialog proceed more quickly, but may also introduce mistakes that take longer to correct later.
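A simple threshold policy is one common way to make this decision; the thresholds below are illustrative values, not taken from any particular system.

```python
def confirmation_strategy(confidence):
    """Map an ASR/NLU confidence score to a grounding action."""
    if confidence >= 0.8:
        return "no-confirmation"        # accept silently and move on
    if confidence >= 0.4:
        return "implicit-confirmation"  # echo the hypothesis in the next prompt
    return "explicit-confirmation"      # ask "Did you say ...?" before acting

for score in (0.95, 0.6, 0.2):
    print(score, "->", confirmation_strategy(score))
```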

Tactical decisions of a different type are made by Cordillera (a tutorial DS for teaching physics, built using TuTalk).

At many points during the lesson, the DM should decide, for example, whether to elicit the next step from the student or tell it to them directly, and whether to justify a step or skip the justification. These decisions affect the overall quality of learning, which can be measured by comparing pre- and post-learning exams.

Instead of letting a human expert write a complex set of decision rules, it is more common to use reinforcement learning.
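For illustration, the confirmation decision discussed above can be cast as a tiny reinforcement-learning problem. The following tabular sketch is illustrative only: the user simulator, reward values, and state buckets are all invented for the example.

```python
import random

STATES = ["low_conf", "high_conf"]     # coarse confidence buckets
ACTIONS = ["confirm", "no_confirm"]    # grounding moves
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
ALPHA, EPSILON = 0.1, 0.1

def simulate_reward(state, action):
    """Toy user simulator: confirming costs a turn (-1) but secures
    understanding (+2); skipping confirmation usually works on high
    confidence but often backfires on low confidence."""
    if action == "confirm":
        return -1 + 2
    correct = random.random() < (0.9 if state == "high_conf" else 0.4)
    return 2 if correct else -5

for _ in range(10000):
    state = random.choice(STATES)
    if random.random() < EPSILON:      # epsilon-greedy exploration
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    reward = simulate_reward(state, action)
    # One-step (bandit-style) update; a full DM would bootstrap on next state.
    Q[(state, action)] += ALPHA * (reward - Q[(state, action)])

for s in STATES:
    print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]))
# Typically learns: confirm on low confidence, skip confirmation on high.
```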