Smarter Conversations. Part 2 - Open Dialogs

This post continues the Smarter Conversations series. Today I would like to explore ways of keeping your dialogs open. Previously, in part 1, I showed how to add sentiment detection to your bot.

Waterfall

Prior to version 3.5.3 of the Bot Builder Node.js SDK, the dialog routing system in the Bot Framework was not very flexible.

Imagine the following dialog:

User >> I’m looking for screws used for printer assembly
Bot >> Sure, I’m happy to help you. 
Bot >> Is the base material metal or plastic?
User >> I don't know. Does it matter?

The question that the bot asks about the material will likely be handled as a waterfall:

const builder = require('botbuilder');

bot.dialog('productLookup', [
    function (session, args, next) {
        // ...
        builder.Prompts.choice(session, 'Is the base material metal or plastic?',
            ['metal', 'plastic'],
            { listStyle: builder.ListStyle.button });
    },
    function (session, args, next) {
        const material = args.response.entity;
        // ...
    }
]);

The user's response is neither metal nor plastic, so the bot would simply reprompt:

[Screenshot: Reprompt]

The builder.Prompts.choice call opens a new dialog that gets pushed onto the stack, and it is that prompt dialog that receives the next message. We will take a closer look in a minute.
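You can soften that behavior a little with prompt options. A quick sketch, assuming the standard retryPrompt and maxRetries prompt options, that acknowledges an unexpected answer and stops reprompting after one retry instead of looping:

builder.Prompts.choice(session, 'Is the base material metal or plastic?',
    ['metal', 'plastic'],
    {
        listStyle: builder.ListStyle.button,
        // shown when the reply does not match one of the choices
        retryPrompt: 'I can narrow the search down by base material. Metal or plastic?',
        // give up after one retry and resume the waterfall
        maxRetries: 1
    });

That still does not answer the user's question, though, which is where the reworked routing and trigger actions come in.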

Trigger Actions

The routing system was reworked in 3.5.3 and it came with a few important enhancements.

First, you no longer need the IntentDialog to recognize your users’ intents. The UniversalBot now inherits from Library and has its own set of global recognizers:

const bot = new builder.UniversalBot(connector);

// custom recognizers
const smiles = require('./app/recognizer/smiles');
const sentiment = require('./app/recognizer/sentiment');

// set up global recognizers
bot.recognizer(smiles);
bot.recognizer(sentiment);
bot.recognizer(new builder.LuisRecognizer(process.env.LUIS_ENDPOINT));
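The sentiment recognizer is the subject of part 1. The smiles module is not shown in this post, but any custom recognizer simply exposes a recognize(context, callback) function that calls back with an intent name and a score. A minimal sketch of what ./app/recognizer/smiles might look like (the emoticon check and the intent mapping are my assumptions):

// ./app/recognizer/smiles - a minimal custom recognizer sketch
module.exports = {
    recognize: function (context, callback) {
        // treat a smiley in the message as a positive signal
        if (/[:;]-?\)/.test(context.message.text || '')) {
            callback(null, { intent: 'Affirmation', score: 0.11 });
        } else {
            // no match - report the conventional 'None' intent with a zero score
            callback(null, { intent: 'None', score: 0.0 });
        }
    }
};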

Second, dialogs can now define trigger actions and be picked up even while another dialog's prompt is waiting for a response.

bot.dialog('affirmation', [
    function (session, args, next) {
        // ...
    }
]).triggerAction({ // <-- this right here
    matches: 'Affirmation'
});

If our bot had an intent recognizer that could understand that the user asked a question instead of answering the metal vs. plastic question, and if we had a way to handle it, we could break out of the waterfall using the triggerAction technique. In part 3 I will show you how a simple history engine can help you attach a sentiment like that to what was happening previously in the conversation and how your bot can intelligently handle such a diversion.

Routing and Callstack

Bot Framework maintains a callstack of the triggered dialog actions. When the user's utterance triggered the productLookup dialog, the stack only had one item coming into the first function of the waterfall:

*:productLookup

The builder.Prompts.choice adds another one:

*:productLookup
BotBuilder:Prompts <--

In the routing system that came before 3.5.3, the next message would land on BotBuilder:Prompts, would upset the choice validation logic, and would trigger a reprompt. The newer version does a much better job.
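By the way, you can inspect this stack yourself. session.dialogStack() returns the current stack, so logging it from the first step of the productLookup waterfall (a quick sketch, with the logging added purely for illustration) shows exactly what the routing system sees:

bot.dialog('productLookup', [
    function (session, args, next) {
        // each stack entry has an id like '*:productLookup' or 'BotBuilder:Prompts'
        const ids = session.dialogStack().map(function (entry) { return entry.id; });
        console.log('dialog stack: %s', ids.join(' -> '));

        builder.Prompts.choice(session, 'Is the base material metal or plastic?',
            ['metal', 'plastic']);
    },
    function (session, args, next) {
        // ...
    }
]);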

First, the UniversalBot runs the incoming message through the set of global recognizers. Then the default routing mechanism runs three parallel searches - global actions, stack actions, and active dialogs. In doing so it collects all matching route results and scores them. The best route will then be selected and executed.
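If you want to watch (or override) that selection step, the bot exposes onDisambiguateRoute. The sketch below logs the scored routes and then picks the best one the way I understand the default selection to work, using Library.bestRouteResult; treat it as an illustration rather than a drop-in replacement:

bot.onDisambiguateRoute(function (session, routes) {
    // routes holds the scored results of the parallel searches
    routes.forEach(function (r) {
        console.log('%s from %s scored %d', r.routeType, r.libraryName, r.score);
    });

    // pick and run the best route, falling back to the active dialog
    const best = builder.Library.bestRouteResult(routes, session.dialogStack(), bot.name);
    if (best) {
        bot.library(best.libraryName).selectRoute(session, best);
    } else {
        session.routeToActiveDialog();
    }
});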

Handling Interruptions

The default behavior of launching a new dialog via its triggerAction is to clean up the callstack and start fresh. You can do two things to handle the interruption.

First, you can override the default behavior with onSelectAction. Instead of resetting the callstack, you can push the newly triggered dialog on top of it. This returns the conversation to where it was interrupted once the newly triggered dialog finishes:

bot.dialog('affirmation', [
    function (session, args, next) {
        // ...
    }
]).triggerAction({
    matches: 'Affirmation',

    // <-- override how the dialog is launched
    onSelectAction: function (session, args, next) {
        session.beginDialog(args.action, args);
    }
});

Second, you can attach an onInterrupted handler to the dialog that could be interrupted and message the user about what's happening.
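Here is a sketch of what that could look like on the productLookup dialog. I am assuming a hypothetical ProductLookup intent as the trigger, and my reading is that onInterrupted receives the session plus details of the dialog that is taking over:

bot.dialog('productLookup', [
    // ...waterfall steps...
]).triggerAction({
    matches: 'ProductLookup', // hypothetical intent name

    // called when another dialog's trigger interrupts this one
    onInterrupted: function (session, dialogId, dialogArgs, next) {
        session.send('Let me put the product lookup on hold for a moment.');
        if (typeof next === 'function') {
            next();
        }
    }
});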

Open All The Way

And if that is not flexible enough, you can define your own dialog's behavior by overriding begin, replyReceived, and even recognize on your dialogs:

bot.dialog('custom', Object.assign(new builder.Dialog(), {
    begin: (session) => {
        session.send('I am built custom');
    },
    replyReceived: (session) => {
        session.endDialog();
    }
}));
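The recognize override is how a dialog scores the incoming message for itself; this is the same mechanism prompts use to claim the reply while they sit on top of the stack. A sketch of what that could look like (the keyword check and the reply texts are my own invention):

bot.dialog('customRecognize', Object.assign(new builder.Dialog(), {
    begin: (session) => {
        session.send('Ask me anything about screws.');
    },
    // score the incoming message; a high score keeps the message in this dialog
    recognize: (context, callback) => {
        const score = /screw/i.test(context.message.text || '') ? 1.0 : 0.1;
        callback(null, { score: score });
    },
    replyReceived: (session) => {
        session.endDialog('Thanks, I got your reply.');
    }
}));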

I will surely come back to this technique when I show you how to drive your dialogs from metadata rather than code. It comes in very handy when building product recommendation bots. Stay tuned!

Smarter Conversations. Part 1 - Sentiment

This post starts a series of short articles on building smarter conversations with Microsoft Bot Framework. I will explore detecting sentiment (part 1), keeping the dialog open-ended (part 2), using a simple history engine to help the bot be context-aware (part 3), and recording a full transcript of a conversation to intelligently hand it off to a human operator.

Affirmation

Imagine the following dialog:

User >> I’m looking for screws used for printer assembly
Bot >> Sure, I’m happy to help you. 
Bot >> Is the base material metal or plastic?
User >> metal
Bot >> [lists a few recommendations]
Bot >> [mentions screws that can form their own threads]
User >> Great! I think that's what I need
Bot >> [recommends more information and an installation video]

It’s not hard to train an NLU service like LUIS to see a product lookup intent in the first sentence. A screw would be an extracted entity. Following a database lookup, the bot then clarifies an important attribute to narrow the search down to either plastic or metal screws and presents the results.

The user's "Great! I think that's what I need" is a positive affirmation. It is neither an intent that needs to be fulfilled nor an answer to a question asked by the bot. And yet it presents an opportunity for a smarter bot to be more helpful and act as an advisor.

Sentiment

The Text Analytics API is part of Microsoft's Cognitive Services offering. Its /text/analytics/v2.0/sentiment endpoint takes a single HTTP request and scores a text fragment or a sentence on a scale from 0 (negative) to 1 (positive).
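For the request shown in the recognizer below, the response comes back as a JSON document with a sentiment score per submitted document, roughly like this (the score value here is made up):

{
  "documents": [
    { "id": "-", "score": 0.96 }
  ],
  "errors": []
}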

I decided to make the expressed sentiment look like an intent for my bot and so I built a custom recognizer:

const request = require('request-promise-native');

const url = process.env.SENTIMENT_ENDPOINT;
const apiKey = process.env.SENTIMENT_API_KEY;

module.exports = {
    recognize: function (context, callback) {
        request({
            method: 'POST',
            url: `${url}`,
            headers: {
                'Content-Type': 'application/json',
                'Ocp-Apim-Subscription-Key': `${apiKey}`
            },
            body: {
                "documents": [
                    {
                        "language": "en",
                        "id": "-",
                        "text": context.message.text
                    }
                ]
            },
            json: true
        }).then((result) => {
            if (result && result.documents) {
                const positive = result.documents[0].score >= 0.5;

                callback(null, {
                    intent: positive ? 'Affirmation' : 'Discouragement',
                    score: 0.11 // <-- just above the threshold
                });
            } else {
                callback();
            }
        }).catch((reason) => {
            console.log('Error detecting sentiment: %s', reason);
            callback();
        });
    }
};

Context

Now I can attach a dialog that is triggered when the bot detects an affirmation and no other intent scores higher. The default intent threshold is 0.1, which is why a detected sentiment is given a score of 0.11.

const sentiment = require('./app/recognizer/sentiment');

bot.recognizer(sentiment);

bot.dialog('affirmation', [
    function (session, args, next) {
        // ...
    }
]).triggerAction({
    matches: 'Affirmation'
});

Unlike other intents, however, a detected sentiment on its own is not enough to react to properly.

The bot needs to understand the context to properly react to an affirmation or discouragement expressed by a user. The bot also needs to be able to handle an interrupted dialog if an affirmation (or an expression of frustration) came in the middle of a waterfall, for example.

I will get to it in part 2. Stay tuned.
