Smarter Conversations. Part 2 - Open Dialogs

This post continues the smarter conversations series and today I would like to explore ways of keeping your dialogs open. Previously, in part 1, I showed how to add sentiment detection to your bot.

Waterfall

Prior to 3.5.3, the dialog routing system in the Bot Framework was not very flexible.

Imagine the following dialog:

User >> I’m looking for screws used for printer assembly
Bot >> Sure, I’m happy to help you. 
Bot >> Is the base material metal or plastic?
User >> I don't know. Does it matter?

The question that the bot asks about the material will likely be handled as a waterfall:

const builder = require('botbuilder');

bot.dialog('productLookup', [
  function (session, args, next) {
    // ...
    builder.Prompts.choice(session, 'Is the base material metal or plastic?',
      ['metal', 'plastic'],
      { listStyle: builder.ListStyle.button });
  },
  function (session, args, next) {
    const material = args.response.entity;
    // ...
  }
]);

The user’s response is neither metal nor plastic and the bot would simply reprompt:

Reprompt

The builder.Prompts.choice opens up a new dialog that gets pushed onto the stack and that’s what receives the next message. We will take a closer look in a minute.

Trigger Actions

The routing system was reworked in 3.5.3 and it came with a few important enhancements.

First, you no longer need the IntentDialog to recognize your users’ intents. The UniversalBot now inherits from Library and has its own set of global recognizers:

const bot = new builder.UniversalBot(connector);

// custom recognizers
const smiles = require('./app/recognizer/smiles');
const sentiment = require('./app/recognizer/sentiment');

// set up global recognizers
bot.recognizer(smiles);
bot.recognizer(sentiment);
bot.recognizer(new builder.LuisRecognizer(process.env.LUIS_ENDPOINT));

Second, dialogs can now define trigger actions and be triggered even while another dialog's prompt is waiting for a response.

bot.dialog('affirmation', [
  function (session, args, next) {
    // ...
  }
]).triggerAction({ // <-- this right here
  matches: 'Affirmation'
});

If our bot had an intent recognizer that could tell that the user asked a question instead of answering the metal vs. plastic prompt, and if we had a dialog to handle it, we could break out of the waterfall using the triggerAction technique. In part 3 I will show you how a simple history engine can help you attach a detected intent or sentiment to what was happening earlier in the conversation and how your bot can intelligently handle such a diversion.
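
Here is a minimal sketch of that idea. The Question intent and the materialQuestion dialog are purely illustrative and assume you have trained LUIS accordingly; how such an interruption affects the callstack is covered below.

bot.dialog('materialQuestion', [
  function (session, args, next) {
    // hypothetical: answer the user's question about the base material
    session.endDialog('Good question! The base material determines which screws I can recommend.');
  }
]).triggerAction({
  matches: 'Question' // <-- assumes a 'Question' intent trained in LUIS
});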

Routing and Callstack

Bot Framework maintains a callstack of active dialogs. When the user's utterance triggered the productLookup dialog, the stack had only one item coming into the first function of the waterfall:

*:productLookup

The builder.Prompts.choice adds another one:

*:productLookup
BotBuilder:Prompts <--

In the routing system that came before 3.5.3, the next message would land onto BotBuilder:Prompts, would upset the choice validation logic, and would trigger a reprompt. The newer version does a much better job.

First, the UniversalBot runs the incoming message through the set of global recognizers. Then the default routing mechanism runs three parallel searches - global actions, stack actions, and active dialogs. In doing so it collects all matching route results and scores them. The best route will then be selected and executed.

Handling Interruptions

The default behavior of launching a new dialog via its triggerAction is to clean up the callstack and start fresh. You can do two things to handle the interruption.

First, you can override the default behavior with onSelectAction. Instead of resetting the callstack, you can add the newly triggered dialog on top of it. The conversation then returns to where it was interrupted once the newly triggered dialog finishes:

bot.dialog('affirmation', [
  function (session, args, next) {
    // ...
  }
]).triggerAction({
  matches: 'Affirmation',

  // <-- override how the dialog is launched
  onSelectAction: function (session, args, next) {
    session.beginDialog(args.action, args);
  }
});

Second, you can attach the onInterrupted handler to the dialog that could be interrupted and message the user about what is happening.
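
Here is a sketch of what that could look like on the productLookup dialog. The ProductLookup intent is illustrative, and the handler signature is an assumption on my part, so verify it against the SDK version you are using:

bot.dialog('productLookup', [
  // ... the waterfall from earlier
]).triggerAction({
  matches: 'ProductLookup', // <-- illustrative intent name

  onInterrupted: function (session) {
    // assumption: the exact handler signature may differ between SDK versions
    session.send('Let me get to that first. We will come back to picking your screws in a moment.');
  }
});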

Open All The Way

And if that is not flexible enough, you can define your own dialog behavior by overriding begin, replyReceived, and even recognize on your dialogs:

bot.dialog('custom', Object.assign(new builder.Dialog(), {
  begin: (session) => {
    session.send('I am built custom');
  },
  replyReceived: (session) => {
    session.endDialog();
  }
}));

I will surely come back to this technique when I show you how to drive your dialogs from metadata rather than code. It comes in very handy when building product recommendation bots. Stay tuned!

Smarter Conversations. Part 1 - Sentiment

This post starts a series of short articles on building smarter conversations with Microsoft Bot Framework. I will explore detecting sentiment (part 1), keeping the dialog open-ended (part 2), using a simple history engine to help the bot be context-aware (part 3), and recording a full transcript of a conversation to intelligently hand it off to a human operator (part 4).

Affirmation

Imagine the following dialog:

User >> I’m looking for screws used for printer assembly
Bot >> Sure, I’m happy to help you. 
Bot >> Is the base material metal or plastic?
User >> metal
Bot >> [lists a few recommendations]
Bot >> [mentions screws that can form their own threads]
User >> Great! I think that's what I need
Bot >> [recommends more information and an installation video]

It’s not hard to train an NLU service like LUIS to see a product lookup intent in the first sentence. A screw would be an extracted entity. Following a database lookup, the bot then clarifies an important attribute to narrow the search down to either plastic or metal screws and presents the results.

The user's sentence "Great! I think that's what I need" is a positive affirmation. It is not an intent that needs to be fulfilled, nor an answer to a question asked by the bot. And yet it presents an opportunity for a smarter bot to be more helpful and act as an advisor.

Sentiment

The Text Analytics API is part of Microsoft's Cognitive Services offering. Its /text/analytics/v2.0/sentiment endpoint scores a text fragment or a sentence on a scale from 0 (negative) to 1 (positive) with a single HTTP request.

I decided to make the expressed sentiment look like an intent for my bot and so I built a custom recognizer:

const request = require('request-promise-native');

const url = process.env.SENTIMENT_ENDPOINT;
const apiKey = process.env.SENTIMENT_API_KEY;

module.exports = {
  recognize: function (context, callback) {
    request({
      method: 'POST',
      url: `${url}`,
      headers: {
        'Content-Type': 'application/json',
        'Ocp-Apim-Subscription-Key': `${apiKey}`
      },
      body: {
        "documents": [
          {
            "language": "en",
            "id": "-",
            "text": context.message.text
          }
        ]
      },
      json: true
    }).then((result) => {
      if (result && result.documents) {
        const positive = result.documents[0].score >= 0.5;

        callback(null, {
          intent: positive ? 'Affirmation' : 'Discouragement',
          score: 0.11 // <-- just above the threshold
        });
      } else {
        callback();
      }
    }).catch((reason) => {
      console.log('Error detecting sentiment: %s', reason);
      callback();
    });
  }
};

Context

Now I can attach a dialog that would be triggered when the bot detects an affirmation and no other intent scores higher. The default intent threshold is 0.1 and that’s why a detected sentiment is given 0.11.

const sentiment = require('./app/recognizer/sentiment');

bot.recognizer(sentiment);

bot.dialog('affirmation', [
  function (session, args, next) {
    // ...
  }
]).triggerAction({
  matches: 'Affirmation'
});

Unlike other intents, however, a detected sentiment alone is not enough to react to properly.

The bot needs to understand the context to properly react to an affirmation or discouragement expressed by a user. The bot also needs to be able to handle an interrupted dialog if an affirmation (or an expression of frustration) came in the middle of a waterfall, for example.

I will get to it in part 2. Stay tuned.

Be a Human

I am not a Muslim. I am not a national of the seven countries banned from entering the United States. But I know the feeling of suddenly not being able to get back home.

It was 2009. By that time, I had lived in the US for almost two years on my L1 work visa, and so had my wife and my then six-year-old son. My daughter had been born a US citizen just five months earlier. A year before that we were among the lucky winners of the diversity lottery and decided to do consular processing. Instead of filing for adjustment of status while in the states, you basically go back to your home country and visit the embassy to get the immigration visa. You then reenter the US in the new status. The closest embassy that handled immigration cases for Belarus nationals was in Warsaw, Poland. We figured we would first go back to Belarus and do all the required paperwork there, then stop for a few days in Warsaw to get the new visas, and then fly back to the states right from there.

It was March. I took a week off from work, my son took a week off from school, and off we went. Happy to see our parents and our families. Happy to be on the road together. Looking forward to a new chapter in our lives.

At the embassy, we handed our documents to the clerk collecting everybody’s paperwork before the appointment with the consul. The lady carefully looked over everything and said we had two problems.

First, there was a small problem. My wife was married before and so she had two previous last names. We only had proof of no criminal records in the home country in her last two names, not her maiden name.

Then, there was a real problem. Since we were both in IT, the lady said, we would need to wait for a special processing that could take about two months.

It took a moment to sink in. If we didn't get our immigration visas that day, we couldn't continue our journey to the states. Our L1/L2 visas were only one year long and had long expired. It was perfectly legal to remain in the states for as long as my L1 petition was valid, but once we crossed the border, we all needed a valid visa to reenter. I asked if I could get my L1 visa renewed instead, but was advised against it. Not to hurt my immigration case, I was told.

I remember how I felt. Helpless. Empty. Like my life froze. My home was across the ocean. A little townhouse we were renting. Our cars. My job. My son’s first grade. One flight away and yet completely unreachable.

In all fairness, we wouldn't have needed to go back to a war- or terror-torn country. I could even continue to work remotely. Our parents were alive and well and would have been happy to accommodate us while we looked for our own temporary place. We traveled together as a family with our five-month-old daughter, so we wouldn't be separated either. The biggest disruption would have been my son's school, but even that we would probably have figured out. And yet I felt empty, helpless, upset, and very, very, very sad.

We waited for more than two hours before it was our turn to talk to the consul. I could not predict what would happen next.

The consul started the interview. I remember our small talk. She was smiling and was very polite and so were we. She said she was happy to see green card applicants who had their lives together and knew what and why they were doing. We smiled back and said “well, yes, US is our home now. We live our lives there.”

“You guys have two problems”, she said. We nodded.

“How old were you when you married for the first time?”, the consul asked my wife. “21”, Maryna replied.

“Alright”, she said. “You probably were too young to have any encounters with the law at the time, right?”. We smiled and confirmed the consul’s very valid assumption. “Not a problem then”, the consul smiled.

“The next problem, however, is more serious”, she said. “Yes, we know, we were told”, we replied.

At this time, I thought I knew what would follow. A very polite statement that she was very sorry but that we would need to wait for about two months to get the required clearance.

“You guys don’t build software for nuclear plants, do you? Don’t work on military systems?”. I don’t think I knew where she was going with this. “No, of course not. We build web apps. You know, hotel room bookings, health insurance quotes, stuff like that”.

“Alright”, the consul smiled. “I can’t give you your green card today, though”.

I probably said “I know”… “But I will be happy to give it to you tomorrow. Come back in the afternoon. Congratulations!”.

And just like that we were cleared. By a human who had the authority and was not afraid to use it. There are rules, and regulations, and policies, and executive orders. And then there are humans.

Be a human.

Ecommerce Chatbot

I have published a short screencast about the chatbot that I built and have also shared the code on GitHub.

Enjoy!

Understanding Date Ranges in Your Chatbot

When your chatbot performs tasks of a personal assistant like scheduling meetings or generating reports, you need to make sure it can understand dates and date ranges.

Step 1. Resolve

LUIS has a set of pre-built entities to recognize date and time (builtin.datetime). It will understand when your users say tomorrow, October 1st, or next week, for example, and will convert that to a date or a duration. A couple of examples:

// tomorrow
"resolution": { "date": "2016-11-20" }

// last quarter
"resolution": { "date": "XXXX-Q4" }

// last year
"resolution": { "date": "2015" }

// last two years
"resolution": { "duration": "P2Y" }

// last week
"resolution": { "date": "2016-W45" }

// past three weeks
"resolution": { "duration": "P3W" }

// this month
"resolution": { "date": "2016-11" }

// last ten months
"resolution": { "duration": "P10M" }

Unfortunately, the only quarter-based duration LUIS understands right now is last quarter. It doesn’t recognize this quarter, next quarter, or plurals like last three quarters.

As you can see, the resolutions are indicative, use different formats, and need to be parsed to get converted to dates and date ranges.

Step 2. Parse

When LUIS detects a datetime entity (e.g. tomorrow) it will send back the resolution along with the extracted entity itself (the word tomorrow in this case).
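
For tomorrow, for example, the relevant part of the LUIS response looks roughly like this (the exact entity type name is an assumption on my part and may differ between LUIS versions):

"entities": [
  {
    "entity": "tomorrow",
    "type": "builtin.datetime.date",
    "resolution": {
      "date": "2016-11-20"
    }
  }
]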

First, I try to understand what time span the user asked about:

const span =
  ['day', 'week', 'month', 'quarter', 'year'].find(s => entity.match(s));

Then I parse the dates and durations with moment:

const moment = require('moment');

let date;

if (resolution.date) {
  // date-style resolutions: '2016-11-20', 'XXXX-Q4', '2016-W45', '2016-11', '2015'
  const resolved = resolution.date.replace('XXXX', moment().year());
  date = moment(resolved, ['YYYY-MM-DD', 'YYYY-Q', 'YYYY-W', 'YYYY']);
} else {
  // duration-style resolutions: 'P2Y', 'P3W', 'P10M'
  const duration = moment.duration(resolution.duration);
  const sign = ['last', 'past', 'previous'].some(p => entity.match(p)) ? -1 : +1;
  date = moment().add(sign * duration.as('hours'), 'hours');
}

// normalized result
return date.startOf(span || 'day');

Step 3. Understand

Now we have the date representing the beginning of the period the user asked about. If today was Friday 11/18, for example, and you asked for last three weeks, the date would be Sun, Oct 23 (weeks start on Sunday in US unless you use isoweek with moment).
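
Here is a quick illustration of that difference with moment, assuming today is Friday 2016-11-18:

const moment = require('moment');

// 'last three weeks' resolves to a duration of P3W, so we go back three weeks
// and normalize to the beginning of that week
moment('2016-11-18').subtract(3, 'weeks').startOf('week');    // Sun, Oct 23 (US locale)
moment('2016-11-18').subtract(3, 'weeks').startOf('isoWeek'); // Mon, Oct 24 (ISO weeks)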

One date is not enough though for utterances like:

please generate a service cost report for the last two weeks

Your report generation service/API is likely to require a date range.

LUIS can also understand numbers spelled as digits like 2 or 5 or spelled as words like two or five. A phrase like last two weeks will produce two entities:

"entities": [
{
"entity": "two",
"type": "builtin.number"
},
{
"entity": "last two weeks",
"type": "builtin.datetime.duration",
"resolution": {
"duration": "P2W"
}
}
]

The last thing I need to do to understand the range is to extract the number and do the date math:

const moment = require('moment');
const builder = require('botbuilder');

const numbers = {
  'one': 1,
  'two': 2,
  'three': 3,
  // you got the idea
};

// the entity here is the 'builtin.number'
const range = builder.EntityRecognizer.parseNumber(entity)
  || numbers[entity]
  || 1;

const end = moment(date)
  .add(range, span)
  .subtract(1, 'day')
  .endOf('day');

And that’s it. Now last three weeks is understood as 10/23 - 11/12. And last quarter will be 10/1 0:00 - 12/31 23:59.

Intent Recognizers For Your Chatbot

Two weeks ago I attended API Strat in Boston, where I gave a talk on cognitive APIs and conversational interfaces and walked through an e-commerce chatbot that I built. My presentation is on SlideShare. I have learned a lot about chatbots and now I feel an urge to write about it.

Skype conversation excerpt

Intents

My bot is using the intent dialog from the Microsoft Bot Framework:

const bot = new builder.UniversalBot(...);
const intents = new builder.IntentDialog(...);

intents.matches('Greeting', '/welcome');
intents.matches('ShowTopCategories', '/categories');
intents.matches('Explore', '/explore');
intents.matches('ShowProduct', '/showProduct');
intents.matches('AddToCart', '/addToCart');
intents.matches('ShowCart', '/showCart');
intents.matches('Checkout', '/checkout');
intents.matches('Reset', '/reset');
intents.matches('Smile', '/smileBack');
intents.onDefault('/confused'); // no intent recognized

bot.dialog('/', intents);

bot.dialog('/confused', [
  function (session) {
    session.endDialog('Sorry, I didn\'t understand you');
  }
]);

The intent dialog associates a user’s intent like Explore or Checkout with a specific dialog that knows how to respond.

It feels very much like routing in a web framework where given a specific URL pattern, the request will be routed to a controller that knows how to handle it.

Users don't spell out their intents like that though. And so the first thing my bot needs to do is learn to recognize them. The simplest way to trigger a dialog handler in response to a user's utterance is by matching it with a regex. More sophisticated logic requires an intent recognizer.
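
For example, greetings or explicit keywords can be routed with a plain regex before any NLU service gets involved (the patterns below are just an illustration):

intents.matches(/^(hi|hello|hey)\b/i, '/welcome');
intents.matches(/\bcheck\s*out\b/i, '/checkout');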

Intent Recognizers

An intent recognizer is basically a service that can understand users’ utterances. Given a text message it will return a list of intents that it inferred from it along with supporting entities. Here’s how it looks in LUIS (language understanding service from Microsoft):

LUIS intents and entities

The Explore intent was recognized along with two supporting entities that I trained it for. Here’s another way of looking at it:

curl -v "https://api.projectoxford.ai/luis/v2.0/apps/{app-id}" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: {subscription-key}" \
  -G \
  -d "q=I am looking for touring bikes. Do you have some?"

And the response:

{
  "query": "I am looking for touring bikes. Do you have some?",
  "topScoringIntent": {
    "intent": "Explore",
    "score": 0.9994699,
    "actions": [
      // ...
    ]
  },
  "entities": [
    {
      "entity": "touring",
      "type": "Detail",
      "score": 0.9710912,
      // ...
    },
    {
      "entity": "bikes",
      "type": "Entity",
      "score": 0.943606555,
      // ...
    }
  ],
  // ...
}

Microsoft Bot Framework comes with built-in support for LUIS in the form of LuisRecognizer.

Custom Recognizers

Not everything your users say has to be sent to a natural language service to extract the intent. Buttons and tappable images, for example, can post back bot-specific commands like /show:123456789 that you can easily recognize with a regex. And if you want your bot to smile back at a smile sent to it, you don't need to train a linguistic model either.
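
To make the first point concrete, here is a sketch of how a card button could post such a command back to the bot; the card, title, and product id are made up:

const card = new builder.HeroCard(session)
  .title('Touring Bike')
  .buttons([
    // tapping the button sends '/show:123456789' back to the bot
    builder.CardAction.postBack(session, '/show:123456789', 'Show details')
  ]);

session.send(new builder.Message(session).attachments([card]));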

It turns out, building your own recognizer is not hard at all. I have built a few for my e-commerce bot and here’s how it works.

First, know that the Bot Framework supports sending a message through a number of recognizers at the same time. You can chain them or run them all in parallel:

const intents = new builder.IntentDialog({
  recognizers: [
    commands,
    greeting,
    smiles,
    new builder.LuisRecognizer(process.env.LUIS_ENDPOINT)
  ],
  intentThreshold: 0.2,
  recognizeOrder: builder.RecognizeOrder.series
});

The recognizer itself is a very simple interface with only one method - recognize. Here’s how you would detect a smile, for example:

module.exports = {
  recognize: function (context, callback) {
    const text = context.message.text;
    const smiles = text.match(/<ss type="(\w+?)">(.+?)<\/ss>/);

    if (smiles) {
      callback.call(null, null, {
        intent: 'Smile',
        score: 1,
        entities: [
          // smiles[1] and smiles[2]
          // have the details you need to smile back
        ]
      });
    } else {
      callback.call(null, null, {
        intent: null,
        score: 0
      });
    }
  }
};

And here’s another one that understands commands:


const unrecognized = { intent: null, score: 0 };

const commands = {
  parse: function (context, text) {
    const parts = text.split(':');
    const command = parts[0];

    const action = this[command] || this[command.slice(1)];
    if (!action) {
      return unrecognized;
    } else {
      return action.call(this, context, ...parts.slice(1));
    }
  },
  // ...
};

module.exports = {
  recognize: function (context, callback) {
    const text = context.message.text;

    if (!text.startsWith('/')) {
      callback.call(null, null, unrecognized);
    } else {
      callback.call(null, null, commands.parse(context, text));
    }
  }
};

That’s it for now but there is more to come. Stay tuned!

How To Promisify Moltin APIs

If you’ve read my last post, then you know that I am having all kinds of geeky fun with Moltin and its APIs. Today I will show you how you can quickly promisify them all.

Moltin APIs

Moltin APIs are all asynchronous HTTP calls with very lightweight wrappers for JavaScript, Python, and many other languages. I am using it with node.js and the main pattern is fairly straightforward (look for js examples).

I am writing a batch import to create a playground product catalog so I find myself doing a lot of this:

const inventory = Promise.all(goGetTheData());

inventory.then((data) => {
  return Promise.all(data.products.map(product => {
    return new Promise((resolve, reject) => {
      moltin.Authenticate(function () {
        moltin.Product.Create({
          // .. product attributes
        }, (result) => {
          resolve(result);
        }, (error, details) => {
          reject(details);
        });
      });
    });
  }));
}).then((products) => {
  return Promise.all(products.map(p => {
    return new Promise((resolve, reject) => {
      moltin.Authenticate(function () {
        // ...
      });
    });
  }));
}).then((modifiers) => {
  // ... you got the idea, a lot of noise
});

I wish I could instead write:

inventory.then((data) => Promise.all(data.products.map(p => {
  return moltin.Product.Create(...);
}))).then((products) => Promise.all(products.map(p => {
  return moltin.Modifier.Create(...);
}))).then((modifiers) => {
  // ... a lot cleaner and more readable, isn't it?
});

Promisification

There’s moltin-util on NPM that uses Promises but it seems to introduce a new API and I would like to retain the original. Here’s what I quickly put together and I now wonder if it’s worth posting to NPM. Is it? Let me know!

const request = require('request');
const fs = require('fs');

const promisify = (moltin) => {
  const promisified = {};

  const executor = (actor, action) => function () {
    const args = [...arguments];

    let success = (result, pagination) => {
      if (result && pagination) {
        result.pagination = pagination;
      }

      return result;
    };

    let error = (error, details) => details;

    if (typeof (args[args.length - 1]) === 'function') {
      if (typeof (args[args.length - 2]) === 'function') {
        error = args.pop();
        success = args.pop();
      } else {
        success = args.pop();
      }
    }

    return new Promise((resolve, reject) => {
      moltin.Authenticate(function () {
        actor[action].call(actor, ...args,
          (result, pagination) => {
            resolve(success.call(null, result, pagination));
          },
          (err, details) => {
            console.error(details);
            reject(error.call(null, details));
          });
      });
    });
  };

  Object.keys(moltin)
    .filter(key => key !== 'options' && typeof (moltin[key]) === 'object')
    .forEach(member => {
      promisified[member] = {};
      let actor = moltin[member];

      Object.keys(actor.__proto__)
        .concat(Object.keys(actor.__proto__.__proto__))
        .filter(action => typeof (actor[action]) === 'function')
        .forEach(action => {
          promisified[member][action] = executor(actor, action);
        });
    });

  return promisified;
};

module.exports = function (moltin) {
  return promisify(moltin);
};

My code is now a whole lot cleaner and smaller too. I will soon post it on GitHub so stay tuned! Here is, for example, how I would go about deleting a whole bunch of products:

const moltin = require('moltin')({
  publicId: process.env.MOLTIN_PUBLIC_ID,
  secretKey: process.env.MOLTIN_SECRET_KEY
});
const moltin_p = require('./promisify-moltin')(moltin);

moltin_p.Product.List(null)
  .then((products) => Promise.all(products.map(p => {
    console.log('Requesting a delete of %s', p.title);
    return moltin_p.Product.Delete(p.id);
  })))
  .then((result) => {
    console.log('Deleted %s products', result.length);
  })
  .catch((error) => {
    console.error(error);
  });

Sequencing Asynchronous Calls in JavaScript

I am playing with Moltin for my upcoming talk at the API Strategy conference and it generates all kinds of blog post ideas.

Context

Moltin is the API-first (or I would even argue the API-only) commerce platform. A "new kid on the block", a recent Y Combinator graduate with a little more than $2M in seed funding. My talk is about cognitive APIs and smarter apps and I will be using a conversational e-commerce chatbot as an example. I picked Moltin as my commerce backend because it's ridiculously easy to get started with, requires no upfront setup, and seems to provide a simple yet rich API that covers all my scenarios. Plus, their free tier gives me 30,000 requests per month.

You can't transact with a commerce platform if it doesn't have a product catalog. Products have variants (e.g. a t-shirt can come in different sizes and different colors) and different commerce platforms approach setting up this hierarchy differently. In Moltin, you first create a main product. Then you add modifiers (in my case - color and size). And then you add variations for each modifier. Moltin will then create the actual variants matrix behind the scenes. If, for example, you add blue, red, and white variations to the color modifier and S, M, L to the size modifier, you will end up with nine variations in total (every size available in every color).
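
A quick back-of-the-envelope illustration of the matrix Moltin builds behind the scenes, using the values from the example above:

const colors = ['blue', 'red', 'white'];
const sizes = ['S', 'M', 'L'];

const matrix = [];
colors.forEach(color => sizes.forEach(size => matrix.push(`${color} / ${size}`)));

console.log(matrix.length); // 9 variants, every size in every color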

Moltin APIs

Moltin APIs are lean HTTP endpoints that understand application/x-www-form-urlencoded and multipart/form-data and return back JSON. The team also supplies lightweight wrappers for JavaScript, Python, and other languages. Here is, for example, what creating a variation looks like in JavaScript:

const moltin = require('moltin')({
  publicId: process.env.MOLTIN_PUBLIC_ID,
  secretKey: process.env.MOLTIN_SECRET_KEY
});

moltin.Authenticate(function () {
  moltin.Variation.Create(productId, modifierId, {
    title: value
  },
  function (result) {
    // result is the successfully created variation
  },
  function (error, details) {
    // oops
  });
});

I am scripting the creation of the product catalog using Adventure Works as my sample dataset, so I need to run a lot of these asynchronous callback-style requests in order. I do it with Promises:

const addVariation = (productId, modifierId, value) => {
  return new Promise((resolve, reject) => {
    moltin.Authenticate(function () {
      moltin.Variation.Create(productId, modifierId, {
        title: value
      },
      function (result) {
        resolve(result);
      },
      function (error, details) {
        reject(details);
      });
    });
  });
};

And now I can chain all my actions with .then().
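
For example, creating a few variations one after another now reads as a flat chain (the modifier ids below are made up):

addVariation(productId, colorModifierId, 'red')
  .then(() => addVariation(productId, colorModifierId, 'blue'))
  .then(() => addVariation(productId, sizeModifierId, 'S'))
  .then(() => console.log('variations created'))
  .catch((details) => console.error(details));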

Problem

I faced an interesting challenge as I was creating the variations for my products. Here’s how I do it. First, I figure out what modifiers I need to create and then for each modifier I collect the values. The result looks something like this:

const mods = [{
  title: 'color',
  values: ['red', 'blue', 'white']
}, {
  title: 'size',
  values: ['S', 'M', 'L']
}];

Now I can recursively .map() this structure into an array of Promises each creating a required variation in Moltin:

// somewhere on the chain
.then((mods) => {
  // _.flatMap comes from lodash
  return Promise.all(_.flatMap(mods, mod => {
    return mod.values.map(value => {
      return new Promise((resolve, reject) => {
        // create the variation in Moltin
      });
    });
  }));
})

“Looking good”, I thought, until I found out that creating all the variations asynchronously in no particular order and maybe even in parallel confuses the logic on the Moltin side that creates the product matrix (details).

Since I can’t trigger a matrix rebuild via the API, the solution was to sequence the variants creation.

Solution

Instead of just mapping the modifiers to a list of Promises running somewhat concurrently, I needed to chain the variants creation one after another. I also needed to collect all created variations into a list for the next step in the bigger chain.

Nested reduce to the rescue. First, the addVariation now keeps track of the results:

const addVariation = (productId, modifierId, value, bag) => {
  return new Promise((resolve, reject) => {
    moltin.Authenticate(function () {
      moltin.Variation.Create(productId, modifierId, {
        title: value
      },
      function (result) {
        bag.push(result);
        resolve(result);
      }
      // error handling skipped
      );
    });
  });
};

And the Promise.all() has to be converted into a linear chain of promises, each creating a single variation:

// somewhere on the chain
.then(mods => {
  var variations = [];

  const chained = mods.reduce((chain, mod) => {
    return mod.values.reduce((chain, value) => {
      return chain.then(() => addVariation(productId, mod, value, variations));
    }, chain);
  }, Promise.resolve());

  return chained.then(() => Promise.resolve(variations));
})

It works great, but I feel like it could be cleaner with Rx. If you know how to convert this to observables without explicitly managing the collection of created variants, please drop me a line. Thanks!

Content Work Automation with Text Analytics API

In my last post I used Computer Vision APIs to automate image tagging. Let's see if machine learning APIs can help us automate tedious content work like SEO keyword generation and proofreading.

Microsoft Cognitive Services offers the Text Analytics API, which can extract key phrases from text and can also do sentiment analysis. I will again use Sitecore, its Habitat demo site, and Powershell Extensions to automate everything, though the concepts should apply to any modern CMS.

Key Phrases

It's probably not hard to come up with a decent list of keywords for the body of text that makes up a web page. As the size of your site grows, however, the task becomes very tedious very quickly if performed manually. Add to that an editorial calendar with frequent updates and you now run the risk of obsolete keywords adversely impacting your SEO. Add to that a component-based approach with proper content reuse and flexibility in the hands of your content teams and it's even harder to track what exactly each page renders on the live site. Everything that can be automated should be automated.

Getting keywords for a given text fragment from Text Analytics API is very straightforward:

$keywords = Invoke-WebRequest `
  -Uri 'https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/keyPhrases' `
  -Body "{'documents': [ { 'language': 'en', 'id': '$($page.ID)', 'text': '$text' } ]}" `
  -ContentType "application/json" `
  -Headers @{'Ocp-Apim-Subscription-Key' = '<use-your-own-key>'} `
  -Method 'Post' `
  -UseBasicParsing | ConvertFrom-Json

Here’s how I am going to aggregate the content for a given page:

function GetContent($item, $layout = $False)
{
  # TBD
}

$content = GetContent $page $True `
  | Where { $_ -match '\D+' } `
  | %{ $_ -replace '\.$', ''} `
  | Sort-Object `
  | Get-Unique

$text = [String]::Join('. ', $content)

Basically, I will get various content fragments concatenated together into one big blob of text.

Aggregating Content

The GetContent function will get all content fields off of the item and then will recursively process all the datasources that the layout references. It’s actually smart enough to also resolve links to other items like you would find in the content fields on the carousel panels, for example. It will go as deep as needed, will strip out rich text markup, will skip system fields, and will even handle cyclic references.

Take a look on GitHub if you're interested; I enjoyed writing this one.

Keywords That Matter

For my experiment I decided to limit the key phrases returned by the API to only those that have their words capitalized. I figured it's a good indication of a header or a subtitle, plus it helps spot ALL CAPS text, as you will see in a minute:

$keywords.documents[0].keyPhrases `
  | Where { $_ -cmatch '^([A-Z]\w+\s?)*$' } `
  | %{ Write-Host $_ }

Here are the results for the home page, for example. You would probably want to exclude things that you know are not your keywords (e.g. Search Results, Tweets):

The text is 100.0% positive

Sitecore Package
Sitecore MVP
Sitecore Powered
Download Habitat
Github Habitat Repository
Design Package Principles
Simplicity
High Cohesion Domain
Low Coupling
Pentia
Search Results
Anders Laub Christoffersen
Tweets
Extensibility
Flexibility
News List
Latest News
Click
Introduction

Proof Reading

Text Analytics can also tell you how positive your text sounds. Positivity is measured on a scale from 0% to 100%. It's also just one HTTP request away if you have your text readily available:

$sentiment = Invoke-WebRequest `
  -Uri 'https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment' `
  -Body "{'documents': [ { 'language': 'en', 'id': '$($page.ID)', 'text': '$text' } ]}" `
  -ContentType "application/json" `
  -Headers @{'Ocp-Apim-Subscription-Key' = '<use-your-own-key>'} `
  -Method 'Post' `
  -UseBasicParsing | ConvertFrom-Json

Write-Host "The text is $($sentiment.documents[0].score*100)% positive"

Many pages in the Habitat demo site are close to 100% positive. That's to be expected for the elevated marketing speak, I guess. A few, however, came back with just 16%. And it turns out that you don't have to sound too negative to score that low. It's enough to just be very dry and matter-of-fact. Like this:

The accounts module handles user accounts and user profiles including login, registration, forgot password and profile editing. 
A number of components are available to handle login, registration and password reset.
Links to specific pages showing these components are as follows.
Login, Register, Edit Profile (logged in users only), Forgotton Password

Imagine running a script like that for all the pages on your site and sending the results off to your content team. Maybe you will not be able to completely automate keyword generation, but you will definitely help them spot content that needs improving.

I have been working with cognitive APIs for a while now and I am still surprised how easy it is to get stuff done. I am even more excited about what's coming in the near future! So much so that I will be speaking about cognitive APIs and the smart apps one can build with them at the API Strategy conference this coming November. See you in Boston!

Image Tagging Automation with Computer Vision

I recently presented my explorations of computer vision APIs (part 1, part 2, and part 3) at the AI meetup in Alpharetta. This time I decided to do something useful with them.

Image Tagging

When you work with digital platforms (be that content management, e-commerce, or digital assets) you can’t go far without organizing your images. Tagging makes your assets library navigable and searchable. Descriptions are a great companion to the visual preview and can also serve as the alternate text. WCAG 2.0 requires non-text content to come with a text alternative for the very basic Level A compliance.

Computer Vision

When I played with the trained computer vision models from different vendors, I realized that I could get a good set of tags from any one of the APIs, and some would even try to build a description for me. Digital asset management vendors have started playing with this idea as well. Adobe, for example, has introduced smart tags in the latest release of AEM. Maybe I can do the same using Computer Vision APIs and integrate with a digital product that doesn't have that capability built in yet? Let's try with Sitecore.

Automation

I am going to use Computer Vision from Microsoft Cognitive Services and the Habitat demo site from Sitecore. I am also going to need Powershell Extensions to automate everything.

We will need the URL of the computer vision API, the binary array of the image, the Sitecore item representing the image to record the results on, and a little bit of Powershell magic to glue it all together.

Here’s the crux of the script where I call into the computer vision API:

$vision = 'https://api.projectoxford.ai/vision/v1.0/analyze'
$features = 'Categories,Tags,Description,Color'

$response = Invoke-WebRequest `
  -Uri "$($vision)?visualFeatures=$($features)" `
  -Body $bytes `
  -ContentType "application/octet-stream" `
  -Headers @{'Ocp-Apim-Subscription-Key' = '<use-your-key>'} `
  -Method 'Post' `
  -ErrorAction Stop `
  -UseBasicParsing | ConvertFrom-Json

It's that simple. The rest of it is using Sitecore APIs to read the image, update the item with the tags and descriptions received from the cognitive services, and a try/catch/retry loop to handle the API's rate limit (in preview it is limited to 5,000 calls per month and 20 per minute). You can find the full script on GitHub.

20/20

Some images were perfectly deciphered by the computer vision API, as you can see in this example (the percentages are the confidence levels reported by the API):

Computer Vision can clearly see what's in the image

Legally Blind

But some others would puzzle the model quite a bit:

Computer Vision mistakes a person for a celebrity and the cell phone for a hot dog

Not only is there no Shu Qi in the picture above, there is definitely no hot dog or any other food item. Granted, the API did tell me that it was not really sure about what it could see. It is probably a good idea to route images like that through a human workflow for tag and description validation and correction.

Domain Specific Models

The problem with seeing the wrong things or not seeing the right things in a perfectly focused and lit image is … lack of training. Think about it. There are millions and millions of things that your vision can recognize. But you have been training it all your life and the labeled examples keep coming in on a daily basis. It takes a whole lot of labeled images to train a generic computer vision model and it also takes time.

You can get better results with domain-specific models like those offered by Clarifai, for example. As of the time of this writing you can subscribe to Wedding, Travel, and Food models.

Domain Specific Computer Vision model from Clarifai

I am sure you’ll get better classification results out of these models than out of a generic computer vision model if your business is in one of these industries.


Next time I will explore Text Analytics API and will show you how it can help tag and generate keywords for your content.