Cognitive APIs. Vision. Part 1.

I’ve been actively looking at machine learning lately. Fascinating applications in day-to-day life! Often unexpected. Always amazing. More and more accessible every day. Google’s Motion Stills blew me away the other day. Classification of motion vectors and biased estimation models (in a temporally consistent manner, no less) - a lot of science and novel ideas in a free consumer mobile app. Enabled and powered by machine learning.

You have probably noticed a new breed of APIs popping up all over the web. Some call them Machine Learning APIs, others call them Cognitive Services, and some simply call it Watson. Pre-trained models operationalized with an API layer, plus APIs to train your own.

This blog post series offers you a short tour of these new machine-learning-powered APIs. I am going to start with Vision, and today I am tasting Microsoft Cognitive Services (aka Project Oxford).

Setup

I am going to use JavaScript and my browser. These are all HTTP APIs so I should be able to just talk to them with very little overhead or ceremony. Besides, since the intent is to taste (and test) the APIs, I figured I would also take the latest JavaScript and browser APIs for a spin. No transpilation. No polyfills. No external dependencies. Hence a fair warning: the examples are likely to only work in the latest evergreen browsers. I am using connect with serve-static as my web server (here’s how).
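
For completeness, here is a minimal sketch of the kind of static server I mean - a connect app with the serve-static middleware. The port and file layout are assumptions for illustration, not part of the original setup:

// server.js - serve the current directory (port 8080 is an assumption)
const connect = require('connect');
const serveStatic = require('serve-static');

connect()
    .use(serveStatic(__dirname))
    .listen(8080, () => console.log('Serving on http://localhost:8080'));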

Let’s see how much computer vision can see in my avatar.

Pavel Veller

Describe

According to the documentation, the Describe endpoint generates a description of an image in human readable language with complete sentences. The description is based on a collection of content tags, which are also returned by the operation.

Getting started is just a few clicks and the API reference is very transparent. Here we go:

const explain = function(what, confidence) {
    return `${what} with ${Math.round(confidence * 100)}% confidence`;
};

// my avatar
const image = 'http://media.licdn.com/mpr/mpr/shrinknp_400_400/p/7/000/22b/22b/32f088c.jpg';

// Microsoft: Describe
const url = 'https://api.projectoxford.ai/vision/v1.0/describe?maxCandidates=3';
const key = '<use-your-own-key>';

fetch(url, {
    method: 'POST',
    headers: new Headers({
        'Content-Type': 'application/json',
        'Ocp-Apim-Subscription-Key': key
    }),
    body: JSON.stringify({
        'url': image
    })
}).then(function(response) {
    return response.json();
}).then(function({description: {captions, tags}}) {
    console.log(captions.map((c) => explain(c.text, c.confidence)));
    console.log(tags);
});

Very straightforward with no surprises. Everything just worked. Here’s what the Describe API sees in my avatar:

A man holding a cell phone (19% confidence)
A man holding a phone (17% confidence)
A young man holding a cell phone (12% confidence)

It definitely sees a man. It’s not sure whether the man is young. It probably doesn’t know what a microphone is, but it vaguely remembers that phones used to look like this:

Old Phone

Here are all the tags:

["person", "man", "indoor", "holding", "looking", "cellphone", "hand", "phone", "young", "laptop", "sitting", "standing", "table", "boy", "computer", "shirt", "using", "brush", "red", "blue", "people"]

Tags

Another API endpoint can report tags and also provide the level of confidence in each. I sent the same request to /vision/v1.0/tag and here’s what I got back:

person (100%)
man (95%)
indoor (94%)
microphone (22%)
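
The request is identical except for the endpoint path; only the response handling changes. Here is a minimal sketch that reuses the image, key, and explain from the Describe snippet above. The {tags: [{name, confidence}]} destructuring reflects the shape of the response I got back; treat it as an assumption if you adapt this:

// Microsoft: Tag - same image, same headers and key as above
const tagUrl = 'https://api.projectoxford.ai/vision/v1.0/tag';

fetch(tagUrl, {
    method: 'POST',
    headers: new Headers({
        'Content-Type': 'application/json',
        'Ocp-Apim-Subscription-Key': key
    }),
    body: JSON.stringify({
        'url': image
    })
}).then(function(response) {
    return response.json();
}).then(function({tags}) {
    console.log(tags.map((t) => explain(t.name, t.confidence)));
});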

I wonder why the microphone wasn’t detected by the Describe endpoint. I would expect Describe to get the tags from Tag and then use language generation algorithms to build the description. Apparently not.

Analyze

Analyze is one more API endpoint that can process an image from many angles at once. It will report the most likely description and will send down the tags. You can also ask it to detect faces and more. I asked for Description, Tags, and Categories. This one does feel like an aggregation: I got the same confidence-scored tags as from Tag, the same most likely description (with a cell phone), and the same longer list of tags that Describe returned. The category was identified as:

people_young with 81% confidence
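
For reference, here is roughly what that Analyze request looks like. The visualFeatures query parameter selects which analyses to run; the snippet again reuses image, key, and explain from above, and the destructuring of the response is a sketch based on what came back rather than a definitive contract:

// Microsoft: Analyze - ask for Description, Tags, and Categories in one call
const analyzeUrl = 'https://api.projectoxford.ai/vision/v1.0/analyze' +
    '?visualFeatures=Description,Tags,Categories';

fetch(analyzeUrl, {
    method: 'POST',
    headers: new Headers({
        'Content-Type': 'application/json',
        'Ocp-Apim-Subscription-Key': key
    }),
    body: JSON.stringify({
        'url': image
    })
}).then(function(response) {
    return response.json();
}).then(function({description, tags, categories}) {
    console.log(description.captions.map((c) => explain(c.text, c.confidence)));
    console.log(tags.map((t) => explain(t.name, t.confidence)));
    console.log(categories.map((c) => explain(c.name, c.score)));
});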

Summary

The Microsoft Vision API lets you analyze one image at a time. You can either upload the binary or point it at a publicly accessible URL. Depending on what you’re after, you can get different results. I am still puzzled by the difference in reported tags. It’s capable of working with domain models to do more specialized detection, but right now it has only one trained model - celebrities. I am sure Microsoft will deploy more and will likely let you train your own. I don’t know when, but I know that there are other vision APIs that already do.
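
If you would rather send the bytes than a URL, the same endpoints accept an application/octet-stream body. A quick sketch assuming a file input on the page (the selector and the file input itself are made up for illustration):

// Sketch: POST raw image bytes instead of a JSON payload with a URL
const file = document.querySelector('input[type=file]').files[0];

fetch(url, {
    method: 'POST',
    headers: new Headers({
        'Content-Type': 'application/octet-stream',
        'Ocp-Apim-Subscription-Key': key
    }),
    body: file
}).then(function(response) {
    return response.json();
}).then(function(result) {
    console.log(result);
});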

Next time I will talk to Mr. Watson. Stay tuned!

Author: Pavel Veller
Posted on: June 9, 2016
Updated on: June 28, 2022
