How To Promisify Moltin APIs

If you’ve read my last post, then you know that I am having all kinds of geeky fun with Moltin and its APIs. Today I will show you how you can quickly promisify them all.

Moltin APIs

Moltin APIs are all asynchronous HTTP calls with very lightweight wrappers for JavaScript, Python, and many other languages. I am using them from Node.js and the main pattern is fairly straightforward (look for the JS examples).

I am writing a batch import to create a playground product catalog so I find myself doing a lot of this:

const inventory = Promise.all(goGetTheData());

inventory.then((data) => {
  return Promise.all(data.products.map(product => {
    return new Promise((resolve, reject) => {
      moltin.Authenticate(function() {
        moltin.Product.Create({
          // .. product attributes
        }, (result) => {
          resolve(result);
        }, (error, details) => {
          reject(details);
        });
      });
    });
  }));
}).then((products) => {
  return Promise.all(products.map(p => {
    return new Promise((resolve, reject) => {
      moltin.Authenticate(function() {
        // ...
      });
    });
  }));
}).then((modifiers) => {
  // ... you got the idea, a lot of noise
});

I wish I could instead write:

inventory.then((data) => Promise.all(data.products.map(p => {
  return moltin.Product.Create(...);
}))).then((products) => Promise.all(products.map(p => {
  return moltin.Modifier.Create(...);
}))).then((modifiers) => {
  // ... a lot cleaner and more readable, isn't it?
});

Promisification

There’s moltin-util on NPM that uses promises, but it seems to introduce a new API and I would like to retain the original one. Here’s what I quickly put together, and now I wonder if it’s worth publishing to NPM. Is it? Let me know!

const promisify = (moltin) => {
  const promisified = {};

  // Wraps a single API method (e.g. moltin.Product.Create) into a
  // promise-returning function
  const executor = (actor, action) => function () {
    const args = [...arguments];

    // Default transformers: attach pagination to the result,
    // surface the error details on rejection
    let success = (result, pagination) => {
      if (result && pagination) {
        result.pagination = pagination;
      }

      return result;
    };

    let error = (error, details) => details;

    // Trailing function arguments, if any, override the transformers
    if (typeof (args[args.length - 1]) === 'function') {
      if (typeof (args[args.length - 2]) === 'function') {
        error = args.pop();
        success = args.pop();
      } else {
        success = args.pop();
      }
    }

    return new Promise((resolve, reject) => {
      moltin.Authenticate(function () {
        actor[action].call(actor, ...args,
          (result, pagination) => {
            resolve(success.call(null, result, pagination));
          },
          (err, details) => {
            console.error(details);
            reject(error.call(null, details));
          });
      });
    });
  };

  // Mirror every API object (Product, Modifier, ...) and every method
  // found on its prototype chain
  Object.keys(moltin)
    .filter(key => key !== 'options' && typeof (moltin[key]) === 'object')
    .forEach(member => {
      promisified[member] = {};
      let actor = moltin[member];

      Object.keys(actor.__proto__)
        .concat(Object.keys(actor.__proto__.__proto__))
        .filter(action => typeof (actor[action]) === 'function')
        .forEach(action => {
          promisified[member][action] = executor(actor, action);
        });
    });

  return promisified;
};

module.exports = function (moltin) {
  return promisify(moltin);
};
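A nice side effect of the trailing-arguments handling above is that the promisified methods still accept optional success and error transformers, and whatever the success transformer returns becomes the resolved value. A quick sketch (the slice is just for illustration):

const moltin_p = require('./promisify-moltin')(moltin);

// the trailing function is detected and used as the success transformer;
// its return value is what the promise resolves with
moltin_p.Product.List(null, (result, pagination) => result.slice(0, 5))
  .then((firstFive) => console.log('Got %s products', firstFive.length));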

My code is now a whole lot cleaner and smaller, too. I will soon post it on GitHub so stay tuned! Here, for example, is how I would go about deleting a whole bunch of products:

const moltin = require('moltin')({
  publicId: process.env.MOLTIN_PUBLIC_ID,
  secretKey: process.env.MOLTIN_SECRET_KEY
});
const moltin_p = require('./promisify-moltin')(moltin);

moltin_p.Product.List(null)
  .then((products) => Promise.all(products.map(p => {
    console.log('Requesting a delete of %s', p.title);
    return moltin_p.Product.Delete(p.id);
  })))
  .then((result) => {
    console.log('Deleted %s products', result.length);
  })
  .catch((error) => {
    console.error(error);
  });

Sequencing Asynchronous Calls in JavaScript

I am playing with Moltin for my upcoming talk at the API Strategy conference and it generates all kinds of blog post ideas.

Context

Moltin is an API-first (or, I would even argue, API-only) commerce platform: a new kid on the block and a recent Y Combinator graduate with a little more than $2M in seed funding. My talk is about cognitive APIs and smarter apps, and I will be using a conversational e-commerce chatbot as an example. I picked Moltin as my commerce backend because it’s ridiculously easy to get started with, requires no upfront setup, and seems to provide a simple yet rich API that covers all my scenarios. Plus, their free tier gives me 30,000 requests per month.

You can’t transact with a commerce platform if it doesn’t have a product catalog. Products have variants (e.g. a t-shirt can come in different sizes and different colors) and different commerce platforms approach setting up this hierarchy differently. In Moltin, you first create a main product. Then you add modifiers (in my case, color and size). And then you add variations for each modifier. Moltin will then create the actual variants matrix behind the scenes. If, for example, you add blue, red, and white variations to the color modifier and S, M, L to the size modifier, you will end up with nine variations in total (every size available in every color).
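To see why it’s nine, here’s the matrix Moltin effectively computes behind the scenes (a plain JavaScript illustration, not Moltin code):

const colors = ['blue', 'red', 'white'];
const sizes = ['S', 'M', 'L'];

// every size available in every color: 3 x 3 = 9 variants
const matrix = colors.reduce((all, color) =>
  all.concat(sizes.map(size => `${color} / ${size}`)), []);

console.log(matrix.length); // 9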

Moltin APIs

Moltin APIs are lean HTTP endpoints that understand application/x-www-form-urlencoded and multipart/form-data and return JSON. The team also supplies lightweight wrappers for JavaScript, Python, and other languages. Here, for example, is how creating a variation looks in JavaScript:

const moltin = require('moltin')({
  publicId: process.env.MOLTIN_PUBLIC_ID,
  secretKey: process.env.MOLTIN_SECRET_KEY
});

moltin.Authenticate(function() {
  moltin.Variation.Create(productId, modifierId, {
    title: value
  },
  function(result) {
    // result is the successfully created variation
  },
  function(error, details) {
    // oops
  });
});

I am scripting the creation of the product catalog using Adventure Works as my sample dataset, so I need to run a lot of these asynchronous callback-style requests in order. I do it with Promises:

const addVariation = (productId, modifierId, value) => {
  return new Promise((resolve, reject) => {
    moltin.Authenticate(function() {
      moltin.Variation.Create(productId, modifierId, {
        title: value
      },
      function(result) {
        resolve(result);
      },
      function(error, details) {
        reject(details);
      });
    });
  });
};

And now I can chain all my actions with .then().
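For example (the color and size modifier IDs here are hypothetical placeholders):

addVariation(productId, colorModifierId, 'red')
  .then(() => addVariation(productId, colorModifierId, 'blue'))
  .then(() => addVariation(productId, sizeModifierId, 'S'))
  .then((variation) => console.log('Created %s', variation.title))
  .catch((details) => console.error(details));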

Problem

I faced an interesting challenge as I was creating the variations for my products. Here’s how I do it. First, I figure out what modifiers I need to create and then for each modifier I collect the values. The result looks something like this:

const mods = [{
  title: 'color',
  values: ['red', 'blue', 'white']
}, {
  title: 'size',
  values: ['S', 'M', 'L']
}];

Now I can map this structure into a flat array of Promises, each creating a required variation in Moltin:

// somewhere on the chain
.then((mods) => {
  return Promise.all(_.flatMap(mods, mod => {
    return mod.values.map(value => {
      return new Promise((resolve, reject) => {
        // create the variation in Moltin
      });
    });
  }));
})

“Looking good”, I thought, until I found out that creating all the variations asynchronously in no particular order and maybe even in parallel confuses the logic on the Moltin side that creates the product matrix (details).

Since I can’t trigger a matrix rebuild via the API, the solution was to sequence the variants creation.

Solution

Instead of just mapping the modifiers to a list of Promises running somewhat concurrently, I needed to chain variant creation one after another. I also needed to collect all created variations into a list for the next step in the bigger chain.

Nested reduce to the rescue. First, addVariation now keeps track of the results:

const addVariation = (productId, modifierId, value, bag) => {
  return new Promise((resolve, reject) => {
    moltin.Authenticate(function() {
      moltin.Variation.Create(productId, modifierId, {
        title: value
      },
      function(result) {
        bag.push(result);
        resolve(result);
      }
      // error handling skipped
      );
    });
  });
};

And the Promise.all() has to be converted into a linear chain of promises, each creating a single variation:

// somewhere on the chain
.then(mods => {
  var variations = [];

  const chained = mods.reduce((chain, mod) => {
    return mod.values.reduce((chain, value) => {
      return chain.then(() => addVariation(productId, mod, value, variations));
    }, chain);
  }, Promise.resolve());

  return chained.then(() => Promise.resolve(variations));
})

Works great but I feel like it can be cleaner with Rx. If you know how to convert this to observables and not explicitly manage the collection of created variants, please drop me a line. Thanks!
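In the meantime, the reduce trick can be extracted into a small promise-only helper that also collects the results, so the bag argument is no longer needed. A sketch:

// runs worker over items strictly one after another and
// resolves with the collected results
const sequence = (items, worker) =>
  items.reduce((chain, item) =>
    chain.then((results) =>
      worker(item).then((result) => [...results, result])),
    Promise.resolve([]));

// usage (hypothetical modifier id):
// sequence(mod.values, (value) => addVariation(productId, mod.id, value))
//   .then((variations) => { /* next step in the bigger chain */ });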

Content Work Automation with Text Analytics API

In my last post I used Computer Vision APIs to automate image tagging. Let’s see if machine learning APIs can help us automate tedious content work like SEO keyword generation and text proofreading.

Microsoft Cognitive Services offers the Text Analytics API that can extract key phrases from text and can also do sentiment analysis. I will again use Sitecore, its Habitat demo site, and PowerShell Extensions to automate everything, though the concepts should apply to any modern CMS.

Key Phrases

It’s probably not hard to come up with a decent list of keywords for the body of text that is a web page. As the size of your site grows, however, the task becomes very tedious very quickly if performed manually. Add to that an editorial calendar with frequent updates and you run the risk of obsolete keywords adversely impacting your SEO. Add a component-based approach with proper content reuse and flexibility in the hands of your content teams and it’s even harder to track what exactly each page renders on the live site. Everything that can be automated should be automated.

Getting keywords for a given text fragment from Text Analytics API is very straightforward:

$keywords = Invoke-WebRequest `
    -Uri 'https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/keyPhrases' `
    -Body "{'documents': [ { 'language': 'en', 'id': '$($page.ID)', 'text': '$text' } ]}" `
    -ContentType "application/json" `
    -Headers @{'Ocp-Apim-Subscription-Key' = '<use-your-own-key>'} `
    -Method 'Post' `
    -UseBasicParsing | ConvertFrom-Json

Here’s how I am going to aggregate the content for a given page:

function GetContent($item, $layout = $False)
{
    # TBD
}

$content = GetContent $page $True `
    | Where { $_ -match '\D+' } `
    | %{ $_ -replace '\.$', '' } `
    | Sort-Object `
    | Get-Unique

$text = [String]::Join('. ', $content)

Basically, I will get various content fragments concatenated together into one big blob of text.

Aggregating Content

The GetContent function will get all content fields off of the item and then recursively process all the datasources that the layout references. It’s actually smart enough to also resolve links to other items, like those you would find in the content fields of carousel panels, for example. It will go as deep as needed, strip out rich text markup, skip system fields, and even handle cyclic references.

Take a look on GitHub if you’re interested; I enjoyed writing this one.

Keywords That Matter

For my experiment I decided to limit the key phrases returned by the API to only those that have their words capitalized. I figured it’s a good indication of a header or a subtitle, plus it helps spot ALL CAPS text, as you will see in a minute:

$keywords.documents[0].keyPhrases `
    | Where { $_ -cmatch '^([A-Z]\w+\s?)*$' } `
    | %{ Write-Host $_ }

Here are the results for the home page, for example. You would probably want to exclude things that you know are not your keywords (e.g. Search Results, Tweets); see the snippet after the list:

The text is 100.0% positive

Sitecore Package
Sitecore MVP
Sitecore Powered
Download Habitat
Github Habitat Repository
Design Package Principles
Simplicity
High Cohesion Domain
Low Coupling
Pentia
Search Results
Anders Laub Christoffersen
Tweets
Extensibility
Flexibility
News List
Latest News
Click
Introduction
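A simple stop list can take care of those. A sketch (extend $exclude with whatever your site considers noise):

$exclude = @('Search Results', 'Tweets', 'Click')

$keywords.documents[0].keyPhrases `
    | Where { $_ -cmatch '^([A-Z]\w+\s?)*$' } `
    | Where { $exclude -notcontains $_ } `
    | %{ Write-Host $_ }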

Proofreading

Text Analytics can also tell you how positive your text sounds. Positivity is measured in percentage points from 0% to 100%. It’s also just one HTTP request away if you have your text readily available:

$sentiment = Invoke-WebRequest `
    -Uri 'https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment' `
    -Body "{'documents': [ { 'language': 'en', 'id': '$($page.ID)', 'text': '$text' } ]}" `
    -ContentType "application/json" `
    -Headers @{'Ocp-Apim-Subscription-Key' = '<use-your-own-key>'} `
    -Method 'Post' `
    -UseBasicParsing | ConvertFrom-Json

Write-Host "The text is $($sentiment.documents[0].score*100)% positive"

Many pages in the Habitat demo site are close to 100% positive. That’s to be expected for the elevated marketing speak, I guess. A few, however, came back with just 16%. And it turns out you don’t have to sound too negative to score that low. It’s enough to just be very dry and matter-of-fact. Like this:

The accounts module handles user accounts and user profiles including login, registration, forgot password and profile editing. 
A number of components are available to handle login, registration and password reset. 
Links to specific pages showing these components are as follows. 
Login, Register, Edit Profile (logged in users only), Forgotton Password

Imagine running a script like that over all the pages on your site and sending the results off to your content team. Maybe you won’t be able to completely automate keyword generation, but you will definitely help them spot content that needs improving.
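Here’s a rough sketch of what that could look like (the starting path and the 30% threshold are my assumptions):

$pages = Get-ChildItem -Path 'master:/sitecore/content/Habitat/Home' -Recurse

foreach ($page in $pages)
{
    # aggregate $text and request $sentiment for $page as shown above
    $score = $sentiment.documents[0].score * 100
    if ($score -lt 30)
    {
        Write-Host "$($page.Paths.FullPath) is only $($score)% positive"
    }
}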

I have been working with cognitive APIs for a while now and I am still surprised how easy it is to get stuff done. I am even more excited about what’s coming in the near future! So much so that I will be speaking about cognitive APIs and the smart apps one can build with them at the API Strategy conference this coming November. See you in Boston!

Image Tagging Automation with Computer Vision

I have recently presented my explorations of computer vision APIs (part 1, part 2, and part 3) at the AI meetup in Alpharetta. This time I decided to do something useful with it.

Image Tagging

When you work with digital platforms (be that content management, e-commerce, or digital assets) you can’t go far without organizing your images. Tagging makes your assets library navigable and searchable. Descriptions are a great companion to the visual preview and can also serve as the alternate text. WCAG 2.0 requires non-text content to come with a text alternative for the very basic Level A compliance.

Computer Vision

When I played with the trained computer vision models from different vendors, I realized that I could get a good set of tags from any one of the APIs, and some would even try to build a description for me. Digital asset management vendors have started playing with this idea as well. Adobe, for example, has introduced smart tags in the latest release of AEM. Maybe I can do the same using Computer Vision APIs and integrate with a digital product that doesn’t have that capability built in yet? Let’s try with Sitecore.

Automation

I am going to use Computer Vision from Microsoft Cognitive Services and the Habitat demo site from Sitecore. I am also going to need PowerShell Extensions to automate everything.

We will need the URL of the Computer Vision API, the binary array of the image, the Sitecore item representing the image to record the results on, and a little bit of PowerShell magic to glue it all together.
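Getting the binary array off a media item could look something like this (a sketch using Sitecore’s MediaManager; error handling skipped):

$media = New-Object Sitecore.Data.Items.MediaItem($item)
$stream = [Sitecore.Resources.Media.MediaManager]::GetMedia($media).GetStream()

# copy the media stream into a byte array for the API request body
$memory = New-Object System.IO.MemoryStream
$stream.Stream.CopyTo($memory)
$bytes = $memory.ToArray()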

Here’s the crux of the script where I call into the computer vision API:

$vision = 'https://api.projectoxford.ai/vision/v1.0/analyze'
$features = 'Categories,Tags,Description,Color'

$response = Invoke-WebRequest `
    -Uri "$($vision)?visualFeatures=$($features)" `
    -Body $bytes `
    -ContentType "application/octet-stream" `
    -Headers @{'Ocp-Apim-Subscription-Key' = '<use-your-key>'} `
    -Method 'Post' `
    -ErrorAction Stop `
    -UseBasicParsing | ConvertFrom-Json

It’s that simple. The rest of it is using Sitecore APIs to read the image, update the item with the tags and description received from the cognitive services, and a try/catch/retry loop to handle the API’s rate limit (in preview it’s limited to 5,000 requests per month and 20 per minute). You can find the full script on GitHub.
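The retry part is nothing fancy. A sketch (the three attempts and the 60-second sleep are my choices for the 20/minute limit):

$response = $null
$attempts = 0

while (-not $response -and $attempts -lt 3)
{
    try
    {
        # the Invoke-WebRequest call shown above goes here
        $response = Invoke-WebRequest `
            -Uri "$($vision)?visualFeatures=$($features)" `
            -Body $bytes `
            -ContentType "application/octet-stream" `
            -Headers @{'Ocp-Apim-Subscription-Key' = '<use-your-key>'} `
            -Method 'Post' `
            -ErrorAction Stop `
            -UseBasicParsing | ConvertFrom-Json
    }
    catch
    {
        $attempts++
        Start-Sleep -Seconds 60 # wait out the rate limit window
    }
}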

20/20

Some images were perfectly deciphered by the computer vision API, as you can see in this example (the %% are the confidence levels reported by the API):

Computer Vision can clearly see what's in the image

Legally Blind

But some others would puzzle the model quite a bit:

Computer Vision mistakes a person for a celebrity and the cell phone for a hot dog

Not only is there no Shu Qi in the picture above, there’s definitely no hot dog and no other food items. Granted, the API did tell me that it was not really sure about what it could see. It’s probably a good idea to route images like that through a human workflow for tag and description validation and correction.

Domain Specific Models

The problem with seeing the wrong things or not seeing the right things in a perfectly focused and lit image is … lack of training. Think about it. There are millions and millions of things that your vision can recognize. But you have been training it all your life and the labeled examples keep coming in on a daily basis. It takes a whole lot of labeled images to train a generic computer vision model and it also takes time.

You can get better results with domain-specific models like those offered by Clarifai, for example. As of this writing you can subscribe to Wedding, Travel, and Food models.

Domain Specific Computer Vision model from Clarifai

I am sure you’ll get better classification results out of these models than out of a generic computer vision model if your business is in one of these industries.


Next time I will explore the Text Analytics API and show you how it can help tag and generate keywords for your content.

Content Testing and Context.Site

A quick blog post about the Content Testing feature of Sitecore and its unfriendliness towards Context.Site.

I went through a few content testing scenarios recently and one thing really puzzled me: Content Testing dialogs stumble over Context.Site.

Reference Storefront

If you try to set up a test in the Sitecore Commerce reference storefront and send the page through the workflow, here’s how the variant screenshots will look:

Test Variants Screenshots all show YSOD

The base controller is using Context.Site for view path resolution:

protected string GetRenderingView(string renderingViewName = null)
{
    /*
    ShopName is a property on the CommerceStorefront object that is represented
    by an item at Context.Site.RootPath + Context.Site.StartItem
    */

    var shopName = StorefrontManager.CurrentStorefront.ShopName;
    // ...
    const string RenderingViewPathFormatString = "~/Views/{0}/{1}/{2}.cshtml";
    // ...
    return string.Format(RenderingViewPathFormatString, shopName, "Shared", renderingViewName);
}

And it won’t find anything in the shell site:

Nested Exception

Exception: System.InvalidOperationException
Message: The view '~/Views/shell/Shared/Structures/TopStructure.cshtml' or its master was not found or no view engine supports the searched locations. The following locations were searched:
~/Views/shell/Shared/Structures/TopStructure.cshtml
Source: System.Web.Mvc
at System.Web.Mvc.ViewResult.FindView(ControllerContext context)
at System.Web.Mvc.ViewResultBase.ExecuteResult(ControllerContext context)
...

Habitat

Trying the same workflow with a test in Habitat stumbles at the validation step:

Validation shows error 500

Here Context.Site is being used for custom dictionary functionality:

private Item GetDictionaryRoot(SiteContext site)
{
    var dictionaryPath = site.Properties["dictionaryPath"];
    if (dictionaryPath == null)
    {
        throw new ConfigurationErrorsException("No dictionaryPath was specified on the <site> definition.");
    }

    // ...

    return rootItem;
}

And it also errors out in shell:

Nested Exception

Exception: System.Configuration.ConfigurationErrorsException
Message: 'No dictionaryPath was specified on the <site> definition'.
Source: Sitecore.Foundation.Dictionary
at Sitecore.Foundation.Dictionary.Repositories.DictionaryRepository.GetDictionaryRoot(SiteContext site)
at Sitecore.Foundation.Dictionary.Repositories.DictionaryRepository.Get(SiteContext site)
at Sitecore.Foundation.Dictionary.Repositories.DictionaryRepository.get_Current()
...

Just in case you wondered, Preview.ResolveSite doesn’t help.

Conclusion

Alistair Deneys explained that Content Testing needs to run screenshot generation in the context of shell to render unpublished content of different versions.

Content Testing needs to quickly learn how to do everything it needs in the context of the current site while getting everything from the master database.

While you probably shouldn’t use Context.Site for view path resolution (we now have official support for MVC areas), and probably shouldn’t use a custom dictionary implementation (here’s my blog post on how to make standard dictionary items editable in Experience Editor), you should be allowed to use Context.Site in your page rendering logic if you need it.

Sitecore Catalog Export for Azure Recommendations API

Azure Recommendations API requires a product catalog snapshot and the transaction history to train a model. This blog post will show you how you can export a Sitecore Commerce reference storefront catalog using PowerShell Extensions.

Bare Minimum

Let’s start small. At a minimum, the Recommendations API needs your SKU #s, product names, and category names:

AAA04294,Office Language Pack Online DwnLd,Office
AAA04303,Minecraft Download Game,Games
C9F00168,Kiruna Flip Cover,Accessories

The following script will give us the data we need:

$catalog = '/sitecore/Commerce/Catalog Management/Catalogs/Adventure Works Catalog'
$product = '{225F8638-2611-4841-9B89-19A5440A1DA1}' # Commerce Product Template

$products = Get-ChildItem -Path $catalog -Recurse `
    | Where { $_.Template.InnerItem['__Base template'] -like $product }

$products | Select 'Name', `
    '__Display Name', `
    @{Name = 'Category'; Expression = {$_.Parent.Name}} `
    | Sort 'Name' -Unique

The result looks like this:

Name        __Display name                   Category
----        --------------                   --------
22565422120 Gift Card                        Departments
AW007-08    Black Diamond Quicksilver II     Carabiners
AW009-08    Black Diamond Quicksilver II     SaleItems
AW013-08    Petzl Spirit                     Adventure Works Catalog
...

Adding Features

Features need to be exported in a special format. Different products in a given catalog may have different features and even a different number of them. Azure solves this by requiring features as a comma-separated list of name-value pairs:

AAA04294,Office Language Pack Online DwnLd,Office,, softwaretype=productivity
BAB04303,Minecraft DwnLd,Games,, softwaretype=gaming, compatibility=iOS, agegroup=all
C9F00168,Kiruna Flip Cover,Accessories,, compatibility=lumia, hardwaretype=mobile

The following addition to the script will add features to the list:

function ExtractFeatures($product)
{
    $fields = $product.Template.OwnFields `
        | Where { $_.Name -notlike 'images' -and $_.Name -notlike '*date' } `
        | Where { $product[$_.Name] -ne '' }

    $features = @()
    foreach ($field in $fields)
    {
        $features += @{Name = $field.Name; Value = $product[$field.Name]}
    }

    return $features
}

function ApplyFeatures($product)
{
    foreach ($feature in $product.Features)
    {
        $product | Add-Member -Name $feature.Name `
            -MemberType NoteProperty `
            -Value "$($feature.Name)=$($feature.Value)"
    }

    $product.PSObject.Properties.Remove('Features')

    return $product
}

# ... (see above)

$products | Select 'Name', `
    '__Display Name', `
    @{Name = 'Category'; Expression = {$_.Parent.Name}}, `
    'Description', `
    @{Name = 'Features'; Expression = {ExtractFeatures($_)}} `
    | Sort 'Name' -Unique `
    | %{ ApplyFeatures $_ }

PSObject is a dynamic type that you can modify on the fly. First, I extract a collection of features into a new Features property. Then I apply the features as new properties on the product object. CSV export will be able to pick it up transparently. I hope.
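If the dynamic part sounds too magical, here’s the effect of Add-Member in isolation (a minimal illustration, not part of the export script):

$item = [PSCustomObject]@{ Name = 'AW007-08'; Features = @() }

# a new NoteProperty appears on the object...
$item | Add-Member -Name 'Brand' -MemberType NoteProperty -Value 'Brand=Litware'

# ...and an existing one can be removed
$item.PSObject.Properties.Remove('Features')

$item # Name and Brand remain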

CSV

It should now be easy to export the list as CSV. There’s a caveat though.

Both ConvertTo-CSV and Export-CSV will happily export the list for you but will normalize every record to the common set of fields.

You won’t see the features in the list. Here’s a trick to get every product in the export to have its own features:

# ... (see above)

$products | Select 'Name', `
    '__Display Name', `
    @{Name = 'Category'; Expression = {$_.Parent.Name}}, `
    'Description', `
    @{Name = 'Features'; Expression = {ExtractFeatures($_)}} `
    | Sort 'Name' -Unique `
    | %{ ApplyFeatures $_ } `
    | %{ ConvertTo-CSV -InputObject $_ -NoTypeInformation | Select -Skip 1 }

Instead of piping the entire set to ConvertTo-CSV, I basically process the list one object at a time in a ForEach-Object loop. I also removed the type info and the CSV headers. Azure doesn’t need labels anyway. Works like a charm!

"AW007-08","Black Diamond Quicksilver II","Carabiners","Straight","BasePrice=10.0000"
"AW009-08","Black Diamond Quicksilver II","SaleItems","Straight"
"AW013-08","Petzl Spirit","Adventure Works Catalog","Straight"
"AW014-08","Petzl Spirit","Carabiners","Straight","BasePrice=14.0000"
"AW029-03","Women's woven tee","Shirts","Short-sleeve, breathable henley, 100% cotton knit","BasePrice=35.0000","Brand=Litware"

Commas and Quotes

There’s one more thing I needed to do for the Azure Recommendations API to absorb the catalog. As you can tell, the catalog format is not exactly CSV: every line can have a different number of fields. Nor does the Azure backend use CSV parsing to read it.

The double quotes in the export above were taken literally. Azure would think that the SKU # is "AW007-08", quotes included, for example. And the commas in the descriptions were messing up the parsing as well. My next post will be about the Recommendations API itself and I will write more about it, but here’s the final version that produces a clean catalog export ready to go:

function ExtractFeatures($product)
{
    # ... (see above)
}

function ApplyFeatures($product)
{
    # ... (see above)
}

function CleanUpCommas($product)
{
    foreach ($prop in $product.PSObject.Properties)
    {
        $src = $product.PSObject.Members[$prop.Name].Value
        $product.PSObject.Members[$prop.Name].Value = $src -replace ",", ";"
    }

    return $product
}

function CleanUpQuotes($line)
{
    return $line -replace """", ""
}

# ... (see above)

$products | Select 'Name', `
    '__Display Name', `
    @{Name = 'Category'; Expression = {$_.Parent.Name}}, `
    'Description', `
    @{Name = 'Features'; Expression = {ExtractFeatures($_)}} `
    | Sort 'Name' -Unique `
    | %{ ApplyFeatures $_ } `
    | %{ CleanUpCommas $_ } `
    | %{ ConvertTo-CSV -InputObject $_ -NoTypeInformation | Select -Skip 1 } `
    | %{ CleanUpQuotes $_ }

Got to love PowerShell. Cheers!