Peter Miller, Author at Perficient Blogs
https://blogs.perficient.com/author/pmiller/

Another Short Experiment with the Connect API
https://blogs.perficient.com/2018/10/01/another-short-experiment-with-the-connect-api/
Mon, 01 Oct 2018 20:01:34 +0000

In my prior post, A Short Experiment with the Amazon Connect User API, I created an Express web application to explore using the Amazon Connect User API. Since then, Amazon has released new API methods for updating contact attributes and viewing queue metrics. In this post, I update the Express app to demo these new methods.

Let’s start with the end product again. In addition to the existing list of Connect users and the Connect user detail view, the app now has a form you can use to flag a prior call for follow-up and an auto-updating page showing the current metrics for your queues.

These new screens are shown below.

The application code is available on GitHub at: https://github.com/phmiller/connect-api-express

 

Close your eyes to the styling, and let’s talk through what’s going on with each screen.

 

Update Contact Attributes

This page uses the updateContactAttributes method to add a new contact attribute to a Connect call, in this case a flag to indicate the call needs follow-up.


app.post("/submit-updateAttributes", async (req, res) => {
  const flagForFollowUpRaw = req.body.flagForFollowUp; // "on" if checked, undefined otherwise
  const flagForFollowUp = flagForFollowUpRaw === "on"; // coerce to a real boolean
  const contactId = req.body.contactId;
  var updateContactAttributesParams = {
    InstanceId: connectInstanceId,
    InitialContactId: contactId,
    Attributes: {
      FlaggedForFollowUp: flagForFollowUp.toString()
    }
  };
  var updateContactAttributesPromise = connectClient
    .updateContactAttributes(updateContactAttributesParams)
    .promise();
  var updateContactAttributesResult = await updateContactAttributesPromise;
  console.log("result", updateContactAttributesResult);
  res.render("submittedUpdateAttributes", {
    title: "Contact Attributes Updated"
  });
});


The updateContactAttributes method hangs off the connectClient object from the AWS SDK, and I pass in the contact id, the instance id, and the new attribute(s).

You can see a before and after of the call’s Contact Trace Record below with the contact id and then the new attribute highlighted.

Using this API was straightforward, although it would be easier to write useful applications if there were a Connect API to retrieve current and past contacts, rather than having to find the contact id in the Admin Site or pull it from an active call in a Streams API app.

 

Current Queue Metrics

This page gets all the current queue metrics for every queue in the Connect instance. It automatically reloads every two seconds to get the latest metrics (the code for which is in the currentMetrics.pug and currentMetrics.js files on GitHub).


const qArns = [
  "arn:aws:connect:…",
  "arn:aws:connect:…"
];
const metricsList = [
  { Name: "AGENTS_AVAILABLE", Unit: "COUNT" },
  { Name: "AGENTS_ONLINE", Unit: "COUNT" },
  { Name: "AGENTS_ON_CALL", Unit: "COUNT" },
  { Name: "AGENTS_STAFFED", Unit: "COUNT" },
  { Name: "AGENTS_AFTER_CONTACT_WORK", Unit: "COUNT" },
  { Name: "AGENTS_NON_PRODUCTIVE", Unit: "COUNT" },
  { Name: "AGENTS_ERROR", Unit: "COUNT" },
  { Name: "CONTACTS_IN_QUEUE", Unit: "COUNT" },
  { Name: "OLDEST_CONTACT_AGE", Unit: "SECONDS" },
  { Name: "CONTACTS_SCHEDULED", Unit: "COUNT" }
];
app.get("/currentMetrics", async (req, res) => {
  var getCurrentMetricsParams = {
    InstanceId: connectInstanceId,
    Filters: {
      Channels: ["VOICE"],
      Queues: qArns
    },
    CurrentMetrics: metricsList,
    Groupings: ["QUEUE"]
  };
  var getCurrentMetricsPromise = connectClient
    .getCurrentMetricData(getCurrentMetricsParams)
    .promise();
  var getCurrentMetricsResult = await getCurrentMetricsPromise;
  console.log("current metrics:", JSON.stringify(getCurrentMetricsResult));
  res.render("currentMetrics", {
    title: "Current Queue Metrics",
    metricResults: getCurrentMetricsResult.MetricResults
  });
});


I got tripped up a bit by the syntax of this command. It is described reasonably well in the documentation, but I didn’t understand at first that I needed to specify every metric I wanted (the metricsList variable) as well as the ARN of every queue (the qArns variable). It was a bit frustrating to dig through the Admin Site to get each queue ARN from the URL.
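For reference, a Connect queue ARN has this general shape (the region, account id, and the two ids below are placeholders; confirm the exact values against your own instance):

```
arn:aws:connect:us-east-1:111122223333:instance/<instance-id>/queue/<queue-id>
```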

 

Once I got past those issues, the API behaved as I expected, with one exception: queues with no activity at all (no agents signed in and no calls) are simply omitted from the results rather than returned with every metric set to zero.
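If your UI expects a row for every queue, you can pad the response on the caller side. A minimal sketch, assuming a simplified version of the MetricResults shape that getCurrentMetricData returns (the helper name and exact shapes here are mine, not part of the API):

```javascript
// Pad metric results so every queue appears, zero-filling queues
// the API omitted entirely. Shapes are simplified for illustration.
function padMetricResults(metricResults, allQueueArns, metricsList) {
  // Index the returned results by queue ARN for quick lookup.
  const byQueue = new Map(
    metricResults.map((r) => [r.Dimensions.Queue.Arn, r])
  );
  // Keep real results where present; synthesize zeroed rows otherwise.
  return allQueueArns.map((arn) =>
    byQueue.get(arn) || {
      Dimensions: { Queue: { Arn: arn } },
      Collections: metricsList.map((m) => ({ Metric: m, Value: 0 }))
    }
  );
}
```

Applying this before rendering means the view can always iterate over one row per queue.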

My simple example doesn’t give you all that much, but I could see other uses, like a small dashboard that lights up when certain thresholds are passed, such as too many calls in queue or not enough agents signed in.
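The threshold logic for a dashboard like that could be as simple as the sketch below. The metric names come from the list above, but the flattened input shape, limits object, and function name are my own inventions for illustration:

```javascript
// Return alert strings for queues that cross simple thresholds.
// Input is a flattened { queueName, metrics: { NAME: value } } shape.
function checkThresholds(queueStats, limits) {
  const alerts = [];
  for (const q of queueStats) {
    if (q.metrics.CONTACTS_IN_QUEUE > limits.maxContactsInQueue) {
      alerts.push(q.queueName + ": too many calls waiting");
    }
    if (q.metrics.AGENTS_ONLINE < limits.minAgentsOnline) {
      alerts.push(q.queueName + ": not enough agents signed in");
    }
  }
  return alerts;
}
```

The page could run this after each two-second refresh and highlight any queue that produced an alert.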

 

Permissions

When I’m running this application from my machine, it’s using my AWS CLI credentials, which have access to everything. If you were running this code in a Lambda under an IAM role with lesser privileges, you’d have to manually assign permissions to the Connect API, for example, “connect:UpdateContactAttributes” targeting the Connect instance id in a custom policy.
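As an illustration, such a custom policy might look like the following. The account id and instance id are placeholders, the action list is just an example, and you should confirm the exact resource ARN format against the IAM documentation for Amazon Connect:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "connect:UpdateContactAttributes",
        "connect:GetCurrentMetricData"
      ],
      "Resource": "arn:aws:connect:us-east-1:111122223333:instance/<instance-id>/*"
    }
  ]
}
```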

 

Trying it out yourself

Please check out the code on GitHub at: https://github.com/phmiller/connect-api-express

 


Thanks for reading. Any questions, comments or corrections are greatly appreciated.

To learn more about what we can do with Amazon Connect, check out Helping You Get the Most Out of Amazon Connect.

A Short Experiment with the Amazon Connect User API
https://blogs.perficient.com/2018/09/13/a-short-experiment-with-the-amazon-connect-user-api/
Thu, 13 Sep 2018 15:51:17 +0000

At the end of July, Amazon introduced the User Management API to Amazon Connect. The User API opens the door for customers and partners to start scripting user setup and maintenance actions, create custom user management applications, and modify user data from Lambda functions. For this post, I took a few hours to code up a simple web application that shows a list of users and details for each user. It isn’t too pretty and doesn’t do a whole lot, but if it gives you a sense of what’s possible and how easy it is to use the API, I’ll feel satisfied.

Let’s start with the end product. This app has two screens, a list of Connect users and a Connect user detail view. Every Connect user in the list is clickable and takes you to the details page.

These screens are shown below.


Nothing too fancy, right? However, if you wanted to make the user details editable or add a button to create a new user, the structure is there for you.

Let’s get into the fun part, some code!

 

Some Code…

I used the Express web app framework with Node. I kept all the code in one file, index.js, and used the Pug templating language to render each of the views. To make the code more readable, I used async methods along with the await keyword. This avoids massively indented callback chains and makes the logic easier to follow by just scanning down the file. More details on the tech stack are in the Helpful Resources section at the end of this post.
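As a generic illustration of that pattern, compare a nested callback to the flat async/await version. The function names below are stand-ins, not the Connect API; fetchUser just mimics the promise-returning shape of the AWS SDK’s .promise() calls:

```javascript
// A stand-in promise-returning call, mimicking the shape of the
// AWS SDK's listUsers(...).promise() pattern (names are hypothetical).
function fetchUser(id) {
  return Promise.resolve({ Id: id, Username: "jdoe" });
}

// With async/await, the handler reads top to bottom with no nesting:
// await pauses here until the promise resolves, then execution continues.
async function showUser(id) {
  const user = await fetchUser(id);
  return "Details for " + user.Username;
}
```

Each route handler in index.js follows this same shape: build the params, await the SDK promise, then render the view with the result.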

I’ve included the code of the two views for reference, but the important part of this post is to look at the User API code, which is in the index.js.

 


const express = require("express");
const app = express();
const AWS = require("aws-sdk");
require("express-async-errors");

const connectInstanceId = "7f03…";
var connectClient = new AWS.Connect({
  apiVersion: "2017-08-08",
  region: "us-east-1"
});

// serve static content from public directory
app.use(express.static("public"));

// configuration for pug views
app.set("views", "./views");
app.set("view engine", "pug");

// index – list of users
app.get("/", async (req, res) => {
  var listUsersParams = {
    InstanceId: connectInstanceId
  };
  var listUsersPromise = connectClient.listUsers(listUsersParams).promise();
  var listUsersResponse = await listUsersPromise;
  res.render("index", {
    title: "Connect Users",
    dataList: listUsersResponse.UserSummaryList
  });
});

// user detail by user id
app.get("/user/:userId", async (req, res) => {
  var userId = req.params.userId;
  var describeUserParams = {
    InstanceId: connectInstanceId,
    UserId: userId
  };
  var describeUserPromise = connectClient
    .describeUser(describeUserParams)
    .promise();
  var describeUserResult = await describeUserPromise;
  res.render("user", {
    title: describeUserResult.User.Username,
    user: describeUserResult.User
  });
});

app.listen(3000, () => console.log("App listening on port 3000!"));


The first step in using the Connect User API is to load the AWS SDK and create a Connect client object, as shown near the top of index.js. We tell the client object which version of the Connect API to use and which region your Connect instance is in.

From there, we can call individual API methods in the routing methods. For the index view method, app.get(“/”)…, we call the listUsers method, which takes in your Connect instance id and returns a list of basic user info objects. We pass the objects to the view to render and away we go.

For the detail view method app.get(“/user/:userId”)… we call the describeUser method which takes that Connect instance id again, along with the user id of the user you want details on. That method returns a detailed user object. Again, we pass that object to the view to render.

In case you were wondering, because I don’t specify an alternative, this Node app will simply use the default profile credentials from the AWS Command Line Interface tool installed on the machine. See the Helpful Resources section for a link with details on how to specify alternative credentials.

And that’s really it. No particular secret incantations needed here. Amazon has done a nice job in providing an easy to use, thoroughly documented API. I’d love to see more added to the API, for example to pull out the Queues associated with a Routing Profile, but this is a solid base to start off with. Again, the view code is reproduced below for your curiosity.

 


html
  head
    title= title
    link(rel="stylesheet" href="css/base.css")
  body
    h2= "Users in my Connect Instance (" + dataList.length + ")"
    ul
      each val in dataList
        li
          a(href="user/" + val.Id) #{val.Username}



html
  head
    title= title
    link(rel="stylesheet" href="../css/base.css")
  body
    h2= "Details for " + user.Username
    ul
      li= "ARN: " + user.Arn
      li= "First Name: " + user.IdentityInfo.FirstName
      li= "Last Name: " + user.IdentityInfo.LastName
      li= "Routing Profile Id: " + user.RoutingProfileId
    br
    a(href="../") #{"Back"}


 

Helpful Resources

 

Thanks for reading. Any questions, comments or corrections are greatly appreciated. Stay tuned next week for another post on the Connect API.

To learn more about what we can do with Amazon Connect, check out Helping You Get the Most Out of Amazon Connect.

Amazon Connect Streams API Changelog #4: July & August 2018
https://blogs.perficient.com/2018/08/30/amazon-connect-streams-api-changelog-4-july-and-august/
Thu, 30 Aug 2018 13:40:44 +0000

Amazon Connect Streams API (Streams) allows developers to create custom agent experiences for Amazon Connect. The code for Streams is hosted in GitHub and open for community contributions in the form of pull requests. This is the fourth installment of my running series covering changes to the Streams API. Prior installments can be found here:

 

Newly Approved Pull Requests

Since last time, there has been 1 approved pull request (PR) to Streams.

 

Re-initializing the CCP in Streams Apps (PR #78)

PR #78 is a code change to the core.js file that adds clean-up logic to the terminate function called when tearing down the CCP. This clean-up logic unsubscribes all existing event handlers from the Streams internal event bus, then re-initializes the bus and resets some other internal state variables as shown in the code snippet below:


/**
 * Uninitialize Connect.
 */
connect.core.terminate = function() {
  connect.core.client = new connect.NullClient();
  connect.core.masterClient = new connect.NullClient();
  var bus = connect.core.getEventBus();
  if (bus) bus.unsubscribeAll();
  connect.core.bus = new connect.EventBus();
  connect.core.agentDataProvider = null;
  connect.core.upstream = null;
  connect.core.keepaliveManager = null;
  connect.agent.initialized = false;
  connect.core.initialized = false;
};


Prior to this PR, if you tore down the CCP and then tried to start it up again in a Streams app by calling initCCP, the new CCP would behave oddly. You’d get multiple event notifications in your Streams code making it difficult to manage calls. You could see this scenario if the agent signed out of Amazon Connect and then tried to log in again through your app without refreshing the browser.

Curiously, the PR also mentions that this code change fixes the video stream attaching itself to the video element. Perhaps a hint of future plans?

This PR was submitted by GitHub user dnovicki, who just joined GitHub last week. Welcome, and awesome job on the PR.

 

New & Open Pull Requests

There is one new PR since last time and 2 open PRs at this point:

  • PR #79: Typos in gulpfile
    • This PR from GitHub user odemeulder fixes a typo in the Streams build to produce a file called “connect-streams.js” not “connect-steams.js”. It also adds a note to the documentation calling out that the tool gulp is required. I expect this PR to be approved quickly.
  • PR #64: Exposing rtc session for callstats integration
    • I covered this PR in the last installment. As I said, it is an interesting PR that opens up possibilities for real-time monitoring, but it cracks open the internals of Amazon Connect in a way that could make it fragile to later Streams changes.

 

Closed Pull Requests

 

Active Issues

Issues in GitHub are used to track problems, ask for help, or suggest new features. An issue sometimes ends up spawning a PR for a code fix. Looking through the issues list on GitHub, I saw a few recently updated issues I wanted to highlight:

 

Thanks for reading. Any questions, comments or corrections are greatly appreciated. To learn more about what we can do with Amazon Connect, check out Helping You Get the Most Out of Amazon Connect.

Amazon Connect Streams API Changelog #3: June 2018
https://blogs.perficient.com/2018/07/03/amazon-connect-streams-api-changelog-3-june-2018/
Tue, 03 Jul 2018 17:35:40 +0000

Amazon Connect Streams API (Streams) allows developers to create custom agent experiences for Amazon Connect. The code for Streams is hosted on GitHub and open for community contributions in the form of pull requests. This is the third installment of my running series covering changes to the Streams API. Prior installments can be found here:

Newly Approved Pull Requests

Since my last installment, there have been 2 approved pull requests (PRs) to Streams.

Code Clean-Up: Fixing a Null Pointer Exception (PR #74)

PR #74 is a code change to the softphone.js file that adds some initialization logic and null checks for some audio metadata (the audio “stats”) for the current call. When a call connects, these stats are populated by the startStatsCollectionJob in the onRefreshContact handler. When a call failed to connect, these stats were never populated, and a subsequent call to send them via sendSoftphoneReport failed with a null pointer exception. With the changes, sendSoftphoneReport succeeds even when the call fails to connect.
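In spirit, the fix is a simple guard before use. This is a simplified sketch of the pattern, not the actual softphone.js code; the variable and function names here are mine:

```javascript
// Simplified sketch of the defensive pattern: stats are only populated
// once a call connects, so guard before building the report.
let callStats = null; // set by the stats-collection job when a call connects

function buildSoftphoneReport(callId) {
  // Fall back to an empty stats object when the call never connected,
  // instead of dereferencing null and throwing.
  const stats = callStats || { audio: [] };
  return { callId: callId, stats: stats };
}
```

With the guard in place, the report path succeeds for failed calls too, just with empty stats.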

This kind of fix is nice to see as it reduces the noise of extra error logs when troubleshooting issues with the Streams API and the Contact Control Panel.

TypeScript Tooling Support (PR #41)

I highlighted this PR back in my last post and am happy to see it get approved. A TypeScript declaration file for Streams API helps developers write better code faster, by lighting up type checking and code completion in editors like Visual Studio Code.
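To give a flavor of what a declaration file provides, here is a hypothetical excerpt in the style of such a file. This is not the actual contents of PR #41; the members shown are illustrative only:

```typescript
// Hypothetical excerpt of a Streams declaration file (illustrative only).
declare namespace connect {
  interface Agent {
    getConfiguration(): AgentConfiguration;
    getContacts(contactTypeFilter?: string): Contact[];
  }
  interface AgentConfiguration {
    name: string;
    username: string;
  }
  interface Contact {
    getContactId(): string;
  }
}
```

With declarations like these in place, an editor can flag a misspelled method name or a wrong argument type before the code ever runs.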

 

New & Open Pull Requests

There are no new PRs since last time and only 2 open PRs at this point:

 

Closed Pull Requests

 

Active Issues

For this installment, I’m adding this section to take a quick look at some recently updated issues. Issues in GitHub are used to track problems, ask for help, or suggest new features. An issue sometimes ends up spawning a PR for a code fix. Looking through the issues list on GitHub, I saw three I wanted to highlight:

Transfer to Quick Connect via Streams (Issue #54 and #27)

The always-helpful jagadeeshaby responded to both of these requests for help on how to transfer to Quick Connects using the Streams API with a short code snippet:


var agent = new lily.Agent();
agent.getEndpoints(agent.getAllQueueARNs(), {
  success: function(data) {
    console.log("valid_queue_phone_agent_endpoints", data.endpoints,
      "You can transfer the call to any of these endpoints");
  },
  failure: function() {
    console.log("failed");
  }
});
agent.getContacts(lily.ContactType.VOICE)[0].addConnection(any_valid_queue_or_phone_or_agent_endpoint, {
  success: function(data) {
    alert("transfer success");
  },
  failure: function(data) {
    alert("transfer failed");
  }
});

This code path is not immediately obvious from the documentation. Thanks for the tip jagadeeshaby!

Audio Device Selector & Speaker Ring (Issue #67)

extmchristensen and mschersten bring up a user experience issue we see across customers. Between the browser and Windows sound settings, agents can have a hard time selecting the right audio device for Amazon Connect calls. This issue requests changes to the Streams API to let partners implement their own audio device selection user experience. mschersten also brings up that it would be nice to have simultaneous ring across multiple audio devices, so a low-utilization agent can hear a ring over their PC speakers.

 

I’d love to see changes along these lines, and I’ll be sure to keep an eye out on this issue for any further activity.

Thanks for reading. Any questions, comments or corrections are greatly appreciated. To learn more about what we can do with Amazon Connect, check out Helping You Get the Most Out of Amazon Connect.

Single Sign-On With Amazon Connect And Azure Active Directory
https://blogs.perficient.com/2018/06/18/single-sign-on-with-amazon-connect-and-azure-active-directory/
Mon, 18 Jun 2018 18:00:50 +0000

On March 30, 2018, Amazon announced the general availability of Amazon Connect federated single sign-on using SAML 2.0, stating, “You can enable federated access and controls via any SAML 2.0 compliant identity provider, such as Microsoft Active Directory Services, Okta, Ping Identity, and Shibboleth. Once this is done, agents and managers can sign in to Amazon Connect through your identity provider portal with a single click, and without a separate username and password for Amazon Connect.”

Amazon provides detailed instructions for enabling single sign-on, which further link to specific identity provider instructions. Missing from this list of identity providers is Microsoft Azure Active Directory (Azure AD). This post briefly covers how to enable single sign-on to Amazon Connect with Azure AD.

 

 

Azure Active Directory and Single Sign-On

While working on our Amazon Connect Toolkit for Microsoft Dynamics 365, we started thinking about single sign-on between Azure AD and Amazon Connect.

For Amazon Connect agents working in Microsoft Dynamics 365, it would be convenient to use their Azure AD credentials for both Dynamics and Amazon Connect.

Microsoft provides detailed instructions for single sign-on to the AWS Console.

When these steps are complete, you will be able to log in to the AWS Console using Azure AD credentials by clicking on the Amazon Web Services app in your Azure Applications page:

 

 

Getting to Amazon Connect

In my testing, I found I could not access Amazon Connect from the AWS Console at this point. Navigating to my Amazon Connect instance, I got a “Session Expired” notice.

To get access to Amazon Connect, I had to take two additional steps, both referenced in the Amazon Connect documentation on single sign-on:

  1. Grant the IAM Role permissions to my Amazon Connect instance

  2. Change the Relay State URL to point to my Amazon Connect instance

For step 2, I had to go to the Azure Portal to make the necessary changes. I went to Azure Active Directory, Enterprise Applications and selected the Amazon Web Services app I created during the single sign-on setup.

Within the Amazon Web Services app, I went to Single sign-on and clicked on Show advanced URL settings. I entered a Relay State URL with my Amazon Connect instance id as shown below:
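The value I used followed the general federation URL pattern for Amazon Connect. Region and instance id below are placeholders; verify the exact format against the Amazon Connect single sign-on documentation:

```
https://us-east-1.console.aws.amazon.com/connect/federate/<your-instance-id>
```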

Once I saved this change, I was able to log in to Amazon Connect from the Amazon Web Services app in Azure!

 

Misc. Notes

  • Within Azure AD, you can only have one Amazon Web Services app and therefore one Relay State URL; so while it would be nice to direct agents directly to the CCP by adding ?destination=%2Fconnect%2Fccp and admins to the Admin Site by having different URLs per user type, I’m not sure that’s possible with Azure AD

  • If your agents are using a custom CCP, they should log in via the Amazon Web Services app in Azure first and then your custom CCP will load up just fine

Thanks for reading. Any questions, comments or corrections are greatly appreciated. To learn more about what we can do with Amazon Connect, check out Helping You Get the Most Out of Amazon Connect.

Amazon Connect Streams API Changelog #2: Open Pull Requests
https://blogs.perficient.com/2018/06/04/amazon-connect-streams-api-changelog-2-open-pull-requests/
Mon, 04 Jun 2018 17:00:36 +0000

In my last post, I reviewed the recent changes (approved pull requests) to the Amazon Connect Streams API (Streams). In this post, I’m taking a look at the open pull requests, i.e., changes suggested by Amazon and community members. As of June 1, 2018, there are four open pull requests:

Each pull request (PR) is identified by a unique number. I’ll use the PR number as I discuss these suggested changes below.

 

 

Documentation Updates (PR #21 and #39)

Community members Franco Lazzarino (https://github.com/flazz) and yours truly (https://github.com/phmiller) have open PRs to update some confusing documentation around the Streams methods contact.getState and contact.getStateDuration. These methods are aliased as contact.getStatus and contact.getStatusDuration in the api.js file. Per the discussion in PR #39, either method should work, so it might be best to leave the documentation as is until one or the other is deprecated.

Franco Lazzarino’s PR #39 is at: https://github.com/aws/amazon-connect-streams/pull/39. My PR #21 is at: https://github.com/aws/amazon-connect-streams/pull/21 and also includes a change to the documentation to clarify that the “user friendly name” you get from agent.getConfiguration().name is the agent’s first name. As this change is still relevant, I will either amend my PR to include only this change or close this PR and open a new one.

TypeScript Tooling Support (PR #41)

Andy Hopper (https://github.com/andyhopp) from Amazon has an open PR to add better tooling support for using TypeScript with Streams. This PR adds a TypeScript declaration file for Streams. This declaration file describes the public interface of the Streams API. Programming editors like Visual Studio Code can use the declaration file to light up type checking and IntelliSense, making developers more productive. While developing a Streams powered Angular application recently, my team used this file and it was quite helpful.

Andy Hopper’s PR #41 is at: https://github.com/aws/amazon-connect-streams/pull/41. I am hopeful it can be approved and improved over time as more developers use it and refine the definition file.

 

 

WebRTC Media Info (PR #64)

Community member karthikbr82 (https://github.com/karthikbr82) has an open PR to surface real-time media data for every call from Streams. This data can then be analyzed and monitored by services such as callstats.io (https://www.callstats.io/).

Karthikbr82’s PR #64 is at: https://github.com/aws/amazon-connect-streams/pull/64.

 

Code Changes for PR #64

This PR is of particular interest to me, as in my post about implementing a mute button (https://blogs.perficient.com/2017/10/26/implementing-a-mute-button-in-amazon-connect/) I also made changes to Streams to get to the underlying real-time media.

I went into the SoftphoneManager class and saved the RTCSession object for the call as a property of the call (contact). The RTCSession object wraps a WebRTC RTCPeerConnection, providing access to real-time media to implement mute in application code.


// SoftphoneManager code
connect.contact(function(contact) {
contact.onRefresh(function() {
session.remoteAudioElement = document.getElementById('remote-audio');
session.connect();
// new code
contact.session = session;
}
}
// Streams powered application code
function muteSelf() {
window.myCPP.contact.session.pauseLocalAudio();
}

PR #64 is more elegant than my hackery, as it adds a session event that is fired when a call is accepted. This event contains the RTCSession object for the call. The code to fire the session event is also in the SoftphoneManager class as shown below.


connect.contact(function(contact) {
  contact.onRefresh(function() {
    session.remoteAudioElement = document.getElementById('remote-audio');
    session.connect();
    // new code
    var bus = connect.core.getEventBus();
    bus.trigger(contact.getEventName(connect.ContactEvents.SESSION), session);
  });
});

Application code can subscribe to this event by using the new contact.onSession method.


// api.js
Contact.prototype.onSession = function(f) {
  var bus = connect.core.getEventBus();
  bus.subscribe(this.getEventName(connect.ContactEvents.SESSION), f);
};

// application code
function subscribeToContactEvents(contact) {
  contact.onSession(handleContactSession);
}

function handleContactSession(session) {
  var rtcSession = session;
  // do something with the RTCSession
}

I’m curious to see if this PR gets approved as it relies on some of the internal implementation details of Streams and Amazon Connect. I’m not sure if Amazon wants the RTCSession being used directly by integrating applications. On the other hand, this PR opens the door for partners, vendors, and community members to implement great features like real-time monitoring.

 

Thanks for reading. If you like these reviews of pull requests, let me know! Any questions, comments or corrections are greatly appreciated. To learn more about what we can do with Amazon Connect, check out Helping You Get the Most Out of Amazon Connect.

Amazon Connect Streams API Changelog #1: Through May 2018
https://blogs.perficient.com/2018/05/31/amazon-connect-streams-api-changelog-1-through-may-2018/
Thu, 31 May 2018 17:00:29 +0000

Amazon Connect Streams API (Streams) allows developers to create custom agent experiences for Amazon Connect. Since my introductory posts back in late 2017 there have been several changes to Streams, some from the community, some directly from Amazon. These changes are managed as pull requests. This post will review approved pull requests from November 2017 through May 2018, and what the changes mean for building custom applications for Amazon Connect.

Pull Requests

The code for Streams is hosted on GitHub and open for community contributions in the form of pull requests. A pull request (PR) is a set of pending code changes that the owner of a GitHub repository can approve (merge) into the main codebase or reject. Each PR has an associated thread for related discussion.

You can see a list of the open PRs for Streams at: https://github.com/aws/amazon-connect-streams/pulls.

You can see all the commits (approved changes) to Streams at: https://github.com/aws/amazon-connect-streams/commits/master.

The PRs approved in the timeframe covered by this post are:

 

 

CCP Browser Compatibility (PR #26 and #30)

PR #26 and PR #30 were community contributions from Sean Romocki (https://github.com/sromocki) that address CCP browser incompatibilities.

PR #26

PR #26 (https://github.com/aws/amazon-connect-streams/pull/26) addressed a pending web browser change that was going to block the Amazon Connect Contact Control Panel (CCP) from working properly.

As described in the PR, the Chrome browser became stricter about granting microphone access to applications hosted in an iframe. Since the CCP is hosted in an iframe by Streams, this change updated Streams to explicitly request permission to use the microphone. If you were working with Streams in this time frame, you had to build the latest copy of Streams to keep the CCP working.

PR #26 was approved on January 23, 2018: https://github.com/aws/amazon-connect-streams/commit/5fc44af68939a2016cc1c6fd08d13793e74d5ee4

Code Changes for PR #26

As we saw in my intro posts, Streams applications start out by invoking the connect.core.initCCP method from the core.js file. initCCP adds an iframe to the application’s HTML page to host the CCP. With the code changes, before adding the iframe, initCCP sets the allow attribute of the iframe to “microphone”. The allow attribute tells Chrome that it is OK to let the iframe use the microphone.


// Create the CCP iframe and append it to the container div.
var iframe = document.createElement('iframe');
iframe.src = params.ccpUrl;
iframe.style = "width: 100%; height: 100%";
iframe.allow = "microphone";
containerDiv.appendChild(iframe);


 

PR #30

PR #30 (https://github.com/aws/amazon-connect-streams/pull/30) was a tweak to the notifications code in Streams that removes a deprecation warning in the JavaScript console in Chrome. This warning is not something the typical end user would look for or notice, but it is good practice to avoid warnings like this for future compatibility.

The code change for this PR was to invoke the permissions callback for notifications within a promise, avoiding the warning.
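As an illustration, the pattern looks something like the sketch below (this is our own sketch of the technique, not the actual Streams code; the function name is hypothetical):

```javascript
// Sketch of the pattern PR #30 applies: instead of passing a callback
// directly to Notification.requestPermission (which triggers a deprecation
// warning in Chrome), resolve the request as a promise and invoke the
// callback when it settles.
function requestNotificationPermission(callback) {
  // Promise.resolve handles both the promise-based and legacy return values.
  return Promise.resolve(Notification.requestPermission()).then(function (permission) {
    callback(permission);
    return permission;
  });
}
```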

PR #30 was approved on May 29th, 2018: https://github.com/aws/amazon-connect-streams/commit/c965fbd347bcc2e18ab3b1d4f3eeb3c92be46e1b

 

 

Code Contribution Guidelines (PR #48)

PR #48 (https://github.com/aws/amazon-connect-streams/pull/48) added a pull request template, a code of conduct and contribution guidelines for the Streams repository. These documents are boilerplate describing how developers interact with the repository when they want to make pull requests. PR #48 was a community contribution from Henri Yandell (https://github.com/hyandell).

PR #48 was approved on May 29th, 2018: https://github.com/aws/amazon-connect-streams/commit/0e4fd831b85a5c97025d33962c913de4801483b8

 

 

Documentation Updates (PR #6, #52, #68, #69)

Amazon fixed grammar mistakes, cleared up some vague language, and made other minor changes in a series of three pull requests (#52, #68 and #69).

An example of the type of changes made here is making sure that the documentation always refers to “Amazon Connect” and not just Connect. These pull requests were a contribution from randalld-aws (https://github.com/randalld-aws).

These pull requests were approved on May 29th, 2018.

Community contributor Bast Leblanc (https://github.com/BastLeblanc) added an example of the CCP URL to the documentation with PR #6 (https://github.com/aws/amazon-connect-streams/pull/6). You need this URL to initialize a custom Streams application, so it is nice to have an additional pointer to it in the documentation.

PR #6 was approved on May 29th, 2018: https://github.com/aws/amazon-connect-streams/commit/83b913c897c02c3e2e60ab30b7e5bebccf2e7e91

 

 

API Updates (mute) and NPM build (PR #61)

PR #61: https://github.com/aws/amazon-connect-streams/pull/61 was created by Amazon’s jagadeeshaby (https://github.com/jagadeeshaby). This PR added mute control to Streams, made it easier for developers to include the Streams library and fixed a number of small bugs.

These changes are described at a high level in the description of pull request #61 (https://github.com/aws/amazon-connect-streams/pull/61).

 

Mute Control

With these changes, Streams gets 3 new agent object methods to control the agent’s audio: mute, unmute and onMuteToggle. These methods can be used to create mute and unmute controls in a custom CCP. These methods are documented at: https://github.com/aws/amazon-connect-streams/blob/master/Documentation.md.
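As a quick illustration, here is a hypothetical toggle helper built on the three new methods. The agent parameter is the Streams agent object delivered by connect.agent(callback); makeMuteToggle is our own name, not part of the Streams API:

```javascript
// Hypothetical helper built on the new Streams agent methods
// (mute, unmute, onMuteToggle); "makeMuteToggle" is our own name.
function makeMuteToggle(agent) {
  var muted = false;

  // Track the mute state broadcast by the softphone via MUTE_TOGGLE events.
  agent.onMuteToggle(function (data) {
    muted = data.muted;
  });

  // Flip between mute() and unmute() based on the last known state.
  return function toggle() {
    if (muted) {
      agent.unmute();
    } else {
      agent.mute();
    }
  };
}
```

A custom CCP could bind the returned toggle function to a single button's click handler, relying on the MUTE_TOGGLE event to keep the state in sync even if mute is changed elsewhere.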

Code

https://github.com/aws/amazon-connect-streams/pull/61/commits/7f42a364d1f9ec55a1dd42063137626d5b100ca0

The new mute control methods are in the api.js file, hanging off the agent object.


Agent.prototype.onMuteToggle = function(f) {
  var bus = connect.core.getEventBus();
  bus.subscribe(connect.AgentEvents.MUTE_TOGGLE, f);
};

Agent.prototype.mute = function() {
  connect.core.getUpstream().sendUpstream(connect.EventType.BROADCAST,
    {
      event: connect.EventType.MUTE,
      data: { mute: true }
    });
};

Agent.prototype.unmute = function() {
  connect.core.getUpstream().sendUpstream(connect.EventType.BROADCAST,
    {
      event: connect.EventType.MUTE,
      data: { mute: false }
    });
};

The mute and unmute functions broadcast MUTE events upstream, i.e. to any other listening components, with a mute flag of true to mute, false to unmute.

What other components are listening? Our old friend softphone.js, last seen in my post on implementing a mute button in Amazon Connect (https://blogs.perficient.com/2017/10/26/implementing-a-mute-button-in-amazon-connect/). In softphone.js we see changes in this PR to subscribe and handle MUTE events.


// Bind events for mute
var handleSoftPhoneMuteToggle = function() {
  var bus = connect.core.getEventBus();
  bus.subscribe(connect.EventType.MUTE, muteToggle);
};

// Make sure once we disconnected we get the mute state back to normal
var deleteLocalMediaStream = function(connectionId) {
  delete localMediaStream[connectionId];
  connect.core.getUpstream().sendUpstream(connect.EventType.BROADCAST, {
    event: connect.AgentEvents.MUTE_TOGGLE,
    data: { muted: false }
  });
};

// Check for the local streams if exists - revert it
// And inform other clients about the change
var muteToggle = function(data) {
  var status;
  if (connect.keys(localMediaStream).length === 0) {
    return;
  }
  if (data && data.mute !== undefined) {
    status = data.mute;
  }
  for (var connectionId in localMediaStream) {
    if (localMediaStream.hasOwnProperty(connectionId)) {
      var localMedia = localMediaStream[connectionId].stream;
      if (localMedia) {
        var audioTracks = localMedia.getAudioTracks()[0];
        if (status !== undefined) {
          audioTracks.enabled = !status;
          localMediaStream[connectionId].muted = status;
          if (status) {
            logger.info(
              "Agent has muted the contact, connectionId - " + connectionId
            );
          } else {
            logger.info(
              "Agent has unmuted the contact, connectionId - " + connectionId
            );
          }
        } else {
          status = localMediaStream[connectionId].muted || false;
        }
      }
    }
  }
  connect.core.getUpstream().sendUpstream(connect.EventType.BROADCAST, {
    event: connect.AgentEvents.MUTE_TOGGLE,
    data: { muted: status }
  });
};

The softphone subscribes to MUTE events in handleSoftPhoneMuteToggle, and when it gets a MUTE event it invokes the muteToggle function. The muteToggle function finds the agent’s media stream and sets its enabled property to false (muted) or true (unmuted). A reference to the media stream is stored in the localMediaStream object whenever a media stream is added to the active call; that happens in the session.onLocalStreamAdded handler.

After the media stream is modified in muteToggle, the softphone publishes a MUTE_TOGGLE event with the new muted status for any upstream listeners.

If you are curious about the code that publishes events “upstream”, you can refer to the source code for the Stream objects at: https://github.com/aws/amazon-connect-streams/blob/master/src/streams.js. The general idea is that a shared worker relays events from the Streams application code to the CCP iframe and from the CCP iframe up to Amazon Connect’s back-end service. For a visual of this interaction see the Architecture diagram at https://github.com/aws/amazon-connect-streams/blob/master/Architecture.md.
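As a toy illustration of the relay idea, a minimal conduit can be sketched like this (our own simplified sketch, not the actual Streams implementation, which works across iframe and worker boundaries):

```javascript
// Toy version of the relay idea: subscribers register handlers for named
// events, and sendUpstream forwards an event to everyone listening for
// that name. The real Streams conduits do this across iframe and worker
// boundaries using postMessage.
function Conduit() {
  var subscribers = {};
  this.subscribe = function (eventName, handler) {
    (subscribers[eventName] = subscribers[eventName] || []).push(handler);
  };
  this.sendUpstream = function (eventName, data) {
    (subscribers[eventName] || []).forEach(function (handler) {
      handler(data);
    });
  };
}
```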

 

NPM Build

NPM (https://www.npmjs.com/) is the most used package manager for JavaScript and this Streams PR adds support for building Streams with NPM. Previously, to use Streams, you had to build it yourself using a buildfile, as I described at https://blogs.perficient.com/2017/10/05/intro-to-amazon-connect-streams-api-part-1/. This type of build added some extra steps for Windows developers and was out of step with common JavaScript library practices.

With this NPM build support, building is simply a matter of having NPM, which most JavaScript developers already have, and running a few simple terminal commands.

Getting Streams to build using NPM is a nice improvement for developers. There are a few minor issues to be aware of, which have been raised as an issue on GitHub (https://github.com/aws/amazon-connect-streams/issues/66) by community member extmchristensen (https://github.com/extmchristensen).

 

Better Logging and Miscellaneous Bug Fixes

The PR doesn’t elaborate on the details of the better logging or bug fixes. There are changes across nearly 20 files, and softphone.js in particular has a large diff between the prior version and the PR. With that said, I skimmed through the diffs and identified some changes to highlight:

  • Clean-up logic in core.js is now invoked on the browser onunload event instead of onbeforeunload. The difference here is subtle: the unload event fires after beforeunload when a page is being unloaded. I assume this provides smoother shutdown behavior when a user closes the browser or navigates away from a Streams application

  • The publication of certain event types like API_METRIC, LOG and MASTER_RESPONSE are no longer logged in event.js. This presumably helps keep the JavaScript console less cluttered in the browser

  • Ringtone Start and Ringtone Stop telemetry events with call info are now published when ringing starts and stops respectively. Callback ringing is distinguished with a Callback Ringtone Connecting event. I’m not sure where these events go or if they are accessible for querying. This new code is in ringtone.js

  • Additional error handling to stop a ringtone if the onAccepted or onConnected callbacks fail. This new code is also in ringtone.js

  • Better handling for calls getting re-routed to the same agent multiple times and better session clean up in softphone.js.

  • Metrics on the responsiveness of the back-end Connect API, i.e. how long it took a given request to complete, are now gathered by the worker in worker.js. See the WorkerClient method

  • Implements an exponential backoff with retry strategy when trying to refresh the auth token for the back-end Connect API. This should make the Streams apps more resilient when kept open for a long time. See connect.backoff in the util.js and then line 575 in worker.js
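The exponential backoff pattern mentioned in that last item can be sketched like this (an illustrative sketch of the general technique, not the actual connect.backoff implementation):

```javascript
// Illustrative sketch of exponential backoff with retry: run fn, and on
// failure retry after a delay that doubles on each attempt, until
// maxRetries is exhausted. fn receives a node-style callback.
function backoff(fn, delayMs, maxRetries, onFailure) {
  fn(function (err) {
    if (!err) {
      return; // success, stop retrying
    }
    if (maxRetries <= 0) {
      return onFailure(err); // out of retries, give up
    }
    setTimeout(function () {
      backoff(fn, delayMs * 2, maxRetries - 1, onFailure);
    }, delayMs);
  });
}
```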

Thanks for reading. Next time we will take a look at some open pull requests for Streams. Any questions, comments or corrections are greatly appreciated. To learn more about what we can do with Amazon Connect, check out Helping You Get the Most Out of Amazon Connect

Automation, CloudFormation and Amazon Connect https://blogs.perficient.com/2018/01/24/automation-cloudformation-and-amazon-connect/ https://blogs.perficient.com/2018/01/24/automation-cloudformation-and-amazon-connect/#respond Wed, 24 Jan 2018 14:36:38 +0000 https://blogs.perficient.com/integrate/?p=5429

CloudFormation is a tool from Amazon used to automate the deployment of AWS services. In this post, we’ll cover some tips and tricks for using CloudFormation to automate deploying Amazon Lambda functions for Amazon Connect. This automation is repeatable, testable and far less error prone than asking a person to do it all by hand. Let’s get started!

 

CloudFormation Templates

AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts.

https://aws.amazon.com/cloudformation/

The simple text files mentioned above are called CloudFormation templates. Templates are written in either JSON or YAML (“YAML Ain’t Markup Language”). We will focus on these templates for the rest of the post.

There is a visual designer in AWS for CloudFormation templates, but we will focus on writing them from scratch. AWS provides a number of sample templates at https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/sample-templates-services-us-west-2.html. The full template language documentation is at: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-reference.html

 

Whither the Weather?

For this post, we will create a template that deploys two Lambda functions that can be used in Amazon Connect contact flows. These functions will get local weather conditions for a city. We’ll use the OpenWeatherMap service. If you’d like to follow along, you can sign up for a free account at https://home.openweathermap.org/users/sign_up

With an account and API access key, we can issue simple HTTP requests to get the current weather in a city, let’s say Chicago:

GET https://api.openweathermap.org/data/2.5/weather?id=4887398&appid=8b2...

And we get back a JSON payload of current weather data (trimmed for space):

{
       "coord": {
           "lon": -87.65,
           "lat": 41.85
       },
       "weather": [
           {
               "id": 701,
               "main": "Mist",
               "description": "mist",
               "icon": "50d"
           },
         ...
       "id": 4887398,
       "name": "Chicago",
         ...
}

We’ll have one Lambda function that gets the current weather for a city and another Lambda that takes a city name and returns the OpenWeatherMap API city id used in the current weather query string.
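For illustration, here is a small helper (our own sketch, not part of the template) that pulls the interesting fields out of the response payload shown above:

```javascript
// Sketch of extracting the fields we care about from the OpenWeatherMap
// response shown above (our own helper, not part of the template).
function summarizeWeather(payload) {
  var data = JSON.parse(payload);
  return {
    cityId: data.id,
    name: data.name,
    // "weather" is an array; each entry carries a condition like "Mist".
    conditions: data.weather.map(function (w) { return w.main; }).join(", ")
  };
}
```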

 

Putting it Together

I’m going to start by showing you the completed CloudFormation template and then we’ll work our way back over the tricky parts. If you’ve never used CloudFormation templates, you can try to follow along, but you’ll feel a lot better if you’ve worked through some of the Amazon examples and documentation first.

You can take a look at the full CloudFormation template here:

https://gist.github.com/phmiller/07c2e220ca3e8d747be7645aaf2b7c64

 

GetOpenWeatherMapCityId Lambda

We’ll start with the Lambda function to get an OpenWeatherMap API city id. The body of this function is declared inline in the template as shown below:

  getOpenWeatherMapCityIdLambdaFunction:
       Type: "AWS::Lambda::Function"
       Properties:
         Description: "Gets the OpenWeatherMap API city id for a city"
         Code:
           ZipFile: !Sub |
             exports.handler = (event, context, callback) => {
               const cityName = event.cityName;
               console.log(
                 "Looking up city id for " + cityName
               );
               //hardcoded for sample purposes;
               var cityId = 4887398; //Chicago
               callback(null, { cityName: cityName, cityId: cityId});
             }
         Handler: "index.handler"
         Role: !GetAtt 
           - "executeOwnLambdaIAMRole"
           - "Arn"
         Runtime: "nodejs6.10"
         Timeout: 8
         MemorySize: 128
       DependsOn:
         - "executeOwnLambdaIAMRole"

The actual JavaScript code here is unimportant. I just hard-coded a return value for Chicago.

What’s more interesting is the Role attribute where we assign the executeOwnLambdaIAMRole to this Lambda function. Every Lambda function needs to be assigned an IAM role so it can execute at all. This is an easy step to miss.

Within the Role attribute is a bit of interesting syntax that we will see again later. Using the !GetAtt operator along with the name of the IAM role and the attribute we want is a common pattern that allows you to refer to other objects within the template.

I defined the executeOwnLambdaIAMRole immediately above the Lambda in the template using the system policy of arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole.

 

getCurrentWeatherForCity Lambda

The getCurrentWeatherForCity Lambda function that we will actually want to call from an Amazon Connect contact flow is declared inline as seen below:

getCurrentWeatherForCityLambdaFunction:
       Type: "AWS::Lambda::Function"
       Properties:
         Description: "Looks up current weather conditions in a city from OpenWeatherMap API"
         Code:
           ZipFile: !Sub |
             const https = require("https");
             const aws = require("aws-sdk");
             const openWeatherMapApiKey = process.env["OpenWeatherMapApiKey"];
             
             exports.handler = (event, context, callback) => {
               const cityName = event.Details.ContactData.Attributes.CityName;
               var lambda = new aws.Lambda({
                 region: "${AWS::Region}"
               });
               var payloadObject = { cityName: event.Details.ContactData.Attributes.CityName };
               lambda.invoke(
                 {
                   FunctionName: "${getOpenWeatherMapCityIdLambdaFunction}",
                   Payload: JSON.stringify(payloadObject)
                 },
                 function(error, data) {
                   if (error) {
                     console.error("Failed to invoke Lambda to get city id", error);
                     callback("Failed to invoke Lambda to get city id: " + error);
                   }
                   if (data) {
                     var cityId = JSON.parse(data.Payload).cityId;
                     console.log("Got city id " + cityId + " for city " + cityName);
                     //make open weather api call
                     const queryString = "/data/2.5/weather?id=" + cityId + "&appid=" + openWeatherMapApiKey;
                     console.log("Querying API for weather with query string of " + queryString);
                     //hardcoded for sample purposes
                     callback(null, { name: "Chicago", weather: "Mist" });
                   }
                 }
               );
             }
         Handler: "index.handler"
         Role: !GetAtt 
           - "executeOwnAndGetCityIdLambdaIAMRole"
           - "Arn"
         Runtime: "nodejs6.10"
         Timeout: 8
         MemorySize: 128
         Environment:
           Variables:
             OpenWeatherMapApiKey: !Sub ${OpenWeatherMapApiKey}
       DependsOn:
         - "executeOwnAndGetCityIdLambdaIAMRole"

Again, the JavaScript logic here is mostly unimportant, but there are a few items to highlight. This function needs the OpenWeatherMap API Key to construct valid requests. Rather than hardcode that value into the function body, we instead set an environment variable, using the CloudFormation parameter OpenWeatherMapApiKey which a user supplies when running the template. We use the !Sub operator and curly braces to get the value.

OpenWeatherMapApiKey: !Sub ${OpenWeatherMapApiKey}

Then, from within the JavaScript function, we access the environment variable in standard Node style from the process object.

const openWeatherMapApiKey = process.env["OpenWeatherMapApiKey"];

Like the previous Lambda function, we assign an IAM Role. In this case the Role is executeOwnAndGetCityIdLambdaIAMRole. This Role has the permissions we saw before in addition to a policy that allows it to invoke the Get City Id Lambda function from code.

Policies: 
           - 
             PolicyName: "invokeCityIdLambda"
             PolicyDocument: 
               Version: "2012-10-17"
               Statement: 
                 - 
                   Effect: "Allow"
                   Action: "lambda:InvokeFunction"
                   Resource:
                     - !GetAtt 
                       - "getOpenWeatherMapCityIdLambdaFunction"
                       - "Arn"

We are using the !GetAtt operator and getting the ARN of a resource we created in the template.

The Lambda function uses the aws-sdk package to invoke the Get City Id function.

 

Amazon Connect Permissions

To use the Get Weather Lambda function from Amazon Connect, we need to grant Amazon Connect permissions on it. We could do this manually through the AWS command line, but in this template, we script it instead. We grant Amazon Connect (the principal) permission to invoke the Lambda function from our AWS account. Within the SourceAccount attribute we use AWS::AccountId, a built-in variable in CloudFormation templates.

    # permission so that connect can invoke it
     getCurrentWeatherForCityLambdaFunctionInvokePermission:
       Type: AWS::Lambda::Permission
       DependsOn: getCurrentWeatherForCityLambdaFunction
       Properties:
         FunctionName:
           Ref: getCurrentWeatherForCityLambdaFunction
         Action: lambda:InvokeFunction
         Principal: connect.amazonaws.com
         SourceAccount:
           Ref: AWS::AccountId

Final Notes

If you found this post interesting, I encourage you to take some time and work through the rest of the template. There are a couple of other useful nuggets in there. For example, the template sets up CloudWatch events to periodically trigger and keep the Lambda functions warm (as discussed in my prior post: https://blogs.perficient.com/integrate/2017/11/27/keeping-lambdas-warm-in-amazon-connect/).

Thanks for reading. Any questions, comments or corrections are greatly appreciated. To learn more about what we can do with Amazon Connect, check out Helping You Get the Most Out of Amazon Connect

Keeping Lambdas Warm in Amazon Connect https://blogs.perficient.com/2017/11/27/keeping-lambdas-warm-in-amazon-connect/ https://blogs.perficient.com/2017/11/27/keeping-lambdas-warm-in-amazon-connect/#respond Mon, 27 Nov 2017 15:42:02 +0000 https://blogs.perficient.com/integrate/?p=5017

One of Amazon Connect’s strengths is the ability to use Amazon Lambda in contact flows to access other AWS services and external systems. Some common use cases are looking up accounts in an external CRM, storing contact data in Amazon DynamoDB or sending emails using Amazon SES or Amazon SNS.

To keep contact flows responsive, Amazon limits any Lambda function’s execution time to 8 seconds after which it will time out. Even 8 seconds would be a long time for the caller to wait on the line for a data dip to complete. So, Lambdas in a contact flow should be designed to return as quickly as possible.

Lambda functions in contact flows execute just like other Lambdas in AWS. They execute within an isolated environment (container) that is stood up when the Lambda is invoked. AWS manages containers for you, and does its best to re-use existing containers. When it cannot, creating a new container can take a little bit of time. Which often isn’t a big deal, but maybe enough to get past the 8 second timeout in a contact flow and cause the invocation to fail.

One approach to handling such a failure is to add retry logic to your contact flow and try again. As a general rule of thumb, you should design your contact flows such that one Lambda failure won’t sink the call.

Along with robust contact flows, you can use Amazon CloudWatch Events to periodically invoke your Lambdas outside of a contact flow. This ensures that the Lambda function stays “warm” with a ready container and is less likely to time out. This technique is widely used for Lambda in other contexts and works well for Amazon Connect. You can set up these Events from the CloudWatch Management Console or from the AWS Command Line Interface (CLI).

CloudWatch Management Console

We’re going to create a single CloudWatch Event rule that triggers every Lambda “target” once every 60 minutes. From the CloudWatch Management Console, we navigate to the Events tab and click on “Create rule”.

In the first step of the wizard, we select “Schedule” and a “fixed rate of” 60 minutes. In the Targets area we use the “Function” drop down to select a Lambda and then the “Add target” button to add additional Lambda functions.

In this example, I’ve selected two Lambda functions: createCustomizedGreeting and getCustomerNameFromIncomingNumber.

In the next step, we name the new rule “KeepContactFlowLambdasWarm” and add a description of “Periodically trigger Lambda functions used in Amazon Connect contact flows so that they stay responsive”.

After we click “Create rule”, we are all set. Every hour our Lambda functions will be triggered. In the future if we add more Lambda functions to our contact flows, we can go back to the CloudWatch Management Console and edit the rule to add more targets.

 

AWS CLI

If you prefer to script this rule and its targets, Amazon has provided a nice tutorial at: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/RunLambdaSchedule.html that shows you the relevant commands and syntax. I don’t have much to add here, other than to suggest you also look into command options like list-targets-by-rule, which gives you a handy way to see which Lambda functions will be triggered.

A Warning Before We Go

When a Lambda is invoked by CloudWatch, it is a real invocation. Your code runs. The cost to your AWS subscription is minimal, but make sure your Lambda function can be executed like this without doing expensive or inappropriate reads of business data or triggering business processes. Within CloudWatch you can customize the event data that is passed into the Lambda, giving you more granular control over what happens during that execution.

Thanks for reading. Any questions, comments or corrections are greatly appreciated. To learn more about what we can do with Amazon Connect, check out Helping You Get the Most Out of Amazon Connect

More natural text to speech with SSML and Amazon Connect https://blogs.perficient.com/2017/11/08/more-natural-text-to-speech-with-ssml-and-amazon-connect/ https://blogs.perficient.com/2017/11/08/more-natural-text-to-speech-with-ssml-and-amazon-connect/#comments Wed, 08 Nov 2017 16:32:10 +0000 https://blogs.perficient.com/integrate/?p=4970

Within Amazon Connect we can build engaging contact flows that use Amazon Polly to prompt callers with text-to-speech utterances. Amazon Polly produces natural sounding speech using deep learning technologies. This is not your old-school and often cringe-worthy “robot” voice.

With that said, let’s look at a few scenarios where we can delight callers by tweaking how Polly speaks certain key items. To do this, we will use Speech Synthesis Markup Language (SSML). Don’t worry, the acronym is probably the hardest part of SSML.

Your account number is fifty-one thousand eight hundred thirty-nine…

Let’s say our contact flow uses Lambda to look up the caller’s account number and we want to confirm that we found the right one. For our first attempt we set the prompt value in a Get customer input node to “Is your account number $.Attributes.customerNumber?” A caller’s account id is 51839 and the caller is prompted with “Is your account number fifty-one thousand eight hundred thirty-nine?” We’d like the caller to hear all those digits pronounced separately.

At this point we could enter a long cycle of tweaking the Get customer input node, saving and publishing the contact flow and then calling back in to test. Instead, we can go over to the Polly console for our AWS account (https://console.aws.amazon.com/polly/home/SynthesizeSpeech) and have a much tighter testing loop.

Once we’re at the Polly console, we select the “SSML” tab and copy and paste our prompt. Polly doesn’t know about our contact attributes here, so we’ll replace $.Attributes.customerNumber with 51839. We can’t quite press “Listen to speech” yet to hear the result though. SSML is similar to XML and requires an enclosing parent speak tag. So our full input is “<speak>Is your account number 51839?</speak>”

SSML lets us specify that each character in a string be read out individually using the say-as tag with an attribute interpret-as of “characters”. There is a similar attribute value of “digits”, but let’s stick to “characters” to handle alpha-numeric account codes as well.

Go ahead and change the input string to include the say-as tag and we end up with: “<speak>Is your account number <say-as interpret-as="characters">51839</say-as>?</speak>”

Much better right? Now just take that input string and copy it into the Get customer input node, making sure to select the “SSML” option from the “Interpret as” dropdown.
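If the prompt is assembled in code, for example in a Lambda that returns prompt text back to the flow, the same SSML can be built with a small helper (a sketch of our own, not a Connect API):

```javascript
// Sketch of building the say-as prompt in code (our own helper, not part
// of Amazon Connect); handy when a Lambda returns prompt text to the flow.
function confirmAccountPrompt(accountNumber) {
  return "<speak>Is your account number " +
    "<say-as interpret-as=\"characters\">" + accountNumber + "</say-as>" +
    "?</speak>";
}
```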

Dramatic pauses

Let’s say we want to present the caller with a menu of options via DTMF or, even better, a Lex bot. Our first attempt at a prompt is “How can we assist you today? Would you like to check your most recent order, create a new order or speak to an agent?” Hurry back to the Polly console and take a listen. Might be nice to have a uniform pause between each option.

Option one is the Oxford comma, so a comma after “create a new order” and before the or. If the pause for the comma still seems a bit fast, option two is to use SSML to insert pauses exactly as long as we want.

For that we use the break tag with a time attribute holding the pause value in milliseconds. So to pause for under half a second per item, we get: “<speak>How can we assist you today? Would you like to check your most recent order <break time="400ms"/> create a new order <break time="400ms"/> or speak to an agent?</speak>”

Candy controversy

For our last example, we want to cycle through some promotions as a caller is in queue. Today we’re offering some free candy with large orders. In our customer queue flow we have a prompt “If you place a large order with us today, we will include a free box of our classic caramel candy at no charge to you”. Delicious.

I forgot to mention our company is based in southern Wisconsin, and we have pretty strong opinions on how to pronounce the word caramel (https://english.stackexchange.com/questions/372583/why-do-north-americans-pronounce-caramel-as-carmel). We drop that middle “a” and so should our contact center.

SSML and Polly have us covered. We can use the phoneme tag to supply a phonetic pronunciation. Phonetic alphabets are tricky, so I got some help from a transcription site online (http://lingorado.com/ipa/) to get the International Phonetic Alphabet version of “carmel”. Our updated prompt looks like: “<speak>If you place a large order with us today, we will include a free box of our classic <phoneme alphabet="ipa" ph="kɑrˈmɛl">caramel</phoneme> candy at no charge to you</speak>” and our callers are hearing it the way we like.

For full documentation on SSML and Amazon Connect, check out the developer page on AWS at: https://developer.amazon.com/docs/custom-skills/speech-synthesis-markup-language-ssml-reference.html

Thanks for reading. Any questions, comments or corrections are greatly appreciated. To learn more about what we can do with Amazon Connect, check out Helping You Get the Most Out of Amazon Connect

Amazon Connect JavaScript Libraries: Lily CCP, Streams and connect-rtc https://blogs.perficient.com/2017/11/06/amazon-connect-javascript-libraries-lily-ccp-streams-and-connect-rtc/ https://blogs.perficient.com/2017/11/06/amazon-connect-javascript-libraries-lily-ccp-streams-and-connect-rtc/#respond Mon, 06 Nov 2017 18:56:55 +0000 https://blogs.perficient.com/integrate/?p=4857

Quick post here to describe the various JavaScript libraries that you may run into doing custom Amazon Connect development.

Lily CCP

When you load the Amazon Connect Contact Control Panel (CCP), it loads a JavaScript library called Lily CCP that contains all the code needed to work with the Amazon Connect CTI Web Service and WebRTC APIs. This includes the RTCSession and SoftphoneManager classes we saw in earlier posts. This API is not documented by Amazon and can be considered a private API that we happen to be able to peek into via browser tools.

Streams API

The Amazon Connect Streams API (Streams) is a public API, documented on GitHub at: https://github.com/aws/amazon-connect-streams. It gives you as a developer access to some parts of Lily CCP, such as the SoftphoneManager, and also provides events and methods for call control and agent status. When Lily CCP loads, it creates a global window object of window.connect, but there’s almost nothing hanging off of it we can use. When Streams loads, it adds the objects and methods like window.connect.contact() and window.connect.core.initCCP() that we use in custom applications. Again, these methods are not new to Streams; they are actually in Lily CCP, but are not exposed via the global window object.

connect-rtc

Amazon Connect connect-rtc.js (connect-rtc) is another public API, also on GitHub at: https://github.com/aws/connect-rtc-js. It gives you as a developer access to even more of Lily CCP, in this case some of the WebRTC call objects. In past examples this let us get to the underlying media streams involved in an Amazon Connect call. Like Streams, connect-rtc mostly takes methods and objects from Lily CCP and attaches them to the global window.connect object, in this case the window.connect.RTCSession and window.connect.RTCErrors objects.

 

Loose ends

  • Streams and connect-rtc also add their objects and methods to a global window.lily object. I'm not quite sure why, but perhaps it's a left-over from when Lily CCP used that object.
  • Because everything hangs off the global window object, you could provide your own JavaScript source files that add the same or modified objects to window.connect and thereby “patch” in whatever new behaviors you want without modifying the Streams or connect-rtc source. This can be an appealing approach if you want to keep your changes to Streams or connect-rtc isolated.
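To illustrate the patching idea from the second bullet, here is a sketch of wrapping an existing method on the global connect object with extra behavior, without touching library source. initCCP is a real Streams method; the logging wrapper itself is hypothetical.

```javascript
// Sketch: wrap a method on the connect object with logging, preserving
// the original behavior and return value.
function wrapWithLogging(connectObj, methodName) {
  var original = connectObj.core[methodName];
  connectObj.core[methodName] = function () {
    console.log(methodName + " called with " + arguments.length + " arguments");
    return original.apply(this, arguments); // delegate to the original
  };
}

// In the browser, after Streams has loaded:
// wrapWithLogging(window.connect, "initCCP");
```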

I hope this gave you a quick tour of the JavaScript libraries in play for Amazon Connect. Thanks for reading. Any questions, comments, or corrections are greatly appreciated. To learn more about what we can do with Amazon Connect, check out Helping You Get the Most Out of Amazon Connect.

Amazon Connect: Softphone Info and Incoming Calls (Tue, 31 Oct 2017)
https://blogs.perficient.com/2017/10/31/amazon-connect-softphone-info-and-incoming-calls/

In a previous post on the connect-rtc.js library for Amazon Connect (https://blogs.perficient.com/integrate/2017/10/26/implementing-a-mute-button-in-amazon-connect/), I mentioned that this library needs the “softphone media info” from the Streams API to be able to establish a WebRTC call between the agent and the caller. Let’s take a look at what this softphone media info is and why connect-rtc.js needs it. Fair warning, this gets us into some low-level technical details of Amazon Connect and WebRTC.

When you initialize the Streams API in your application with a call to initCCP, a lot of setup work happens. Streams creates an iframe and loads the Contact Control Panel (CCP) code from the URL you passed in. Streams establishes a connection (conduit) to the iframe to exchange messages and also subscribes to the internal event bus for agent state notifications. In addition, CCP, even when its UI is hidden, communicates with the Amazon Connect CTI Service, a JSON web API for controlling Connect calls and agents. For more details on all the layers involved, see the architecture documentation at https://github.com/aws/amazon-connect-streams/blob/master/Architecture.md and the core.js file in the source code on GitHub.

In our prior examples, we hooked up an event handler in Streams for incoming contacts (calls) via the connect.contact(function (contact) {}) method. Amazon Connect triggers this handler when it routes a call to your agent. Within this handler, we can get to the softphone media info through the contact.getAgentConnection().getSoftphoneMediaInfo() method.
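A short sketch of that handler chain. getAgentConnection and getSoftphoneMediaInfo are real Streams methods; the helper name and logging are just illustrative.

```javascript
// Sketch: pull the softphone media info off an incoming contact.
function logMediaInfo(contact) {
  var mediaInfo = contact.getAgentConnection().getSoftphoneMediaInfo();
  console.log("Call type:", mediaInfo.callType);
  console.log("Auto accept:", mediaInfo.autoAccept);
  return mediaInfo;
}

// In the browser, wired into the Streams contact subscription:
// connect.contact(logMediaInfo);
```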

An example softphone media info object for an incoming call looks like this:

  {
    "callType": "audio_only",
    "autoAccept": false,
    "mediaLegContextToken": "sCTlZyQA5VeRqKsKz...",
    "callContextToken": "sCTlZyQA5VeRqKsKz5s1Z...",
    "callConfigJson": "{
        \"iceServers\": [
          { \"credential\": \"w7zETEkww2goK...\",
            \"urls\": [\"turn:52.55.191.227:3478?transport=udp\"],
            \"username\": \"150...@amazonaws.com\" },
          { \"credential\": \"z8I...\",
            \"urls\": [\"turn:52.55.191.236:3478?transport=udp\"],
            \"username\": \"150...@amazonaws.com\" }
        ],
        \"protocol\": \"LilyRTC/1.0/WSS\",
        \"signalingEndpoint\": \"wss://rtc.connect-telecom.us-east-1.amazonaws.com/LilyRTC\"
      }"
  }

From this object we can see that this is an audio-only call and that the agent must manually accept it, along with some tokens identifying the incoming call and a list of “ICE servers”. So, what do we do with all of this, and how does it help the agent take the call from Amazon Connect?
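One wrinkle worth a sketch before going further: callConfigJson is itself a JSON-encoded string inside the media info object, so it needs its own JSON.parse before the ICE servers and signaling endpoint can be used. The helper name here is ours; the field names come from the example above.

```javascript
// Sketch: parse the nested callConfigJson string out of the media info.
function extractCallConfig(mediaInfo) {
  var callConfig = JSON.parse(mediaInfo.callConfigJson);
  return {
    iceServers: callConfig.iceServers,               // TURN servers + credentials
    signalingEndpoint: callConfig.signalingEndpoint  // WebSocket signaling URL
  };
}
```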

Establishing a WebRTC call, signaling and ICE

When I’m talking about the incoming call from Amazon Connect, I’m talking about a peer-to-peer connection between Amazon Connect and the agent, with the caller’s audio (media) flowing to Amazon Connect and then to the agent, and vice versa. This peer-to-peer connection uses WebRTC and the WebRTC RTCPeerConnection class, which Amazon Connect wraps in the RTCSession class.

WebRTC calls can be logically divided into two separate parts, signaling or metadata about the call and media or the content of the call. Establishing a WebRTC call is the process of using signaling to agree on what type of content is being exchanged between peers and the best path for that content.

Agreeing on the type of content

Agreeing on what type of content the peers in a WebRTC call will share is done through the Offer-Answer process. Peer A starts with an Offer, which it sends to Peer B. Peer B takes a look at the Offer and responds with an Answer to Peer A.

As described in the Mozilla developer documentation (https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API/Connectivity), the content of an Offer or Answer is called a Session Description and describes the media format, type, transfer protocol and originating IP address. These messages are formatted using the Session Description Protocol (SDP).
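In the standard browser WebRTC API, the Offer half of that exchange looks roughly like this generic sketch (not Amazon's exact code): create the Offer, store it as the local description, then ship the SDP over whatever signaling channel is in use.

```javascript
// Sketch: create an Offer and send its SDP over a signaling channel.
function sendOffer(pc, signalingChannel) {
  return pc.createOffer()
    .then(function (offer) {
      return pc.setLocalDescription(offer).then(function () {
        // offer.sdp holds the Session Description (SDP) text
        signalingChannel.send(JSON.stringify({ type: offer.type, sdp: offer.sdp }));
        return offer;
      });
    });
}
```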

Who’s offering?

In our case, an agent taking a call from Amazon Connect, I at first assumed Peer A would be Amazon Connect and Peer B would be the agent, since Amazon Connect is “ringing” the agent. That would mean Amazon Connect sends an Offer to the agent, and the agent responds with an Answer.

This is in fact not the case. For Amazon Connect calls, the agent is Peer A and sends an Offer to Amazon Connect. Amazon Connect is Peer B and responds with an Answer.

This surprised me, but there are good reasons for it. First, Peer A gains no special powers or privileges over Peer B by initiating. Someone has to start, but from there on it is a negotiation and conversation between equal peers.

Second, because the agent’s browser does not connect to the signaling channel until they are notified of an incoming call, the timing works out better to have the agent send the Offer. In short, nobody would be happy if Amazon Connect fired off an Offer to the agent’s browser over the signaling channel and the agent’s browser wasn’t even listening for it.

This requires a bit more explanation.

Signaling channel

As mentioned above, when CCP initializes, it establishes a connection to the Amazon Connect CTI API service and that allows us to be notified in code when a call is incoming. In that notification, we get the softphone media info object. And within that object we see a “signalingEndpoint” property set to a value of “wss://rtc.connect-telecom.us-east-1.amazonaws.com/LilyRTC”. This is the WebSocket address of the signaling channel.

With that WebSocket address in hand, CCP, Streams, or our own custom code can now create an RTCPeerConnection to Amazon Connect that uses that signaling channel for Offers and Answers. This is done in the SoftphoneManager class, in the new-contact handler code, with the RTCSession constructor:

  
  var session = new connect.RTCSession(
      callConfig.signalingEndpoint,
      callConfig.iceServers,
      softphoneInfo.callContextToken,
      logger,
      contact.getContactId());

Later on in the same method in the SoftphoneManager, we see a call to session.connect(), which sends an Offer over the signaling channel to Amazon Connect.

Before we look at the content of that Offer, back to the timing issue. If Amazon Connect wanted to send the Offer, it would essentially have to spam the signaling channel with repeated Offers, because the Amazon Connect servers wouldn't know whether the agent's browser had connected to the channel yet. In addition, delivering the location and protocol of the signaling channel with every incoming call gives Amazon Connect some flexibility to change either of those if needed.

Offer and answer

OK, on to the Offer itself. Remember, the whole point here is to agree on what type of media we are sending around and find a path for it. Here’s a snippet (in SDP) of an Offer from the agent to Amazon Connect:

  
  o=- 270861860651928968 2 IN IP4 127.0.0.1
  s=-
  t=0 0
  a=group:BUNDLE audio
  a=msid-semantic: WMS s0FPcRYAyTu9e3lO5lBGTrYoeehY4CK8dOzd
  m=audio 9 UDP/TLS/RTP/SAVPF 111 103 104 9 0 8 106 105 13 110 112 113 126
  c=IN IP4 0.0.0.0
  a=rtcp:9 IN IP4 0.0.0.0
  a=ice-ufrag:QqOF
  a=ice-pwd:b7bo5qFQ5W4HH4Qp38n54zNf
  a=ice-options:trickle
  ...

Without going into every detail of the Offer-Answer process or SDP, we can identify a few key elements. The “o=” element is the originator info, giving us a session id and originating IP address, in this case 127.0.0.1, i.e. localhost. The “m=” element is the media descriptor, in this case audio. If you want more on SDP, there is an excellent interactive post on the WebRTC Hacks blog (https://webrtchacks.com/sdp-anatomy/) and a drier, but still informative, Wikipedia entry (https://en.wikipedia.org/wiki/Session_Description_Protocol).
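Since each SDP line is just a one-character type, an “=”, and a value, picking out elements like “o=” or “m=” is simple string work. A small helper sketch (not part of any Amazon library):

```javascript
// Sketch: return the values of all SDP lines of a given type ("o", "m", "a", ...).
function sdpLines(sdp, type) {
  return sdp.split(/\r?\n/)
    .filter(function (line) { return line.indexOf(type + "=") === 0; })
    .map(function (line) { return line.slice(2); }); // drop the "x=" prefix
}

// sdpLines(offerSdp, "m") would return the media descriptor lines,
// e.g. ["audio 9 UDP/TLS/RTP/SAVPF 111 103 ..."]
```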

That Offer goes out over the WebSocket to Amazon Connect, which responds with an Answer like:

  
  o=AmazonConnect 1508415694 1508415695 IN IP4 10.1.3.169
  s=AmazonConnect
  c=IN IP4 10.1.3.169
  t=0 0
  a=msid-semantic: WMS ErAOgTGTi77mBzKkZs0bV59azwtyKZPe
  m=audio 23248 UDP/TLS/RTP/SAVPF 111 110
  a=rtpmap:111 opus/48000/2
  a=fmtp:111 useinbandfec=1; minptime=10
  a=rtpmap:110 telephone-event/48000
  a=silenceSupp:off - - - -
  a=ptime:20
  a=sendrecv
  ...

In the originator info we see this is Amazon Connect, and that it's coming from an address inside Amazon's network (a private 10.x IP). The media descriptor agrees on audio as the media type, and the “a=” attribute immediately following specifies that both peers should use the OPUS codec for media encoding.

Where to? ICE, STUN, TURN

With the Offer and Answer exchanged, the peers agree they want to exchange OPUS encoded audio. However, we cannot just start exchanging audio bits over the signaling channel. We need a direct (as possible) media path between Amazon Connect and the agent.

Finding the best media path is handled with the Interactive Connectivity Establishment (ICE) technique. If we imagine a world where everyone had their own static IP address, ICE would be so simple as to be irrelevant. However, we have NAT, firewalls, and other obstacles that complicate finding a path.

To find the best path, ICE uses STUN and TURN. STUN servers essentially tell you what your public-facing IP is. TURN servers are media relays: if there is no direct path, both peers bounce media through the TURN server. For more on this, plus some nice diagrams, see https://temasys.com.sg/webrtc-ice-sorcery/.

In the example we’ve been following, we will use TURN and hence relay media. We have two indications that this is the case. First, in the softphone media info we see that the listed ICE servers are TURN servers, e.g. “turn:52.55.191.227:3478?transport=udp”, and that we are given credentials to connect to them (you don’t want just anybody using your server to relay media).

Second, if we dig deep enough into the RTCSession object we can see that when it creates the underlying RTCPeerConnection it passes in the ICE server addresses as parameters, along with a parameter to specify that only relay (TURN) IP addresses should be considered as media path endpoints:

  ...
    key: "_createPeerConnection",
    value: function _createPeerConnection(configuration) {
      return new RTCPeerConnection(configuration);
    }
  }, {
    key: "connect",
    value: function connect() {
      var self = this;
      var now = new Date();
      self._sessionReport.sessionStartTime = now;
      self._connectTimeStamp = now.getTime();

      self._pc = self._createPeerConnection({
        iceServers: self._iceServers,
        iceTransportPolicy: "relay",
        bundlePolicy: "balanced" // maybe 'max-compat', test stereo sound
      }, {
        optional: [{
          googDscp: true
        }]
      });
  ...

ICE Candidates

A high-level view of the ICE technique is that each peer sends the other a set of ICE candidates, or IP addresses, for the other peer to try out and see whether they are reachable for sending media. So Peer A says, “I’m at IP x or IP y.” Peer B says, “OK, I can get to you at IP y; I’m at IP z and IP a.” Peer A says, “OK, I can get to you at IP z.” Now media can flow between IP y and IP z. In other words, the peers have exchanged ICE candidates and chosen the best ones.
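In the standard WebRTC API, that exchange is wired up roughly like this generic sketch (not Amazon-specific): each side forwards its own candidates over the signaling channel and adds the ones it receives from the remote peer.

```javascript
// Sketch: exchange ICE candidates between a peer connection and a
// signaling channel that has send() and an onmessage handler.
function wireIceExchange(pc, signalingChannel) {
  // Local candidates: the browser fires this as it discovers addresses
  pc.onicecandidate = function (evt) {
    if (evt.candidate) {
      signalingChannel.send(JSON.stringify({ candidate: evt.candidate }));
    }
  };
  // Remote candidates: arriving from the other peer via signaling
  signalingChannel.onmessage = function (msg) {
    var data = JSON.parse(msg.data);
    if (data.candidate) {
      pc.addIceCandidate(data.candidate); // returns a Promise
    }
  };
}
```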

And our call is connected! Here are the parts in visual form:

…but wait, there’s one more mystery to be solved.

When did the agent answer the call?

Nowhere in this post did I mention the agent clicking the Accept button or otherwise taking an action to accept the incoming call. Yet, I just said the call is connected. WebRTC has done its thing and media is flowing. Still not sure? Take a look at the JavaScript console logs in our past sample applications or when running the CCP while a call is ringing to you. You’ll see the Offer and Answer SDP in the logs.

At this point though, the agent cannot hear the caller or speak to them and vice versa. Remember, the agent has just established a WebRTC connection to Amazon Connect. Amazon Connect has its own connection to the caller where it’s playing hold music. In order to support recording and mix the audio properly with multiple participants, Amazon Connect must be using some kind of mixer (perhaps an audio MCU) to control who hears what and when.

So the right people are connected, but until Amazon Connect gets the right signal, it isn't sending the caller's audio to the agent and vice versa. That signal comes from the agent's browser when they click Accept, at which point Streams or the CCP sends an accept message to the Amazon Connect CTI API, as shown below:

  
  Contact.prototype.accept = function(callbacks) {
    var client = connect.core.getClient();
    var self = this;
    client.call(connect.ClientMethods.ACCEPT_CONTACT, {
      contactId: this.getContactId()
    }, {
    ...

In turn, this message must tell the mixer to cut off the hold music and let the caller and agent hear each other. Given that the Offer-Answer process and ICE can take a little time, establishing the call before the agent accepts it means the agent does not perceive any connection delays.
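From custom Streams code, sending that same accept signal is a single method call on the contact object. contact.accept is the real Streams method; wiring it to a button is our hypothetical example.

```javascript
// Sketch: wire an Accept button to the Streams contact.accept() call.
function wireAcceptButton(button, contact) {
  button.addEventListener("click", function () {
    contact.accept({
      success: function () { console.log("Contact accepted"); },
      failure: function () { console.log("Failed to accept contact"); }
    });
  });
}
```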

Call auto-establishment also explains how supervisors doing silent joins via the CCP are connected automatically, without clicking to accept the call. For silent joins, the mixer simply sends them the call audio as soon as the call is established.

And that ends our tour of Amazon Connect, softphone media info, and WebRTC. I hope you understand now why I didn't go down this rabbit hole in the last post. Thanks for reading. Any questions, comments, or corrections are greatly appreciated.

Post-script: Logging ICE candidates

Curious to see what ICE candidates are being exchanged and ultimately selected? If you are using connect-rtc.js you get a few tantalizing log messages in the JavaScript console like “SESSION onicecandidate [object RTCIceCandidate]”. So close, but the default console logger isn’t expanding that RTCIceCandidate object.

While working on this post, I tweaked connect-rtc.js, specifically the RTCSession object, to log the string representations of the ICE candidates: first in “onIceCandidate” and then in “onEnter”:

  
  key: "onIceCandidate",
  value: function onIceCandidate(evt) {
    var candidate = evt.candidate;
    // MY HACK
    this.logger.log("onicecandidate", candidate ? JSON.stringify(candidate) : "null");
  ...

  key: "onEnter",
  ...
  setRemoteDescriptionPromise.then(function () {
    var remoteCandidatePromises = Promise.all(self._candidates.map(function (candidate) {
      var remoteCandidate = self._createRemoteCandidate(candidate);
      // MY HACK
      self.logger.info("Adding remote candidate", remoteCandidate ? JSON.stringify(remoteCandidate) : "null");
      return rtcSession._pc.addIceCandidate(remoteCandidate);
    }));
  ...
        

Enjoy! To learn more about what we can do with Amazon Connect, check out Helping You Get the Most Out of Amazon Connect.
