Alex Moisi, Author at Perficient Blogs (Expert Digital Insights)

Populating a DynamoDB table based on a CSV file (Fri, 21 Dec 2018)

We’ve previously detailed the steps necessary to build a holiday calendar and looked at how you can easily upload all your holidays at once. However, so far we’ve only worked with JSON, which is an easy-to-understand format for Node.js, but not necessarily the most intuitive for a human reader. To avoid making mistakes you might want to use a CSV file with dedicated headers for your items. In this blog post we will show you how to set up a Lambda function that can parse a table similar to the screenshot below.

1. Prerequisites

To get started you will need to deploy most of the prerequisites detailed in the two blog posts linked at the top of this article. We won’t walk through creating a DynamoDB table or configuring a Lambda function that can read it, but both of these steps are necessary for the full holiday calendar solution to work.

You will also need to prepare a deployment package for Lambda. Unfortunately, Node.js doesn’t have an easy way to parse CSV files, so we will use an external package to help us. For this blog post I chose to use csvtojson, but numerous other modules are available and the steps to set everything up will be very similar.
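Conceptually, csvtojson maps each data row to an object keyed by the header line. The sketch below is a naive, hypothetical illustration of that mapping (the function name is mine, not part of any package); real-world CSVs need csvtojson or similar, which also handles quoting, escaping and streaming:

```javascript
// Naive illustration of header-based CSV-to-object mapping.
// Only for simple, unquoted input; use csvtojson for anything real.
function naiveCsvToJson(csvText) {
  const [headerLine, ...rows] = csvText.trim().split('\n');
  const headers = headerLine.split(',');
  return rows.map((row) => {
    const values = row.split(',');
    // Pair each header with the value in the same column
    return Object.fromEntries(headers.map((h, i) => [h, values[i]]));
  });
}

const sample = 'holidayStart,holidayEnd,reason\n' +
  'August 10 2018,August 11 2018,Test holiday';
console.log(naiveCsvToJson(sample));
// [ { holidayStart: 'August 10 2018', holidayEnd: 'August 11 2018', reason: 'Test holiday' } ]
```

Each row becomes one object whose keys are your CSV headers, which is exactly the shape we will later push into DynamoDB.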

If you are already familiar with installing a node package feel free to skip to the code section below, otherwise please follow along.

To get started install Node.js on your local machine. You can find the instructions to do so here. Simply grab the most recent version and run the installer. This will also install npm, which is what we need in order to set up the csvtojson package. Once Node is installed open a command line terminal and type npm -v. You should see the version of npm you have installed.

This means everything we need is installed and we can move on. If you get an error try to follow the installation steps again.

Once you have everything working navigate to a new folder and run the following command: npm init -y. You should see something similar to the following screen.

 

This will set everything up so you can run the next command which will actually download the csvtojson package. Simply run: npm install csvtojson

This will download the csvtojson package. You can now use file explorer to navigate to the folder you set up earlier; you should see a folder named node_modules. Go ahead and create a new file and name it index.js.

Select everything except the .bin folder and archive it into a zip file. This will be your deployment package and it should now be ready to upload into Lambda.

2. Uploading a CSV file from S3

Now that we have all the basic steps in place, navigate to AWS Lambda and select “create a new function”. Name it something that will make sense, select Node.js 6.10 and use a role that has access to S3 and DynamoDB. If you need help creating such a role, check out our post on managing your holiday calendar.

Once everything looks good create the function and under Code entry type select upload a .zip file. Upload the deployment package containing the csvtojson package and your empty index.js file.

Once the package is uploaded you can open the index.js file and start adding code. For the most part we will re-use the code we previously wrote to upload data from a JSON file. However, there are a few small changes that will allow us to stream each row of the CSV file and convert it to JSON so we can push it into DynamoDB.

We previously used the S3 getObject method to select entries in our S3 bucket. This time around we will use getObject to create a read stream. This will allow us to stream raw data to the csvtojson parser, which will in turn convert each row to JSON. Finally, we will use the same addData function we used previously to update our DynamoDB table.

 

Here is the code in its entirety. Just paste it into the index file, save the function and you should be ready to test uploading a CSV file.

const AWS = require('aws-sdk');
const csv = require('csvtojson');

const s3 = new AWS.S3();
const docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

exports.handler = (event, context) => {
    const params = {
        Bucket: process.env.bucketName,
        Key: process.env.key
    };

    // Grab the CSV file from S3 as a read stream
    const s3Stream = s3.getObject(params).createReadStream();

    // Parse each row to JSON and push it into DynamoDB
    csv().fromStream(s3Stream)
        .on('data', (row) => {
            let jsonContent = JSON.parse(row);
            console.log(JSON.stringify(jsonContent));

            let paramsToPush = {
                TableName: process.env.tableName,
                Item: {
                    "dateStart": new Date(jsonContent.holidayStart).getTime(),
                    "dateEnd": new Date(jsonContent.holidayEnd).getTime(),
                    "reason": jsonContent.reason,
                    "holidayStart": jsonContent.holidayStart,
                    "holidayEnd": jsonContent.holidayEnd
                }
            };
            addData(paramsToPush);
        });
};


function addData(params) {
    console.log("Adding a new item based on: ", JSON.stringify(params.Item));
    docClient.put(params, function(err, data) {
        if (err) {
            console.error("Unable to add item. Error JSON:", JSON.stringify(err, null, 2));
        } else {
            console.log("Added item:", JSON.stringify(params.Item, null, 2));
        }
    });
}

Once you save this code in your function make sure you create the three environment variables pointing to the bucket, the file and the DynamoDB table. Once that is complete you should be ready to upload your CSV file into S3. You can of course configure a trigger on the bucket for any new objects or simply run this function with an empty test event.
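The per-row transformation is simple enough to check in isolation. Below is a sketch of the mapping from one parsed row to the DynamoDB put parameters; the helper name buildPutParams and the table name are mine, and I use an ISO date string so the parsing is unambiguous across engines:

```javascript
// Sketch: map one parsed row to DynamoDB put parameters.
// buildPutParams and the 'HolidayCalendar' table name are hypothetical.
function buildPutParams(tableName, row) {
  return {
    TableName: tableName,
    Item: {
      dateStart: new Date(row.holidayStart).getTime(), // epoch ms, the numeric primary key
      dateEnd: new Date(row.holidayEnd).getTime(),
      reason: row.reason,
      holidayStart: row.holidayStart, // keep the human-readable strings too
      holidayEnd: row.holidayEnd
    }
  };
}

const params = buildPutParams('HolidayCalendar', {
  holidayStart: '2018-08-10T00:00:00+09:00',
  holidayEnd: '2018-08-10T23:59:00+09:00',
  reason: 'Test holiday'
});
console.log(params.Item.dateStart); // 1533826800000
```

If a row produces NaN for dateStart, the put will fail, so it is worth logging the parsed dates while testing.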

Hopefully this will help you better manage your DynamoDB holiday calendar and easily upload changes. For assistance configuring this solution or help with any Amazon Connect related topic please Request an Amazon Connect Demo.

Maintaining your holiday calendar by bulk uploading data (Sat, 15 Dec 2018)

We’ve looked before at the steps necessary to build a holiday calendar in DynamoDB. One of the advantages of keeping all your closure times tracked in a database is that you can easily update when the call center should be available, modify the closure reasons or add a new holiday with minimum effort and no changes to code. However, because of the way DynamoDB is structured, uploading a lot of items at once can be a bit cumbersome. Here are, for example, the steps necessary to accomplish a batch upload using the AWS CLI.

To help with this issue and make it easier to automate maintaining your holiday calendar, we will walk you through a few Lambda functions that can bulk upload data from S3. This will be a two-part blog post, with this section focused on uploading a JSON file specific to the holiday calendar while the second part will cover uploading a CSV file.

1. Prerequisites

Before we can look at the code itself there are a few items that need to be configured. You will need a DynamoDB table to write the dates into, an S3 bucket to store the data input and a Lambda role that can interact with both.

Get started by creating a DynamoDB table and an S3 bucket. Make sure they will be in the same region as the Lambda function and name them anything that makes sense for your environment. Also make sure the DynamoDB table primary key is a number. To make it easier later on name the primary key dateStart or make sure you update the code below appropriately.
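A numeric primary key matters because epoch timestamps can be compared with plain number operators. As a quick, hypothetical illustration of how a contact flow Lambda might check whether the center is closed right now (the helper and item below are mine, for illustration only):

```javascript
// Sketch: epoch timestamps stored as numbers compare directly.
// The holiday item and isClosed helper are hypothetical.
const holiday = {
  dateStart: Date.UTC(2018, 7, 10, 0, 0),   // Aug 10 2018 00:00 UTC, stored as a number
  dateEnd: Date.UTC(2018, 7, 10, 23, 59)
};

function isClosed(epochMs, h) {
  return epochMs >= h.dateStart && epochMs <= h.dateEnd;
}

console.log(isClosed(Date.UTC(2018, 7, 10, 12, 0), holiday)); // true
console.log(isClosed(Date.UTC(2018, 7, 11, 12, 0), holiday)); // false
```

Storing the key as a string would force lexicographic comparisons, which break for dates in different formats.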

Once you have your resources created navigate to IAM and set up a role for Lambda.  In order to have a function update your DynamoDB table Lambda must have “dynamodb:PutItem” permissions for DynamoDB and “s3:Get*”,”s3:List*”  action permissions for the S3 bucket we will use to upload our files. I recommend creating two separate policies and only granting access to the relevant resource.

For example in my environment for the role used by Lambda I have a “DynamoDBPutAccess” policy that looks like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "dynamodb:PutItem"
            ],
            "Resource": [
                "arn:aws:dynamodb:us-east-1:553456133668:table/DYNAMODBTABLENAME"
            ],
            "Effect": "Allow"
        }
    ]
}

As well as a “S3ReadAccess” policy that looks like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": "arn:aws:s3:::UPLOADBUCKET/*",
            "Effect": "Allow"
        }
    ]
}

As always when working with Lambda it also makes sense to grant your role a policy that will allow it to interact with CloudWatch and write logs.
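If you prefer an inline policy over the AWSLambdaBasicExecutionRole managed policy, the CloudWatch Logs permissions typically look like this:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*",
            "Effect": "Allow"
        }
    ]
}
```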

Once you have your Lambda role it’s time to decide what kind of data you will upload.

2. Uploading a JSON file from S3

The easiest way to upload data into our holiday calendar is by using a JSON file as a starting point. This will require the minimum amount of conversion, and we can use the built-in JSON functions.

Start off by declaring the resources we will use and load in two environment variables, “bucketName” and “fileName”. These will point to the source of our holiday file. (Don’t forget to also create these two variables in your environment; alternatively you can decide to hardcode the path. Keep in mind the way variables are used in the code below is at times unnecessarily verbose in hopes of making it easier to follow.)

const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

exports.handler = (event, context, callback) => {
    const bucketName = process.env.bucketName;
    const keyName = process.env.fileName;

    readFile(bucketName, keyName, readFileContent);
};

After declaring all our variables we call a readFile function that takes in our bucket, our file name and a function to be called if we successfully find data.

function readFile(bucketName, fileName, onFileContent) {
    const params = { Bucket: bucketName, Key: fileName };
    s3.getObject(params, function (err, data) {
        if (!err)
            onFileContent(fileName, data.Body.toString());
        else
            console.error("Unable to find data object. Error JSON:", JSON.stringify(err, null, 2));
    });
}

We can simply use the s3 method getObject to pull in data, log any error we encounter or send the data we collected from the fileName to our readFileContent function.

function readFileContent(fileName, content) {
    let jsonContent = JSON.parse(content);
    for (let i in jsonContent) {
        let holidayStart = jsonContent[i]['holidayStart'];
        let holidayEnd = jsonContent[i]['holidayEnd'];
        let dateStart = new Date(holidayStart).getTime();
        let dateEnd = new Date(holidayEnd).getTime();
        let reason = jsonContent[i]['reason'];

        let params = {
            TableName: process.env.tableName,
            Item: {
                "dateStart": dateStart,
                "dateEnd": dateEnd,
                "reason": reason,
                "holidayStart": holidayStart,
                "holidayEnd": holidayEnd
            }
        };
        addData(params);
    }
}

This is where the data from our JSON file is actually parsed and associated with the right attributes in DynamoDB. We are assuming the DynamoDB table you created has dateStart as the primary key and the JSON file you will use has the following format:

{
  "holiday1": {
               "holidayStart" : "August 10, 2018 00:00 AM GMT+09:00",
               "holidayEnd" : "August 10, 2018 11:59:00 PM GMT+09:00",
               "reason" : "Test holiday"
  },
  "holiday2": {...},
  "holiday3": {...}
}

As you can tell, our function transforms the date in holidayStart to an epoch timestamp and writes it into the dateStart attribute, but it also uploads the date string so you can easily tell what dates are already entered. Finally, we also enter a reason that Amazon Connect can use to dynamically inform the caller why the call center is closed.
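The for...in loop in readFileContent simply walks the top-level holiday keys of that object. Here is a small sketch of that traversal, using ISO date strings (which parse consistently across engines, unlike the long-form strings above, which rely on lenient engine-specific parsing):

```javascript
// Sketch: how for...in visits each top-level holiday entry.
// Sample data only; dates rewritten in ISO form for predictable parsing.
const jsonContent = {
  holiday1: { holidayStart: '2018-08-10T00:00:00Z', holidayEnd: '2018-08-10T23:59:00Z', reason: 'Test holiday' },
  holiday2: { holidayStart: '2018-12-25T00:00:00Z', holidayEnd: '2018-12-25T23:59:00Z', reason: 'Christmas' }
};

for (let i in jsonContent) {
  // i is "holiday1", "holiday2", ...; jsonContent[i] is the holiday object
  console.log(i, new Date(jsonContent[i].holidayStart).getTime());
}
// holiday1 1533859200000
// holiday2 1545696000000
```

Each iteration produces one put against DynamoDB, so two holidays in the file mean two items in the table.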

Something else to note: in the params variable we build, we tell our function which table it should update. Make sure you add an environment variable named tableName and enter the name of your DynamoDB table.

Now that data has been collected and parsed into the right format and we know where it needs to eventually live we just need to actually add it into the DynamoDB table. To do this we’ll invoke the addData function after reading each element in the JSON file. This function will use the DynamoDB client to put an item into our database, or log more details if we hit an error.

function addData(params) {
    docClient.put(params, function(err, data) {
        if (err) {
            console.error("Unable to add item. Error JSON:", JSON.stringify(err, null, 2));
        } else {
            console.log("Added item:", JSON.stringify(params.Item, null, 2));
        }
    });
}

 

That should be all the code needed. Make sure you create the three environment variables mentioned above and point them to the right resources, and you should be ready to upload your JSON file into S3.

Typically I would recommend setting a trigger on the S3 bucket whenever an object is created and invoke this function, but you can also just run it manually with an empty JSON event and your table should populate.
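If you go the trigger route, the bucket notification configuration behind that trigger looks roughly like this (the function ARN below is a placeholder); the Lambda console's S3 trigger panel can set this up for you:

```json
{
    "LambdaFunctionConfigurations": [
        {
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:FUNCTIONNAME",
            "Events": ["s3:ObjectCreated:*"]
        }
    ]
}
```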

Using a JSON file might not be the most convenient way to enter holiday data, so in the next blog post we take a look at uploading data from a CSV file. We will use the core concepts detailed here, but we will need to parse CSV to a format Node.js can work with.

 

Request an Amazon Connect Demo

Invoking Lambda Functions with Amazon Connect (Sat, 01 Dec 2018)

Amazon is continuing to release new features for Amazon Connect at a rapid clip. In this blog post I will take a deeper look at a new change to the contact flow configuration page that can make integrating with Lambda significantly easier. I will also detail some of the downsides you should be aware of when using this new configuration.

Before we get started, here is a screenshot of the new section you can find by navigating to your Amazon Connect instance inside the AWS console and selecting contact flows.

As we’ve noted in several previous blog posts and Amazon points out in the official documentation, for Amazon Connect to properly invoke a Lambda function we need to make sure the function has the right policy applied. You can check all the policies of a function by navigating to Lambda and using the view permissions button. Here is how that might look for a function that has had policies applied via the AWS CLI.

This view is incredibly useful if you want to see all details about who can invoke your function and in turn what services Lambda can access. To add more permissions, you can run a command like the one below within the AWS CLI.

aws lambda add-permission --function-name function:my-lambda-function --statement-id 1 --principal connect.amazonaws.com --action lambda:InvokeFunction --source-account 123456789012 --source-arn arn:aws:connect:us-east-1:123456789012:instance/def1a4fc-ac9d-11e6-b582-06a0be38cccf

However, by using the contact flow settings page in Amazon Connect this step is no longer necessary. All you have to do now is select the function you want your call center to invoke and click the add Lambda function button. Once you get a success message you’re ready to reference the function inside your contact flows.

Note that you will still need to use the function ARN. There is no drop-down menu inside the invoke Lambda node that shows all your options the way Lex bots are available inside the get customer input node. However, the helpful copy button next to your function name will grab the ARN for you, so there is no real need to go into the Lambda console.

Another thing to note is that when navigating to a function that is configured as detailed above the view permissions button will not display the lambda:InvokeFunction action. This can make it a bit harder to keep track of what permissions are applied to each one of your functions.

 

Not having a specific policy applied to your Lambda function means that you will not be able to control exactly which AWS account can invoke Lambda or set things up so that your function can be used by any account or instance. This can be an issue if you have a Lambda function used by multiple instances of Amazon Connect. You will also not be able to use this configuration for Lambda functions in other AWS regions.

Finally, if you are deploying multiple Lambda functions and don’t want to go into a GUI for each one, it might still be easier to use CloudFormation to apply the proper policies when the function is created. In case you’ve never used CloudFormation to apply policies to a Lambda function, here is how that could look.

"AmazonConnectUpdateRights": {
    "Type": "AWS::Lambda::Permission",
    "Properties": {
        "FunctionName": {
            "Ref": "FUNCTIONNAME"
        },
        "Action": "lambda:InvokeFunction",
        "Principal": "connect.amazonaws.com",
        "SourceAccount": {
            "Ref": "AWS::AccountId"
        }
    }
}
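For context, that Permission resource sits under the Resources section of a full template. A minimal, hypothetical skeleton might look like this (FUNCTIONNAME is a parameter you would define, or a reference to a function declared in the same template):

```json
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "FUNCTIONNAME": { "Type": "String" }
    },
    "Resources": {
        "AmazonConnectUpdateRights": {
            "Type": "AWS::Lambda::Permission",
            "Properties": {
                "FunctionName": { "Ref": "FUNCTIONNAME" },
                "Action": "lambda:InvokeFunction",
                "Principal": "connect.amazonaws.com",
                "SourceAccount": { "Ref": "AWS::AccountId" }
            }
        }
    }
}
```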

All of that said, this new way to use Lambda with Amazon Connect will certainly be useful for simple deployments with just one Amazon Connect instance. For more tips on how to use Lambda or help with any Amazon Connect related topic please Request an Amazon Connect Demo.

Using AWS to Host a Custom Agent Console, part 2 (Tue, 27 Nov 2018)

In a previous blog post we looked at the steps necessary to set up a custom agent console. We walked through uploading a page to S3 and configuring a CloudFront distribution which can be whitelisted and then used with Amazon Connect.

To keep things simple we didn’t dive into CloudFront settings, many of which can make deploying a custom website a lot more secure and easy to manage. Here are some steps which you might want to consider next time you are configuring a custom agent console.

  1. Keep your test instance easy to update

One of the advantages of CloudFront is that your site will be cached around the world, resulting in faster load times for your users and fewer hits against your origin (in this case the S3 bucket hosting the website). However, if you are testing a new website and are still making frequent changes to code, this feature can actually make your life harder. I can’t even count how many times I’ve uploaded a new index.html file into S3, only to navigate to my CloudFront link and see the old cached version without any of my recent updates.

To get around this you can use versioning (upload an index_v2.html and then navigate directly to it), different directories, or even invalidation rules. For example, by creating an invalidation that covers all files in your scripts folder you can make sure you will always get the most recent JavaScript files. Note that if you do decide to invalidate an entire folder you need to use a wildcard character at the end of your path. See more details here.

The one disadvantage of using invalidation rules is that you need to create them after you have created a distribution, and you will need to wait several minutes for them to take effect. It all depends on where exactly your content is being served from, but I have seen invalidation rules take up to half an hour to go into effect. Because of this, my preferred method of setting up working websites, without cluttering the S3 bucket with multiple versions of the same file, is to set the default TTL to 0.

 

When you first configure your distribution you can decide how long CloudFront should cache your objects. The default TTL will be 86400 seconds which translates to 24 hours, but while testing your website you can easily set this to 0. Changing the default to 0 will make every request to CloudFront go directly to your origin and grab the latest objects.

Make sure you configure this when setting up the distribution for the first time, and remember to update it once the site is finalized.

  2. Lock down the S3 bucket

In our previous blog post we simply configured the S3 bucket hosting our website to be publicly accessible; however, CloudFront can very easily lock down access to the bucket without any downsides to your custom agent console.

All you need to do is select the Restrict Bucket Access option when creating the distribution. You will also need to create an Origin Access Identity (or, if you have one already, re-use it). This is a virtual user identity that CloudFront will use to access your bucket. Make sure you select the “Yes, Update Bucket Policy” option or go in and modify the S3 bucket policy appropriately. Here is what an S3 bucket policy might look like after deploying an OAI.

{
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::BUCKETNAME/*"
        }
    ]
}

This means there is now no way to access the S3 bucket directly and all traffic will have to go through your CloudFront distribution. Here are more details about the OAI.

  3. Allow only select IP addresses

If you locked down the S3 bucket as detailed above, you can use CloudFront to monitor all traffic to your site. However, CloudFront doesn’t really have any robust mechanism for blocking malicious access. That’s where AWS WAF comes into play. AWS WAF is a web application firewall that lets you monitor the requests to CloudFront and can be configured to whitelist or block certain requests.

You should review the official documentation, but the core idea behind WAF is that you select a resource (like a CloudFront distribution), apply certain conditions (most often IP match conditions, but you can also do geo matching or string matching) and then finally decide if you should allow or block requests that match the condition.

Here are the basic steps to block traffic from a certain IP address.

  • Navigate to WAF inside of the AWS console and select Conditions > IP Addresses. Enter the address you would like to block from accessing your site.
  • Create a new web access control list from the Web ACL’s link, name it something appropriate and select the CloudFront Resource you want to block access to.
  • The second panel of the wizard will allow you to create conditions, but we already have the IP addresses we need, so just click next to set up our rule. Create a new rule for when a request does originate from an IP address in the condition we created (in the screenshot below the condition I created is named blockedIP).

  • Finally decide what should happen if we receive a request that fulfills this rule (each rule can have multiple conditions, allowing for very granular control). In this example we want to block any requests from the blockedIP condition so make the following selections.

Note that we set the Default Action to allow all traffic that doesn’t match a rule, you can easily flip this to only allow traffic from a predefined block of IP addresses.

That should be it, once you review and create the ACL traffic from the selected IP addresses will be blocked with a 403 error.

Hopefully this post gave you some ideas on how you can better use CloudFront to deploy your custom agent console or any other custom website. For assistance deploying your own website or help with any Amazon Connect related topic please Request an Amazon Connect Demo.

Create a Basic Agent Console with Queue Metrics (Mon, 29 Oct 2018)

In a previous blog post we covered the steps necessary to host a static website within AWS, more specifically how to deploy a custom agent console that can load in contact attributes. Today we will look at how we can use the Connect API to enhance this custom agent console with live data from the call center. In this use case we will offer agents a quick snapshot of how many calls are waiting in queue and how many other agents are available to help with those calls. However, the same steps detailed below can be used to build a standalone wallboard that doesn’t offer call controls.

Our proof of concept will pull in the number of calls in queue, longest wait time and agent details for all the queues in our environment then display the data on the agent console every 5 seconds. To do this we will use the newly released GetCurrentMetricData operation, which Peter Miller explored in more depth within this blog post. The final console will look like this screenshot.

We will deploy and configure the following resources:

  1. A Lambda function to query the API for metrics
  2. An API Gateway allowing external access to the Lambda function
  3. An enhanced agent console that will periodically request data from our API Gateway

Please note that while our example can work for a small call center it means every agent’s console will end up hitting the Connect API. In production it’s recommended to deploy a Lambda function that periodically updates a DynamoDB table and potentially even aggregates data based on routing profile, so each agent can see only data relevant to their queues. Also, please note that we will not be covering security best practices, in production your API gateway should be secured to prevent unwanted access to your call center metrics.

Configuring Lambda

To start off, create a new function inside your AWS console and name it appropriately. We will use asynchronous functions to pull in data, so select Node.js 8.10 as the runtime, and finally select an appropriate role or create a custom one if needed.

The Lambda function we will build will be invoked by the API gateway and at a minimum it should have “connect:GetCurrentMetricData” permissions for the instance of Connect you want to use. For more details on how to set up the role for this Lambda function check out the example at the bottom of this page.
In my environment I used the following, very permissive, policy to grant Lambda access to all my Connect instances.

{
       "Version": "2012-10-17",
       "Statement": [
         {
           "Sid": "VisualEditor0",
           "Effect": "Allow",
           "Action": [
             "connect:GetCurrentMetricData",
             "logs:CreateLogStream",
             "logs:CreateLogGroup",
             "logs:PutLogEvents"
           ],
           "Resource": "*"
         }
       ]
     }

Once all the basic settings are configured, create the function and add two environment variables which will contain your Connect instanceID and your queueIDs. Here is how this looked in my environment.

The instanceID variable accepts only one ID, which can be found at the end of your Amazon Connect instance ARN, while queueID accepts as many queues as you want to pass in, as long as they are separated by commas. You can find the queue ID by looking at the URL of each queue inside of Amazon Connect. Note that our code will aggregate all the queues and offer one total; if you want to return the data grouped by queue, make use of the Groupings parameter when invoking the Connect API.
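The comma-separated queueID variable is turned into the Queues array with a plain split. A small sketch (the IDs below are made-up placeholders; real ones come from each queue's URL):

```javascript
// Sketch: turning the comma-separated queueID variable into the Queues array.
// 'queue-id-1' and 'queue-id-2' are placeholder values for illustration.
process.env.queueID = 'queue-id-1,queue-id-2';

const queues = process.env.queueID.split(',');
console.log(queues); // [ 'queue-id-1', 'queue-id-2' ]
```

Note that split does no trimming, so avoid spaces around the commas or the resulting IDs won't match your queues.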

You can leave all other settings as default and start working on the actual code.

Please note: As of October 24th, the AWS SDK package available by default inside Lambda does not yet contain the getCurrentMetricData method. Until this is resolved you will most likely encounter a “TypeError: connect.getCurrentMetricData is not a function” error if you use the standard “require(‘aws-sdk’)”. To get around this temporary limitation you can build your own AWS SDK package using these instructions.

After grabbing the AWS SDK package, you can simply use the code below in an index.js file and upload a zip file of the entire deployment to Lambda. For more details on how to upload a deployment package check out the official documentation.

const AWS = require('./aws-sdk');
AWS.config.update({ region: 'us-east-1' });
    

function getCurrentData(){
   const params = {
          CurrentMetrics: [
            {
              Name: 'AGENTS_AVAILABLE',
              Unit: 'COUNT'
            },
            {
              Name: 'AGENTS_ONLINE',
              Unit: 'COUNT'
            },
              {
              Name: 'CONTACTS_IN_QUEUE',
              Unit: 'COUNT'
            },
              {
              Name: 'OLDEST_CONTACT_AGE',
              Unit: 'SECONDS'
            }
          ],
          Filters: {
            Channels: [
              'VOICE'
            ],
            Queues: process.env['queueID'].split(','),
          },
          InstanceId: process.env['instanceID']
        };
    const connect = new AWS.Connect();  
    
    return new Promise(function(resolve, reject) {
    connect.getCurrentMetricData(params, function(err, data) {
        if (err) return reject(err);
      resolve(data);
    });
  });
    
}

exports.handler = async (event, context, callback) => {
    let responseBody = {};
    try {
        console.log('Running getCurrentMetric function');

        let data = await getCurrentData();
        console.log(data.MetricResults[0]);
        data.MetricResults[0].Collections.forEach(function(element) {
            const key = element.Metric.Name;
            const value = element.Value;
            responseBody[key] = value;
        });
        console.log('Building response to send over');
        console.log(responseBody);

        let response = {
            statusCode: 200,
            headers: { "Access-Control-Allow-Origin": "*" },
            body: JSON.stringify(responseBody)
        };
        callback(null, response);
    }
    catch (error) {
        console.log(error);
        let response = {
            statusCode: 500,
            headers: { "Access-Control-Allow-Origin": "*" },
            body: JSON.stringify(error)
        };
        callback(null, response);
    }
};

A few things to note about this function: we are only asking the Connect API for four values: available agents, agents online, contacts in queue, and the oldest contact age. The data returned is a total across all queues, not grouped by queue, and the response is formatted so the API Gateway can parse it and our website can load the data. If we don’t add the Access-Control-Allow-Origin header we will encounter a CORS problem (Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at…). Please note: right now the header is set to accept requests from any website; you should change this to limit access to just your domain.
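As a hedged sketch of that last point, the response-building step could be pulled into a small helper so the allowed origin is easy to lock down (the helper name and domain below are placeholders, not part of the function above):

```javascript
// Hypothetical helper: build the API response with a restricted CORS origin.
// "https://console.example.com" stands in for your agent console's domain.
function buildResponse(statusCode, responseBody, allowedOrigin) {
    return {
        statusCode: statusCode,
        headers: { "Access-Control-Allow-Origin": allowedOrigin },
        body: JSON.stringify(responseBody)
    };
}

const ok = buildResponse(200, { AGENTS_ONLINE: 5 }, "https://console.example.com");
```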

Once the function is uploaded we are ready to configure the API Gateway.

API Gateway

Navigate to the API Gateway section of your AWS console and create a new API. Name it something appropriate like connectMetricsAPI and hit save, then add a new GET method.

Under integration type select Lambda Function and check the Lambda Proxy integration box. This option will make configuring the integration between Lambda and API Gateway easier and it will allow us to pull in the API response without too many configuration changes. For a full breakdown of the differences between using a regular Lambda integration as opposed to Lambda Proxy please review this article.

Enter the name of your previously created Lambda and hit save. The API Gateway will add the necessary policy to your function, so it can be invoked.

Once saved select Enable CORS under Actions and leave all the default settings except the Access-Control-Allow-Origin. In this box add the domain for your agent console.

Finally, deploy the API to a new stage from the Actions menu. You can name the stage anything that makes sense for your environment (for example “UAT” or “production” for the final version). This will generate an invoke URL you can use to test the response from your Lambda function. You should see something like the JSON below.
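The screenshot of the JSON response is not reproduced here, but based on the metrics the Lambda function requests, the body should look roughly like this (the numbers are illustrative):

```json
{
    "AGENTS_AVAILABLE": 2,
    "AGENTS_ONLINE": 5,
    "CONTACTS_IN_QUEUE": 1,
    "OLDEST_CONTACT_AGE": 24
}
```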

At this point our AWS services are configured and we can look at the new agent console code.

Custom Agent Console

The agent console we will deploy will use the Streams API to serve the CCP to agents for call controls, but will also pull in contact attributes and metrics. We will reuse much of the code from this blog post, but I added a new container for the metrics and made some changes to the script. You can download the final version of the page from this github repository.

If you download the code and want to test it, make sure you update the ccpUrl and the metricAPI inside the script.js file.

The main difference from the previous agent console is the getCurrentMetrics function which simply performs a GET XMLHttpRequest on our API gateway, parses the data and calls a function to update our HTML table with new numbers.
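A rough sketch of that polling logic might look like the following; the API URL, table-update function, and element names are stand-ins for what the repository actually ships:

```javascript
// Hypothetical sketch of the metrics polling described above.
var metricAPI = "https://YOUR-API-ID.execute-api.us-east-1.amazonaws.com/production";

// Turn the Lambda response body into [name, value] pairs for the HTML table.
function buildMetricRows(responseBody) {
    return Object.keys(responseBody).map(function (key) {
        return [key, responseBody[key]];
    });
}

function getCurrentMetrics() {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", metricAPI);
    xhr.onload = function () {
        if (xhr.status === 200) {
            // updateMetricsTable would rewrite the table cells in the page
            updateMetricsTable(buildMetricRows(JSON.parse(xhr.responseText)));
        }
    };
    xhr.send();
}

// Poll every 5 seconds when running in a browser.
if (typeof window !== "undefined") {
    setInterval(getCurrentMetrics, 5000);
}
```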

I also added a small function to convert milliseconds to minutes and seconds since the OLDEST_CONTACT_AGE will not return a nicely formatted timer.
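A minimal version of that converter could look like this, assuming the raw value is in milliseconds as noted above (the function name is mine, not the repository's):

```javascript
// Format a raw OLDEST_CONTACT_AGE value (milliseconds) as "m:ss" for display.
function formatAge(ms) {
    var totalSeconds = Math.floor(ms / 1000);
    var minutes = Math.floor(totalSeconds / 60);
    var seconds = totalSeconds % 60;
    return minutes + ":" + (seconds < 10 ? "0" : "") + seconds;
}

formatAge(125000); // "2:05"
```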

Something to keep in mind when testing this agent console is that the getCurrentMetrics function is set to run every 5 seconds, depending on where it is in the run cycle and when a new call entered the system you might see up to 10 seconds of delay in the data. This is particularly noticeable when there is only one call in queue so the oldest contact age increments the longer they wait for an agent.

When testing, if you have only one call in queue, you will also see the metrics for contacts in queue or oldest contact age go to 0 when the call is offered to an agent. The call is essentially no longer in queue until the agent rejects it or misses it, then the metrics will jump back once the call is waiting in queue once more.

Finally, if you’re seeing odd numbers for the metrics, make sure you don’t have an at-capacity path set for your queues or some other logic that can route callers into queues your Lambda function is not returning data for. Using GetCurrentMetricData on the Connect API you should get numbers that closely match the results from the real-time reports available out of the box. If in doubt, check the reports and compare the numbers with the data in your agent console.

Finally, to actually deploy the custom agent console, please follow the steps in this blog post, create an S3 bucket and a CloudFront distribution. Once the webpage is uploaded in S3 and the CloudFront distribution is created you should be ready to test the new agent console.

For help customizing your console or assistance designing your call flows please email Craig Reishus.

Building More Intelligent Lex Bots with Lambda Integration https://blogs.perficient.com/2018/10/08/lexbots/ https://blogs.perficient.com/2018/10/08/lexbots/#comments Tue, 09 Oct 2018 02:59:05 +0000 https://blogs.perficient.com/?p=232205

In a previous blog post we covered the basics of how Amazon Connect can use Lex bots to collect details from the caller and fulfill a simple task such as sending an e-mail or a text message.  By using a Lambda function within the fulfillment code hook of a bot we can trigger pretty much any automated task that Lambda can handle. Once this task is complete and Lex receives the appropriate response it will return to the contact flow and the caller can be routed into a queue or another flow.

This is incredibly powerful if we want our bot to perform a task after it identifies the caller’s intent and collects all the required slots. However, there are times when, depending on a collected value, we might need to collect more details or, after checking a database, we may want to override a slot collected earlier. In this post we will look at ways to build these types of intelligent interactions using more complex Lambda responses.

Let’s start by reviewing the basics of how the two AWS services interact.

When invoking Lambda, Lex will send a JSON payload that will contain the current intent, details about slots as well as information about the bot and some system variables (including where within the Lex bot Lambda has been invoked). For a full description of the entire input please review the official AWS documentation, but for our purposes the most relevant portion is within the current intent section, shown below.

"currentIntent": {
    "name": "BookHotel",
    "slots": {
        "city": null,
        "room": null
    }
}

By referencing this section of the input Lambda can access the intent and slots collected by Lex and do any necessary operations, then return an appropriate response. This response will be different depending on the action you want Lex to take next. Currently there are 5 actions or instructions Lambda can send back to Lex.

"dialogAction": {"type": "ElicitIntent, ElicitSlot, ConfirmIntent, Delegate, or Close"}

The body of each action will look a bit different. Here is, for example, what a "close" action response looks like:

const response = {
    "dialogAction": {
        "type": "Close",
        "fulfillmentState": "Fulfilled",
        "message": {
            "contentType": "PlainText",
            "content": "Your hotel has been booked."
        }
    }
};

This is an example of the typical response a fulfillment Lambda function would send back to Lex after it completed whatever action is necessary. It will let the bot know that everything went fine, it can now play the content of the message to the caller and return to the contact flow.

However, in case something is not fine, and we need to have the Lex bot collect additional details we can use the elicit slot response. As the name might imply, this dialog action will let the bot know it must go back and collect a new slot value. For the example below let’s say our Lambda code checked a database and discovered that king rooms are not available in Chicago. Our function can send back the following response which will inform the caller they need to choose a queen or single room.

const response = {
    "dialogAction": {
        "type": "ElicitSlot",
        "message": {
            "contentType": "PlainText",
            "content": `We're sorry but King is not supported for ${event.currentIntent.slots.city}. Please choose Queen or Single`
        },
        "intentName": "BookHotel",
        "slots": event.currentIntent.slots,
        "slotToElicit": "room"
    }
};

Something to keep in mind is that if you are building this kind of response, your fulfillment Lambda function should make sure to have a “close” path that actually fulfills the caller’s intent when all values are valid. A potentially infinite loop that keeps requesting the same slots is very likely to trigger an error like: “Invalid Lambda Response: Reached second execution of fulfillment lambda on the same utterance”.

Something else to consider is that it’s also possible to use Lex simply to collect data and validate everything within Amazon Connect. This could look something like the screenshot below. We use Lex to collect the city and room and pass both parameters into a Lambda function that checks a database and lets us know if the entries are invalid. In this example we ask the caller to enter their details again, but we could also set up a separate intent (or a separate bot) for collecting city or room and route the caller to this new get input node.

 

This approach can make a lot of sense if we’re only collecting one or two entries from the caller and the Lambda function doesn’t need to do anything except validate these. It might also be easier to manage for call center supervisors as they can review the logic in the contact flows instead of having to read code inside of AWS Lambda. Finally, this approach also allows you to more easily customize the behavior if one of the entries is not valid. By handling validation within Connect we can build a more detailed menu to handle exceptions instead of relying on the prompt text from an elicit slot response. That said, if you need to do more complex validation of entries the dialog code hook is probably the best option.

So far we have only covered invoking Lambda as the final step of the Lex bot, inside the fulfillment code hook. However, we can also invoke functions inside the validation (dialog) code hook. The main difference is that for validation Amazon Lex invokes the specified Lambda function on each user input (utterance), while the fulfillment Lambda function will only be invoked once all slots are filled out.

Since the dialog code hook will be invoked after each utterance, even before all slots have customer entries, we will need to use a different type of response. If the room slot value is null because we haven’t asked the caller to make a room selection yet, we don’t want to use an elicit slot response and bypass the regular slot collection mechanism. Instead, we can make use of the delegate dialog action. Delegate essentially passes all details back to Lex and lets it decide the next appropriate action, which may be to collect the next required slot or fulfill the intent.

A delegate dialog action response could look like this:

 

const response = {
    "dialogAction": {
        "type": "Delegate",
        "slots": {
            "city": "Seattle",
            "room": null
        }
    }
};

This response will let Lex know the city input of Seattle is a valid entry and room is currently null, so it should be collected next. Note that you should be careful not to hard-code null as an option that could be repeated as Lex will try to collect that slot only to have it overwritten by the null Lambda response again and again. For a step by step breakdown on how delegate works with Lex please review this example.

Note that using delegate, you can also overwrite the customer input. Maybe instead of Seattle your system needs to use the metropolitan area of “Seattle-Tacoma-Bellevue”. After performing the validation check the delegate response can simply return the new entry to Lex and the bot will pass it on to Amazon Connect once the data collection is complete.
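As a sketch of that idea (the metropolitan-area mapping and function name are made up for illustration), the validation code could rewrite the slot before delegating:

```javascript
// Build a Delegate response that replaces the collected city with its metro area.
function delegateWithMetroArea(currentSlots) {
    const slots = Object.assign({}, currentSlots);
    if (slots.city === "Seattle") {
        slots.city = "Seattle-Tacoma-Bellevue";
    }
    return {
        dialogAction: {
            type: "Delegate",
            slots: slots
        }
    };
}
```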

Now that we covered the different options for invoking Lambda and the different responses Lex can receive, let’s look at some pseudo-code for a function that can handle both validation and fulfillment for a hotel booking Lex bot.

 

// --------------- Helpers to build responses which match the structure of the necessary dialog actions -----------------------
function elicitSlot(sessionAttributes, intentName, slots, slotToElicit, message) {
     return {
        sessionAttributes,
        dialogAction: {
            type: 'ElicitSlot',
            intentName,
            slots,
            slotToElicit,
            message,
         
        },
    };
}

function close(sessionAttributes, fulfillmentState, message) {
    return {
        sessionAttributes,
        dialogAction: {
            type: 'Close',
            fulfillmentState,
            message,
        },
    };
}
 
function delegate(sessionAttributes, slots) {
    return {
        sessionAttributes,
        dialogAction: {
            type: 'Delegate',
            slots,
        },
    };
}
  

// --------------- Main handler -----------------------

exports.handler = (event, context, callback) => {
    console.log("incoming event details: " + JSON.stringify(event));
    try {
        console.log(`event.bot.name=${event.bot.name}`);
        console.log("incoming event details: " + JSON.stringify(event));
 
        //Save details from the Lex input
        const city = event.currentIntent.slots.city;
        const room = event.currentIntent.slots.room;
        const source = event.invocationSource;
        
        
        //Check if Lambda is invoked to validate or fulfill the request
        if (source === "DialogCodeHook") {
            const outputSessionAttributes = event.sessionAttributes || {};
            let slots = event.currentIntent.slots;
        
            // Check if any slots have been collected yet and validate the ones that have been collected.
            if (city === null && room === null){
                
                // nothing has been collected yet so we can just pass the null values back to Lex for standard collection
                
                callback(delegate(outputSessionAttributes,slots));             
            }
            else if(city === null && room !== null){
                // This is where you can validate the room type 
                
                if (valid entry) {
                    //The room choice is a valid entry so we can just pass the null slots back to Lex for collection
                    slots = {
                        "city": null,
                        "room": room
                    };
                    callback(delegate(outputSessionAttributes, slots));
                }
                
                else if (not a valid entry){
                    //The room choice is not a valid entry so we will elicit the room slot again. 
                    callback(elicitSlot(outputSessionAttributes, event.currentIntent.name, event.currentIntent.slots, "room",
            { contentType: 'PlainText', content: 'We apologize but that is not a valid entry for room type. Please make a new selection.' }))
                }
                        
            }
            else if(city !== null && room === null){
                // This is where you can validate the city and either delegate the slots back to Lex so we can collect room type or elicit the city slot again.                 
            }
            
            else{
                  // This is where you can validate both entries if necessary and delegate the valid slots back to Lex so we can fulfil the intent or elicit the necessary slots again.        
            }       
        }
        
        else if (source === "FulfillmentCodeHook"){
            const outputSessionAttributes = event.sessionAttributes || {};

            //This is where you will enter the code that will fulfill this intent and finally let Lex know that this request has been closed successfully
            callback(close(outputSessionAttributes, 'Fulfilled', { contentType: 'PlainText', content: "Your hotel has been booked." }));
        }
    } catch (err) {
        callback(err);
    }
};

While several chunks are missing on purpose and the main handler could certainly be simplified, hopefully this function gives you a good idea of how to structure your own Lambda functions. For more ideas on how to improve your contact flows and assistance deploying your Lex bots please email Craig Reishus.

 

Dynamic Queue Hold Messages Using the Get Metric Node https://blogs.perficient.com/2018/06/27/dynamic-queue-hold-messages-using-the-get-metric-node/ https://blogs.perficient.com/2018/06/27/dynamic-queue-hold-messages-using-the-get-metric-node/#respond Wed, 27 Jun 2018 19:26:49 +0000 https://blogs.perficient.com/?p=228502

Last week Amazon announced the release of a new node for Amazon Connect: Get Metrics. It is available from the Set menu in regular contact flows, customer queue, hold and even whisper flows and allows us to dynamically query queue data.

While the Check Queue Status node (available under the Branch menu in some contact flows) already gave us some capabilities to check on a queue, this new node adds more variables and opens up a lot more options. The official announcement points out a few ways we can now improve routing, however, thanks to two new metrics, “Metrics.Queue.OldestContactAge” and “Metrics.Queue.Size,” we can now also offer a much more robust queue hold experience.

 

Before using the Get Metrics node, a typical queue hold flow could look similar to the screenshot below. We would loop an audio clip and interrupt it every minute or so to check the queue status and, based on the longest time in queue, play back a hard-coded message. Each one of the prompts in the image below would have a text like, “The longest time in queue right now is approximately 2(or 4 or 6 or 8) minutes,” then we would maybe offer a callback option for times over 6 minutes.

The disadvantage of this option is obvious. Not only are the messages very rough approximations but we need to build several branching paths to make sure we capture all possible combinations. This is where the Get Metrics node comes in. We can reference its results using the $.Metrics.Queue.OldestContactAge notation and, instead of several branches, we can build something like this.

 

In the screenshot above, the prompt has the following text, “The longest time in queue is approximately $.Metrics.Queue.OldestContactAge”. Instead of 4 branches with hard-coded messages, we have one dynamic prompt.

However, before calling it a day, we need to pay attention to the OldestContactAge format. This metric is returned as seconds, so your callers will hear, “The queue time is approximately 480,” when the longest wait time is 8 minutes. This is not ideal, but it can easily be fixed with a Lambda function.

Add an invoke Lambda function node and pass in the metric as a parameter.

 

Keep in mind attributes passed in this way will appear as parameters in the Connect JSON payload. If you instead save the metric as a contact attribute using the Set Contact Attributes node, the metric will be accessible under ContactData.Attributes in the JSON payload.
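To make the two locations concrete, here is a hedged sketch of a lookup that handles either path (the attribute name timeInQueue is just an example):

```javascript
// The metric appears under Parameters when passed into the invoke-Lambda node,
// or under ContactData.Attributes when saved with Set Contact Attributes first.
function getTimeInQueue(event) {
    const details = event.Details;
    if (details.Parameters && details.Parameters.timeInQueue !== undefined) {
        return details.Parameters.timeInQueue;
    }
    return details.ContactData.Attributes.timeInQueue;
}
```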

Once passed into Lambda, you can simply divide by 60 to get the minutes or use the modulus operator (%) to get the seconds. That being said, seconds might be too detailed for most call centers, so we will just find the minute wait time and, if it’s less than 1, let the caller know they have less than 1 minute to wait.

Here is the code I used:

exports.handler = (event, context, callback) => {
    let input = event.Details.Parameters.timeInQueue.substr(1);
    (input > 60) ? callback(null, { "time": Math.floor(parseInt(input) / 60) }) : callback(null, { "time": 0 });
};

Something to note is that the variable provided by Get Metrics and passed into Lambda (timeInQueue) had a . before the seconds, which I remove before converting to integer. Also, I am returning an external attribute called “time” which will be 0 if the input was less than 1 minute; you can of course choose to return the seconds or a rounded up result.

With the Lambda result back in Amazon Connect, you can check the external attribute “time” using a Check Contact Attributes node and, if it’s equal to 0, play a prompt letting the caller know they have to wait less than 1 minute; otherwise simply play back the Lambda results using the following text, “The wait time in queue is approximately $.External.time minutes. Please continue to hold.”

 

Our queue hold flow already sounds much better; however, keep in mind the Metrics.Queue.OldestContactAge variable will contain (hopefully not surprisingly) the wait time of the oldest contact in queue. This means your oldest caller in queue will hear the time increasing if they hit this node repeatedly. We always recommend having a max wait time with a custom message and maybe even forcing the caller to a callback if the wait time goes beyond a few minutes. You can also check if any agents are online and play a special message if it looks like everyone left for the day, or maybe trigger a Lambda function that will alert a supervisor.

Also keep in mind that if you don’t have callbacks set to use their own queue and a callback was left waiting overnight, it could potentially skew the numbers for the first callers of the day.

An alternative which can avoid a lot of these issues is to instead check the queue size using $.Metrics.Queue.Size. If there is one call in queue, you can let the caller know they are next in line (queue size will include the current call in its count), and if the queue size is larger than one, you can let the caller know there are approximately X calls waiting for an agent.
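The branching described here is simple enough to sketch directly; the wording below is illustrative, and in Connect you would express it as prompts rather than code:

```javascript
// Pick a hold message based on $.Metrics.Queue.Size (which counts the current call).
function queueMessage(queueSize) {
    if (queueSize <= 1) {
        return "You are next in line.";
    }
    return "There are approximately " + queueSize + " calls waiting for an agent.";
}
```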

 

Hopefully this gave you some ideas on what is possible using the new node for Amazon Connect. For a full list of all metrics available check out the official documentation and for more ideas on how to improve your contact flows please email Craig Reishus.

 

Using AWS to Host a Custom Agent Console https://blogs.perficient.com/2018/05/29/using-aws-to-host-a-custom-agent-console/ https://blogs.perficient.com/2018/05/29/using-aws-to-host-a-custom-agent-console/#respond Tue, 29 May 2018 14:54:14 +0000 https://blogs.perficient.com/?p=226446

While not directly related to core Amazon Connect functionality, there are many reasons to familiarize yourself with how to host your own static website in AWS. Creating a basic website your supervisors can use to check the holiday calendar set up via Lambda and DynamoDB or hosting your custom agent console are just two examples you can easily deploy.

The steps we will follow are:

  1. Create the static website to be hosted.
  2. Set up an S3 bucket with appropriate public permissions.
  3. Create a Cloudfront distribution.
  4. Configure your Amazon Connect instance.

Creating the website

Keep in mind that S3 can only host static websites. This means we will not be able to run any server-side code, however we can still use the streams API to create an enhanced agent console with all the features of the standard CCP and more. We will start by embedding the default CCP into a simple HTML website. The website will also pull in contact attributes and surface them to the agent.  In case you are not already familiar with contact attributes, they are essentially variables that can be associated with each call at any point during a contact flow using the set contact attribute node.

Example of setting 2 attributes via the set attribute node

You can save pretty much any value in a contact attribute, including variables collected by Lambda from an external database or information the caller entered in response to a menu prompt. This makes them a powerful way to offer your agents details about the call they are about to take.

However, keep in mind that once saved as a contact attribute, the information will be tied to the call and will be available via the Contact Search page as well as via the CTR export functionality. This means contact attributes are not a safe way to store any personally identifiable information.

 

Example of an agent console embedding the CCP alongside a contact attributes section.

Our custom agent console will use the Streams API to collect these attributes and surface them to an agent using a custom web page. For more details on what is possible using the API, check out our Streams API blog post, but if you are just looking for a bare bones way to capture attributes and display them for an agent, use this example.

The files provided in the example repository contain a basic index page with some simple style rules and a logo, as well as a packaged version of the Streams API (amazon-connect-v1.2.0-2-g5fc44af.js). If Amazon issues an update to the API, make sure you download and compile the latest version from the official github repo then load it inside your own page.

This leaves probably the most interesting part of the provided files: script.js. This Javascript file contains, in order:

  1. The code necessary to initialize the standard Amazon Connect CCP within a container.
  2. The function subscribeToContactEvents which will be called every time a new call arrives to the agent and will in turn call the appropriate functions to update and clear contact attributes.
  3. The function updateContactAttribute which will grab and cycle through the call attributes, updating an HTML table with the attribute key (attribute name) and matching value.
  4. Finally, the function clearContactAttribute which will be called whenever a call ends and essentially replaces the attribute table with a blank table.
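As a hedged sketch of steps 3 and 4, the table update can be split into a pure row builder and a DOM write; getAttributes() is part of the Streams API contact object, while the attributeTable element id is an assumption rather than the repository's actual markup:

```javascript
// Build table rows from the Streams API attribute map: { name: { name, value }, ... }
function buildAttributeRows(attributes) {
    return Object.keys(attributes).map(function (key) {
        return "<tr><td>" + key + "</td><td>" + attributes[key].value + "</td></tr>";
    }).join("");
}

// Called on each new contact; rewrites the (assumed) attribute table in the page.
function updateContactAttribute(contact) {
    document.getElementById("attributeTable").innerHTML =
        buildAttributeRows(contact.getAttributes());
}
```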

The one component that will need to be modified in the code is the ccpURL variable. Please use your own ccp link to update this section:

   //replace with the CCP URL for the current Amazon Connect instance
   var ccpUrl = "https://MYINSTANCE.awsapps.com/connect/ccp#/";

With our basic website all set up, it’s time to configure everything in AWS.

Creating the S3 bucket

This process will be a bit different depending on whether you plan to use a web domain you registered in Route 53 (or another domain host) or the default CloudFront URL generated by your distribution, which might look something like https://d111111abcdef8.cloudfront.net.

If you are planning on using a specific domain, you should name the bucket the same as the domain.

Example of using a custom domain for your S3 bucket name

Once the bucket is created, navigate to Permissions and add the following bucket policy.

{
   "Version": "2012-10-17",
   "Statement": [
       {
           "Sid": "PublicReadGetObject",
           "Effect": "Allow",
           "Principal": "*",
           "Action": "s3:GetObject",
           "Resource": "arn:aws:s3:::BUCKETNAME/*"
       }
   ]
}

AWS will warn you the bucket will now be public and anyone on the internet will be able to access it. Keep this in mind if you’re planning on hosting anything that could potentially pose a security flaw. However, in our case, the page is simply embedding the Amazon Connect CCP and anyone planning to use it will need an Amazon Connect user account.

Navigate to the bucket properties and select Static Website Hosting. Fill in the necessary details and you can now use the automatically generated endpoint to check your brand new website.

S3 static website hosting configurations

However, we are not ready to pull in the CCP quite yet, and if you used our GitHub example, you might see several errors in the console. In order to properly work with Amazon Connect, we need an HTTPS page, and for that we will use Cloudfront.

Creating a Cloudfront distribution

Navigate to Cloudfront inside AWS, and select Create a Distribution, then select Web Distribution. This should open a long list of options for your new distribution; luckily you won’t need to modify too many of them.

Under Origin Domain Name, select the S3 bucket you just created. Under Viewer Protocol Policy, select Redirect HTTP to HTTPS. You can customize the Object Caching if you want to force the distribution to grab the files from your S3 bucket more frequently; however, if you need to make changes to your page after it’s been published, the safest way is to just invalidate the distribution.

Make sure you select an SSL Certificate; the Default CloudFront Certificate should work well enough. Finally, for Default Root Object, make sure you enter index.html. Hit Save and wait approximately 15 minutes for the distribution to generate. In the meantime, you can check your settings, which should look like this:

 

CloudFront Distribution settings

 

As a final step, navigate to Amazon Connect and under Application Integration, add the Cloudfront domain name with https:// in front of it.

 

Configuring integration with an external website in Amazon Connect

 

If you want to configure a custom domain you can simply handle that within Route 53, otherwise you are ready to use your new custom agent console. For help customizing your console or assistance designing your call flows please email Craig Reishus.

4 Ways to Improve Your Amazon Connect Contact Flows https://blogs.perficient.com/2018/01/16/4-ways-to-improve-your-amazon-connect-contact-flows/ https://blogs.perficient.com/2018/01/16/4-ways-to-improve-your-amazon-connect-contact-flows/#respond Tue, 16 Jan 2018 21:36:17 +0000 https://blogs.perficient.com/integrate/?p=5387

One of the most powerful features available within Amazon Connect is the visual contact flow editor. Within Amazon Connect contact flows are not just used for interactive menus, they allow supervisors to dynamically update the settings for each call entering the system and make sure callers hear personalized and relevant options. This is an area where even a few slight changes can have a high impact and the tips below will help you deliver a much more pleasant caller experience.

Intelligently break up contact flows
One of the ways Amazon Connect allows supervisors control over each aspect of the call center is through a deep integration between settings and contact flows. Oftentimes the caller’s choices must influence numerous configurations. Some examples include the prompts and options callers should hear while on hold or what part of the call should be recorded. Instead of configuring everything ahead of time and manually managing numerous menus, Amazon Connect allows for updating settings directly in the contact flow.

Example of updating language settings based on a menu choice

This approach will deliver a more personalized experience for your callers, but can easily lead to overlapping and difficult-to-manage contact flows. To avoid this, any design phase should include careful consideration of how to set up contact flows in a modular fashion. Any menu that will be reused, such as a callback flow, should be set up as a standalone flow. Other menus should ideally have a single, clearly defined purpose: set queue-specific settings, or gather client input and do a data dip.

A good approach is breaking out the caller experience based on which settings will impact the entire call and which ones are more specific. Start with a general main menu first and continue focusing the experience further down the menu tree. Language, recording type and maybe contact flow logging (if turned on) are all good examples of settings that can be configured in the very first contact flow.

Example of a main menu contact flow

 

Once global settings are configured a queue selection contact flow can determine (now in the appropriate language) which queue the caller should be paired with. This menu can also set queue specific settings if they will override the defaults.  Some of the common settings you will probably want to configure based on the queue selected are customer hold flows, customer queue flows and potentially any whisper flows.

 

Some of the more common queue specific settings

 

While it might take longer to configure this kind of modular approach it will make managing the call center much easier in the long-term, while still providing a customized caller experience. Keep in mind you can use the transfer to contact flow node at any point to send the caller back to another level of the menu if they made a wrong selection or need to hear the previous options again.

Avoid repeating prompts

It’s good practice to start your contact flow with a main greeting, a prompt letting the caller know they have reached a new menu, maybe inform them about your recording policies or regular business hours. However, you will also most likely have multiple contact flows along with transfer to contact flow nodes that will route callers back to the previous IVR.

This setup can cause some problems as callers will hear the main greeting repeatedly, every time they enter the main menu again. One possible solution is creating a separate contact flow dedicated just to greetings, a menu that can be skipped by subsequent transfers. That being said, setting up a separate menu for just one prompt can become confusing. A better approach is demonstrated in the Sample inbound flow (first call experience) deployed by default in every new instance of Amazon Connect.

The sample inbound flow

This set-up might seem a bit confusing at first, but once you understand how Amazon Connect creates and uses contact attributes you will see opportunities to use them everywhere.

Let’s analyze what is happening in this sample flow: as soon as the caller enters the menu, a custom contact attribute, “greetingPlayed”, is checked. If it is found and equals “true” the contact flow skips the greeting. However, since this is the very first entry point into the call center and the attribute has not been defined yet, all new calls will go down the “No Match” path. At this point the contact flow sets “greetingPlayed” to “true” and attaches it to the call. The attribute remains available even after we transfer to a different contact flow, and will be used to skip the greeting the next time we are back in the main menu, since it will match “true” the second time around.
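The branching above can be sketched in a few lines of JavaScript. This is purely illustrative: in Amazon Connect this logic lives in the Check contact attributes and Set contact attributes nodes, not in code.

```javascript
// Illustrative sketch of the "greetingPlayed" routing decision.
// Contact attribute values are always strings, hence 'true' rather than true.
function routeCall(contactAttributes) {
  if (contactAttributes.greetingPlayed === 'true') {
    // Attribute found and matched: skip straight past the greeting.
    return { path: 'skipGreeting', attributes: contactAttributes };
  }
  // "No Match" path: play the greeting, then attach the attribute
  // so it travels with the call into any other contact flow.
  const updated = Object.assign({}, contactAttributes, { greetingPlayed: 'true' });
  return { path: 'playGreeting', attributes: updated };
}
```

Running this twice with the same attribute map mirrors the flow: the first pass plays the greeting and sets the attribute, the second pass skips it.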

Another example of where this kind of contact attribute check could be used is a custom timeout message. Any prompt asking for customer input can route the default or timeout path to a check contact attributes node. If there is no match (in other words, this is the first time the caller did not make a selection), maybe they just need to hear the menu again. If it is their second time, it might be better to let them know the menu could not read their input and route them to a default customer service queue.

 

An example of how to handle “unresponsive” callers

 

Treat queue hold time as a contact flow

Another area where Amazon Connect takes a slightly different approach from other call centers is the configuration of phone hold behavior while waiting in queue. Instead of setting everything up within a configuration menu supervisors will use the set customer queue flow node and direct the caller to a looping contact flow. While callers are waiting for an agent they are essentially placed in a special contact flow which can be configured to interrupt every few minutes and offer them information or ask for a choice.  This makes the queue hold behavior incredibly flexible, but also a bit confusing if you are new to Amazon Connect.

Sample customer queue flow

In the example above the contact flow will repeat the loop prompts, or prompt if you are only playing hold music, and periodically interrupt to go down the timeout branch. By changing the timeout setting supervisors can control how often this will happen. In our example we are interrupting every 2 minutes to check the queue status.

Example of presenting multiple prompts with a 2 minute interruption

By checking the queue status and more specifically the time in queue property we can inform the caller how much longer they have to wait. Some other options available here are: checking staffing, capacity or hours of operation. Ideally we should check hours of operation before placing someone in a customer queue flow, however it is possible someone calling in at 4:59pm will reach a queue that has closed in the meantime.

In our example, if the caller has more than 5 minutes to wait we let them know there is a delay and offer a callback option. Something to keep in mind is that once placed in a customer queue flow there is no straightforward way to transfer the caller back to a previous contact flow such as an earlier menu or a dedicated callback flow. The caller can be transferred to a voicemail via the transfer to phone number option, or be sent to a callback queue using the transfer to queue node.

If the callback option is enabled, consider checking the caller’s phone number and asking them for an input instead of simply using the default phone number. If you need more details on how that can be set up please read our previous blog post on setting up callbacks.

One final consideration is that customer hold flows are different from customer queue flows and as such are more limited. They are only used when an agent places the caller on hold and won’t really offer the caller the option to make a choice in a menu or receive queue status updates. Both contact flows can still invoke Lambda functions though, which can allow custom code to run at pretty much any time during a call.

Make use of AWS Lambda

No discussion about contact flows would be complete without mentioning the invoke Lambda node. A very powerful tool for any contact center, the ability to run custom code and read back the results can greatly expand what can be accomplished within a contact flow. Processing new support tickets, creating a custom holiday calendar, or creating a premium contact flow that has customized greetings and “remembers” the last issue the caller encountered are all examples of what is possible by making use of Lambda.

More details on what exactly is passed into the custom function and what can be read back by Connect are available in the official documentation. Finally, remember to keep crucial functions “warm” using the tips in this blog post.  For help building your own custom functions or assistance designing your call flows please email Craig Reishus.
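As a quick orientation, the sketch below shows a handler reading the event Amazon Connect passes in. The Details.ContactData / Details.Parameters structure follows the documented event shape, but the “vip” attribute and the returned keys are hypothetical examples.

```javascript
// Sketch of a Connect-invoked Lambda handler (hypothetical "vip" attribute).
const handler = (event, context, callback) => {
  const contact = event.Details.ContactData;
  const callerNumber = contact.CustomerEndpoint.Address; // caller's number in E.164 format
  const attributes = contact.Attributes; // attributes set earlier in the flow

  // The return value must be a flat map of string keys and values;
  // nested objects or arrays will cause Connect to report an error.
  callback(null, {
    callerNumber: callerNumber,
    isVip: attributes.vip === 'true' ? 'yes' : 'no'
  });
};
// In the deployed function this would be wired up as: exports.handler = handler;
```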

]]>
https://blogs.perficient.com/2018/01/16/4-ways-to-improve-your-amazon-connect-contact-flows/feed/ 0 196502
Building an Amazon Connect Holiday Calendar in 4 Easy Steps https://blogs.perficient.com/2017/12/20/building-an-amazon-connect-holiday-calendar-in-4-easy-steps/ https://blogs.perficient.com/2017/12/20/building-an-amazon-connect-holiday-calendar-in-4-easy-steps/#comments Wed, 20 Dec 2017 22:42:16 +0000 https://blogs.perficient.com/integrate/?p=5232

Between calendars, the check queue status node as well as the capacity settings for each queue, Amazon Connect administrators have several options to handle a busy work day. However, there are some scenarios where the routing behavior needs to be customized beyond these out of the box options. One example would be setting up a holiday hours calendar well in advance.

By default, the Amazon Connect hours of operation are perfect for setting up regular week-long calendars that handle multiple shifts and routine breaks. However, to handle infrequent changes such as time off for New Year’s Day it makes sense to use Lambda along with a DynamoDB calendar. We’ve previously covered the steps necessary to set up an emergency call flow, and the same concepts can be used to provide the caller with a season-appropriate greeting or make sure they don’t get stuck in a closed queue because the regular calendar was not updated appropriately.

DynamoDB set-up

The first step you will need to take is to create a table to store all your holiday dates. Depending on the exact use case there are a few ways you could set things up. If your call center only takes whole days off it might make sense to just save the month and day, then query the table with today’s date. If you’re only interested in updating the greeting with a custom message, a simpler table that only stores a date and corresponding text to read back would be easier.

For our use case, we want to have a start timestamp for the holiday as well as an end timestamp. This will allow us to call Lambda with a timestamp for each call and check if it’s between any of our predetermined holiday hours then branch based on the results. We will also want to play a different message depending on the holiday so along with the two dates we will save a holiday name as a string. Since the dates should not overlap you can simply use the dateStart as your primary key.

After the table is created add a few items using either the tree or JSON text entry options. Make sure to add a dateEnd and a holiday name alongside your primary key. Also keep in mind that dateStart and dateEnd should be timestamps (for example, midnight December 25th GMT will be represented as 1514160000000). Using a timestamp conversion tool should make things easier, and at the end of the blog we will also go over another way to update the table. After entering a few values your table should look similar to the image below (note that for testing purposes we entered a test holiday, which you should remove before go-live).
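If you have Node.js handy, Date.UTC is a quick alternative to an online converter. Note that months are zero-based in JavaScript, so 11 is December.

```javascript
// Generate the millisecond timestamps for a holiday window.
const dateStart = Date.UTC(2017, 11, 25); // midnight December 25th GMT
const dateEnd = Date.UTC(2017, 11, 26);   // midnight December 26th GMT
console.log(dateStart); // 1514160000000
```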

 

Lambda set-up

With the DynamoDB table in place it’s time to look at the code we will use to check where our date falls in the calendar. Create a new Lambda function and select the author from scratch option. You can name it something that makes sense for you, select the Node.js runtime and make sure you select a role that can access DynamoDB.

In case you don’t already have an IAM role set up for database access you can easily create one using the visual IAM guide. Keep in mind it’s always best to limit access on a need-to-have basis. It might make sense to create a restricted read-only role, and even if you allow full access to DynamoDB it’s best to limit access to just one table via the resource section. Here is an example of how your role for this project might look.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "dynamodb:*"
            ],
            "Effect": "Allow",
            "Resource": "arn:YOUR DYNAMODB TABLE ARN"
        }
    ]
}

Once you have the appropriate role in place you can create the function and enter the following code in the in-line editor.

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({region: 'us-east-1'});

exports.handler = (event, context, callback) => {
    // Return any item whose holiday window contains the current time.
    const scanningParams = {
        TableName: 'connect-holiday-schedule',
        Limit: 100,
        FilterExpression: ':dateNow between dateStart and dateEnd',
        ExpressionAttributeValues: {
            ':dateNow': Date.now()
        }
    };

    docClient.scan(scanningParams, function(err, data) {
        if (err) {
            callback(err, null);
        } else if (data.Items[0] == null) {
            // No holiday found: return a hard-coded reason Connect can branch on.
            callback(null, {reason: 'noHoliday'});
        } else {
            // Connect can only parse a flat key:value object, so return one item.
            callback(null, data.Items[0]);
        }
    });
};

This Lambda function will scan the table and return results where the current date is between the dateStart and dateEnd values. Scanning a table is not necessarily the most resource efficient solution, but for our small holiday schedule it will work well enough. If you expect a much larger table you should consider running a query instead.
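For reference, a query would require a different key schema than the one used above, for example a constant partition key (assumed here to be named calendar, holding a value like "holidays") with dateStart as the sort key. Under that assumption, the parameters might look like this sketch; the result would be passed to docClient.query() in place of docClient.scan().

```javascript
// Sketch only: assumes a re-keyed table with partition key "calendar"
// and sort key "dateStart". Not compatible with the scan-style schema above.
function buildHolidayQuery(nowMs) {
  return {
    TableName: 'connect-holiday-schedule',
    // Key condition: holidays that have already started...
    KeyConditionExpression: 'calendar = :cal AND dateStart <= :dateNow',
    // ...filtered down to those that have not yet ended.
    FilterExpression: 'dateEnd >= :dateNow',
    ExpressionAttributeValues: {
      ':cal': 'holidays',
      ':dateNow': nowMs
    }
  };
}
```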

Something to note is that we are returning a hard-coded “noHoliday” message if we don’t find any results and we are only returning one item back (data.Items[0]) in case of a match. It’s possible that our function will find more than one matching result, but at this time passing back anything beyond a key:value list to Amazon Connect will result in an error.

Something like this will work well.

{
  "dateEnd": 1514277900000,
  "dateStart": 1513617545572,
  "reason": "Test Holiday"
}

 

But if your results look like this Amazon Connect won’t be able to parse it and will throw “Results”: “The Lambda Function Returned An Error.”

[
  { "dateEnd": 1514275200000, "dateStart": 1514102400000, "reason": "Christmas" },
  { "dateEnd": 1514880000000, "dateStart": 1514793600000, "reason": "New Year Day" },
  { "dateEnd": 1514277900000, "dateStart": 1513617545572, "reason": "Test Holiday" }
]

Also note that we are using Date.now(), which will return a timestamp. If you don’t need hours you can just grab the day and month or filter on another property.
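For example, extracting just the month and day from a timestamp:

```javascript
// Extract the month and day when hour precision isn't needed.
function monthDay(timestampMs) {
  const d = new Date(timestampMs);
  return { month: d.getUTCMonth() + 1, day: d.getUTCDate() }; // month as 1-12
}
```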

Finally, if you are not familiar with Node.js you can easily use another programming language to achieve the same result. The getting started with DynamoDB documentation has examples for several languages you can use to query a table.

With the function in place you can run a test with a blank JSON since we are not passing any values into the function. If you are building Lambda functions over Christmas or you set up a test holiday appropriately you should see a reason for celebration being returned. Otherwise you should see the “noHoliday” result.

Before moving to the Amazon Connect contact flow, make sure you add the appropriate permissions via the AWS CLI. You can find more details on how to do this in the official integration documentation.
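The command will look roughly like the following; the function name, statement id, account id, and ARNs below are placeholders, so check the integration documentation for the exact values for your instance.

```shell
# Allow Amazon Connect to invoke the function (all identifiers are placeholders).
aws lambda add-permission \
  --function-name function:checkHolidaySchedule \
  --statement-id 1 \
  --principal connect.amazonaws.com \
  --action lambda:InvokeFunction \
  --source-account 123456789012 \
  --source-arn arn:aws:connect:us-east-1:123456789012:instance/your-instance-id
```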

Amazon Connect set-up

Since we want to make sure all callers are aware the office is closed for New Year’s Day, the contact flow below should be the first entry point into the call center. To keep things easy to manage it’s recommended to set it up in a separate flow that will then transfer to a menu or a regular calendar check after doing the Lambda call.

The very first node should be an Invoke AWS Lambda Function calling the function we created in the previous step. All you will need is the ARN found in the upper left corner of the function page. You can leave the timeout in place since the query should be very fast and we won’t pass in any parameters.

Lambda will automatically generate the date/time when it was called and check if it’s present in the table, returning either a holiday reason or “noHoliday”. In order to verify what comes back you will make use of a Check contact attributes node.

Make sure to enter all holiday options and route them down the appropriate path. For testing purposes you can simply set up a prompt node that will read back the external item found in “reason” ($.External.reason). Keep in mind the noHoliday option will be your default behavior and can transfer into the regular call center flow.

At this point you are essentially done with the Connect integration; however, reading your holiday table is not the easiest task since everything is entered as a timestamp. Adding a new holiday will also take some work, including a trip to a timestamp conversion tool.

Luckily AWS makes it easy to set up a simple static website within S3 that can read entries from a table and even allow supervisors to upload a new holiday.

Setting up an admin website

We will use two slightly different Lambda functions to display and update data on our web page. You can create a Holiday Read function that will grab all data available and look similar to this.

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({region: 'us-east-1'});

exports.handler = (event, context, callback) => {
    // No filter this time: return every holiday in the table.
    const scanningParams = {
        TableName: 'connect-holiday-schedule',
        Limit: 100
    };

    docClient.scan(scanningParams, function(err, data) {
        if (err) {
            callback(err, null);
        } else {
            callback(null, data);
        }
    });
};

It is essentially the same function we used earlier in the Amazon Connect contact flow, but this time we are not filtering down to a single entry; instead we return everything we find in the DynamoDB table.

The table update code will be a bit more involved since it needs to accept 3 data points entered by a supervisor and make sure it can accept a connection from an external website. Please note that the code below will accept requests from any website domain. This is not ideal and in practice you should limit requests to a specific domain by changing the Access-Control-Allow-Origin headers.

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({region: 'us-east-1'});

exports.handler = (event, context, callback) => {
    // The API Gateway proxy integration delivers the POST body as a string.
    const obj = JSON.parse(event.body);

    const params = {
        Item: {
            dateStart: obj.dateStart,
            dateEnd: obj.dateEnd,
            reason: obj.reason
        },
        TableName: 'connect-holiday-schedule'
    };

    docClient.put(params, function(err, data) {
        if (err) {
            callback(err, null);
        } else {
            const response = {
                statusCode: 200,
                headers: {
                    "Access-Control-Allow-Origin": "*", // Required for CORS support to work
                    "Access-Control-Allow-Credentials": true
                },
                body: JSON.stringify({"message": "Success!"})
            };
            callback(null, response);
        }
    });
};

With both functions created you can take a look at configuring API Gateway. This service will allow us to call our Lambda functions from an outside source. If you are not familiar with configuring API Gateway, reading through its getting started documentation is a good place to begin. You will need to create a new API, a resource, and add a GET as well as a POST method.

The GET method should be pretty straightforward and once you select integration type as Lambda Function you will be able to select your Lambda function for reading table entries.

The POST method can be set up as a proxy integration as shown below.

Finally, make sure to enable CORS for the resource you created. You can do this by selecting the resource and navigating to Actions. For testing purposes you can enable access from any origin by using Access-Control-Allow-Origin: '*'. As noted above, this will allow anyone to access your table and should not be used in production.

After enabling CORS you can deploy the API and build the actual website your supervisors can check. There are several ways to call the API Gateway endpoint we just created and display the values it returns, depending on your favorite front-end approach. Here is a GitHub repo with a pretty bare-bones example of how you can use jQuery to display all holidays and allow new entries.
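Whichever front-end library you choose, the POST body just needs to match what the table update function expects. A small helper might look like the sketch below; the endpoint URL in the usage comment is a placeholder for your API's invoke URL.

```javascript
// Build the JSON body for the POST method; field names match the
// table update function (dateStart, dateEnd, reason as timestamps/string).
function buildHolidayPayload(startIso, endIso, reason) {
  return JSON.stringify({
    dateStart: Date.parse(startIso), // stored as millisecond timestamps
    dateEnd: Date.parse(endIso),
    reason: reason
  });
}

// Example usage (placeholder URL):
// fetch('https://your-api-id.execute-api.us-east-1.amazonaws.com/prod/holidays',
//       { method: 'POST',
//         body: buildHolidayPayload('2018-12-25T00:00:00Z',
//                                   '2018-12-26T00:00:00Z', 'Christmas') });
```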

Once you have created your web page you can simply upload it in an S3 bucket configured to host a static website. Now your call center supervisors can see all holidays entered in the system and add new ones if need be.

Hopefully this post gave you some ideas on how you can better customize your own call center, but if you have any questions around configuration, best practices or how to build a unique integration please reach out to Craig Reishus.

]]>
https://blogs.perficient.com/2017/12/20/building-an-amazon-connect-holiday-calendar-in-4-easy-steps/feed/ 5 196489
Setting up Callbacks within Amazon Connect https://blogs.perficient.com/2017/11/21/setting-up-callbacks-within-amazon-connect/ https://blogs.perficient.com/2017/11/21/setting-up-callbacks-within-amazon-connect/#respond Tue, 21 Nov 2017 20:41:42 +0000 https://blogs.perficient.com/integrate/?p=5001

When configuring the call center experience administrators should keep in mind that long wait times will invariably lead to dropped calls and frustrated customers. An easy way to make sure your callers don’t have to wait on hold for a long time when no agents are available is to make use of the callback feature. This will keep a caller’s position in queue, even after they hang up and have an available agent automatically dial the caller when their turn comes up.

Out of the box Amazon Connect has two examples of contact flows making use of callbacks, but between different IVR types, branching nodes and queue settings things can get a bit confusing. This blog post will walk you through all the steps necessary to set up a customer friendly, easy to manage callback contact flow.   

The first step, before diving into the contact flow editor is deciding what should trigger the callback option. It’s not an ideal experience to force callers into a callback if they would rather wait so we will always make sure to offer a menu with multiple options. However, when reaching a menu choice you want to make sure callers have enough information about the queue they are waiting in to make an informed decision.

Here are some of the nodes that we can use to collect current call center status in order to inform our callers how long their wait might be:

Check Queue Status:  This node can check if a queue is at capacity or if the wait time in queue is currently higher than a certain threshold (milliseconds, seconds, minutes or hours).

The best way to make use of this node today for callback purposes is to branch based on time in queue. For example, in the screenshot below we are checking the status of a queue and if the wait time is more than 30 seconds we let the caller know and give them the option to ask for a callback or be placed in queue to wait for 30+ seconds. Something to note: make sure you set up a branch for queue time less than 30, otherwise the branch will go down the no match path for any short wait time.

 

Something to consider is offering a path for wait time over several minutes as well. At that point, you can offer the caller the option of a callback or being transferred to voicemail, since something is clearly going wrong within the queue.

Check Staffing:  This node will check how many agents are available or staffed (available, on call, after call work or custom status) within a queue. This node may be more useful in capturing the current queue status compared to the check queue status node as it’s possible the wait time for a queue shows as less than 30 seconds, but only because there is no one waiting in a queue that just closed for the day.

By checking if there are any available agents you can make sure the caller will be able to talk to an agent immediately, otherwise you can offer a menu with a choice for a callback.

Check Hours of operation: This node will look up the current business hours for a queue or can be set up to check if the caller is within a certain time frame. This second option can be useful during particularly busy times of the day. If you know your call center gets a lot of calls between 12 and 1 every afternoon, you can set up a special check to offer all lunch callers the option of a callback.

Transfer to Queue (at capacity):  Another good place to offer callbacks is the transfer to queue node. While you can check queue capacity within the check queue status node, transfer to queue is the last node in the IVR and will contain the most recent queue data. Depending on how complex your call flow is this might be a better point in the call experience to route callers to a callback menu option.

Once we use a branch node to assess the status and the caller is aware there are no immediately available agents, it makes sense to offer them a callback option through a get customer input node. Here is an example of how that might look. In the contact flow shown below we are checking if any agents are in the basic queue and if no one is available right away we let the caller know they have the callback option (this would be path 1, which is not connected to any options below).

Now it’s time to finally set up the callback.

The most important component of the callback flow will be the Transfer to Queue node. At its most basic, you could set up an entire callback flow just around this node. Try creating a new contact flow, set it up as shown below and connect it directly to an inbound phone number. Make sure you select transfer to callback queue inside the transfer to queue node and play a prompt after a success, otherwise the caller will not hear anything.

 

When you dial in Amazon Connect will store your phone number and automatically use it to place a callback once an agent in the queue you set up (in our case Marketing) becomes available.

If you have a very simple use case this should be all you need to get started with callbacks, simply transfer to a callback queue instead of a regular queue and the caller will be automatically called back at the number they used to dial into the system. If you want the caller to hear a message before being connected to an agent create a new Outbound Whisper flow and set it under the optional settings for the queue.

 

In this case a caller receiving a callback from the marketing queue will hear the default outbound whisper of “This call is not being recorded.”

Something to keep in mind is that today there is no out of the box way to confirm the caller still wants a callback before assigning an agent to the call. The system works in the following way:

  1. It assigns an available agent with skills for the appropriate queue to the callback
  2. The agent needs to accept the callback within the CCP or it will get routed to another available agent
  3. Once the agent accepts the callback Amazon Connect will dial the customer using the queue's outbound settings
  4. Amazon Connect will play any whisper flow configured for the outbound queue
  5. Caller and Agent are connected (even if we reached just the caller’s voicemail)

Your agents should expect they may need to leave a voicemail and call back later.  They should make note of the callback number if possible or at least the time of the call so a supervisor can look up the session later.

 

Of course a nice option would be to collect a preferred callback phone number from the caller in case they called from their office but want a callback on their cellphone.

To enhance the basic callback flow the first step should be collecting the caller’s phone number. You can do this by using a store customer input node. Make sure you select phone number format and add a delay between entries so the caller has time to enter each digit. This will store the preferred callback phone number as a customer input variable.

Once the caller's preferred number is stored you can make use of the set callback number node. This node will validate the number and store it for callback. Make sure the invalid and “not dialable” paths let the caller know their entry will not work and then go back to the phone number collection node. If everything is correct, play a prompt to let the caller know we will call them back and add a transfer to queue node.

Make sure that after adding the transfer to queue node, you select the transfer to callback queue option. This will open several options relevant only for callbacks. You can select an initial delay and a maximum number of attempts for the callback.

 

One last configuration is choosing the queue to use for the callback. This is an optional setting since best practices recommend you set the queue early in the call flow making use of the set queue node. In either case you need to make sure a queue is set for the callback as the system needs the queue settings to properly handle callbacks.

 

After the transfer to queue node is properly set up, the last step is making sure all error branches are handled and a hang up node follows the transfer. The call flow should look similar to this.

Publish the contact flow and place a test call through, requesting a callback. After the contact flow hangs up it may seem like nothing happened; you will need to sign in as an agent associated with the queue containing the callback and set yourself as available.

The CCP should show a callback is connecting and any whisper flow configured for the queue will kick in. Note that this whisper flow will play for the caller, not the agent. This could be a good opportunity to remind them they requested a call back and let them know an agent will talk to them now.

That should be it: you have set up a callback flow that is easy to update and will improve your callers' experience. There are several ways you can expand on this flow; for example, you can use the system Customer Number variable to read back the caller's phone number and ask them if they want to enter a new phone number or would rather use the default.

Hopefully this post gave you some ideas on how you can introduce callbacks into your own call center. Want to talk more with our experts about Amazon Connect solutions? Reach out to Craig Reishus.

]]>
https://blogs.perficient.com/2017/11/21/setting-up-callbacks-within-amazon-connect/feed/ 0 196475
Best practices on using Amazon Connect metrics https://blogs.perficient.com/2017/10/19/best-practices-on-using-amazon-connect-metrics/ https://blogs.perficient.com/2017/10/19/best-practices-on-using-amazon-connect-metrics/#respond Thu, 19 Oct 2017 17:46:25 +0000 http://blogs.perficient.com/integrate/?p=4753

Depending on the use case, call centers can rely on very different KPIs for reporting. What can be a key metric for an inbound call center often turns out to be merely a nice-to-have if the call center focuses on quick outbound calls. In the end, probably the most important reporting feature for a call center solution is giving supervisors the ability to choose among multiple reports, allowing admins to dig into just the data that is relevant for their use case.

Amazon Connect does exactly that, offering several reporting options that cover the most common metrics used by a call center. However, between a dashboard, real-time metrics, historical metrics, contact search, as well as CloudWatch dashboards and logs, it can be a bit confusing to identify the best metrics for each call center and where to find them. In this post we'll look at some common reporting areas and recommend some best practices when setting up an Amazon Connect instance.

While there are many ways to group and analyze call center information, we will focus on:

Productivity metrics. These are KPIs that allow supervisors to identify how efficient agents are during the day and do basic workforce management.

Service quality metrics. These data points will help infer the customer experience and allow supervisors to quickly address pain points.

Agent metrics. Along with quality assurance monitoring of calls and recordings these indicators will help determine how well agents are doing their job.

 

Productivity metrics

Productivity metrics are usually closely tied to agent utilization (total number of calls handled over number of work hours) and ultimately help supervisors make sure agent time is utilized in the most efficient way. These metrics will also help plan headcount and forecast how many agents are needed during particularly busy times.

While Amazon Connect doesn’t have any per-agent costs that need to be budgeted ahead of time, supervisors can use the following metrics to monitor agent productivity:

  • Occupancy (average percent of time agents are actively occupied/connected on a call)
  • Average handle time
  • After work time
  • Number of calls handled and calls missed
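Amazon Connect computes these figures for you, but as a quick illustration of how the definitions above relate, here is a small sketch in Node.js. The call-record shape is hypothetical and used only to make the arithmetic concrete.

```javascript
// Sketch: deriving basic productivity metrics from one agent's day of calls.
// The record shape { answered, handleSeconds } is an assumption for illustration.
function productivityMetrics(calls, shiftSeconds) {
  const handled = calls.filter((c) => c.answered);
  const talkTime = handled.reduce((sum, c) => sum + c.handleSeconds, 0);
  return {
    callsHandled: handled.length,
    callsMissed: calls.length - handled.length,
    // Average handle time: total connected time over calls handled.
    averageHandleTime: handled.length ? talkTime / handled.length : 0,
    // Occupancy: share of the shift the agent spent connected on calls.
    occupancy: shiftSeconds ? talkTime / shiftSeconds : 0,
  };
}

const calls = [
  { answered: true, handleSeconds: 300 },
  { answered: true, handleSeconds: 180 },
  { answered: false, handleSeconds: 0 },
];
const m = productivityMetrics(calls, 3600);
console.log(m.callsHandled, m.callsMissed, m.averageHandleTime);
// 2 1 240 -- occupancy here is 480 / 3600, roughly 13%
```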

One of the most useful reporting tools for monitoring day-to-day productivity is the real-time metrics agent report. This report offers several agent performance metrics, including the four above, and can be configured to show data for an entire day or just a few hours. It also allows supervisors to control agent status remotely and listen in on calls for quality assurance purposes. Something to note about joining a call in silent mode is that the call must be recorded.

 

Agent reports have several filters which determine the agents displayed, including queues, routing profiles, and agent hierarchy. For a complex call center with multiple teams and supervisors it is almost always recommended to set up an agent hierarchy. While this may seem complicated at first, it is a great tool for breaking the call center into smaller units for reporting purposes.

To better understand how to make use of hierarchies consider the following example. Imagine you are running a call center which has locations in two countries and agents in several cities in each country. Your local supervisors want to be able to run reports on just agents in their country or city.

In order to set this up, navigate to Users -> Agent Hierarchy. We will start by configuring the countries as the first level of separation, with cities falling underneath them.

After all the levels of the hierarchy are created it’s easy to set up countries and cities under the appropriate level. Just click on a country after it’s added to fill in the details for the appropriate cities.

Now when creating a new agent, we will select which country and which city they will be working in, which in turn will ensure they only show up in the appropriate reports.

After everything is set up when looking at reports a supervisor in Chicago can easily filter out any agents who are in a different country or outside of Chicago. This report can be saved in order to make it even easier for supervisors to only see relevant data.

You can add up to 5 levels of hierarchy to make sure each supervisor only sees data that is relevant for them. For example, if each city has an IT and a Sales call center which report data differently each business unit can be added as a 3rd level. However, if supervisors will need to report on the entire IT organization rather than geography you should consider creating the Business unit (IT or Sales) as the first hierarchy level then add the agent’s location underneath. For more details on how to make use of agent hierarchy you can review the official Amazon documentation.
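To make the filtering behavior concrete, here is a small sketch of how a two-level hierarchy (country -> city) narrows an agent report. The agent names and data shape are illustrative only; in Amazon Connect the hierarchy itself is configured in the console as described above.

```javascript
// Sketch: filtering agents by a hierarchy path prefix, mirroring what a
// saved report filter does. Agents and levels here are hypothetical examples.
const agents = [
  { name: 'Ana',  hierarchy: ['US', 'Chicago'] },
  { name: 'Ben',  hierarchy: ['US', 'Austin'] },
  { name: 'Carla', hierarchy: ['UK', 'London'] },
];

// Keep agents whose hierarchy starts with the supervisor's path, so
// filtering on ['US'] returns every US city, while ['US', 'Chicago']
// narrows the report to a single location.
function filterByHierarchy(agents, path) {
  return agents.filter((a) => path.every((level, i) => a.hierarchy[i] === level));
}

console.log(filterByHierarchy(agents, ['US']).map((a) => a.name));
// [ 'Ana', 'Ben' ]
console.log(filterByHierarchy(agents, ['US', 'Chicago']).map((a) => a.name));
// [ 'Ana' ]
```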

Another useful tool to visually understand and improve productivity is the dashboard, especially when used as a wallboard that agents can see and use to self-monitor. The dashboard is available from the home page after hiding the initial configuration guide (you can always reopen this guide by using the “See the guide” link in the top right corner).

From the “Configure” button supervisors can select which queues they want to monitor as well as determine what the percentage triggers are for color changes in the two gauges showing Service level and Occupancy.

Keeping this data visible to the entire call center will help identify if agents are falling behind in productivity and will allow supervisors to quickly bring in an overflow team if the service level is not met.

Finally, the best way to analyze historical productivity trends is making use of the agent performance historical metrics report. Something to note regarding all historical reports is that while the data can be grouped in several ways (by agent, by queue, agent hierarchy), some metrics will not be available unless the grouping makes sense. For example, if all data is grouped by queue, the report cannot show agent status data as an agent’s status is not correlated in any meaningful way to a queue.

 

For more details on grouping in historical reports you can review the Amazon documentation.

As with real-time reports, these metrics can be filtered by hierarchy levels to get averages for just the relevant section of the call center. Going back to the example used above, where we have a call center spread out across two countries and several cities, making use of agent hierarchy can quickly tell supervisors how all agents in the US are doing compared to agents in another country. Using a different level of the hierarchy, we can tell how agents in a city are doing compared to agents in other cities.

 

Service quality metrics

Service metrics are usually focused on the caller’s experience; for example, how fast they ended up talking to the right agent or how long the wait in queue was. Combined with caller surveys, these metrics should help supervisors identify ways to improve customer service and streamline the call center. Some typical metrics used to measure service quality are:

  • Average speed of answer
  • SLA (how many calls were answered under a previously agreed upon service level)
  • Average hold time
  • Abandoned rate
  • Call duration
  • First call resolution rate
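The service level in particular is just the share of answered calls that were picked up within the agreed threshold. A minimal sketch, assuming we have a list of queue answer times in seconds (the 20-second target below is a common convention, not an Amazon Connect default):

```javascript
// Sketch: computing a service level (SLA) from answer times.
// answerSeconds: time each answered call waited in queue before pickup.
function serviceLevel(answerSeconds, thresholdSeconds) {
  if (answerSeconds.length === 0) return 0;
  const within = answerSeconds.filter((s) => s <= thresholdSeconds).length;
  return within / answerSeconds.length;
}

const answers = [5, 12, 19, 25, 40, 8];
// 4 of 6 calls answered within 20 seconds -> roughly 67% service level.
console.log(serviceLevel(answers, 20));
```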

 

Supervisors can make use of either the queues or the routing profiles real-time metrics report, depending on how they need the data to be grouped. Something to note is that queue data will contain routing profile sub-groups as well as a link that will pull up all agents that are part of that routing profile, making the queues grouping more useful for most day-to-day usage.

Below is an image showing the difference in grouping options. Note that the queues report also offers a summary of all queue data, while routing profiles can contain duplicates of the queues and therefore don’t offer a summary total.

Something else to keep in mind is that while the service level cannot be configured on a per-queue basis, reports allow supervisors to select the most appropriate SL to report on, in increments of 5 seconds (doubling after 40 and 60 seconds).

One of the metrics not immediately available on any report is the first call resolution rate; Amazon Connect does not have a historical or real-time report for it. However, using the contact search, supervisors can look up all recent calls from a certain phone number or contact ID and gather details on several metrics.

This type of analysis can be very helpful in determining how many callers reach out multiple times, which agents they end up talking to, and how fast their questions are resolved.

An additional metric that may be useful when considering the caller experience is how many calls end up transferred out of the call center, and on how many calls the agent had to consult someone else to resolve the issue. An agent who routinely cannot resolve issues might need more training, or should be moved to a different routing profile that routes easier calls to them first.

Finally, if more in-depth reporting on the caller is needed, Amazon Connect can easily integrate with several CRM systems like Salesforce or ticketing systems like Freshdesk. If a full integration is not needed, it is very easy to make use of a Lambda function that tracks the number of times a caller entered the call center, and maybe even keeps track of the call reason. If this type of tracking sounds interesting, a good starting point could be this premium call flow guide.
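One possible shape for such a tracking Lambda is sketched below. The table name `CallerHistory` and its key schema are assumptions for illustration, and the DynamoDB DocumentClient is passed in as a parameter so the logic can be exercised without AWS; the event path used is the one Amazon Connect supplies to invoked Lambda functions.

```javascript
// Sketch: a Lambda handler that counts how many times a caller has reached
// the call center, keyed by phone number. Table name and schema are assumed.
async function trackCaller(event, docClient, tableName = 'CallerHistory') {
  // Amazon Connect passes the caller's number in the invocation event.
  const phone = event.Details.ContactData.CustomerEndpoint.Address;
  const result = await docClient.update({
    TableName: tableName,
    Key: { PhoneNumber: phone },
    // ADD creates the counter on the first call and increments it afterwards.
    UpdateExpression: 'ADD CallCount :one',
    ExpressionAttributeValues: { ':one': 1 },
    ReturnValues: 'UPDATED_NEW',
  }).promise();
  // Returned attributes become available inside the contact flow.
  return { CallCount: result.Attributes.CallCount };
}
```

In a real deployment `docClient` would be an `AWS.DynamoDB.DocumentClient` instance, and the returned `CallCount` could drive flow branching (for example, routing repeat callers to a priority queue).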

Agent metrics

Besides making sure callers have a positive experience while the call center remains efficient supervisors must also make sure agents are adhering to QA standards and completing their tasks in a reasonable time frame. To make it easier to do this Amazon Connect offers a few agent specific reporting options:

  • Agent status report
  • Agent activity audit
  • Quality assurance analysis tools

Both the agent status report and the activity audit are available under historical metrics and are a bit different from other reports. While most reports in Amazon Connect either show the current call center status or group data to calculate average performance, these two agent reports focus on providing all relevant status data for specific agents.

The agent status report will show how long an agent was online, when they logged on and when they logged off. This is a perfect way to keep track of how long agents work each day. Note that you can also pull all the agents within a routing profile into a report if the work hours of an entire team of agents are needed.

If more details are needed about an agent, for example how long they were in each status, as well as details on the calls they handled, the agent activity audit will provide more detailed information for any given day.

From the agent activity audit, it’s easy to navigate to the contact search page and gather more details about each call the agent handled.

While there are several other ways to slice the data gathered by Amazon Connect, hopefully this post helped point you in the right direction and gave you some ideas on what you should consider while setting up your call center. To talk to our experts about our Amazon Connect solutions, email Craig Reishus.
