Perficient Blogs: Expert Insights

Dell Boomi Enhances its Enterprise Portfolio Wed, 14 Nov 2018 19:58:20 +0000 Did you get a chance to attend, or check out the exciting things that happened at, Boomi World 18 last week? During the event, Dell Boomi announced in a press release the latest additions to its integration Platform as a Service (iPaaS) for today’s enterprise. With a unified platform offering built-in data quality and connectivity across the enterprise, from people, applications, and databases to devices and things, Boomi helps define the iPaaS industry today. While the press release highlighted several of Dell Boomi’s capabilities geared toward the enterprise, one in particular stood out for Perficient: Boomi announced a partnership with Pivotal to offer customers faster deployment of the Boomi iPaaS, with deployment options available for Pivotal Container Service (PKS) and Pivotal Application Service (PAS) environments from the Pivotal Cloud Foundry marketplace. One of Perficient’s general managers, who specializes in our cloud platform solutions group, gave more insight into Perficient’s interest in the Dell Boomi and Pivotal partnership:

“Perficient has strong, strategic partnerships and implementation experience with Dell Boomi and Pivotal, key technologies which support IT modernization. The depth and breadth of our experience and expertise positions us to expertly deliver these platforms, accelerating digital transformation for our clients. Together, these technologies unleash powerful application development and cloud integration capabilities that our customers need.”

Interested in learning more about Dell Boomi’s latest additions to its iPaaS for today’s enterprise? Check out the press release, or head over to their webpage. You can also follow our blog for lessons learned from the conference and download our guide for best practices to get the most value out of the Boomi platform.

Making Sense of Colorado’s New Sales Tax Requirements for In-state and Out-of-state Sellers Tue, 13 Nov 2018 18:30:15 +0000 Editor’s Note: This guest blog post comes courtesy of Gail Cole with Avalara.

Approximately 20 states have started requiring remote sellers to collect and remit sales tax since June 21, 2018, when the Supreme Court of the United States removed the physical presence rule that long prevented states from taxing remote sales. Eight more states will do the same in the coming months, including Colorado, where sales and use tax compliance can be extraordinarily complex.

Colorado’s economic nexus law will impact out-of-state businesses with substantial sales into the state (more than $100,000 or 200 transactions). It won’t affect in-state sellers, though they may have to comply with remote seller sales tax laws in other states. However, Colorado is also implementing new sales tax sourcing rules, and these will affect both in-state and out-of-state sellers.
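The more-than-$100,000-or-200-transactions test above can be expressed as a simple predicate. A sketch, with the function name and the thresholds-as-parameters invented for illustration (whether the comparison is strict or inclusive at the exact boundary varies by state, so check the statute before relying on this):

```typescript
// Colorado-style economic nexus test: a remote seller is in scope when its
// in-state sales exceed the dollar threshold OR its transaction count
// exceeds the transaction threshold.
function hasEconomicNexus(
  annualSalesUsd: number,
  transactionCount: number,
  salesThresholdUsd: number = 100_000,
  transactionThreshold: number = 200
): boolean {
  return annualSalesUsd > salesThresholdUsd || transactionCount > transactionThreshold;
}
```

A seller with $150,000 in Colorado sales triggers the test even with only a handful of transactions, and a seller with 300 small transactions triggers it even well under the dollar threshold.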

Currently, Colorado law requires retailers to collect the taxes that the customer and seller have in common. When a Colorado customer makes a purchase at a Colorado brick-and-mortar store, all applicable taxes are shared (e.g., state, city, county, and special district). When a Colorado customer purchases an item for delivery from an in-state seller, as with online, mail, or phone sales, the state sales tax may be the only tax the customer and the seller have in common.

Once the new sourcing rules come into play, both Colorado and out-of-state sellers will be required to collect and remit the full sales tax rate in effect at the location of the sale. This is typically the ship-to address for deliveries. In addition to the state sales tax, the total sales tax rate may include city, county, and special district taxes.
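Under destination sourcing, the rate applied is just the sum of every jurisdiction-level rate in effect at the ship-to address. A sketch of that arithmetic (2.9% is Colorado's actual state rate; the city and special district rates below are invented for illustration):

```typescript
// One component rate per taxing jurisdiction at the destination address.
interface JurisdictionRate {
  jurisdiction: string; // e.g. "state", "city", "county", "special district"
  rate: number;         // expressed as a decimal, e.g. 0.029 for 2.9%
}

// The total rate charged is the sum of all components that apply there.
function combinedRate(components: JurisdictionRate[]): number {
  return components.reduce((total, c) => total + c.rate, 0);
}

// Hypothetical ship-to address inside a city and a special district:
const rate = combinedRate([
  { jurisdiction: "state", rate: 0.029 },
  { jurisdiction: "city", rate: 0.03 },
  { jurisdiction: "special district", rate: 0.01 },
]);
// rate is roughly 0.069, i.e. 6.9%
```

The point of the sketch is that the seller can no longer charge only the shared state rate; the full stack of destination-side rates applies.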

Remote sellers that make sales in Colorado and don’t qualify for the small-seller exception will have to start collecting and remitting Colorado sales tax on December 1. The new sourcing rules for sales tax are expected to take effect that same day. Learn more about them in this Avalara blog.


Callback or Voicemail: What’s the Best CX for Your Contact Center? Tue, 13 Nov 2018 17:06:17 +0000 Hello, 1980 called; they want their voicemail back…

In the 70s we started using voicemail in the workplace; it gained momentum and peaked in popularity in the 80s, but by 2012 it was in decline. (Information found on Google!)

As businesses grow and evolve, they find more effective ways to communicate in the moment and on demand. So what is it that keeps us checking voicemail?

If you think about it, there are many more effective ways to leave messages than voicemail: SMS (text messaging), IM (instant messaging), and social media, just to name a few.

If you look back at how contact center businesses operated in 1980, would we consider it effective? Maybe, but there is always room for improvement. For years technology has been on the move, changing the status quo.

When thinking about how to move calls through Contact Centers, we also need to think about how to manage Agent time in relation to those calls.

In the past few years, technology has been able to meet most of our needs for efficiently moving calls and effectively managing agent time, but it may just be that emotion and habit stand in the way of implementing the features that will heighten efficiency.

An effective feature is automated callback. Let’s consider two options for our waiting customers:

the manual voicemail option, which requires every subsequent interaction to be human, and the callback option, which allows for automation with little human interaction and no distraction from queued calls!

Voicemail vs. Callback

  • Is one better than the other?
  • Would using automation to keep agents available for customer calls be best practice?
  • Why is callback so appealing to contact centers?
  • What is our priority: inbound callers or inbound voicemails?


Let’s take a look at the call flow…

It is very easy for our agents to get caught up in a cycle of ignoring inbound calls to access voicemail and return messages left by customers, only to have other customers waiting to be answered. Maybe you have addressed this break in the process by adding a few agents who are responsible for returning voicemail messages, or maybe you have removed the option for your customers altogether, making things great for your agents but hurting your overall customer experience.

If we look closer at what is happening, it’s very clear that a drastic change has to be made! We are not actually handling our calls effectively; we are just moving them around, causing frustration for both agents and customers.


Inbound Agents

Agents log in to a queue to receive calls; without callback, they have to:

  1. Log out of the queue, leaving their peers to continue taking inbound calls, increasing customer wait times and creating disconnects in reporting.
  2. Log in to voicemail and transcribe recorded names and telephone numbers, which may have to be replayed a number of times for accuracy.
  3. Log out of voicemail and return the call.
  4. Log in to the queue to start receiving inbound calls again.

This is a very manual process leaving lots of room for human error.

Automated Blended Agents

Blending your contact center for both inbound and outbound calls will allow for an equalized workflow among all agents.

  • Minimize fragmented reporting.
  • Lessen frustration for customers who want the simplicity of a callback option.
  • Decrease the risk that comes with manually transcribing messages.
  • Reduce the risk of stacking calls in your contact center due to a high percentage of agents logging off to return messages.


Outbound Agents

Staffing your contact center with outbound-only agents is not only ineffective but extremely expensive, unless outbound calling is all you do. Imagine calls waiting in queue while you have agents trying to reach clients who are unresponsive.

When designing your Contact Center routing, it’s best to keep your customers in mind and what would give them the best possible experience.


Offering your customers the option of a callback will keep them happy in high-traffic times. Because their place in the queue is maintained, customers can go about their business instead of being tethered to the phone. This will alleviate frustration for both your customer and your agent! Taking it one step further and offering scheduled callbacks to customers provides next-level CX for your clients.


Removing your voicemail option removes the negative underlying connotation that you will respond whenever you get a chance! The message here is voicemail jail; it’s where calls and customer service go to expire!


Whenever and wherever possible, it is best practice to use automation instead of relying on human interaction. A system that distributes calls evenly to available agents eliminates a lot of manual, time-consuming processes, keeping things simple and easy to manage!


Always remember that what you do at the top will have lasting effects down the line. As a contact center leader, you need clean data to report on. The time, energy, and processes that have to be put in place for voicemail retrieval will inevitably lead to disjointed data streams and inaccurate reports!

Use callback for:

  • a customer-first focused contact center
  • ensuring your agents are available for calls
  • evenly distributed calls
  • seamless blended calling
  • accurate metrics



The Golden State Warriors Step Up Their Innovation Game Tue, 13 Nov 2018 14:59:46 +0000 I grew up a Golden State Warriors fan and went to many games in my childhood. While we didn’t have Steph Curry, Kevin Durant, and Klay Thompson, the team I grew up watching had Antawn Jamison, Tim Hardaway, Chris Mullin, and Chris Webber.

Still, it wasn’t as hard or expensive to get tickets as it is today. Three hundred straight sellouts (and counting), 44,000 people on the season-ticket waiting list, and crazy expensive prices make catching a game live quite the challenge.

But the Warriors have done something special, both for fans and from a business perspective. Per a report from Bleacher Report, for $100 a month, the team is giving fans a chance to enter Oracle Arena, grab the free giveaways, and watch the games from one of the bars or restaurants inside. The caveat is that they don’t get to see the court.

The Warriors realized they had an opportunity to create a good fan experience for the many people who can’t get a traditional game ticket. The domino effect benefits everyone: while the fans get what they want, the team will be able to generate additional revenue from the monthly entrance pass, parking, food and beverage, and apparel.

During a time in which we have seen so many well-known retailers close shop due to their lack of innovation and dedication to the customer, it’s nice to see that the sports and entertainment industry is thinking of ways to up their game.

Getting Started with California Consumer Privacy Act Compliance Tue, 13 Nov 2018 14:04:09 +0000 Compliance with the CCPA requires robust processes for identifying, governing, distributing, and securing consumer personal information.

The first steps are to document the current usage of this information:

  • Data inventory: Generate lists of personal data related to clients, investors, employees, counterparties, prospects, and other entities.
  • Data recipients: Compile a list of entities that receive personal data, such as administrators, custodians, transfer agents, investment managers, and other service providers.
  • Data policies: Review current policies for processing, retaining, and deleting data.
  • IT security: Assess information security and data protection mechanisms from a business and technical perspective.
  • Third-party compliance: Review and conduct a gap analysis of third-party provider data security policies.

After the initial assessment is complete, financial institutions will be in a position to:

  • Confirm what personal data they hold and for what purpose
  • Understand whether there is a strong legal basis for holding this data
  • Modify business processes that do not comply with the CCPA
  • Develop revised policies and procedures
  • Reinforce data governance, distribution, and protection mechanisms
  • Ensure third-party providers are in compliance

We recently published a guide examining the California Consumer Privacy Act of 2018 and the steps any financial institution must take to evaluate its exposure and current state of readiness in response to the new law. You can download the guide below.

Using TypeScript with the Twilio Flex Agent UI Sample Mon, 12 Nov 2018 19:43:34 +0000 When we first started working with the Flex samples provided by Twilio, the source code was written in TypeScript. That gave us the kind of dev-time programming environment I tend to prefer, even though I am far from a TypeScript expert. As Flex and the quick-start samples moved toward general availability, Twilio chose to switch to the popular create-react-app package with “plain JavaScript”. This decision is understandable, as not everyone has TypeScript expertise or wants TypeScript in their front-end development pipeline.

After Flex v. 1.0 was released at Signal, the sample projects for both the agent console (flex-ui-sample) and web chat (flex-webchat-ui-sample) are now provided on GitHub. In addition, the new flex-plugin-builder sample is available there as well. This allows developers to consume the latest Flex sample code just like any other NPM package.

Decision Considerations

We’ve had a number of internal debates over whether to standardize on TypeScript for our Flex projects or stay close to the samples. Twilio still uses TypeScript for most internal projects, and it remains a highly recommended approach. The benefits of TypeScript are no secret: interface definitions, better typing of variables and objects, and additional hooks for IDE IntelliSense and other syntactic sugar. A developer also typically catches a lot more errors at compile time rather than run time.
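As a concrete, if toy, illustration of that compile-time benefit (none of the names below come from Flex; they are invented for the sketch):

```typescript
// With an interface, a misspelled or missing field is a compile-time error
// rather than an `undefined` surfacing at run time.
interface AgentActivity {
  name: string;
  available: boolean;
}

function describeActivity(activity: AgentActivity): string {
  return `${activity.name} (${activity.available ? "available" : "unavailable"})`;
}

const offline: AgentActivity = { name: "Offline", available: false };

// const bad: AgentActivity = { nme: "Offline" }; // would not compile: typo caught

console.log(describeActivity(offline)); // Offline (unavailable)
```

In plain JavaScript, the commented-out typo would only show up when the bad object reached `describeActivity` at run time.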

That being said, standardizing on TypeScript means you need to track changes to the default sample yourself, and you are responsible for keeping those changes synced with your TypeScript version. This may not make sense for simple projects. However, we expect longer-term projects to eventually have a large and unique code base, which probably means the sample projects are used more for reference anyway.

flex-ui-sample: The TypeScript version

I decided to go through the exercise of getting the flex-ui-sample running in TypeScript. It was not as straightforward as I expected. Ultimately it wasn’t incredibly difficult, but you do need a bit of familiarity with TypeScript, including how typings files work. See the much longer read here.

To keep things rather boilerplate, I went down the path of generating a new project using create-react-app with the TypeScript support flag. The Twilio node module was then added to package.json. I then manually merged and updated the standard files to closely mimic the standard JavaScript version from GitHub. You could also try to get TypeScript working in a clone of the original flex-ui-sample, but that seemed more error-prone and trickier to manage to me.

Basic Steps

(As I mentioned, I would not consider myself an expert on this yet, so I am happy to take any comments or suggestions for better ways to do the following.)

  • If you don’t already have a default repo of flex-ui-sample from GitHub, grab that first so you have something to compare to.

  • Create a new directory for your repo.

  • If you don’t already have the create-react-app package installed, install it (npm install -g create-react-app).

  • In your new directory, run create-react-app your-app-name --scripts-version=react-scripts-ts

  • Add the current Twilio UI reference to your package.json. I used the latest version as of the writing of this post ("@twilio/flex-ui": "^1.1.0"). Run npm install again to pull in the dependencies.

  • I used Beyond Compare for some of this work. Use whatever compare/diff tool you like or you can also just put the two projects side by side. I’m not going to go through all the changes line by line, but this should get you close.

    • Update the public\index.html to match the flex-ui-sample default. You can just copy over the file contents if you wish.

    • Create an assets directory in public and copy a valid appConfig.js from a working project (or create a new one).

    • The default registerServiceWorker.ts file should be fine.

    • You’ll need some @types files.

      • npm install @types/react-router

      • npm install @types/react-router-redux

      • The types file for twilio-taskrouter isn’t set up properly for the compiler to find it. Go here to get it. It won’t work as-is; you will need to make some changes. Save this file (index.d.ts) in a new path you set up – node_modules\@types\twilio-taskrouter

        • In the Worker class, add : void as the return type for the disconnect and updateToken methods.

        • In the Task class, get rid of the curly braces around the reason parameter in the wrapUp method.

        • In the Reservation class, add : Promise<Reservation> as the return type for the redirect method.

  • At this point, if you compile, you are going to get some additional complaints. You can try to solve them, or you can just go into tsconfig.json and add "skipLibCheck": true to the compilerOptions object.
  • Similarly, you can either resolve all the default tslint complaints or take the lazy way out like I did. During development I like to use a lot of console logging, so I set the "no-console" rule to false. I also set "ordered-imports" and "jsx-no-lambda" to false. It all depends on how strict you want your project to be; you can always get more strict later.
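Pulling those two escape hatches together, the fragments look roughly like this (key names are real compiler/tslint options, but merge them into your existing files rather than replacing them; exact file shape depends on your react-scripts-ts version). In tsconfig.json:

```json
{
  "compilerOptions": {
    "skipLibCheck": true
  }
}
```

and in tslint.json:

```json
{
  "rules": {
    "no-console": false,
    "ordered-imports": false,
    "jsx-no-lambda": false
  }
}
```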

Flex Project Specifics

  • Replace the “body” style in index.css with the “body, #root” style from the default sample.

  • For index.tsx, you basically want to make it look like the index.js from the default sample. However, you’ll need to update it for TypeScript.

      • Imports section

        import * as React from 'react';
        import * as ReactDOM from 'react-dom';
        import * as Flex from "@twilio/flex-ui"
        import './index.css';
        import registerServiceWorker from './registerServiceWorker';

    • const mountNode – you need : HTMLElement as the type, and you’ll need to add an exclamation point (the non-null assertion operator) after the document.getElementById("root") call. This basically tells the compiler, "I’m making sure you’ll find a root element".

       const mountNode: HTMLElement = document.getElementById("root")!;

    • const predefinedConfig – a couple of ways to do this, but I used (window as any).appConfig to get the TypeScript compiler to be happy

    • in renderApp, set the manager type to Flex.Manager

    • Replace <App manager={manager} /> with <Flex.ContextProvider manager={manager} > <Flex.RootContainer /> </Flex.ContextProvider>

      function renderApp(manager: Flex.Manager) {
        ReactDOM.render(
          <Flex.ContextProvider manager={manager}>
            <Flex.RootContainer />
          </Flex.ContextProvider>, mountNode);
      }

    • In handleError, set the parameter "error" type to : any

    • I wrapped the ReactDOM.render call in a /* tslint: disable */ just to not fight the “lambda in render” complaint. If you want to be strict about this, you can re-factor accordingly.

      /* tslint:disable */
      setRuntimeConfig(loginData, runtimeDomain);
      /* tslint:enable */

    • In setRuntimeConfig, I set the two parameter types to : any. You can of course be more strict if you wish.
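The window-to-typed-config hand-off used by predefinedConfig above can be sketched in isolation. This is an illustration of the pattern only: globalThis stands in for window so the snippet runs outside a browser, and AppConfig and its fields are invented; the real config shape is whatever your public/assets/appConfig.js defines.

```typescript
// appConfig.js attaches a plain object to the global scope; the TypeScript
// side reads it back through an `any` cast and narrows it to a local type.
interface AppConfig {
  serviceBaseUrl: string;
  logLevel: string;
}

// Stand-in for what public/assets/appConfig.js would do in the browser:
(globalThis as any).appConfig = {
  serviceBaseUrl: "https://example.invalid",
  logLevel: "debug",
};

// The cast keeps the compiler happy about a property it cannot see,
// while the annotation restores type checking for everything downstream.
const predefinedConfig: AppConfig = (globalThis as any).appConfig;

console.log(predefinedConfig.logLevel); // debug
```

The `as any` cast is deliberately narrow: it appears only at the boundary where the untyped script hands data over, and typed code takes over immediately after.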


At this point, you should be close enough to get the app to compile. Run npm start to bring up the default agent console in your local browser. Keep in mind that the create-react-app dev server defaults to port 3000, not 8080 as in the original sample. There may be additional work here if you want to get other elements working, such as the Actions or Notifications framework. I haven’t tried this yet, although those two examples worked fine in older samples built in TypeScript. It’s also possible there will be additional issues with typings files or other packages in a more fully implemented project, so consider this more of a proof of concept. As we continue to build out Flex implementations on my team, I’m sure we will keep discussing when TypeScript makes sense when starting a project.

Have you tried using TypeScript since Flex went GA? I’m curious to hear your experiences. I’m also curious to hear your reasons for sticking with plain JavaScript and why that might be a better fit for you.

Data Lake and Information Governance – The Key Takeaways Mon, 12 Nov 2018 17:40:57 +0000 A Data Lake can be a highly valuable asset to any enterprise, and there is a myriad of technology solutions available for leveraging the processes to feed, maintain and retrieve information from the Lake.

But all this technology is, if not worthless, significantly less valuable if the environment is not well governed and managed. This is the primary takeaway to keep in mind when a Data Lake solution is being considered by any organization, or is already in place but needs improvement.

Another takeaway is the idea of positioning the Data Lake as an Aggregator of information that operates, analogously to a warehouse store, to serve Consumers while ultimately being responsible for determining how best to collect, store, and make available the information it houses. This takeaway significantly influences how the Governance of the environment is set up and run.

Accepting the above two statements – the criticality of Governance and the Operating Model of an Aggregator – some other observations can be made:

The Supplier

  • Needn’t have knowledge of the Consumer(s) as they work directly and exclusively with the Aggregator
  • Needs to be willing to conform to the formats, mechanisms and timings of information delivery as defined (through negotiations as necessary) by the Aggregator
  • Needs to be able to describe the information they supply in a “common language” that focuses upon “what” the information is, regardless of how or where it is represented

The Consumer

  • Needn’t have knowledge of the Supplier(s) as they work directly and exclusively with the Aggregator
  • Needs to be willing to conform to the formats, mechanisms and timings of information delivery as defined (through negotiations as necessary) by the Aggregator
  • Needs to be able to describe the information they require in a “common language” that focuses upon “what” the information is, regardless of how or through what mechanism it is delivered

The Aggregator

  • Is the “lynchpin” between Suppliers and Consumers, therefore is responsible for ensuring Consumer satisfaction through appropriate “sourcing” (supplier systems) to address the needs of all Consumers
  • As the central repository for the information transferred between suppliers and consumers, the Aggregator is keeper of the “common language” referred to in the Supplier and Consumer observations. This may take the form of a Master Information Catalog, a Semantic or Canonical Model, a Business Glossary of Terms or any combination thereof
  • Guides both Suppliers and Consumers through the defined interaction processes and the use of the standards and templates defined for aiding these interactions

Governance

  • Defines and ensures all parties adhere to the Rules, Rights and Processes for the use and management of the Data Lake
  • Identifies and defines all standards and templates needed to ensure the consistency, efficiency and effectiveness of the interactions
  • Governance is the ultimate and final authority for negotiating the relationships, duties, rights, obligations and privileges of all parties (Suppliers, Consumers and Aggregator)
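The Supplier/Consumer/Aggregator contract above can be made concrete with a few interfaces. This is purely a modeling sketch, not a reference to any product: the "common language" is a catalog keyed by business term, suppliers register against it, and consumers request against it, so neither side ever knows about the other.

```typescript
// The "common language": information is described by WHAT it is,
// regardless of how or where it is represented or delivered.
interface CatalogEntry {
  term: string;        // business-glossary name for the information
  description: string;
}

class Aggregator {
  private catalog = new Map<string, { entry: CatalogEntry; data: unknown }>();

  // Supplier side: conform to the Aggregator's registration contract;
  // suppliers never see consumers.
  register(entry: CatalogEntry, data: unknown): void {
    this.catalog.set(entry.term, { entry, data });
  }

  // Consumer side: request by common-language term, never by supplier;
  // consumers never see suppliers.
  request(term: string): unknown {
    return this.catalog.get(term)?.data;
  }
}

const lake = new Aggregator();
lake.register({ term: "customer-count", description: "Active customers" }, 42);

console.log(lake.request("customer-count")); // 42
```

Governance then sits above this code: it decides who may call register and request, and on what terms, which is exactly why the collaborative operating model matters more than the technology.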

As mentioned in a previous entry, these observations may sound dictatorial. But for this to be successful when it comes to the information assets housed in the Data Lake, a highly collaborative environment, where all parties are willing to compromise and reach consensus, must be an integral part of the culture of the enterprise.

So, this completes my journey into Data Lakes and the Information Governance needed. I hope you found this interesting and helpful. Feel free to reach out with any comments or observations you may have. Thanks so much for reading my blog.

Perficient Atlanta Helps Out at the Atlanta Community Food Bank Mon, 12 Nov 2018 16:56:50 +0000 This past month, Perficient Atlanta members volunteered a Saturday morning to help the community at the Atlanta Community Food Bank. The ACFB’s mission is to help provide for the food-insecure in the greater Atlanta area. When parents have to make the difficult decision between feeding themselves and their children or paying bills, the food bank is able to take food out of the equation, allowing parents to pay those bills. Spending time helping the Atlanta Community Food Bank reach its goals was both a rewarding and fun experience with my colleagues.

PRFT Employee at ACFB

What did our visit entail? We divided ourselves into three groups. The first group sifted out items that could not be distributed, such as chocolates. The second group checked the expiration dates on the food items passed along by the first group. The third and final group packed the acceptable items into cardboard boxes to be loaded onto a truck, which would deliver the boxes of food to various distributors in the Atlanta area. We worked in these groups for a few hours while listening to some classic hits on the radio.

ACFB In Action

By the end of our visit, we had packed 7,622 lbs of food, which equates to about 5,377 meals and will greatly impact our community. This was not the first time Perficient Atlanta has volunteered its time at the Atlanta Community Food Bank, and it certainly will not be the last. We would like to thank ACFB for having us and for being a positive contributor to the community!

Perficient Employees at ACFB

Top Service Challenges Enterprises Face in 2019 Mon, 12 Nov 2018 12:51:02 +0000 Migrating from one technology to another always has its unique challenges, and migrating to Salesforce Service Cloud is no exception. Our years of Salesforce migration experience have shown some common themes and best practices, which we cover in our free guide, Salesforce Service Cloud Migrations Made Easy.

Top Service Challenges for 2019

As digital transformation occurs, service organizations today are facing new challenges:

  • Planning for diversification and growth
  • Executing on the need to scale
  • Keeping up with changing system complexities
  • Cross-platform integration and customization
  • Creating an effortless customer experience

Don’t Forget the People Factor

As the market ebbs and flows, so does the demand on businesses to keep up. As companies grow, new technology becomes essential to maintaining operations, but those tools can sometimes come with a price of their own, and we’re talking about more than dollars.

The technology organizations use is only as good as the people using it. At Perficient, while we do leverage Salesforce’s educational platform, Trailhead, we take it a step further by creating customized programs for our clients to follow to make migrating to Salesforce Service Cloud that much simpler. That way you can focus on addressing the top challenges for your business and not get stuck on the technology.

Upgrading Your Service Technology

Do you want to improve your customer experience but aren’t sure where to start? Begin with intelligent data and actionable insights, just like NextGen, GoPro, Hulu, and many other leading brands did by migrating to Salesforce Service Cloud. Learn more by accessing our free guide and calling a Perficient Salesforce expert to talk through your business challenges.

Microsoft Teams PowerShell gets a facelift! (November 2018) Sat, 10 Nov 2018 15:15:36 +0000 One big problem for Teams admins in the past has been the lack of PowerShell cmdlets available for Microsoft Teams. Well, you no longer have to suffer in silence, because Microsoft has finally released some updates for the Microsoft Teams cmdlet module. With update 0.9.5 you can now perform “most” of the administrative operations available in the Teams and Skype for Business Online Admin Center. With the latest update you can do the following:

  • Create teams
  • List teams in the tenant
  • Update current settings for a team
  • Create new channels

In addition to what was mentioned above, they’ve made some significant improvements to the Get-Team cmdlet. In prior releases, this cmdlet would only return the names of teams you were a member of, so unless you were a member of every single team in your organization, you wouldn’t have much use for it. With this latest update, the Get-Team cmdlet returns a full list of ALL teams in the tenant! For a full list of cmdlets available to you in Microsoft Teams, check out the Microsoft docs page here. To get started, you’ll need to install the module from the PSGallery by doing the following:

  • Uninstall any old version of the PowerShell cmdlet
    • Uninstall-Module -Name MicrosoftTeams
  • Install the most up to date Teams PowerShell cmdlet
    • Install-Module -Name MicrosoftTeams -Repository PSGallery
    • Connect-MicrosoftTeams

With all that said, Microsoft still has a long way to go: it will need to expand the parameters for each cmdlet, as well as add new cmdlets in general, before Microsoft Teams admins will be content. I’m sure I speak for all admins when I say that if Microsoft can get Teams PowerShell cmdlets to the level that Skype for Business and Skype for Business Online PowerShell are at today, we would be very grateful! As always, stay tuned; I release content on Skype for Business, Teams, and other exciting Microsoft-related UC news on a weekly basis. I hope you have found this quick update helpful, and I encourage you to experiment with these new PowerShell cmdlet updates to see exactly what you can get out of them!
