Business Intelligence Articles / Blogs / Perficient
https://blogs.perficient.com/tag/business-intelligence/

Manage Rising Expenses in Insurance
Tue, 23 Jul 2024 | https://blogs.perficient.com/2024/07/23/manage-rising-expenses-in-insurance/

Have you noticed your expenses rising lately? Eating out costs are up over 4% year over year, housing expenses have increased by 5-6%, and opening your auto insurance bill reveals a shocking 22% hike. These figures highlight the inflationary pressures impacting various sectors but are particularly severe in the property and casualty (P&C) insurance industry. 

Insurance Industry Challenges 

2023 was one of the costliest years on record for the P&C industry due to several factors: 

  • Extreme weather 
  • Inflationary pressures 
    • Labor costs have jumped nearly 12%.
    • Residential building costs have risen almost 28%.

As a result, industry combined ratios have reached 103.9%, forcing carriers to take corrective underwriting actions, including significant premium increases, to bring target combined ratios (TCRs) back in line. 

Cost Optimization Strategies for Consumers and Carriers 

While consumer advocates encourage policyholders to shop for better rates, bundle multiple products for discounts, and optimize their policy structure, there are several strategies insurance carriers can adopt to retain and grow their customer base through effective expense management. 

Personalize Your Product 

According to a recent JD Power survey, nearly half of auto insurance consumers shopped their policy last year, and a staggering 29% switched from their current carriers. The driving force? Customers are more likely to engage with companies that genuinely understand their needs and relationships. Personalization offers the consumer a perceived ROI on the premiums they are paying – they are understood and (most importantly) protected.   

Successful carriers leverage their extensive operational data to generate actionable insights, creating integrated, seamless, and tailored customer experiences. Personalization goes beyond sending birthday acknowledgments. It involves continuously learning about your customers’ evolving needs and communicating with them in an authentic tone. 

The benefits of personalization are significant. A life insurance survey indicated that understanding the customer and tailoring offers can reduce customer acquisition costs by up to 50%, generate up to 10% more new premiums, and reduce customer churn by 30%. And, despite the rise in privacy regulations such as the GDPR or CPRA, 70% of global customers are willing to share their data in exchange for better pricing, experiences, or tailored offers from their carrier. 

Improve Internal Efficiencies 

The recent economic challenges have, without a doubt, exposed process inefficiencies, highlighting the need for greater automation in both customer-facing and back-office operations. The insurance industry has one of the highest ratios of labor expense to final product price. Industry forecasts estimate that within 15 years, 10-50% of current insurance processes will be automated, significantly reallocating resources and value propositions. 

By 2025, 60% of organizations will be using automation to address staffing challenges, moving human intervention to the highest priority work. Automated processes have been proven to reduce paperwork by 80% and speed up claims processing by 50%, resulting in substantial productivity gains. 

Automation Success In Action: Our client needed a claims processing platform that could handle high-volume operations, provide all business areas insight into best practices, and provide customers with self-service capabilities. We redesigned the underwriting and claims processing applications using Pega’s Claims for Insurance framework. We created customized workflows and case types, enabled document ingestion and indexing, and provided access to the first notice of loss reports. 

Embrace Artificial Intelligence and Machine Learning 

Beyond operational efficiencies, artificial intelligence (AI) and machine learning (ML) also offer promising advancements for the industry. Integrating data-driven analytics into core underwriting elements will enhance product pricing (e.g., telematics) and development. 

For claims organizations, AI can significantly improve settlement and fraud detection processes. Moreover, predictive and preventative services enabled by AI can help prevent risks before they occur. The use of predictive analytics has positively impacted loss ratios by 3-9%.  

While the current P&C rate increase is helping to bring down combined ratios, 2024 is likely to continue to experience more underwriting pressure, with AM Best predicting a combined ratio of 100.7%.  

AI Success in Action: We developed a virtual assistant to redirect 25% of our client’s incoming call volume of 5,000 calls per month to a self-service model. By off-loading a large volume of third-party mortgage inquiries, our client could focus on providing personal attention to customers when they need it most while also saving significant amounts of labor on tasks that are ripe for automation. 

The Need for Digital Solutions 

Companies that invest in smart technology to future-proof their expense ratios will not only mitigate near-term profitability challenges but also establish a strong foundation for ongoing productivity and customer satisfaction enhancements. Embracing personalization, automation, and AI will enable carriers to navigate the evolving landscape of the insurance industry effectively. 

Your Expert Partner 

Are you prepared to embrace the future of insurance? 

We invite you to explore our insurance expertise, or contact us today to learn how we can optimize your insurance practice. 

Crafting AEP Schemas: A Practical Guide
Mon, 01 Jul 2024 | https://blogs.perficient.com/2024/07/01/crafting-aep-schemas-practical-guide-2/

Welcome to the world of Adobe Experience Platform (AEP), where digital transformation becomes a reality. If you’ve landed here, you’re already on your way to making significant strides in your organization or your career.

In a digital era where data reigns supreme, and the Customer Data Platform (CDP) landscape is ever-evolving, businesses strive to maximize their investments to thrive in a fiercely competitive market.

Whether you’re a marketer or an aspiring AEP developer, this blog is your go-to resource. Together, we’ll lay the foundation for building schemas and crafting strategies from scratch. Using a real-life example, I’ll break down the requirements and demonstrate how to translate them into a technical blueprint for your schemas.


Now, let’s dive into the core components: Adobe Experience Platform (AEP), XDM (Experience Data Model), Schemas, and Field Groups.

XDM: The Universal Language

Imagine XDM as the universal language for digital experiences. It’s like a rulebook crafted by Adobe to decipher customer experience data. When you work with AEP, ensuring your data speaks this XDM language is crucial. It streamlines data management, much like ensuring all puzzle pieces share the same shape for a perfect fit.

Schemas: The Blueprints

AEP relies on schemas, which act as templates, to maintain consistent and organized data. Schemas describe how your data looks and where it should reside within the platform, providing a structured framework to keep everything working in an orderly fashion.

Field Groups: The Organizers

Now, enter Field Groups – the unsung heroes within AEP. They resemble categorized drawers in your data cabinet, ensuring data consistency and organization within your schemas. Each Field Group is like a labelled drawer, helping you effectively organize your data points.


In practical terms, XDM is the language spoken by all the toys in your store. Schemas provide blueprints for your toy displays, and Field Groups are the labelled drawers that keep your toys organized. Together, they ensure your toy store runs smoothly, helping you offer personalized toy recommendations, like finding the perfect toy for each child in your store.


Now that we’ve grasped the fundamentals let’s apply them to a real-life scenario:

Real-Life Use Case: Lead Generation Example

Imagine you’re on a mission to enhance your data collection and personalization use cases using AEP.  Your goal is to send data to both Adobe Analytics [to keep your power users engaged while they level up their skills in Customer Journey Analytics] and Customer Journey Analytics [being future-ready for omnichannel journey analysis] simultaneously, ensuring a seamless analysis process. To achieve this, you need to configure data collection on your website and send specific data points.

Now, let’s get into the nitty-gritty. You’re running a lead generation site, and you want to track several data points:

  • You aim to monitor all traffic data related to web page details.
  • You’re keen on tracking interactions with Call-to-Action (CTA) links.
  • You want to capture custom form tracking information, including the form name and the specific form event.
  • Additionally, you have your eyes on tracking videos, complete with their names and the events associated with them.
  • To top it off, once users authenticate, you intend to pass User ID information. More importantly, this ID will serve as a Person ID to stitch users across channels in future.
  • And, of course, capturing valuable web page information such as the web page template, web page modification date, and the corresponding business unit.

Now that we’ve listed our requirements, the next step is translating them into an XDM schema. This schema will serve as the blueprint to encompass all these data points neatly and effectively.

Breaking Down the Requirements

Navigating the AEP Technical Landscape

To effectively implement data collection on our website using the AEP Web SDK, we’ll start by integrating the ‘AEP Web SDK ExperienceEvent’ predefined field group into our schema. This step ensures that our schema includes field definitions for data automatically collected by the AEP Web SDK (Alloy) library.

Additionally, since we’re dealing with website data, which involves time-series records (each with an associated timestamp), we’ll need a schema based on the ‘ExperienceEvent’ class. This schema type is tailored to accommodate this data structure, ensuring seamless handling of our web-related records.

Let’s talk about Field Groups:

  • Business Requirement: Select AEP Web SDK Experience Event Template in the schema to send data to AEP.
    • Field Group Type: Adobe’s Predefined Field Groups
    • Field GroupName/Path: Adobe Web SDK Experience Event Template.
      • This is a mandatory field group if you are capturing onsite data using web SDK.

Adobe Web SDK ExperienceEvent Template


  • Business Requirement: Send data to Adobe Analytics from the Web SDK (traditional eVars, props, events).
    • Field Group Type: Adobe’s Predefined Field Groups
    • Field GroupName/Path: Adobe Analytics ExperienceEvent Template
      • This will take care of all your existing / new Adobe Analytics implementation needs.
      • Using this field group eliminates the need to create processing rules in the Adobe Analytics console if you map directly to eVars/props/events within this field group in the schema via your Adobe Launch setup.

Adobe Analytics ExperienceEvent Template


  • Business Requirement: Monitoring all traffic data related to web page details.
    • Field Group Type: Adobe’s Predefined Field Groups
    • Field GroupName/Path: Web Details
      • Path: web.webPageDetails

web.webPageDetails


  • Business Requirement: Tracking interactions with Call-to-Action (CTA) links.
    • Field Group Type: Adobe’s Predefined Field Groups
    • Field GroupName/Path: Web Details
      • Path: web.webInteraction

web.webInteraction


  • Business Requirement: Capturing custom form tracking details, including form names and events.
    • Field Group Type: Hybrid: Adobe’s Predefined Field Groups + Custom Field Group.
    • Field GroupName/Path: Web Details
      • Path: web.webInteraction._democompany.form
      • web.webInteraction._democompany.form={
        formName:<form name>,
        formEvent:<form event such as start/complete/error>
        }

        Form Fields


  • Business Requirement: Keeping an eye on video interactions, including video names and associated events.
    • Field Group Type: Hybrid: Adobe’s Predefined Field Groups + Custom Field Group.
    • Field GroupName/Path: Web Details
      • Path: web.webInteraction._democompany.video
      • web.webInteraction._democompany.video={
        videoName:<video name>,
        videoEvent:<video event such as start,stop,milestones etc>
        }

        video


  • Business Requirement: Business specific custom web page information
    • Field Group Type: Hybrid: Adobe’s Predefined Field Groups + Custom Field Group.
    • Field GroupName/Path: Web Details
      • Path: web.webPageDetails._democompany
      • web.webPageDetails._democompany={
        webPageTemplate:<custom web page template>,
        businessUnit:<business unit>
        }

business specific


  • Business Requirement: Lastly, once users authenticate, pass user ID information.
    • Field Group Type: Custom Field group
    • Field GroupName/Path: _democompany.identity.userID. This is set at the root level.
      • Assign this as an identity, but not as the primary one (you may wonder why; see below).

identity


*_democompany = _perficientincpartnersandbox in our case, since the tenant ID assigned to our account is perficientincpartnersandbox.

Key Points

Here are the key points and recommendations, explained in simpler terms:

  • Understanding Field Groups: Field Groups are like organized drawers for your data. Each field within them is connected to a specific space known as a namespace. Predefined Field Groups come with predefined namespaces, while custom Field Groups are linked to your unique namespace, usually marked by an underscore (_) followed by your company’s name.
  • Flexibility to Customize: You can modify predefined field groups, like Web Details, to match your needs. You can do this either through the user interface or using APIs. This flexible approach is what we call a “HYBRID field group.” It lets you adjust according to your requirements. As a result, your custom namespace (usually something like _<your tenant ID/company ID>) takes priority, and all customizations fall under this category. (You can check the final schema below for reference.)
    • Why Use HYBRID Field Groups: If you’re an architect or strategist, creating solutions that are reusable, efficient, and scalable is second nature. That’s why I highly recommend using HYBRID field groups whenever possible. These field groups offer the best of both worlds. They leverage the power of predefined field groups while allowing you to add your custom touch, all within a field group type. It’s like tailoring a ready-made suit to fit you perfectly, saving time and effort while ensuring the best results.
  • Choosing a Primary ID: For website data, we won’t set this user ID as the primary ID. You might wonder, “Shouldn’t the user ID be the primary ID for on-site data, especially when I might need to connect it with offline data later?” Not necessarily. While you can use this user ID as an identity to link with offline data, it doesn’t have to be the primary one.
    • Pro-tip: Use the identity map to include all of your possible custom identities by configuring the Identity Map data element in Adobe Launch. By default, the ECID will be used as the primary identifier for stitching.
    • Using an XDM identityMap field, you can identify a device/user using multiple identities, set their authentication state, and decide which identifier is considered the primary one. If no identifier has been set as primary, the ECID is used by default (a minimal payload sketch follows this list).
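
To make the identityMap idea concrete, here is a minimal sketch of the identity portion of an event payload, expressed as a Python dictionary. The ECID value, the custom “userID” namespace, and the page name are placeholders for illustration only; your actual namespaces are configured in AEP’s identity settings and populated via the Identity Map data element in Adobe Launch.

# Illustrative only: an XDM event payload fragment with an identityMap holding
# two identities. ECID stays primary; the authenticated user ID is attached as a
# secondary identity for cross-channel stitching.
event_payload = {
    "xdm": {
        "identityMap": {
            "ECID": [
                {
                    "id": "48168239457192483414567",   # placeholder ECID
                    "primary": True,                   # default primary identifier
                    "authenticatedState": "ambiguous",
                }
            ],
            "userID": [                                # hypothetical custom namespace
                {
                    "id": "crm-000123",                # placeholder authenticated user ID
                    "primary": False,                  # an identity, but not the primary one
                    "authenticatedState": "authenticated",
                }
            ],
        },
        "web": {"webPageDetails": {"name": "home"}},   # placeholder page detail
    }
}

Because the ECID remains primary, pages viewed before authentication are still ingested; the user ID simply enriches events once it becomes available.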

Important Note: If you specify a primary ID in the schema and it’s missing from a data entry (for example, on pages where users aren’t authenticated and therefore have no user ID), AEP will exclude that data entry because it lacks the specified primary ID. This behavior helps maintain data accuracy and integrity.

We’re making excellent headway! Our requirements have evolved into a detailed technical blueprint, and our XDM schema’s foundation is strong and ready to roll. Just remember: for website data, we use a schema based on the ExperienceEvent class. If we ever need to capture user profiles, we’ll craft a profile schema based on the XDM Individual Profile class. This adaptability ensures we’re prepared for diverse data scenarios.

Schema Creation 

With all the defined field groups, we can now combine this information to construct the schema. When it comes to building your schema, you’ve got two main paths to choose from:

  • API-First Approach (Highly Recommended): This is the best approach if you want to align with AEP’s API-first philosophy (see the request sketch after this list).
  • User-Friendly UI Interface (Great for Simple Use Cases): If the thought of working with APIs sounds intimidating, don’t worry! You can also create schemas through a user-friendly UI interface. This option is perfect for straightforward scenarios and when APIs might seem a bit daunting.
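
For readers curious about the API-first route, below is a rough Python sketch of creating a schema through Adobe’s Schema Registry API. Treat every value as a placeholder: the access token, API key, IMS org, sandbox name, and the field group $ref URIs all come from your own Adobe Developer Console project and schema registry, so verify the endpoint and payload shape against the current Schema Registry API documentation before relying on this.

import requests

# Placeholder credentials from an Adobe Developer Console project (assumptions).
ACCESS_TOKEN = "<IMS access token>"
API_KEY = "<client id>"
IMS_ORG = "<org id>@AdobeOrg"
SANDBOX = "<sandbox name>"

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "x-api-key": API_KEY,
    "x-gw-ims-org-id": IMS_ORG,
    "x-sandbox-name": SANDBOX,
    "Content-Type": "application/json",
}

# A schema is composed by referencing a class and field groups via their $id URIs.
# The refs below are illustrative; custom field group refs carry your tenant ID.
schema_definition = {
    "title": "Lead Gen Web Events",
    "description": "Website events for the lead generation use case",
    "type": "object",
    "allOf": [
        {"$ref": "https://ns.adobe.com/xdm/context/experienceevent"},      # ExperienceEvent class
        {"$ref": "https://ns.adobe.com/xdm/context/experienceevent-web"},  # Web Details field group
        # {"$ref": "https://ns.adobe.com/<tenant_id>/mixins/<custom_field_group_id>"},
    ],
}

response = requests.post(
    "https://platform.adobe.io/data/foundation/schemaregistry/tenant/schemas",
    headers=headers,
    json=schema_definition,
)
response.raise_for_status()
print(response.json().get("$id"))  # identifier of the newly created schema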

Final Schema Output

In this blog, we’ve opted for the UI method to construct our schema, and here’s the result:

Schema

 

In conclusion, Adobe Experience Platform empowers you to navigate the complex digital landscape easily. By understanding the language, creating blueprints, and organizing your data, you’ll unlock the potential to provide personalized experiences that resonate with your customers. Your journey to digital success has just begun!

Unleash the Power of Your CloudFront Logs: Analytics with AWS Athena
Wed, 22 May 2024 | https://blogs.perficient.com/2024/05/22/unleash-the-power-of-your-cloudfront-logs-analytics-with-aws-athena/

CloudFront, Amazon’s Content Delivery Network (CDN), accelerates website performance by delivering content from geographically distributed edge locations. But how do you understand how users interact with your content and optimize CloudFront’s performance? The answer lies in CloudFront access logs, and a powerful tool called AWS Athena can help you unlock valuable insights from them. In this blog post, we’ll explore how you can leverage Amazon Athena to simplify log analysis for your CloudFront CDN service.

Why Analyze CloudFront Logs?

CloudFront delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. However, managing and analyzing the logs generated by CloudFront can be challenging due to their sheer volume and complexity.

These logs contain valuable information such as request details, response status codes, and latency metrics, which can help you gain insights into your application’s performance, user behavior, and security incidents. Analyzing this data manually or using traditional methods like log parsing scripts can be time-consuming and inefficient.

By analyzing these logs, you gain a deeper understanding of:

  • User behaviour and access patterns: Identify popular content, user traffic patterns, and potential areas for improvement.
  • Content popularity and resource usage: See which resources are accessed most frequently and optimize caching strategies.
  • CDN performance metrics: Measure CloudFront’s effectiveness by analyzing hit rates, latency, and potential bottlenecks.
  • Potential issues: Investigate spikes in errors, identify regions with slow response times, and proactively address issues.

Introducing AWS Athena: Your CloudFront Log Analysis Hero

Amazon Athena is a serverless query service that allows you to analyze data stored in Amazon S3 using standard SQL. Here’s why Athena is perfect for CloudFront logs:

  • Cost-Effective: You only pay for the queries you run, making it a budget-friendly solution.
  • Serverless: No infrastructure to manage – Athena takes care of everything.
  • Familiar Interface: Use standard SQL queries, eliminating the need to learn complex new languages.

Architecture:

Architecture diagram

Getting Started with Athena and CloudFront Logs

To begin using Amazon Athena for CloudFront log analysis, follow these steps:

1. Enable Logging in Amazon CloudFront

If you haven’t already done so, enable logging for your CloudFront distribution. This will start capturing detailed access logs for all requests made to your content.

2. Store Logs in Amazon S3

Configure CloudFront to store access logs in a designated Amazon S3 bucket. Ensure that you have the necessary permissions to access this bucket from Amazon Athena.

3. Create an Athena Table

Create an external table in Amazon Athena, specifying the schema that matches the structure of your CloudFront log files.

Below is the sample query we have used to create a Table :

CREATE EXTERNAL TABLE IF NOT EXISTS cloudfront_logs (
  `date` STRING,
  time STRING,
  location STRING,
  bytes BIGINT,
  request_ip STRING,
  method STRING,
  host STRING,
  uri STRING,
  status INT,
  referrer STRING,
  user_agent STRING,
  query_string STRING,
  cookie STRING,
  result_type STRING,
  request_id STRING,
  host_header STRING,
  request_protocol STRING,
  request_bytes BIGINT,
  time_taken FLOAT,
  xforwarded_for STRING,
  ssl_protocol STRING,
  ssl_cipher STRING,
  response_result_type STRING,
  http_version STRING,
  fle_encrypted_fields STRING,
  fle_status STRING,
  unique_id STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t' ESCAPED BY '\\'
LINES TERMINATED BY '\n'
LOCATION 's3://your-bucket/your-cloudfront-log-prefix/'  -- paste your S3 URI here
TBLPROPERTIES ('skip.header.line.count' = '2');

Click on the run button!

Query

Extracting Insights with Athena Queries

Now comes the fun part – using Athena to answer your questions about CloudFront performance. Here are some sample queries to get you going:

Total Requests

Find the total number of requests served by CloudFront for a specific date range.

SQL

SELECT
    COUNT(*) AS total_requests
FROM
    cloudfront_logs
WHERE
    "date" BETWEEN '2023-12-01' AND '2023-12-31';

 

Most Requested Resources

Identify the top 10 most requested URLs from your CloudFront distribution. This query will give you a list of the top 10 most requested URLs along with their corresponding request counts. You can use this information to identify popular content and analyze user behavior on your CloudFront distribution.

SQL

SELECT
    uri,
    COUNT(*) AS request_count
FROM
    cloudfront_logs
GROUP BY
    uri
ORDER BY
    request_count DESC
LIMIT 10;

Traffic by Region

Analyze traffic patterns by user location.

This query selects the location field from your CloudFront logs (which typically represents the geographical region of the user) and counts the number of requests for each location. It then groups the results by location and orders them in descending order based on the request count. This query will give you a breakdown of traffic by region, allowing you to analyze which regions generate the most requests to your CloudFront distribution. You can use this information to optimize content delivery, allocate resources, and tailor your services based on geographic demand.

SQL

SELECT
    location,
    COUNT(*) AS request_count
FROM
    cloudfront_logs
GROUP BY
    location
ORDER BY
    request_count DESC;

 

Average Response Time

Calculate the average response time for CloudFront requests. Executing this query will give you the average response time for all requests served by your CloudFront distribution. You can use this metric to monitor the performance of your CDN and identify any potential performance bottlenecks.

SQL

SELECT
    AVG(time_taken) AS average_response_time
FROM
    cloudfront_logs;

 

Number of Requests According to Status

The below query will provide you with a breakdown of the number of requests for each HTTP status code returned by CloudFront, allowing you to identify any patterns or anomalies in your CDN’s behavior.

SQL

SELECT status, COUNT(*) AS count
FROM cloudfront_logs
GROUP BY status
ORDER BY count DESC;

Athena empowers you to create even more complex queries involving joins, aggregations, and filtering to uncover deeper insights from your CloudFront logs.
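
If you would rather run these queries programmatically (for example, from a scheduled reporting job), the same SQL can be submitted through the Athena API. Below is a rough boto3 sketch; the region, database name, and results bucket are placeholders you would replace with your own.

import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")  # placeholder region

QUERY = """
SELECT status, COUNT(*) AS request_count
FROM cloudfront_logs
GROUP BY status
ORDER BY request_count DESC
"""

# Start the query; Athena writes the result set to the S3 location you specify.
start = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "default"},                      # placeholder database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder bucket
)
query_id = start["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # the first row holds the column headers
        print([col.get("VarCharValue") for col in row["Data"]])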

Optimizing CloudFront with Log Analysis

By analyzing CloudFront logs, you can identify areas for improvement:

  • Resource Optimization: Resources with consistently high latency or low hit rates might benefit from being cached at more edge locations.
  • Geographic Targeting: Regions with high traffic volume might warrant additional edge locations to enhance user experience.

Conclusion

AWS Athena and CloudFront access logs form a powerful duo for unlocking valuable insights into user behavior and CDN performance. With Athena’s cost-effective and user-friendly approach, you can gain a deeper understanding of your content delivery and make data-driven decisions to optimize your CloudFront deployment.

Ready to Unleash the Power of Your Logs?

Get started with AWS Athena today and unlock the hidden potential within your CloudFront logs. With its intuitive interface and serverless architecture, Athena empowers you to transform data into actionable insights for a faster, more performant CDN experience.

Microsoft Fabric: NASDAQ stock data ingestion into Lakehouse via Notebook
Mon, 01 Apr 2024 | https://blogs.perficient.com/2024/04/01/microsoft-fabric-nasdaq-stock-data-ingestion-into-lakehouse-via-notebook/

Background

Microsoft Fabric is emerging as a one-stop solution for everything revolving around data. Before the introduction of Fabric, Power BI faced a few limitations related to data ingestion, since Power Query offers limited ETL and data transformation functionality. Power Query M language scripting also lacks the ease of development of popular languages like Java, C#, or Python, which may be needed for complex scenarios. The Lakehouse in Microsoft Fabric eliminates this downside by providing the power of Apache Spark, which can be used in Notebooks to handle complicated requirements. Traditionally, organizations had to provision multiple Azure services, such as Azure Storage and Azure Databricks; Fabric brings all the required services into a single platform.

Case Study

A private equity organization wants to keep a close eye on the equity stocks it has invested in for its clients. It wants to generate trends and predictions (using ML) and analyze data based on algorithms written in Python by its portfolio management team in collaboration with data scientists. The reporting team wants to consume the data to prepare dashboards using Power BI. The organization has a subscription to a market data API that can pull live market data. This data needs to be ingested into the warehouse on a real-time basis for further use by the data scientist and data analyst teams.

Terminologies Used

Below are a few terms used in this blog. Reviewing each on its respective website is advisable for a better understanding:

  • Lakehouse: In layman terms, this is the storehouse which will store unstructured data like CSV files in folders and structured data i.e., table (in Delta lake format). To know more about Lakehouse, visit official documentation link: https://learn.microsoft.com/en-us/fabric/data-engineering/lakehouse-overview
  • Notebook: It is a place to store our Python code along with supporting documentation (in Markdown format). Visit this link for details on Fabric Notebook: https://learn.microsoft.com/en-us/fabric/data-engineering/how-to-use-notebook
  • PySpark: Apache Spark is an in-memory engine for the analysis of big data. Spark supports languages like Java, Scala, SQL, Python, and R. PySpark is the Python-based SDK for Spark. More information can be found on the official website: https://spark.apache.org/
  • Semantic Model: Power BI Dataset is now re-named as Semantic Model.
  • Postman: Postman is a popular tool mostly used for API testing (limited feature free edition available). Postman offers Graphical Interface to make HTTP requests & inspect their response in various format like JSON / HTML etc.
  • Polygon.io: It is a market data platform offering API to query stock prices & related information.

Flow Diagram

Below is the flow diagram to help understand how Fabric components are interlinked to each other to achieve the result.

Flow Diagram

API Feed Data Capture

In this case study, a free account was created on https://polygon.io, which allows querying end-of-day data with a cap of 5 API requests per minute. Given this limitation, hourly data for only 3 securities has been ingested to demonstrate the POC (proof of concept). Readers are encouraged to use a paid account, which supports real-time data with unlimited API requests, for their development, testing, and production usage.

Below is the screenshot of HTTP request with response made via postman for single security, to be implemented in Notebook, for data ingestion.

Postman Api Request

JSON response contains property named results, of type object array containing hourly status of specific security.
o = open / c = close / h = high / l = low / v = traded volume / t = timestamp (in Unix style)

Step 01: Create Fabric Capacity Workspace

For the POC, we will create a workspace named Security Market, for our portfolio management division, using New Workspace button (available to Fabric Administrator), with settings as per below screenshots.

Fabric Workspace Setting

It is crucial that in Premium tab of settings, one needs to choose Fabric capacity (or Trial), which offers Lakehouse (refer below screenshot).

Fabric Workspace Capacity

Once created, it should look as below (refer below screenshot).

Fabric Workspace Preview

Step 02: Setup Lakehouse

Next, we will create a new Lakehouse to host API feed captured data. Click New button and choose more options (if Lakehouse is not visible in menu). A detailed page as shown in the screenshot below would appear.

Create Lakehouse Menu

Use Lakehouse option to create a new Lakehouse. Rename this Lakehouse as per your choice.

A Lakehouse can host structured data as Tables and semi-structured/unstructured data as files in sub-folders for raw or processed content. We will create a sub-folder named EOD_Data to store the data received from API requests in CSV format, which in turn will be available to data scientists for further processing (refer to the screenshot below).

Lakehouse Create Folder Option

 

Step 03: Create Notebook

Once Lakehouse is ready, we can proceed towards the next step, where we will be writing Python code to capture & ingest data. Click on Open Notebook > New Notebook to initialize a blank Notebook (refer below screenshot).

Create Notebook Option

This would open a blank Notebook. Copy-paste below Python code into code cell as shown in below screenshot.

import datetime as dt
import requests as httpclient
from notebookutils import mssparkutils

api_key = 'hfoZ81xxxxxxxxxxxxxxxx'  # Secret API Key
symbol_list = ['MSFT', 'GOOG', 'PRFT']  # Symbol list

target_date = dt.datetime.today()
file_content = 'symbol,timestamp,open,high,low,close,volume\n'  # insert CSV header
dt_YYYYMMDD = target_date.strftime('%Y-%m-%d')  # YYYYMMDD

for symbol in symbol_list:  # Iterate through each symbol (security)
    api_url = f'https://api.polygon.io/v2/aggs/ticker/{symbol}/range/1/hour/{dt_YYYYMMDD}/{dt_YYYYMMDD}/?apiKey={api_key}'
    resp_obj = httpclient.get(api_url).json()
    for r in resp_obj['results']:  # Iterate through each rows of security for respective frequency of timestamp
        price_open, price_close, price_high, price_low, trade_volume = r['o'], r['c'], r['h'], r['l'], r['v']
        timestamp = dt.datetime.fromtimestamp(r['t']/1000).strftime('%Y-%m-%dT%H:%M:%S') # decode unix timestamp
        file_content += f'{symbol},{timestamp},{price_open},{price_high},{price_low},{price_close},{trade_volume}\n' # append row
    
mssparkutils.fs.put(f'Files/EOD_Data/{dt_YYYYMMDD}.csv', file_content)  # Save file into Datalake with Date identifier
df = spark.read.load(f'Files/EOD_Data/{dt_YYYYMMDD}.csv', format='csv', header=True, inferSchema=True) # Read file into dataframe
df.write.saveAsTable('nasdaq', mode='append')  # Append dataframe rows to "nasdaq" table

Execute the above code after the NASDAQ market closes. In a nutshell, here is what this Python code does:

  1. Every Market Data platform offers a secret API key, which needs to be provided in URL or HTTP header (as defined in API documentation).
  2. Just to experiment, we have selected 3 securities MSFT (Microsoft Corp), GOOG (Alphabet Inc – Class C) and PRFT (Perficient Inc).
  3. URL requires date to be in YYYY-MM-DD format, which variable dt_YYYYMMDD is holding.
  4. Next, we run a loop for every security we want to query.
  5. HTTP Get request is made to Market API platform by dynamically preparing URL with target date, security (symbol) and API key, setting frequency of hourly data to be returned.
  6. In the JSON response, result property holds array of hourly data changes of security attributes (like open / close / high / low / etc.) as depicted in postman request screenshot. Kindly refer to respective market platform API documentation to know this in detail.
  7. Next, we run a loop to iterate and capture hourly data and append them to a text variable named file_content in comma separated format, to prepare our CSV file (notice we already wrote CSV header in line no 9 of code).
  8. After both loops complete, a file named by date (YYYY-MM-DD.csv) is created under the sub-folder EOD_Data (line 20 of the code).
  9. Finally, the saved CSV file is read into a Spark DataFrame, and the result is appended to a table named “nasdaq” (Spark will automatically create the table if it does not exist).

Let’s preview the data to confirm the Python script succeeded. Navigate to the Lakehouse, expand Tables, and ensure a table named “nasdaq” has been created. Refer to the screenshot below for sample data.

Lakehouse Table Preview
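
If you prefer to verify the ingestion from the notebook itself rather than the Lakehouse UI, a quick check in a new code cell could look like the sketch below (table and column names match the ingestion script above).

# Sanity check on the ingested table: row counts and timestamp range per symbol.
spark.sql("""
    SELECT symbol,
           COUNT(*)       AS row_count,
           MIN(timestamp) AS first_record,
           MAX(timestamp) AS last_record
    FROM nasdaq
    GROUP BY symbol
    ORDER BY symbol
""").show()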

 

Step 04: Schedule Job

This notebook needs to run every day. Notebooks offer a scheduling feature that runs the code automatically at a set frequency. This option is available in the Notebook under Run > Schedule.

Notebook Schedule Menu

This opens the detailed scheduling page shown below. Assuming a 4:00 pm EST market close and adding a 30-minute buffer for safety, let us schedule this Notebook to execute daily at 4:30 pm (refer to the image below).

Notebook Schedule Timer

The job will run daily, even on weekends when the market is closed. Ideally this should not affect analytics, as the Friday end-of-day position simply carries over the weekend. Data scientists are free to delete weekend data or ignore it in their downstream calculation scripts (a simple guard to skip weekend runs is sketched below).
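
If you do want to skip weekend runs entirely, a simple guard at the top of the notebook is one option. This is only a sketch; it assumes the schedule fires in a timezone where the calendar day matches the trading day.

import datetime as dt
from notebookutils import mssparkutils

# Stop the notebook on Saturdays (weekday 5) and Sundays (weekday 6).
if dt.datetime.today().weekday() >= 5:
    mssparkutils.notebook.exit("Weekend - market closed, nothing to ingest")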

 

Step 05: Generate Semantic Model

A Semantic Model (previously known as a Dataset) serves as the data source for Power BI reports. The Lakehouse provides an option to generate a semantic model, letting you choose the specific tables to be loaded into the model required by the BI developer (refer to the screenshot below).

Lakehouse Load Symantic Model

The BI developer can further build upon that semantic model by creating relationships and measures. The only limitation is that calculated columns cannot be added to tables from the model editor, as there is no Power Query in the backend; such columns need to be added in the Notebook instead (a brief example follows).
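
As a hypothetical example, the snippet below derives two extra columns from the ingested table in a notebook cell and writes them to a new table that the semantic model could include instead (column names follow the CSV header used in the ingestion script).

from pyspark.sql import functions as F

# Derive additional columns in Spark - the equivalent of calculated columns,
# since there is no Power Query behind a Lakehouse-backed semantic model.
df = spark.read.table("nasdaq")

enriched = (
    df.withColumn("price_range", F.col("high") - F.col("low"))                            # intraday range
      .withColumn("pct_change", (F.col("close") - F.col("open")) / F.col("open") * 100)   # % move per hour
)

enriched.write.saveAsTable("nasdaq_enriched", mode="overwrite")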

 

Conclusion

The story does not end here; it continues with authoring dashboards and reports in Power BI based on the semantic model produced by the Lakehouse. Fabric enables teams of data scientists, data engineers, and data analysts to work on a single unified platform. The Azure administrator just needs to provision Fabric capacity, which is scalable like any regular Azure workload and is billed in CUs (Capacity Units) that can be adjusted on an hourly basis to accommodate peak workload hours. This blog intends to share a few capabilities of Fabric for handling a real-world scenario. There are many other Fabric components, such as Data Activator, ML Models, and Data Pipelines, which suit more complex use cases and are well worth exploring.

5 Tactics to Safeguard Institutions Against Senior-Level Embezzlement
Mon, 18 Mar 2024 | https://blogs.perficient.com/2024/03/18/5-tactics-to-safeguard-institutions-against-senior-level-embezzlement/

Protecting financial institutions from the perils of high-level embezzlement requires a proactive approach rooted in ethical conduct and stringent compliance measures. To fortify defenses against such threats, financial entities must implement proactive measures aimed at ensuring ethical conduct and compliance within their organizations.  

This blog outlines five key strategies to safeguard your business and mitigate the risks associated with senior-level embezzlement. 

SEE ALSO: A Guide to Fortify Your Institution Against Senior-Level Embezzlement Risks

1. Code of Conduct and Ethics Training

Regularly educate employees, especially senior management, on ethical conduct and the consequences of fraudulent activities.

Foster a strong ethical culture within the organization by addressing topics such as:  

  • Ethical decision-making 
  • Compliance with laws and regulations 
  • Role-specific training 
  • Continuous educational resources and updates 
  • Leadership and culture examples from senior management 
  • Online sources and support

2. Whistleblower Mechanisms

Encourage and support the reporting of suspicious activities through anonymous whistleblower hotlines or platforms. Create a culture that values transparency and integrity through implementing mechanisms like:

  • Hotlines 
  • Internal reporting systems 
  • Legal protections 
  • Third-party reporting services 
  • Policy awareness and continuous training

3. Background Checks and Screening

Conduct thorough background checks on employees, particularly those handling sensitive financial information or holding senior positions.

These checks help in making informed decisions around the following:  

  • Hiring 
  • Partnerships

4. Rotation of Responsibilities

Implement periodic rotation of job responsibilities to prevent any single individual from having prolonged, unchecked control over financial matters.

This rotation: 

  • Facilitates cross-training among employees 
  • Aids in early detection of anomalies 
  • Mitigates the risk of fraud or errors 

5. Regular Audits and External Reviews

Conduct both internal and external audits regularly to detect irregularities or discrepancies in financial records. Engage independent third-party auditors to provide an unbiased perspective and valuable insights into areas of improvement.

Regular audits and reviews can:  

  • Identify weaknesses  
  • Provide compliance assurance 
  • Mitigate risks and other gaps  

Periodically seeking the expertise of external auditors or consultants to review internal controls can offer additional assurance and recommendations for enhancing your institution’s overall security and compliance framework.  

By implementing these proactive measures, institutions can effectively mitigate risks associated with senior-level embezzlement while supporting a culture of accountability, transparency, and integrity across all levels of the organization. 

Reach out today to discuss your compliance efforts with our regulatory and risk services experts.  

Our Expertise 

Perficient’s Risk and Regulatory CoE was established to confront potential compliance issues. This proactive approach enables our clients to mitigate legal and financial risks while upholding a positive reputation and maintaining stakeholder trust. 

Understanding the intricacies of the risk and regulatory landscape is fundamental to our team members within the Risk and Regulatory CoE. With over 500 financial institutions relying on Perficient’s expertise, we equip them with the software and technologies needed to navigate these challenges seamlessly. 

Learn More: Risk and Reputation Matter  

Transforming Treasury Market Regulations
Thu, 14 Mar 2024 | https://blogs.perficient.com/2024/03/14/transforming-treasury-market-regulations/

On December 13, 2023, the Securities and Exchange Commission (SEC) made a landmark decision by voting to adopt significant rule changes mandating central clearing of certain secondary market transactions within the U.S. Treasury market.

These transactions include repurchases (repos), reverse repurchases (reverse repos) and U.S. Treasury securities. The rule change, one of the most substantial reforms in decades, aims to reduce risk and increase efficiency in the U.S. Treasury markets by introducing a clearinghouse to facilitate transactions between buyers and sellers.  

Changing Treasury Market Regulations

According to an SEC press release, the Treasury market, valued at $26 trillion, serves as the backbone of our capital markets. However, only a small portion—20% of repos, 30% of reverse repos, and 13% of Treasury cash transactions—are centrally cleared via the Fixed Income Clearing Corporation (FICC), the only covered clearing agency (CCA) offering clearing services for such transactions.  

Covered clearing agencies (CCAs) act as intermediaries between buyers and sellers, ensuring efficient transaction settlement by netting transactions on behalf of each counterparty and requiring margin from both parties to mitigate the risk of default. The low percentage of Treasury securities cleared through CCAs underscores significant industry-wide risk, which centralized clearing requirements aim to mitigate. 

To support the migration, the Fixed Income Clearing Corporation (FICC) must: 

  • Establish policies and procedures outlining how participants will clear all eligible transactions.  
  • Develop policies and procedures to calculate, collect, and hold a participant’s margin, separating proprietary and customer transactions. 
  • Implement policies and procedures to facilitate access to clearance and settlement services.  
  • Propose rule amendments for Rule 15c3-3 (the Customer Protection Rule) to permit margin required and on deposit to be included as a debit in the customer reserve formula. 

Important Compliance Dates 

The SEC will enforce the new requirements using a phased approach: 

  • By March 31, 2025, the FICC must propose necessary rule changes regarding the separation of house and customer margin, the broker-dealer customer protection rule, and access to central clearing.  
  • By December 31, 2025, direct participants must clear eligible cash transactions through a CCA. 
  • By June 30, 2026, direct participants must clear eligible repurchase and reverse repurchase transactions through a CCA. 

Your Expert Partner

For organizations navigating risk and regulatory challenges, our financial services expertise coupled with digital leadership across platforms equips the largest organizations to solve complex challenges and drive growth compliantly.  

Contact us today to discuss your specific needs. 

Top 6 Trends for the Banking Industry in 2024
Thu, 29 Feb 2024 | https://blogs.perficient.com/2024/02/29/top-6-trends-in-the-banking-industry-for-2024/

This blog was co-authored by Perficient banking expert: Scott Albahary

A slowing global economy, coupled with a divergent economic landscape, poses challenges for the banking industry in 2024. Driven by technological advancements, regulatory changes, and shifting consumer preferences, the banking industry must evolve and respond accordingly.

As institutions adapt, Perficient’s financial services expert, Scott Albahary, has identified six key trends to shape the banking landscape in the year ahead.

1. Credit Scoring and Decisioning

The credit landscape, influenced by the pandemic and subsequent economic shifts, has necessitated a more sophisticated approach to credit scoring and decision-making. Banking institutions are responding by integrating advanced technologies, particularly artificial intelligence and data analytics, into their lending operations to enhance efficiency and adaptability.

The emergence of modern alternatives to traditional credit scoring signifies a broader movement toward financial inclusion. By harnessing alternative data sources and supplementing conventional credit reports, institutions can offer fairer assessments of creditworthiness, extending credit opportunities to underserved populations. Through the analysis of diverse data sets, automation of loan processing, and consideration of varied factors, financial institutions are not only increasing customer satisfaction and reducing operational costs but also fostering resilience in the face of evolving economic landscapes.

Going forward, banks should:

  • Implement AI-driven systems to streamline credit decision processes, reducing decision times, and enabling faster responses to loan applications.
  • Utilize advanced algorithms and data analytics to enhance risk assessment methodologies, allowing banks to identify and mitigate default risks more effectively, thereby making more informed lending decisions.
  • Leverage data analytics tools to optimize portfolio performance by identifying trends, patterns, and potential risks, enabling banks to make proactive adjustments and maximize returns.
  • Explore and integrate alternative data sources and innovative scoring models to offer fairer assessments of creditworthiness. This approach extends credit opportunities to traditionally underserved populations, promoting financial inclusion within the banking sector.

2. Embedded Finance

Embedded finance, characterized by the integration of financial products into non-financial apps or websites, is gaining significant traction, especially in the commercial side of the banking industry. As new regulations come into play, embedded lending is becoming increasingly prevalent, highlighting the need for banks to leverage data analytics and automation effectively while ensuring compliance with regulatory standards.

Embedded finance offers banks in the commercial sector numerous advantages, including:

  • Smooth integration of financial services: Embedding financial services into non-financial platforms allows for seamless integration, providing customers with a unified experience.
  • Enhanced consumer access to credit: Enables consumers to access credit more conveniently, thereby improving the accessibility and usability of financial products.
  • Increased data analytics: Recognizing the importance of data analytics and automation is pivotal in successfully implementing embedded finance solutions. Intelligent automation and other data analytic tools enable banks to optimize processes, enhance decision-making, and improve customer experiences.
  • Facilitation of embedded lending while ensuring compliance: Embedded finance initiatives must adhere to regulatory requirements. By prioritizing compliance alongside embedded lending, banks can mitigate risks and ensure trust in their financial services offerings.

3. Banking Rewards and Loyalty Programs

Throughout the year, banking rewards and loyalty programs will take on increased significance, highlighting the critical role of personalization in enhancing customer retention and maintaining competitiveness in the face of external pressures such as money markets. Banks are recognizing the need to analyze customer data and behavior patterns comprehensively to tailor rewards programs to individual preferences, thereby fostering stronger relationships and increasing loyalty.

To remain relevant and competitive, banks should seriously consider upping their rewards and loyalty programs, ensuring they reflect the following characteristics:

  • Showcasing individualized incentives, preferences, and rewards based on customers’ unique spending habits and behaviors. Personalization not only enhances the customer experience but also strengthens the bond between banks and their clientele.
  • Offering customized loyalty programs that stand out from competitors. By providing unique benefits tailored to specific customer segments, banks can attract and retain customers more effectively.
  • Understanding customers’ preferences and behaviors allows banks to deliver tailored experiences that resonate with individuals, ultimately driving loyalty and long-term engagement.

As conversations around the competitive landscape intensify, banks must prioritize enhancing their rewards and loyalty programs to not only retain existing customers but also attract new ones. By embracing personalization and customization, banks can strengthen their position in the market and build lasting relationships with customers.

Improve the Customer Experience: Our Success in Action

Our client sought ways to improve its feedback processes to more accurately collect and respond to feedback, both internally and externally. We implemented artificial intelligence (AI) and natural language processing solutions to analyze feedback across multiple channels. The system can accurately identify feedback across claims, sales, and internal employees, and it resulted in data being processed 5 times more efficiently and a 98% reduction in response time.

4. Operational Resiliency

In an environment marked by heightened regulatory scrutiny and evolving customer expectations, operational resilience stands as a paramount concern for banks. To address this, institutions are increasingly turning to technology-driven solutions aimed at enhancing service reliability, compliance, and security.

The controlled integration of AI, intelligent automation, and machine learning empowers banks to achieve the following:

  • Leveraging AI and machine learning enables banks to deliver personalized services, thereby enhancing customer satisfaction and fostering loyalty.
  • Automation of tasks not only reduces costs and errors but also liberates resources for higher-value activities, thus streamlining operational efficiency.
  • Advanced analytics empower banks with actionable insights, enabling faster and more informed decision-making processes.
  • Intelligent monitoring is instrumental in proactively identifying potential risks and issuing alerts, thereby enhancing the institution’s ability to respond swiftly and mitigate adverse outcomes.

Perficient’s Expertise

A client needed to improve its loan operations to overcome challenges with productivity reporting, system maintenance, and time-consuming compliance processes. We facilitated a low-risk, efficient transition from a legacy enterprise content management platform to IBM FileNet P8 and enhanced the P8 environment with Trex, our proprietary transaction-processing application framework.

The solution automated content-centric workflows for loan documentation review, loan operations, quality assurance, and closed loan processing.

5. Debt Collections

During times of financial difficulty, considered customer communications are essential. Modern technologies, such as machine learning models, offer banks the opportunity to enhance efficiency and compliance throughout the debt collection process.

AI-powered debt collection allows banks to meet objectives such as:

  • Facilitating faster resolution of outstanding debts: Automating routine tasks like reminders, follow-ups, and data analysis, streamlines the debt collection process and enhances operational efficiency while ensuring adherence to compliance standards.
  • Tailoring collection strategies: Personalization is key in debt collections. AI enables banks to tailor collection strategies based on individual circumstances, thereby increasing the likelihood of successful debt recovery while preserving positive customer relationships.
  • Utilizing more data analytics: Through the analysis of vast data sets, AI technologies identify trends, predict payment behavior, and optimize collection strategies. This data-driven approach empowers banks to make informed decisions and allocate resources effectively, enhancing overall debt recovery outcomes.
  • Ensuring regulatory compliance: AI-driven debt collection systems standardize collection practices and flag potential risks, thereby reducing legal and reputational liabilities associated with debt collection activities. By ensuring compliance with regulations, banks mitigate risks and maintain trust with customers and regulatory authorities alike.

6. Fraud Detection

Banks are increasingly turning to AI-powered solutions to effectively detect and prevent fraudulent activities. Through advanced AI algorithms, banks can swiftly identify and mitigate emerging fraud risks while ensuring regulatory compliance and safeguarding customer data. The advent of generative AI introduces disruptive capabilities across industries, particularly in fraud detection and transaction security enhancement. Key advancements include:

  • Synthetic Data Generation: By creating synthetic datasets that mirror real-world transactions, banks can train fraud detection models on diverse and realistic data sets without compromising customer privacy. This approach enables banks to enhance the robustness and accuracy of their fraud detection systems.
  • Novel Pattern Detection: AI-powered systems excel in uncovering previously unseen patterns and anomalies within transaction data. By leveraging these capabilities, banks can enhance the effectiveness of their fraud detection systems by identifying emerging fraud schemes and swiftly adapting to evolving threats. This proactive approach strengthens the overall security posture of banks and mitigates potential financial losses due to fraudulent activities.

Looking Ahead

The banking industry in 2024 is characterized by innovation, resilience, and a relentless focus on customer-centricity. By embracing emerging technologies, leveraging data analytics, and adapting to regulatory changes, banks can position themselves for sustainable growth and success in the coming year.

Staying ahead of these trends will be critical for banks to meet the needs and expectations of their customers while driving operational excellence and mitigating risks effectively.

Interested in optimizing your banking practice?

Contact us today or explore our comprehensive financial services offerings to learn more.

Perficient Interviewed for Forrester: The Future Of Insurance
Wed, 28 Feb 2024 | https://blogs.perficient.com/2024/02/28/perficient-interviewed-for-forrester-the-future-of-insurance/

With new risks, shifting market dynamics, and the unstoppable march of technology, the insurance industry finds itself at a crossroads. The imperative for transformation has never been clearer, and this is highlighted in Forrester’s report, The Future Of Insurance.

Embracing Change

The report states, “The business of insurance is in a heightened state of transformation…,” and insurance leaders must proactively “…change their business models, products, and processes over the coming decade to thrive in this volatile environment.” Perficient’s insurance experts, who were interviewed for this report, echo this sentiment, emphasizing the need for insurers to embrace innovation to stay relevant.

A Call for Transformation

  • Forrester’s “Six Factors [that] Will Challenge Insurers’ Profits In The Next Decade”:
  1. Geopolitical uncertainty
  2. Challenging economies
  3. Risk protection gap
  4. Technological advancements
  5. Regulatory changes
  6. Climate change
  • Embrace Technology Transformation: Recognizing and embracing digital innovation is not just advantageous; it’s essential for survival amidst evolving consumer expectations.
  • Better Pinpoint Your Risk(s): Predictive analytics to better target risks, artificial intelligence to identify fraud, and intelligent automation to improve operational efficiency are at the heart of insurance digital transformation moving forward.

Evolution of Business Models

Embedded insurance represents a rapidly evolving distribution channel, with significant emphasis on adapting insurance distribution to consumer preferences. One of Perficient’s insurance experts, Brian Bell, Insurance Principal, speaks to this trend, stating, “It is projected that up to 25% of the total P&C premium could flow through embedded distribution channels by the end of the decade.”

Embedded insurance enables consumers to purchase coverage the moment they are most inclined to do so, thereby broadening purchase opportunities for carriers and partners alike. The transformation potential of embedded insurance offers enhanced convenience and control throughout the purchasing process.

A robust digital strategy and API development plan are imperative for success:

  • Partnerships and experiences serve as extensions of carrier brands, necessitating careful cultivation.
  • The digital experience demands real-time, frictionless interaction facilitated by robust cloud infrastructure and API programs (a minimal, hypothetical quote endpoint is sketched after this list).
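
As a minimal sketch of what such an API program might expose, the hypothetical Flask endpoint below returns a real-time quote that a retail partner could call at checkout. The route, request fields, and rating logic are assumptions for illustration, not any carrier's actual embedded-insurance API.

```python
# Minimal sketch of a hypothetical embedded-insurance quote endpoint using Flask.
# Endpoint path, fields, and rating logic are illustrative assumptions, not a carrier's real API.
from flask import Flask, jsonify, request

app = Flask(__name__)

BASE_RATE = 0.04  # assumed flat rate applied to the insured item's value


@app.route("/embedded/quote", methods=["POST"])
def quote():
    """Return a real-time quote a retail partner can embed at checkout."""
    payload = request.get_json(force=True)
    item_value = float(payload.get("item_value", 0))
    coverage_months = int(payload.get("coverage_months", 12))

    # Toy rating: flat rate on value, prorated by coverage term.
    premium = round(item_value * BASE_RATE * (coverage_months / 12), 2)

    return jsonify({
        "product": "embedded-protection-plan",
        "premium": premium,
        "currency": "USD",
        "coverage_months": coverage_months,
    })


if __name__ == "__main__":
    app.run(port=8080)
```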

READ MORE: Data, Personalization, and Embedded Insurance

Future of Insurance Product Design

Going forward, insurance products will be characterized by high levels of individualization, holistic approaches, anticipatory measures, and inclusivity. Strategic investments in smart, innovative technologies like artificial intelligence and generative AI-driven automation will lead to improved efficiency and elevated customer experiences.

Artificial intelligence (AI) emerges as a pivotal force within the insurance industry, especially for regional carriers seeking to thrive in a competitive and dynamic market. These insurers grapple with challenges such as customer retention and brand recognition, underscoring the growing importance of AI solutions for their success.

AI brings significant value to insurance practices in key areas:

  • Process Automation: AI streamlines tasks like claims processing and underwriting, reducing operational costs and enhancing efficiency.
  • Risk Assessment: AI algorithms analyze extensive datasets to assess risk accurately, empowering carriers to offer personalized policies and pricing while mitigating potential risks.
  • Personalized Customer Interactions: By harnessing AI to analyze customer data, carriers can deliver tailored recommendations and experiences, fostering enduring customer loyalty.
  • Pricing Optimization: AI optimizes pricing strategies by scrutinizing market trends and customer behavior, enabling carriers to offer competitive premiums while ensuring profitability.

LEARN MORE: How Can Regional Insurance Carriers Harness the Power of AI?

Strategies for Success

  • Prioritize customer-centricity, operational efficiency, and financial stability.
  • Blend automation with empathy to deliver superior outcomes.
  • Embrace technology to streamline expenses and drive revenue growth.
  • Harness various insurance assets, including platforms, to unlock business value.

Navigating the Future

The business of insurance stands at a pivotal moment in its history. Those who heed the call to transform and innovate will carve out a prosperous future, while those who resist change may find themselves struggling to stay afloat. One thing is certain: the only way forward is through evolution and adaptation.

Unlock Innovation

In navigating the insurance landscape, we believe our thought leaders play a pivotal role in guiding companies towards success. By embracing transformation, adopting agile methodologies, and leveraging innovative technologies, insurers can position themselves as industry leaders in the digital era.

To learn more, download The Future Of Insurance, available for purchase or to Forrester subscribers.

The future of insurance is ripe with opportunities for those willing to embrace transformation. Contact us today to learn more about our insurance offerings.

Future-Proofing Financial Services: Rule 3110 Updates Empower Brokers https://blogs.perficient.com/2024/01/23/future-proofing-financial-services-rule-3110-updates-empower-brokers/ https://blogs.perficient.com/2024/01/23/future-proofing-financial-services-rule-3110-updates-empower-brokers/#respond Tue, 23 Jan 2024 17:19:50 +0000 https://blogs.perficient.com/?p=352611

This post has been updated to reflect FINRA Regulatory Notice 24-02, issued January 23, 2024.

The COVID-19 pandemic prompted several unprecedented shifts in society, notably impacting the workplace and necessitating the adoption of innovative technologies that facilitate collaboration and efficiency in a work-from-home (WFH) environment.

For brokers in the financial services sector, remote work became especially difficult due to the requirement for firms to register and supervise all home office “branches.” However, as remote work has become the new norm, the Securities and Exchange Commission (SEC) has approved revisions to FINRA Rule 3110, easing the requirements for brokers who choose to work from home.

Work-From-Home (WFH) Background

Before the pandemic, firms were required to submit branch office applications for all of their “branches.” Additionally, these branches underwent annual on-site inspections to ensure compliance with regulations.

Throughout the pandemic, the Financial Industry Regulatory Authority (FINRA) temporarily suspended the requirement for firms to submit applications for all office locations that were opened in response to the pandemic. FINRA also implemented a temporary rule (FINRA Rule 3110.17), which allowed member firms to conduct the annual inspections of their branch locations remotely.

Without action, this temporary relief would have expired on June 30, 2024, significantly impacting the industry given an estimated 75% increase in residential non-branch locations between December 2019 and December 2022.

What’s New?

Luckily, FINRA proposed two main revisions to Rule 3110:

  1. Categorize residential home offices as “residential supervisory locations” (RSLs), which should be treated as non-branch locations, subject to safeguards and limitations.
  2. Adopt a three-year “Pilot Program” for remote inspections.

Other key changes are as follows:

    • RSLs must be inspected by the member firm on a regular periodic schedule, presumed to be at least once every three years.
    • Member firms are responsible for ensuring surveillance and technology tools are suitable for remote locations.
    • Member firms are responsible for conducting and documenting a risk assessment for remote locations.
    • Member firms are responsible for establishing, maintaining, and enforcing written supervisory procedures for remote inspections.
    • Member firms are responsible for keeping written inspection records on file for a minimum of 3 years, or until the next inspection report has been completed.
    • Member firms are responsible for providing FINRA with quarterly data, disclosing the number of inspections and any related findings.

The Benefits

FINRA anticipates that the WFH model will endure regardless of the state of the pandemic. The shift to remote work prompted significant lifestyle and work-habit changes, fostered workplace flexibility, and spurred technological advancements that enable firms to closely monitor broker activity and help ensure compliance.

This approval indicates that the industry has gained the support of regulators to leverage technology for supervisory and surveillance purposes. Additional benefits brought by this change are:

  • Workplace flexibility promotes diversity and attracts stronger talent.
  • Increased employee satisfaction and retention.
  • Elimination of registration costs associated with registering all RSLs as branches.
  • Reduction in inspection frequency from annually to every three years.

The SEC approved FINRA’s revisions to Rule 3110 in November 2023 and, in January 2024, FINRA announced the following effective dates: 

  • Rule 3110.19 (Residential Supervisory Location) becomes effective on June 1, 2024; and
  • Rule 3110.18 (Remote Inspections Pilot Program) becomes effective on July 1, 2024.

Interested in exploring more of our financial services expertise?

Contact us today!

Understanding U.S. Regulator’s Proposed Extended Comment Period https://blogs.perficient.com/2023/12/21/understanding-u-s-regulators-proposed-extended-comment-period/ https://blogs.perficient.com/2023/12/21/understanding-u-s-regulators-proposed-extended-comment-period/#respond Thu, 21 Dec 2023 18:19:43 +0000 https://blogs.perficient.com/?p=351598

Earlier this year, the Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System (Fed), and the Federal Deposit Insurance Corporation (FDIC) unveiled a proposed rule that would reshape the landscape for certain financial institutions.

In this article, we delve into the latest developments around the extended comment period, providing stakeholders an increased opportunity to share their insights.

A Brief Overview

Initially open for comment through November 30, 2023, the proposed rule would require certain financial institutions to issue and maintain a minimum amount of outstanding long-term debt (LTD), including:

  • Large depository institution holding companies
  • U.S. intermediate holding companies of foreign banking organizations
  • Select insured depository institutions

The intent behind this regulatory move is to enhance the stability and resilience of these institutions by fostering responsible financial practices.
Learn More: U.S. Regulators to Bank Boards: “Debt is Good”

Crucial Update: Extension of the Comment Period

Since Perficient’s Risk and Regulatory Compliance Center of Excellence (CoE) analyzed this decision in September, a significant development has occurred. On November 22, 2023, the FDIC released an update titled “Agencies Extend Comment Period on Proposed Rule to Require Large Banks to Maintain Long-Term Debt.”

This extension stretches the original comment deadline to January 16, 2024, providing stakeholders with an additional window to thoroughly analyze and provide thoughtful commentary.

How to Engage in the Commenting Process

Stakeholders are strongly encouraged to actively participate in the commenting process, leveraging the extended timeframe until January 16, 2024. To submit your comments to the OCC, we suggest exploring the following methods:

  • Federal Rulemaking Portal: Go to Regulations.gov, enter “Docket ID OCC–2023–0011” in the search box, and click “Search.” Submit public comments via the “Comment” box below the displayed document information.
  • Email: Send your comments to regulationshelpdesk@gsa.gov.
  • Mail: Forward your comments to Chief Counsel’s Office, Attention: Comment Processing, Office of the Comptroller of the Currency, 400 7th Street SW, Suite 3E–218, Washington, DC 20219.
  • Hand Delivery/Courier: Personally deliver your comments to 400 7th Street SW, Suite 3E–218, Washington, DC 20219. Ensure “OCC” is the agency name, and “Docket ID OCC–2023–0011” is in your comment.

It’s important to note that the OCC will include all received comments in the docket and publish them on Regulations.gov without alteration, so avoid including any confidential or personally identifiable information you would not want publicly disclosed.

Navigating the New Landscape

For a deeper understanding of the proposed rule, the federal register notice from the Department of the Treasury (OCC), Federal Reserve Board, and Federal Deposit Insurance Corporation is available here: Federal Register Notice.

Connect with Our Experts

For those seeking more guidance and insight into specific risk and regulatory challenges, our experts from the Risk and Regulatory Compliance CoE are here to help.

Our industry knowledge in financial services, coupled with digital leadership across platforms and business needs, empowers large organizations to navigate complex challenges and foster growth.

Contact us today or explore our Financial Services solutions options to see how Perficient can further propel your business.

Headless BI? https://blogs.perficient.com/2023/11/29/headless-bi/ https://blogs.perficient.com/2023/11/29/headless-bi/#respond Wed, 29 Nov 2023 14:31:15 +0000 https://blogs.perficient.com/?p=350442

Imagine you’re running a business and using analytics to make decisions. You have reports, dashboards, data visualizations – all sorts of content that helps you understand what’s happening in your business. This content goes through a life cycle: it’s created, used, updated, and eventually retired. Managing this cycle effectively is crucial to ensure the information remains accurate and useful.

Now, traditionally, this entire process is tightly woven into the specific business intelligence (BI) tools you’re using. The creation, updating, and viewing of analytics content all happen in the same place, tied to the same interface. It’s like having your entire analytics operation in one big room where everything happens.

A “headless” approach changes this by separating the backend (where all the content is created and managed) from the frontend (where it’s viewed and interacted with). Think of it as having a central kitchen (the backend) where all the meals (analytics content) are prepared, but these meals can be served in different dining rooms (frontends) depending on who’s eating.

In this approach, you create and manage your content in one place, but you can display it anywhere – on different types of devices, within various applications, or even integrated into other systems. This separation offers a bunch of advantages:

  1. Flexibility: You’re not stuck with one way of presenting your data. You can tailor it to fit different platforms or user preferences.
  2. Customization: It’s easier to personalize how information is displayed to different users.
  3. Scalability: As your business grows and changes, you can adapt more easily without being tied down to a single system’s limitations.
  4. Efficiency: Automating the management of this content can streamline operations, making things run smoother and faster.
  5. Security and Compliance: It’s simpler to apply consistent security rules and comply with regulations when you’re managing everything centrally.

In essence, a headless approach in the world of BI and analytics is about having more control and flexibility over how you handle your data and insights. It lets you adapt quickly to changing needs and makes it easier to deliver the right information in the right way to the right people.
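
A tiny sketch can make the separation concrete. Below, a single, centrally managed metric payload (a stand-in for what a headless BI backend might return over its API; the field names are assumptions) is rendered two different ways by two different "frontends": a dashboard widget and an email digest.

```python
# Minimal sketch of the headless idea: one centrally managed metric definition,
# rendered by two different "frontends." The payload shape and field names are
# illustrative assumptions standing in for a real headless BI API response.

# Stand-in for what a backend content API might return (normally fetched over REST).
metric_payload = {
    "metric": "monthly_active_users",
    "title": "Monthly Active Users",
    "period": "2023-11",
    "value": 48210,
    "change_pct": 6.4,
}


def render_for_dashboard(payload: dict) -> dict:
    """Shape the payload for a web dashboard widget (structured JSON)."""
    return {
        "widget": "kpi-card",
        "title": payload["title"],
        "value": payload["value"],
        "trend": f"{payload['change_pct']:+.1f}%",
    }


def render_for_email(payload: dict) -> str:
    """Shape the same payload as a one-line summary for an email digest."""
    return (
        f"{payload['title']} ({payload['period']}): "
        f"{payload['value']:,} ({payload['change_pct']:+.1f}% vs. prior period)"
    )


print(render_for_dashboard(metric_payload))
print(render_for_email(metric_payload))
```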

TL;DR: A “headless” approach in analytics separates the backend (creation and management of content) from the frontend (where it’s viewed). This offers flexibility in how data is presented, allows for customization, improves scalability and efficiency, and simplifies security and compliance. Essentially, it gives more control over how business intelligence content is handled and displayed.

How Process Mining Accelerates Efficiency for Highly Regulated, Customer-Obsessed Industries https://blogs.perficient.com/2023/11/27/how-process-mining-accelerates-efficiency-for-highly-regulated-customer-obsessed-industries/ https://blogs.perficient.com/2023/11/27/how-process-mining-accelerates-efficiency-for-highly-regulated-customer-obsessed-industries/#respond Mon, 27 Nov 2023 17:37:32 +0000 https://blogs.perficient.com/?p=349851

This blog was co-authored by Carl Aridas and Joel Thimsen.

In the dynamic environment of highly regulated industries like healthcare and financial services, leaders often balance competing goals to delight customers while cutting costs. This has challenged many organizations to better optimize and intelligently automate business processes and experiences.

According to The Forrester Wave™: Process Intelligence Software, Q3 2023 report, “Customer-obsessed companies are adapting how they work internally to deliver shorter turnaround times at higher quality and/or lower cost. They are shifting from an efficiency model where improvements focus on optimizing internal functions to an effectiveness model that looks at customer outcomes holistically.”

Imagine a technology that can precisely pinpoint where a process bottlenecks, track where inefficiencies lie, and offer ideas for automation opportunities.

Process mining offers a data-driven, automated, and objective approach to analyzing business processes. When approached well, it enables organizations to:

  1. Unearth hyper-detailed insights into how work is done
  2. Identify processes that hinder productivity and are ripe for a rethink

This is accomplished by combining data science and process management to deeply understand operational processes based on an organization’s widely available activity logs. From there, business processes can be modeled, analyzed, and then optimized.

Diagnosing and Correcting Process Failures

Despite the billions spent yearly to digitize processes, companies often are not operating at their maximum potential. This is partly due to processes being forced to run across a rigid and fragmented technological landscape. Instead of creating value, digitization often creates execution gaps.

Common signs that your organization is suffering from execution gaps:

  • Inability to measure how your processes run
  • You do not know which gaps and root causes have the biggest impact on KPIs
  • You cannot act quickly enough (or do not have the means) to remove the gaps in the underlying transaction systems, forcing costly workarounds

Process mining is technology-agnostic, so it works on any system with an activity log that contains as few as three data points: unique identifier or item ID, timestamp, and activity (i.e., what was done).
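
As a toy illustration of what can be done with just those three fields, the following sketch uses pandas to derive directly-follows transitions, one of the basic building blocks of process discovery, from a small, made-up claims event log. The data and column names are assumptions.

```python
# Toy sketch: derive directly-follows transitions (a basic process-discovery building
# block) from an event log with just the three fields mentioned above. The data and
# column names are illustrative assumptions.
import pandas as pd

event_log = pd.DataFrame({
    "case_id":   ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "timestamp": pd.to_datetime([
        "2023-01-02 09:00", "2023-01-02 09:30", "2023-01-03 10:00",
        "2023-01-02 11:00", "2023-01-02 11:20", "2023-01-02 14:00", "2023-01-04 08:00",
        "2023-01-05 09:00", "2023-01-05 09:10", "2023-01-05 12:00",
    ]),
    "activity": [
        "Receive claim", "Validate", "Approve",
        "Receive claim", "Validate", "Request info", "Approve",
        "Receive claim", "Validate", "Approve",
    ],
})

# Order events within each case, then pair each activity with the one that follows it.
event_log = event_log.sort_values(["case_id", "timestamp"])
event_log["next_activity"] = event_log.groupby("case_id")["activity"].shift(-1)

transitions = (
    event_log.dropna(subset=["next_activity"])
    .groupby(["activity", "next_activity"])
    .size()
    .reset_index(name="count")
    .sort_values("count", ascending=False)
)
print(transitions)
```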

Process mining helps organizations MEASURE and KNOW so they can most effectively ACT:

  • MEASURE capacity and see how processes really run. An ideal solution combines top-performing analysts with innovative AI.
  • KNOW which gaps have the greatest impact, and the right course of action to close them. View custom-tailored results to make data-driven decisions that clearly outline opportunities and plans.
  • ACT to remove gaps in real-time and unlock your capacity. Accelerate implementation with proven delivery methodologies that power the seamless execution of the plan.

The “People Factor” Of Process Mining

Every process is backed by the people who rely on its accuracy and impact. For this reason (and more), process mining reaches its potential when coupled with experts who can provide interpretation and drive iterative improvements.

Ideally, process mining combines sophisticated mathematical models and algorithms with human expertise to discover patterns, analyze, and quantify improvement areas across all systems involved in a process.

After discovering and analyzing these complete business processes, they can be optimized and automated for real, tangible, impactful results.

You May Enjoy: Perficient Named in Forrester’s Digital Transformation Services Landscape, Q3 2023

GOAL: Turn Event Data Into Process Optimization Insights and Actions

Process mining can compare discovered process models against predefined ideals or regulatory standards, enabling your teams to assess process effectiveness and identify deviations that stem from inefficiency or non-compliance. This is particularly valuable in industries like healthcare and finance, where regulatory requirements are strict and compliance is critical; a toy conformance check is sketched below.
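
The sketch below illustrates the idea of a conformance check in its simplest form: each case's observed activity sequence is compared against an assumed reference ("to-be") process, and deviations are flagged. The reference model and traces are toy assumptions, not an actual regulatory standard.

```python
# Illustrative sketch of a simple conformance check: compare each case's observed
# activity sequence against an assumed reference ("to-be") process. The reference
# model and traces are toy assumptions, not a regulatory standard.
REFERENCE = ["Receive claim", "Validate", "Approve", "Notify customer"]

observed_traces = {
    "case-1": ["Receive claim", "Validate", "Approve", "Notify customer"],
    "case-2": ["Receive claim", "Approve", "Notify customer"],            # skipped validation
    "case-3": ["Receive claim", "Validate", "Request info", "Approve"],   # extra step, no notification
}

for case_id, trace in observed_traces.items():
    missing = [step for step in REFERENCE if step not in trace]
    extra = [step for step in trace if step not in REFERENCE]
    if not missing and not extra and trace == REFERENCE:
        print(f"{case_id}: conforms to the reference process")
    else:
        print(f"{case_id}: deviation (missing: {missing or 'none'}, unexpected: {extra or 'none'})")
```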

Healthcare: Highly Protected Data

Automation Examples: Compliantly manage HIPAA-protected patient/member data while increasing accuracy, efficiency, and productivity to help improve patient outcomes and mitigate risk. Reduce denial write-offs and streamline the revenue cycle by pinpointing common data issues, like missing patient information, that can be rectified through data automation and accurate, proactive account updates.
See Also: The Healthcare Executive’s Guide to Intelligent Automation

Financial Services: Regulatory Compliance Complexities

Automation Examples: Ensure rapid response times to address complaints, while adhering to response compliance regulations. Build a reliable risk management strategy using accurate estimations and predictions. Quickly and consistently evaluate transactions against set business or regulatory policies and route cases to the appropriate domain investigators.
Related: Automation Industry Trends and Business Outcomes

Jump Start Greater Efficiencies + Business Outcomes

Process mining simplifies critical, complex business processes that often span multiple steps and stakeholders, and helps to isolate the inefficiencies, errors, and/or delays that can negatively impact organizational performance.

In the true spirit of efficiency, our process mining operating model pairs Perficient experts with your business leaders for rapid, data-driven understanding and a clear path forward. We accurately define process models, outline variations to those processes (identified from most to least common), and equip your enterprise with a report of prime opportunities to automate manual workflows and optimize existing automation.

Contact us today!

View our Process Mining Strategic Position, or explore our expertise in intelligent automation, financial services, and healthcare.
