Responsible AI: Expanding Responsibility Beyond the Usual Suspects
https://blogs.perficient.com/2024/12/04/responsible-ai-expanding-responsibility-beyond-the-usual-suspects/ (Wed, 04 Dec 2024)

In the world of AI, we often hear about “Responsible AI.” However, if you ask ten people what it actually means, you might get ten different answers. Most will focus on ethical standards: fairness, transparency, and social good. But is that the end of responsibility? Many of our AI solutions are built by enterprise organizations that aim to meet both ethical standards AND business objectives. To whom are we responsible, and what kind of responsibility do we really owe? Let’s dive into what “Responsible AI” could mean with a broader scope.

Ethical Responsibility: The Foundation of Responsible AI 

Ethical responsibility is often our go-to definition for Responsible AI. We’re talking about fairness in algorithms, transparency in data use, and minimizing harm, especially in areas like bias and discrimination. It’s crucial and non-negotiable, but ethics alone don’t cover the full range of responsibilities we have as business and technology leaders. As powerful as ethical guidelines are, they only address one part of the responsibility puzzle. So, let’s step out of this comfort zone a bit to dive deeper. 

Operational Responsibility: Keeping an Eye on Costs 

At its core, AI is a resource-intensive technology. When we deploy AI, we’re not just pushing lines of code into the world; we’re managing data infrastructure, compute power, and – let’s face it – a budget that often feels like it’s getting away from us.

This brings up a question we don’t always want to ask: is it responsible to use up cloud resources so that the AI can write a sonnet? 

Of course, some use cases justify high costs, but we need to weigh the value of specific applications. Responsible AI isn’t just about whether we can do something; it’s about whether we should do it, and whether it’s appropriate to pour resources into every whimsical or niche application.

 Operational responsibility means asking tough questions about costs and sustainability—and, yes, learning to say “no” to AI haikus. 

Responsibility to Employees: Making AI Usable and Sustainable 

If we only think about responsibility in terms of what AI produces, we miss a huge part of the equation: the people behind it. Building Responsible AI isn’t just about protecting the end user; it’s about ensuring that the developers, data scientists, and support teams building and maintaining AI systems have the tools and support they need.

Imagine the mental gymnastics required for an employee navigating overly complex, high-stakes AI projects without proper support. Not fun. Frankly, it’s an environment where burnout, inefficiency, and mistakes become inevitable. Responsible AI also means being responsible to our employees by prioritizing usability, reducing friction, and creating workflows that make their jobs easier, not more complicated. Employees who are empowered to build reliable, ethical, and efficient AI solutions ultimately deliver better results.

User Responsibility: Guardrails to Keep AI on Task 

Users love pushing AI to its limits—asking it quirky questions, testing its boundaries, and sometimes just letting it meander into irrelevant tangents. While AI should offer flexibility, there’s a balance to be struck. One of the responsibilities we carry is to guide users with tailored guardrails, ensuring the AI is not only useful but also used in productive, appropriate ways.  

That doesn’t mean policing users, but it does mean setting up intelligent limits to keep AI applications focused on their intended tasks. If the AI’s purpose is to help with research, maybe it doesn’t need to compose a 19th-century-style romance novel (as entertaining as that might be). Guardrails help direct users toward outcomes that are meaningful, keeping both the users and the AI on track. 

Balancing Responsibilities: A Holistic View of Responsible AI 

Responsible AI encompasses a variety of key areas, including ethics, operational efficiency, employee support, and user guidance. Each one adds an additional layer of responsibility, and while these layers can occasionally conflict, they’re all necessary to create AI that truly upholds ethical and practical standards. Taking a holistic approach requires us to evaluate trade-offs carefully. We may sometimes prioritize user needs over operational costs or support employees over certain ethics constraints, but ultimately, the goal is to balance these responsibilities thoughtfully. 

Expanding the scope of “Responsible AI” means going beyond traditional ethics. It’s about asking uncomfortable questions, like “Is this AI task worth the cloud bill?” and considering how we support the people who are building and using AI. If we want AI to be truly beneficial, we need to be responsible not only to society at large but also to our internal teams and budgets.

Our dedicated team of AI and digital transformation experts is committed to helping the largest organizations drive real business outcomes. For more information on how Perficient can implement your dream digital experiences, contact Perficient to start your journey.

Streamline Your PIM Strategy: Key Techniques for Effective inriver Integration
https://blogs.perficient.com/2024/11/13/streamline-your-pim-strategy-key-techniques-for-effective-inriver-integration/ (Thu, 14 Nov 2024)

In today’s digital landscape, efficiently managing product information is vital for businesses to enhance customer satisfaction and drive sales growth. A robust Product Information Management (PIM) system with excellent integration features, like inriver, will streamline your PIM strategy. By utilizing the integration frameworks and APIs provided by inriver, businesses can ensure relevant, accurate, and consistent product information across all channels. This article explores key inriver integration techniques that have the potential to transform your PIM approach.

The Importance of PIM Integration

Automating PIM processes leads to significant improvements in efficiency, accuracy, and scalability. By eliminating manual data entry, automated integration reduces errors and ensures that information remains consistent and current across all systems. This not only saves time and cuts labor costs but also enhances business agility and customer satisfaction. With automated integration, companies can swiftly adapt to market changes, make informed decisions, and provide timely, personalized information to their customers.

Streamline the PIM process

Exploring inriver Integration Options

There are several ways to automate integration between the systems used to send or receive data:

Leveraging APIs (Application Programming Interfaces)

  • inriver REST APIs – These can be utilized to build integrations in any programming language and to customize interfaces within inriver, including creating enriched PDF/Preview templates (a minimal C# call sketch appears after the Remoting Services figure below).
  • inriver Remoting APIs – These require C# programming knowledge and are used with hosted solutions. The Remoting API services consist of six major components:
    • Channel Service – Methods related to channels, e.g., reading the channel structure, publishing/unpublishing a channel, and retrieving entities and links from a channel.
    • Data Service – One of the most widely used services, for creating, updating, deleting, and finding entities and links in the system.
    • Model Service – Contains methods for building and maintaining the PIM data model.
    • Print Service – Used for developing the inriver print plugin.
    • User Service – Provides methods for maintaining users, roles, permissions, and restrictions.
    • Utility Service – Contains various methods covering connector states, HTML templates, languages, and notifications.

Remoting Services
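To make the REST option concrete, here is a minimal C# sketch of fetching a single entity over HTTP. It is an illustration rather than a reference implementation: the host placeholder, the /api/v1.0.0/entities/{id} route, and the X-inRiver-APIKey header are assumptions about typical inriver REST API usage and should be confirmed against the REST API documentation for your environment.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Minimal sketch of calling the inriver REST API from C#.
// The route and API-key header name below are assumptions -- verify them
// against the REST API documentation for your inriver environment.
public class InriverRestClient
{
    private readonly HttpClient _http;

    public InriverRestClient(string baseUrl, string apiKey)
    {
        _http = new HttpClient { BaseAddress = new Uri(baseUrl) };
        _http.DefaultRequestHeaders.Add("X-inRiver-APIKey", apiKey); // assumed header name
    }

    // Fetches a single entity (e.g. a product) by its numeric id and returns the raw JSON payload.
    public async Task<string> GetEntityJsonAsync(int entityId)
    {
        HttpResponseMessage response = await _http.GetAsync($"api/v1.0.0/entities/{entityId}"); // assumed route
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}

// Example usage (host placeholder, not a real endpoint):
// var client = new InriverRestClient("https://<your-inriver-api-host>/", "<your-api-key>");
// string json = await client.GetEntityJsonAsync(1234);
```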

  • Content API – A set of APIs designed to facilitate the onboarding and distribution of large volumes of product data.
    • Content Onboarding API – Helps standardize the data onboarding process by dividing it into five key steps: landing area, field mapping, staging area, PIM validations, and import.
    • Content Delivery API – Used for distributing product data to various channels and platforms; it ensures that product data is uniform across all channels.

Integration Framework (IIF) – The Integration Framework is a foundation for building adapters and outbound integrations in inriver. It transforms a customer’s unique data model into a standard integration model. It supports custom entity types and delta functionality, and it provides standard functions to deliver product data (a conceptual extension sketch follows the flow figure below).

High-level integration framework flow
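To illustrate the outbound flow at a conceptual level, the sketch below shows an extension that reacts to entity updates, maps the customer-specific entity onto a standard model, and publishes it to an outbound channel. The interface names (IEntityUpdateListener, IProductReader, IOutboundQueue) and the StandardProduct type are illustrative placeholders, not the actual IIF contracts; the real extension points should be taken from the inriver Integration Framework documentation.

```csharp
using System.Collections.Generic;

// Conceptual sketch only: the interfaces below are placeholders standing in for the
// real inriver extension contracts, and StandardProduct stands in for the standard
// integration model that IIF maps customer-specific entities onto.
public interface IEntityUpdateListener
{
    void EntityUpdated(int entityId, string[] changedFields);
}

public interface IProductReader
{
    StandardProduct ReadAsStandardProduct(int entityId, string[] changedFields);
}

public interface IOutboundQueue
{
    void Publish(StandardProduct product);
}

public class StandardProduct
{
    public int Id { get; set; }
    public Dictionary<string, object> Fields { get; } = new Dictionary<string, object>();
}

public class OutboundProductListener : IEntityUpdateListener
{
    private readonly IProductReader _reader;  // hypothetical: reads the PIM entity
    private readonly IOutboundQueue _queue;   // hypothetical: target channel or queue

    public OutboundProductListener(IProductReader reader, IOutboundQueue queue)
    {
        _reader = reader;
        _queue = queue;
    }

    public void EntityUpdated(int entityId, string[] changedFields)
    {
        // Map the customer-specific entity onto the standard integration model and push it
        // outbound; passing the changed fields supports delta-style updates.
        StandardProduct product = _reader.ReadAsStandardProduct(entityId, changedFields);
        _queue.Publish(product);
    }
}
```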

The following table highlights the key aspects to consider when choosing an integration approach within inriver:

| Feature/Aspect | REST API | Remoting API | inriver Integration Framework (IIF) | Content API |
| --- | --- | --- | --- | --- |
| Functionality | Basic to advanced functionality | Extensive functionality | Outbound integrations | Built on IIF; standardizes inbound and outbound data handling |
| Programming Language | Technology-agnostic | Requires C# programming | Requires C# programming | Technology-agnostic |
| Use Cases | Remote solutions | Hosted solutions, advanced operations | Exporting data to storefronts, building adapters | Onboarding product data, distributing product data |
| Performance | Better performance for remote solutions | Better performance for hosted solutions | Efficient for outbound data handling | Efficient for both inbound and outbound data handling |
| Flexibility | High flexibility, suitable for various platforms | Less flexible, specific to the inriver environment | Moderate flexibility, decouples standard adapters | High flexibility, suitable for various platforms |
| Scalability | Highly scalable | Scalable within the inriver cloud service | Scalable for outbound integrations | Highly scalable |
| Common Applications | eCommerce platforms, CMS, BI tools | ERP systems, custom extensions | eCommerce platforms, marketplaces | Supplier onboarding, ERP, content distribution |

 

These integration techniques can significantly enhance your PIM strategy, ensuring your product data remains accurate, consistent, and up to date across all channels. At Perficient, we engage in comprehensive discussions throughout our elaboration process and continue to validate during the implementation phase. We help finalize best practices tailored to each customer’s unique needs, recognizing that one approach may work better for one client than another. Get in touch to explore how we can support you on your PIM implementation journey, whether you’re starting fresh or facing challenges with an existing system.

Risk Management Data Strategy – Insights from an Inquisitive Overseer
https://blogs.perficient.com/2024/08/19/risk-management-data-strategy/ (Mon, 19 Aug 2024)

We are witnessing a sea change in the way data is managed by banks and financial institutions all over the world. Data being commoditized and, in some cases, even monetized by banks is the order of the day, though adoption in the risk management function still seems to need a further push. Traditional risk managers, by their job definition, are highly cautious of the result sets provided by the analytics teams. I have even heard the phrase “Please check the report, I don’t understand the models and hence trust the number”.

So, in the risk function, while this is a race for data aggregation, structured data, unstructured data, data quality, data granularity, news feeds, and market overviews, it’s also a challenge from an acceptance perspective. The vision is that all of this data can be aggregated, harmonized, and used for better, faster, and more informed decision making for financial and non-financial risk management. The interdependencies between the risks were factors that were not considered in the “Good Old Days” of risk management (pun intended).

Based on my experience, here are the common issues faced by banks that run the risk of not having a good risk data strategy.

1. The IT-Business tussle (“YOU don’t know what YOU are doing”)

This, in my view, is the biggest challenge facing traditional banks, especially in the risk function. “The Business”, in traditional banks, is treated like a larger-than-life entity that needs to be supported by IT. This notion of IT being the service provider whilst business is the “bread-earner”, especially in traditional banks’ risk departments, no longer holds. It has been proven time and again that the two cannot function without each other, and that is what needs to be cultivated as a management mindset for the strategic data management effort as well. This is a culture change, but it’s happening slowly and will have to be adopted industry-wide. It has been proven that the financial institutions with the most organized data have a significant market advantage.

2. Data Overload (“Dude! where’s my Insight”)

The primary goal of any data management, sourcing, and aggregation effort will have to be converting data into informational insights. The teams analyzing the data warehouses and data lakes and aiding the analytics will have to keep this one major organizational goal in mind. Banks have silos; these silos have been created by mergers, regulations, entities, risk types, Chinese walls, data protection, land laws, or sometimes just technological challenges over time. The solution to most of this is to start with a clean slate. A management mandate for getting the right people to talk and be vested in this change is crucial: challenging, but crucial. Good old analysis techniques and brainstorming sessions for weeding out what is unnecessary and getting the right set of elements are the key. This needs an overhaul in the way the banking business has traditionally looked at data, i.e., as something that is needed only for reporting. Understanding the data lineage and touchpoint systems is most crucial.

3. The CDO Dilemma (“To meta or not to meta”)

The CDO’s role in most banks is now well defined. The risk and compliance analytics and reporting division depends almost solely on the CDO function for insights on regulatory reporting and other forms of innovative data analytics. The key success factor of the CDO organization lies in allocating the right set of analysts to the business areas. A CDO analyst on the market risk side, for instance, will have to be well versed in market data, bank hierarchies, VaR calculation engines, Risk not in VaR (RNiV), and supporting reference data, in addition to the trade systems data that these data elements have a direct or indirect impact on, not to mention the critical data elements themselves. An additional understanding of how this would impact other forms of risk reporting, like credit risk and non-financial risk, is definitely a nice-to-have. Defining a metadata strategy for the full lineage, its touchpoints, and its transformations is a strenuous effort involving analysis of systems owned by disparate teams with siloed implementation patterns built up over time. One fix that I have seen work is that every significant application group or team has a senior representative for the CDO interaction. Vested stakeholder interest is turning out to be the one major success factor in the programs that have been successful. This ascertains completeness of the critical data element definitions and hence aids the data governance strategy in a holistic way.
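To make the critical data element and lineage discussion a little more tangible, here is a small, hypothetical record structure a CDO team might use to register a critical data element along with its golden source, stewards, lineage hops, and the reports it feeds. The field names are purely illustrative, not a reference model.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical metadata record for a critical data element (CDE) and its lineage.
// Field names are illustrative only; a real CDO organization will have its own reference model.
public class CriticalDataElement
{
    public string Name { get; set; }                // e.g. "TradeNotional"
    public string BusinessDefinition { get; set; }  // the agreed business meaning
    public string OwningSystem { get; set; }        // golden source, e.g. the trade capture system
    public string StewardTeam { get; set; }         // the senior representative / stakeholder group
    public List<LineageHop> Lineage { get; } = new List<LineageHop>();
    public List<string> ImpactedReports { get; } = new List<string>(); // e.g. VaR, RNiV, credit risk reports
}

// One hop in the lineage: where the element came from, where it went,
// and what transformation was applied along the way.
public class LineageHop
{
    public string SourceSystem { get; set; }
    public string TargetSystem { get; set; }
    public string Transformation { get; set; }      // e.g. "convert notional to reporting currency"
    public DateTime EffectiveFrom { get; set; }
}
```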

4. The ever-changing nature of financial risk management (“What did they change now?”)

The Basel Committee recommendations have been consistent in driving the urge to reinvent processes in the risk management area. With the Fundamental Review of the Trading Book (FRTB), the focus has been very clearly realigned to data processes in organizations. Whilst the big banks had already demonstrated a sound understanding of modellable risk factors based on scenarios, this time the Basel Committee has also asked banks to focus on Non-Modellable Risk Factors (NMRFs). Add the standardised approach (sensitivities defined by the regulator) and the internal models approach (IMA – bank-defined enhanced sensitivities), and the change from entity-based risk calculations to desk-based ones is a significant paradigm shift. A single golden-source definition for transaction data, along with desk structure validation, seems to be a major area of concern amongst banks.
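As a simplified illustration of the modellable vs. non-modellable distinction, the sketch below classifies a risk factor from its history of real price observations. The thresholds used (at least 24 observations over the past 12 months with at least 4 in every 90-day window, or 100 observations over the past 12 months) follow the commonly cited FRTB risk factor eligibility test, but the exact rules and any jurisdictional variations should be checked against the final regulatory text.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Simplified risk factor eligibility check (modellable vs. non-modellable).
// Thresholds follow the commonly cited FRTB eligibility test and are illustrative only;
// verify against the final regulatory text before relying on anything like this.
public static class RiskFactorEligibility
{
    public static bool IsModellable(IReadOnlyList<DateTime> realPriceObservations, DateTime asOfDate)
    {
        DateTime windowStart = asOfDate.AddMonths(-12);
        List<DateTime> recent = realPriceObservations
            .Where(d => d > windowStart && d <= asOfDate)
            .OrderBy(d => d)
            .ToList();

        // Alternative test: at least 100 real price observations in the last 12 months.
        if (recent.Count >= 100)
            return true;

        // Main test: at least 24 observations, with at least 4 in every 90-day sub-period.
        if (recent.Count < 24)
            return false;

        for (DateTime start = windowStart; start.AddDays(90) <= asOfDate; start = start.AddDays(1))
        {
            DateTime end = start.AddDays(90);
            if (recent.Count(d => d > start && d <= end) < 4)
                return false;
        }
        return true;
    }
}
```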

Add climate risk to the mix with the Paris Accord, and RWA calculations will now need additional data points, additional models, and additional investment in external data defining the associated physical and transition risk. Data-lake / big data solutions with defined critical data elements and a full log of transformations with respect to lineage are a significant investment, but they will only work in a bank’s favor as more changes come through on the regulatory side. There have always been banks that have been great at this consistently, and banks that lag significantly.

All in all, risk management happens to be a great use case for a greenfield CDO data strategy implementation, and these hurdles have to be handled before the ultimate Zen goal of a perfect risk data strategy can be reached. Believe me, the first step is to get the bank’s consolidated risk data strategy right, and everything else will follow.

 

This is a 2021 article, also published here: Risk Management Data Strategy – Insights from an Inquisitive Overseer | LinkedIn
