Perficient Blogs

5 Ways to Identify User Trends with Surveys

Why Guess at User Trends?

With advancements in technology paired with our everyday consumer experiences, today’s user trends are in constant flux. When the technology that’s supposed to help us doesn’t, it’s frustrating, and even with the most successful projects, your users are going to have questions.

Identifying the reasons behind user trends can sometimes be a guessing game; however, there are several data-measuring tools available to help. One extremely useful tactic, and a best practice we recommend (and use ourselves), is the regular use of user surveys.

Implementing Survey Feedback

Receiving and implementing the learnings and suggestions from surveys is not an overnight process. When you’re receiving feedback, it’s important that your users feel heard, even if you can’t address all feedback. Consider starting a “You Asked, We Listened” program where you address some recurring employee feedback items every quarter.

Survey Tip #1: Ask Open-Ended Questions

How do you use Salesforce? Talk me through your process.
What do you like best about Salesforce?
What would you change about Salesforce?
What is most frustrating about Salesforce?
What information do you need that you can’t find?

Survey Tip #2: Learn About Your User

How long have you been using Salesforce?
Rate your overall productivity using Salesforce:  (more) 1-10 (less)
Rate your overall satisfaction with Salesforce: (more) 1-10 (less)

Survey Tip #3: Assess Usability & Training Opportunities

I have the tools I need to do my job: Yes or No
It’s easy to work in Salesforce: Yes or No

Survey Tip #4: Use a Survey App

Use a survey app to conduct a formal survey, measure overall satisfaction, and identify any pain points. Apps are useful for tracking responses and running blind surveys, and many include prebuilt templates to get you set up and sending in just a few clicks. Some popular survey tools include SurveyMonkey, Google Forms, and Doodle (best for scheduling).

Bonus: Poll Straight From Salesforce in Chatter

Create an informal Chatter poll to gather quick insights. Users can also provide written feedback in the poll’s comment thread.

Survey Tip #5: Build Focus Groups

Bring together a group of customers or employees and have them answer a set of specific questions. Assign a moderator for a guided discussion, or let the group take the conversation where it leads. Document the discussion in a report and share the learnings with your team.

Your Turn

You can use several methods to survey your employees and customers, and a combination of various formats is best. Additionally, we discuss several ways to use employee communities for feedback below. Which methods do you use to collect feedback? Let us know in the comments below.

Driving Better Decisions with Data Governance

The business capabilities presented in our new guide demonstrate how forward-thinking financial services companies are leveraging data governance to create value for the enterprise. Accurate and timely information continues to be a key driver of better decision making.

Capabilities such as data principles and strategy, data architecture, organizational roles, authoritative sources, data lineage, data quality, and data contracts can be used individually or in concert to create new value for financial management, regulators, or risk management. Leading firms are leveraging these capabilities to maintain excellence in a highly competitive marketplace.

Through technological advances and well-defined business capabilities, new paradigms have been created for leveraging data governance to accelerate value for financial services organizations.

Hype Cycle for 3 Phases of an Agile Backlog Groom

I was reading about the Gartner Hype Cycle and realized the same curve could be applied to the Agile Backlog Groom (or simply Grooming). Grooming is the process by which an agile scrum team absorbs new work and adds detail. Every piece of work, no matter the size, follows this generic evolution. That is, the following discussion is Epic, Story, etc. agnostic, and if you are a water-scrum-fall champion, this applies to you too.

Agile Backlog Groom

The groom session goal is for the scrum team to move from initial introduction to common understanding. This common understanding of the scope is a cornerstone of agile scrum; further, Mike Willis explains the cone of uncertainty. The team moves from confusion (conscious or not) to tangible scope. After grooming, the team can estimate and commit to the work, and this applies equally to the requirement specification step of a waterfall model, as suggested by Philip Wang.

PS: At any point the scrum team can (should) delay this activity pending inquiry or action.

1 Expand the “What”

On initial consumption, the team discovers “the what.” The team learns of the business request and asks questions to make sure they understand the vision. This first phase is a real workout session. Let the ideas rip!

The agile backlog groom hype cycle.

The team members rapidly expand the total scope and include the “Nice to have’s” too. Everyone on the scrum team is busy, especially the supporters, for example, Architect, DB, UX, SME, SEC, OPS, etc.

Nothing is left outstanding at the end of this phase. We are at “scope peak.” Consider edge cases, tech debt, and refactoring, not just nominal flows. In a verbal contract, the team members all agree that the story could not mean anything more. The Product Owner has done her part by actively ranking the pieces/parts, for example through the use of MoSCoW (must, should, could, would).

2 Constrain the “What”

After the story pieces/parts have expanded [beyond belief], the team brings the story back to a minimum viable addition to the software baseline. The “Musts” definitely remain and the “Shoulds” are nearby.

They put the scrum car into reverse, so that the scope boundary tightens down. Stories spin off by decomposition methods (Christiaan Verwijs has some good ideas). Now, only the most necessary use cases, test scenarios, and edge cases, etc. remain.

The important thing is that all team members are moving to a common understanding. The team’s cone of uncertainty has narrowed.

3 Add the “How”

The team hits the “scope trough” and the scope has bottomed out. Then they begin to kick around “the how.” There is an expansion here – adding necessary dependencies, refactoring, documentation, databases, technical prototyping, design styling, test assets, etc.

This is the development engineering feedback to the Product Owner.  If it is an easy get based on the engineering approach, the team can add a “Could” or “Would.” Most importantly, the cone of uncertainty, that is, the level of mismatched understanding, has narrowed even more.

Go Build It

The team has reached a “plateau of understanding.” Everyone involved knows [nearly] exactly what is needed and how they will get there. The story details are essential, understood, and bounded. These steps help to manage expectations. The scrum team can estimate with confidence, commit to the MoS’s (musts and shoulds), and excite with the CoW’s (coulds and woulds).

A mature scrum team can groom a story in ~15 minutes, but clearly this depends on the size of the work. Using the hype cycle, the team can see where it is in the backlog groom transitions. If they are so inclined, groom efficiency can be measured with a clock.

 

 

Are you valuing Data as an asset on your Balance Sheet?

The average age of a company listed on the S&P 500 has fallen from almost 60 years old in the 1950s to less than 20 years old today. Innovative companies that are willing to embrace transformative technologies make the list today, while businesses that are hesitant to embrace change risk becoming obsolete.

Thriving companies, innovators, value their data as an asset. They use big data solutions as a competitive advantage to increase revenue, reduce cost, and improve cash flow. Data is woven into the fabric of every organization.  It records what happened, but increasingly, it’s being used to drive change and transformation at unprecedented rates.

Any business leader looking to maximize their data needs to ask themselves: Does your organization have a comprehensive data strategy? Does that strategy address both structured and unstructured data? Do you have a platform that allows your organization to analyze transactional data and social sentiment?

If you answered “No” to any of these questions, chances are you have untapped data resources or, at the very least, under-utilized data resources.

Experts claim that there is a 10x return on investment in analytics.  For some organizations, that’s the low end estimate of value they’ve created.  Industry analyst firm IDC has even estimated there is a $430B economic advantage to organizations that analyze all data and deliver actionable insights.  The bottom line is that the opportunity is big, and growing.

 

How valuable are your Data Assets?

Data has been doubling every couple of years for a while now. With the exponential growth in data volume and data types, traditional data warehouse architectures cannot solve today’s business analytics problems. You need new approaches to handle the growing complexity while containing expenses and staying ahead of the competition.

Your customers, channels and competitors are digital. So are your employees and increasingly even your products. Digital transformation is critical and according to Forrester, 89% of executives see it impacting their business in the upcoming 12 months – and that survey was taken in 2017!

Also, machine learning is more than just a buzzword. It’s a core part of the solution. For many, even most, companies, it’s the most important part of the solution. With the arrival of big data, the data itself is quite complex, as are the interactions between different data sets or types of data. Machine learning algorithms are able, at some level, to figure things out for themselves.

The point is that there are new types of business challenges that organizations are facing today. To get the most return from your organization’s data capital, you need to be well versed in transformative technologies that are available today and approaches that you can use to reduce cost and yield valuable business insights. You should plan to invest more in advanced analytics tools to get the most value out of big data that you continue to accumulate over time.

 

What steps are Organizations taking?

Organizations are building modern analytics platforms, and they are demanding access to all the data they need: data to inform every decision, when and where it matters. They want to rely on modern algorithms to crunch their data. Pretty much any ML algorithm is likely to give better or more accurate results when there’s more data to work with. Whether you are trying to build a better view of your customers’ wants and needs, or figure out why a component is breaking, you’ve got to start with as much data as possible.

Data science is becoming a key part of enabling organizations to capitalize on their data. Companies are looking to use data science and to figure out how to incorporate it into their businesses.

Finally, they want all the data, all of these algorithms, and this modern technology to be put to work in support of the applications that are used to run their business. For example, Oracle’s Adaptive Intelligent Applications combine artificial intelligence, machine learning, and decision science with data captured from Oracle SaaS applications and third-party data. The unique value of these learning-enabled applications is that they learn from results, which increases their accuracy as they are used over time.

Key Takeaways About Compliant IT Systems In The Cloud

This is the final post in our series on maintaining regulatory-compliant IT systems in the cloud. In this post, we’ll go over the key takeaways from the series and then we’ll send you on your way!

Regardless of how much control you have over your IT systems, if you are using them for regulatory purposes, it is your responsibility to ensure their compliance. This reality, however, should not deter you from adopting cloud-hosted systems, as their benefits are undeniable. Rather, be smart about how you select and manage them.

To make the most of the cloud, while maintaining regulatory compliance, you need a robust cloud vendor qualification procedure and a regulatory expert involved in your contract negotiations. Key topics include:

  • Physical security
  • Data security, privacy, and confidentiality
  • Technical support, including enhancements
  • Uptime, including backup and recovery
  • Data mobility
  • Regulatory compliance, especially change control
  • How the cloud vendor qualifies the cloud vendors it uses (e.g., data centers)

Additionally, be thoughtful about which tools you use in your cloud vendor qualification process, aligning the tools with the criticality of the system being selected. And, finally, ensure you have the appropriate application-level and quality assurance procedures in place to support the use of each system once it has been validated and released for production use.

And that’s it! You made it through the series. If you haven’t yet downloaded your copy of the guide on this topic, be sure to fill out the form below. If you have any questions about any of the content of these posts or the guide, or if you need help assessing and resolving compliance issues with cloud vendors, please let us know! We always love hearing from our readers.

XML Transformation in Informatica

Introduction

In this post, we will see how to process XML data in an Informatica mapping. Before that, we should understand what a transformation is in Informatica.

Informatica Transformations are repository objects which can read, modify or pass data to the defined target structures like tables, files, or any other targets required. A Transformation is basically used to represent a set of rules, which define the data flow and how the data is loaded into the targets.

What are Informatica Transformations?

In Informatica, transformations help transform the source data according to the requirements of the target system and ensure the quality of the data being loaded into the target.

Transformations are classified as Active or Passive, and as Connected or Unconnected.

Active Transformation:

An active transformation can change the number of rows that pass through the transformation, change the transaction boundary, and change the row type.

Informatica Designer does not allow us to connect multiple active transformations, or an active and a passive transformation, to the same downstream transformation or transformation input group, because the Integration Service may not be able to concatenate the rows passed by active transformations. The Sequence Generator Transformation (SGT) is an exception: because it only generates unique numeric values, the Integration Service has no problem concatenating its rows.

Passive Transformation:

A passive transformation does not change the number of rows that pass through it, maintains the transaction boundary, and maintains the row type.

Informatica Designer allows you to connect multiple transformations to the same downstream transformation or transformation input group only if all transformations in the upstream branches are passive.

Connected Transformation:

Transformations which are connected to the other transformations or directly to target table in the mapping are called connected transformations.

Unconnected Transformation:

An unconnected transformation is not connected to other transformations in the mapping. It is called within another transformation, and returns a value to that transformation.

XML Transformation:

Informatica PowerCenter has powerful built-in functionality to process XML data. We can create an XML definition in PowerCenter from an XML file, DTD file, XML schema, flat file definition, or relational table definition.

There are three types of XML transformations: the XML Source Qualifier, the XML Parser, and the XML Generator transformations. The first two are described below:

XML Source Qualifier Transformation:

It is an active, connected transformation used only with an XML source definition. It represents the data elements that the Informatica Server reads when it executes a session with XML sources. The XML Source Qualifier has one input or output port for every column in the source. If we remove an XML source definition from a mapping, the Designer also removes the corresponding XML Source Qualifier transformation.

Example:

Create Source Definition for XML Source Qualifier in Informatica

Once we are connected to the Informatica Designer with our credentials, navigate to the Source Analyzer to define our XML data as a source. Let’s assume the XML file is on our local file system. The Import XML Definition window opens when we do this, as shown below.

Import XML definition window:

Select the XML file Contract.xml from your local file system and click the Open button.

This will open the XML Wizard (Step 1). Click the Next button.

Click the Finish button.

In this example we are using an XML file with entity relationships, so we select the first option. If we were using an XML file with hierarchies, we would select the second option.

With the entity relationship option, the Designer selects a root and creates separate views for complex types and multiple-occurring elements. It defines relationships and inheritance between complex types, and it defines relationships between views with keys.

From the screenshot below, we can see our newly created XML source definition in Informatica.

We can use this source definition in the Mapping Designer as we usually would, but this time it will show an XML Source Qualifier instead of a Source Qualifier. See the diagram below.

We can take the required fields from this Source Qualifier and load them into our target tables based on the business requirement. In our example, the XML Source Qualifier has many views for the XML contract. See the screenshot below for the XML views; double-click on the source definition to open them.

If we need to edit any of the XML properties or columns, we make the change in this view and then validate it. Navigate to XML Views -> Validate XML Definition as shown below.

If we want to edit the XSD file structure that was used earlier, it can be accessed from the XML view itself. Click on View -> XML Metadata.

Click on the “no namespace” entry to get the XSD file; it will open in the installed XSD editor, such as Microsoft Visual Studio.

XML Parser Transformation:

It is an active, connected transformation used to extract XML inside a pipeline and then pass it to the target. The XML is extracted from source systems such as files or databases. The XML Parser transformation reads XML data from a single input port and writes data to one or more output ports.

Example:

If the source definition is a flat file or relational table that contains one column of XML data (CLOB datatype), the XML Parser is used to retrieve the data.

Navigate to Source -> Import from Database.

The XDATA column contains the XML value. Only one XML data column can be parsed by the XML Parser Transformation, because the transformation allows only one input port. However, we can pass through other columns, such as REQ_ID and TRANSMISSION_ID, to the XML Parser Transformation.

Take the source definition into Mapping Designer

Now we need to create the XML Parser Transformation using the XSD file for the XDATA column from the source definition. Navigate to Transformation -> Create in the Mapping Designer.

Select XML Parser from the Transformation type drop-down list and click Create.

This will open the Import XML Definition window for the XSD file import. Click Open.

After importing the XSD file, click Next.

We need to select the XSD structure type, as we did for the XML Source Qualifier. Click Finish in the step below.

Click Done to complete the XML Parser Transformation creation.

 

Once we are done creating the XML Parser Transformation, we need to link the XDATA column from the source definition to the Data Input port of the XML Parser Transformation.

If the regular fields from the source definition are also needed in the target, those fields can be added as pass-through columns in the XML Parser Transformation.

In the example below, REQ_ID and TRANSMISSION_ID are taken as pass-through columns in the XML Parser Transformation.

We will take the required fields from XML Parser to downstream transformations.

If data from multiple views needs to be written to the target, we need to use a Joiner transformation with the Sorted Input option.

Finally, the target table will be linked as given in the below diagram.

Note:

If we wish to edit the XSD structure in an existing mapping, we need to make a copy of the mapping; the copy will be in a checked-in state. In the copy, we can re-pull the new XSD structure by using Synchronize XML Definition (right-click the XML Parser Transformation). All changes should be made in the copy before checking out. Once the changes are done, rename the original mapping with a _backup suffix and rename the copy to the original name. The corresponding session should also be checked out so that it points to the mapping in which the changes were made.

Oracle BI Data Sync: How to Add a New Dimension

In this and the following post, I will cover the steps entailed in adding dimension and fact tasks in Oracle Data Sync. The latest releases of Data Sync include a few important features, such as performing look-ups during an ETL job, so I intend to cover these best practices when adding new dimension and fact tasks. These instructions are based on Oracle BI Data Sync version 2.3.2 and may not apply to previous versions of Data Sync.

In this example, I already have two Data Sync tasks in my project: one to load the dimension table W_OTLIS_STATE_D and one to load the fact table W_OTLIS_TRANSACTIONS_F. In the following step-by-step instructions, I will demonstrate how to add a second dimension table. Refer to my following post if you are interested in the steps for adding a fact task.

  1. In this example, I will add a dimension table called W_OTLIS_METRO_STATION_D to be loaded from the source table: OTLIS_METRO_STATION.
  2. Under the Project tab, select Relational Data, and then Data From SQL. (Other options work, but in this example I will use a SQL query to source data for the dimension since that allows maximum flexibility to customize the source query in the future if needed.)
  3. In the new window, enter the name of the new task (without any spaces), enter the name of the new target table for the dimension (follow your naming convention), choose the Relational output format, select the source Connection, and enter a SQL statement to select the columns needed from the source table. In my example, I am selecting all the columns. Click on OK.
  4. Wait for a message like the following to confirm the operation was successful. Then click OK.

  5. Under Project, you should now see a new record added for the new task. Click on the newly created task, then click on Load Strategy in the lower pane to edit it.

  6. By default, the task does a full load of the table because the Load Strategy is set to the “Replace data in table” option. Ideally, if your source table supports incremental tracking using a last update date, you want to switch the Load Strategy to support incremental loads and therefore achieve faster job run times. To do that, select the “Update table” option. Keep both checkboxes below that checked. Click Ok.

  7. You should now get another window with 2 tabs, as follows. On the User Keys tab, select the column(s) that constitute the unique identifier of the source table. In my example, it is made up of one column: STATION_ID. This is the column that will be used to update the target dimension table on an incremental basis, to avoid duplicating the same record once it is updated in the source. On the Filters tab, select the date column that tracks the date/time of any updates/inserts that happen on the source table. When running incrementally, Data Sync will only extract source table records that are inserted/updated after the last Data Sync job run time. Click OK.
  8. You should see a message like the following. Click OK to create the unique index on the user key. The purpose of this is to enhance the performance of incremental loads by creating an index on the matching-criteria columns.

  9. Under Projects, click on the Target Tables/Data Sets tab, and select the newly added dimension table. Check the box “Insert Unspecified Row”. The purpose of this option is to insert an “Unspecified” row, with a primary key of 0, into the dimension. This allows us not to lose fact table records whose dimension foreign keys don’t exist in the dimension. Instead of losing these fact records, they will get loaded against the Unspecified dimension value.
  10. Still under Target Tables, select the newly created dimension table and then select the Table Columns tab from the bottom half of the screen. We will now add 2 standard columns to the dimension table. Click on the New button from the bottom half of the screen and add the following 2 columns:
    1. KEY – Data Type: NUMBER – Length: 38 – Uncheck Nullable
    2. W_LAST_UPDATE_DT – Data Type: Date – Uncheck Nullable

Save the changes.

  11. Click on the Relational Data tab under Projects. Select the newly created task for the dimension. In the bottom half of the screen, click on the “Unmapped Columns” button. This will show you a window with the 2 columns you just created. Move them over to the right side under Selected Columns and click OK.

  12. Edit the Target Expression for the KEY column, and enter the Default as: %%SURROGATE_KEY. This will generate a primary numeric unique key for the new dimension table. Click OK.

  13. Edit the Target Expression for the W_LAST_UPDATE_DT column, and enter the Default as: %%UPSERT_TIMESTAMP. This will automatically populate the date and time at which the rows are loaded into the target dimension table. Click OK.

  14. Click Save.
  15. Under the Projects, Target Tables tab, select the newly added dimension table and then select Indices from the bottom half of the screen. We will add a primary key index, so click New and add a unique index by checking the Is Unique checkbox. Click Save. Then click on the Columns button to add the name of the column: KEY. This is the same column name added in the previous steps as the surrogate key of the dimension. Save the changes.

  16. This is it for adding the dimension task. We are ready now to run the job. This should create the new dimension table with the indices and populate the table with the source data. Check to make sure that the record count in the target dimension table matches the record count from the source, plus one additional row where the KEY = 0 and the text attributes are “Unspecified”.
  17. Once the first job run is complete and validated, add some test incremental data in the source table, re-run the job, and you should see the new changes updated in the target dimension table.
Validating JSON Message in IIB

Overview

This blog helps you understand the JSON parser in IIB and how to validate an incoming JSON message.

What is JSON?

JavaScript Object Notation is a lightweight, plain-text format used for data interchange. It’s a collection of name-value pairs.
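For illustration, a minimal JSON message might look like the following (the field names and values here are hypothetical, used only to show an object of name-value pairs and an array):

{
  "name" : "John Doe",
  "age" : 35,
  "skills" : [ "ESQL", "Java" ],
  "address" : { "city" : "Chicago" }
}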

How does IIB parse the JSON message?

In IIB, a JSON message is realized as objects (name-value pairs) and arrays. IIB provides a feature called the JSON domain. The JSON parser and serializer process messages below Data under the JSON domain.

The JSON parser converts the incoming bit stream into a logical tree structure. It validates only the syntax of the incoming JSON message; it won’t validate the content/values of the incoming message against any schema (such as swagger.json), because JSON message modeling is not supported by IIB. The serializer converts the logical tree structure back into a bit stream.

The picture below describes the JSON logical tree structure created by the JSON parser.

If the syntax of the incoming JSON message is wrong, IIB returns a JSON parser error response like the one below:
E.g.: BIP5705E: JSON parsing errors have occurred. : F:\build\S1000_slot1\S1000_P\src\DataFlowEngine\JSON\ImbJSONParser.cpp: 257: ImbJSONParser::parseLastChild: ComIbmWSInputNode: MF_JSON_POC#FCMComposite_1_1

BIP5701E: A JSON parsing error occurred on line 6 column 1. An invalid JSON character (UTF-8: '0x00000022') was found in the input bit stream. The JSON parser was expecting to find one of the following characters or types: '"}", ","'. The internal error code is '0x00000108'. : F:\build\S1000_slot1\S1000_P\src\DataFlowEngine\JSON\ImbJSONDocHandler.cpp: 550: ImbJSONDocHandler::onInvalidCharacter: ComIbmWSInputNode: MF_JSON_POC#FCMComposite_1_1

Creating REST API

This section describes how to create a REST API to validate the JSON message using an XSD.
STEP 1: Create a REST API project and specify the API base path.

STEP 2: Define the JSON schema (swagger.json) under Model Definitions. A model definition helps create a JSON schema that defines the structure of the JSON message.

The JSON schema will be created under the OtherResources folder with the default name “swagger.json”.

JSON Schema:

{
  "swagger" : "2.0",
  "info" : {
    "title" : "JSONoverHTTP",
    "version" : "1.0.0",
    "description" : "JSON message custom validation"
  },
  "paths" : {
    "/customValidation/xsd" : {
      "post" : {
        "operationId" : "postXsd",
        "responses" : {
          "200" : {
            "description" : "The operation was successful."
          }
        },
        "consumes" : [ "application/json" ],
        "produces" : [ "application/json" ],
        "description" : "Insert a xsd",
        "parameters" : [ {
          "name" : "body",
          "in" : "body",
          "schema" : {
            "$ref" : "#/definitions/PersonDetail"
          },
          "description" : "The request body for the operation",
          "required" : true
        } ]
      }
    }
  },
  "basePath" : "/json_overhttp/v1",
  "definitions" : {
    "PersonDetail" : {
      "type" : "object",
      "properties" : {
        "name" : {
          "type" : "string"
        },
        "age" : {
          "type" : "number"
        },
        "address" : {
          "type" : "object",
          "properties" : {
            "street" : {
              "type" : "string"
            },
            "city" : {
              "type" : "string"
            },
            "phoneNumber" : {
              "type" : "number",
              "format" : "length=10"
            }
          }
        },
        "ValidationFlag" : {
          "type" : "string"
        }
      },
      "required" : [ "name" ]
    }
  }
}
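For reference, a request body that conforms to the PersonDetail definition above could look like the following (the values shown here are purely hypothetical):

{
  "name" : "John Doe",
  "age" : 35,
  "address" : {
    "street" : "123 Main Street",
    "city" : "Chicago",
    "phoneNumber" : 1234567891
  },
  "ValidationFlag" : "Y"
}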

STEP 3: Create a new resource.

STEP 4: Define the resource path and select the operation as post

STEP 5: Click the subflow icon to implement the JSON-to-XML conversion and validation.

The REST API message flow will be created after completing the above steps. postXsd is a subflow where the actual implementation is done.

STEP 6: Create an XSD per the Swagger document under a shared library and name it PersonDetailSchema.xsd.

STEP 7: Reference the library from the REST API.

Right-click the REST API, select Manage Library references, and choose the shared library in which you created the PersonDetailSchema.xsd schema.

STEP 8: Add a Mapping node into the subflow (postXsd) to convert the JSON message to an XML message.

STEP 9: Select swagger.json as the input format and PersonDetailSchema.xsd as the output format. This will convert the incoming JSON message to XML.

STEP 10: After the Mapping node, add a Validate node. Configure its properties: set Domain to XMLNSC and Validation to “Content and value”.

Note: If the XSD is created under the library, there is no need to reference it explicitly in the message model. At run time, the broker will pick the right schema to validate the incoming message.

Below is the postXsd subflow.

Testing the Message Flow

In the test utility below, we received an error response back from the REST API: “The value '1234567B91' is not a valid value for the 'phonetype' datatype.”

The phone number is defined with data type number in the JSON schema and, correspondingly, as an integer in the XSD schema definition.

After the message is converted from JSON to XML, the Validate node validates it against the schema definition. Because the phoneNumber field has the string value “1234567B91” in the incoming message, the Validate node throws the error shown above.
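For example, a request body like the following (hypothetical values, with the phone number taken from the error above) would fail validation, because phoneNumber carries a non-numeric character and therefore cannot be converted to a valid integer:

{
  "name" : "John Doe",
  "age" : 35,
  "address" : {
    "street" : "123 Main Street",
    "city" : "Chicago",
    "phoneNumber" : "1234567B91"
  },
  "ValidationFlag" : "Y"
}

The JSON parser accepts this message because its syntax is valid; the failure is raised only after the mapping step, when the Validate node checks the converted XML against PersonDetailSchema.xsd.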

Conclusion

IIB doesn’t support a JSON message model, but it allows you to access and manipulate the JSON message. So, validation can be achieved by creating customized code as shown above.

Cynefin Framework: Disorder in Healthcare

In the last several blog posts, we talked about the Cynefin framework and its four types of projects: Simple, Complicated, Complex, and Chaotic.

The Cynefin framework is used to help project managers, policy makers, and others reach decisions on how to execute based upon how well you know your end result. The framework consists of five decision-making contexts, or domains: simple, complicated, complex, chaotic, and disorder.

All of these provide guidance and direction for managers to identify how they will need to proceed with execution.

However, do you know which domain all projects start in? That is correct: Disorder. Until the project manager understands the needs and demands of the project, there is disorder.

The only way out of this domain is to break parts of a project into known domains. For example, if the business leaders cannot agree on the tenets of the project, that would be a good place to start.

As good project managers, we know how to gather requirements. So perhaps we start with a visioning work session where decision makers come together and agree on the vision of the project and several critical success factors.

Once critical success factors are identified, and agreed to, individual use cases can be developed for a specific critical success factor. Now we have clear demands of the project.

By doing this we have partitioned a portion of disorder into a known domain – complicated perhaps, where solid analysis can begin on the defined use case(s).

Research to determine the best technical solution can begin based upon the defined vision, critical success factors and use cases.

To reiterate, Cynefin will guide a project manager to the appropriate domain based upon project need and objectives. The key is to break the project down into small enough components where these components can be isolated and assigned to one of the four known domains.

The Importance of a Digital Strategy for Women’s Health Services

The strategy for women’s health and well-being has been widely discussed at global and domestic levels for many years.

With life expectancy remaining higher for women than for men, a shift towards prevention and wellness, the rapid adoption of digital technologies improving access to a wealth of healthcare information and services at the touch of a fingertip, and much more, women are increasingly participating in decision-making on their health throughout their entire life-course.

Creating and maintaining a strategy for women’s health services that has a presence on your digital channels has become imperative for many healthcare providers, as websites, mobile applications, and virtual services are often the first port of call for many patients/consumers.

Whether you’re a healthcare provider whose women’s health services strategy focuses on the determinants of women’s health across the entire life-course, from adolescence to senior adulthood, or one with expertise in diagnosing and treating certain conditions that affect women differently than men, there are some steps you can take to adopt a digital experience that supports the care provided for women.

Below are tactics that can form part of your overall digital strategy.

  • Across your website and mobile applications, have a dedicated area that helps consumers/patients understand the full breadth of services you offer for women. This includes highlighting your expertise in obstetrics and gynecology, specific health conditions and illnesses that commonly impact women, or other services that apply across the entire life-course, such as primary care, should you offer them.
  • Educate patients/consumers in good health and well-being through advice, tips, and good habits to form. Focusing information on the stages of a woman’s life, for example by decade from adolescence/young adulthood through to senior adulthood, will assist consumers/patients in finding relevant information. In addition, frequently updating content on your website will keep your digital content fresh and help engage consumers/patients over a longer period of time.
  • As an organization, show your support for women’s health by participating in events such as National Women’s Health Week, organized by the U.S. Department of Health and Human Services’ Office on Women’s Health. There are also a number of specialized health awareness weeks and months to participate in, including the American Heart Association’s Go Red For Women®. Show and share your support for these across your digital channels.
  • Investing in social media campaigns that focus on delivering high-quality content regarding women’s health will give your content a boost by reaching a large audience with the use of relevant hashtags. This is another way to keep consumers/patients engaged in your content, and it can provide a wide variety of topics for your digital content calendar.

In essence, adopting tactics across your digital channels that support and advocate for women’s health can assist you in having a digital strategy that segments and targets specific patients/consumers.

There are many patient/consumer segments to target according to the healthcare services you offer; communicating with each of them across digital channels is one of many steps to support the patient experience.
