Effectively Managing Mule API Versions

I attended the MuleSoft Connect 2018 conference, the largest industry gathering focused on integration & APIs. While I was at this conference, I had the honor of presenting on how to effectively manage Mule API versions and portal sites in multiple environments.

Check out my presentation from the conference below:

Effectively Managing Mule API Versions and Portal Site in Multiple Environments – Connect 2018

In this presentation, learn more about Mule APIs, a Mule API’s logical version vs. its technical version, challenges with the platform, different ways to deploy an API, and much more!

This presentation also offers high-quality resources, such as the POM Snippet for Deployment and the API Version Naming Convention (1.x).

With over 100 speakers and 90 sessions, the conference was certainly a success. I look forward to hopefully attending MuleSoft Connect next year!

MuleSoft Crowd Release vs. Mule 4 Release

MuleSoft’s Crowd Release and Mule 4 have been out for a while. The Crowd Release came out around May 2017, and Mule 4 was released at the end of 2017.

Many existing Mule clients have upgraded to the Crowd Release, and many new clients have adopted Mule 4. However, even with the rapid adoption, I have noticed many people are still unclear on the differences between the two.

In this post, I’ll highlight the main differences between the Crowd Release and Mule 4, along with some main features of each release. I plan to elaborate more on these features in future posts.

The details of each release are in the release notes. However, release notes can sometimes be hard to read, so here is a glance at the main differences.

Release   API Manager   Runtime   Package   Studio       Notes
Crowd     2.x           3.x.x     .zip      6 or prior   APIM is the new 2.x; runtime is still 3.x.x
Mule 4    2.x           4.x.x     .jar      7 or newer   APIM 2.x; the .jar package is not compatible with 3.x.x .zip

Crowd Release

Let’s take a look at the Crowd Release first. At its core, the Crowd Release runtime is still 3.x.x. To Mule 3.x users, the most noticeable changes are API Manager 2.x, the new Design Center, and the improved Exchange. The Crowd Release now uses API Manager 2.x: https://docs.mulesoft.com/api-manager/v/2.x/

The Crowd Release promotes full life-cycle API development and deployment. Here are the main steps to develop an API:

Please note: the following steps are for developing a full API with an implementation, not just an API proxy, which is a slightly easier case.

Development:

  • Step 1: start the RAML design in Design Center. It can be a RAML fragment (data types, etc.) or a full API spec; a minimal fragment sketch follows this list.
  • Step 2: publish the RAML to Exchange.
  • Step 3: download the API spec from Exchange (or import it directly in Studio), and develop your API flow as you normally would.
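
For reference, here is a hedged sketch of what a Step 1 RAML 1.0 data-type fragment might look like (the type and property names are illustrative):

#%RAML 1.0 DataType
type: object
properties:
  courseId: string
  courseName: string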

Deployment:

Before your API can be deployed to the runtime, you need to obtain the parameters for API auto-discovery (API name and version) from API Manager 2.x. In the past, you could give arbitrary values (as long as they were unique). Now you have to get these values from the API Manager; otherwise, the API Manager won’t recognize your API.

Please note, APIM 2.x now differentiates between environments for the APIs. This is a major step forward in alleviating the chaotic API versioning headaches of the old APIM 1.x (see https://blogs.perficient.com/2017/10/04/what-is-in-a-mule-api-version/)

Continuing from the development steps above:

  • Step 4: grab the API name and version from the API console (they appear on the right side), and add API auto-discovery to your project via the <api-platform-gw:api …> tag (see the sketch after this list). As a best practice, store these parameters in an environment-specific properties file.
  • Step 5: finish the project flows, and deploy the API to the runtime. Now API Manager 2.x should see that the API is running (registered).
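
A sketch of the Step 4 auto-discovery tag, with the name and version read from an environment-specific properties file (the property keys and flow name are illustrative; the tag form follows the <api-platform-gw:api> snippet shown later in this blog):

<api-platform-gw:api apiName="![p['api.name']]" version="![p['api.version']]" flowRef="api-main" doc:name="API Autodiscovery"/>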

Promoting API to Higher Environments

As mentioned above, APIM 2.x now manages APIs per environment. It also enforces a process to systematically promote APIs to higher environments.

To promote an API from the API Manager console, find your API in the DEV environment; you will see a “Promote from environment” button. Follow the screens, and it will generate the new parameters (API name, version) for the target environment (UAT, PROD, etc.). Copy these parameters and place them in your environment-specific property files. Now you can deploy your API to the higher environment.

Mule 4:

Mule 4 is a major release. On top of API Manager 2.x (the same as in the Crowd Release), it also changes the packaging format to .jar, and the project has to be deployed to a 4.x runtime. As a side effect, Mule 3.x and Mule 4 projects are no longer compatible with each other.

Mule 4 introduces many other new changes. A few include:

  • You have to use Studio 7 to develop Mule 4 projects.
  • The project is packaged as a .jar; the project structure has changed, and it heavily leverages YAML for many artifacts.
  • Dataweave is upgraded to 2.0, which has many powerful new features.
  • It supports exception handling with a “try” scope, much like Java’s try-catch (see the sketch after this list).
  • It comes with enhanced data streaming support, which (supposedly) makes many of the confusing transformers (json-to-object, object-to-json) obsolete.
  • Oh, one small but important detail: API auto-discovery now uses a unique API ID.
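
As a flavor of the new exception support, here is a minimal sketch of a Mule 4 try scope (the flow content and error type are illustrative):

<try>
    <http:request method="GET" config-ref="Request_config" path="/orders"/>
    <error-handler>
        <on-error-continue type="HTTP:CONNECTIVITY">
            <logger level="WARN" message="Request failed; continuing with an empty payload"/>
            <set-payload value="[]"/>
        </on-error-continue>
    </error-handler>
</try>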

Key Takeaways:

The Crowd Release still runs on Mule 3.x. It uses APIM 2.x, Design Center, and Exchange to manage the full API development and deployment life cycle.

The Mule 4 Release is built on top of the Crowd Release, but it uses the Mule 4.x runtime. Besides bringing many new features, it requires Studio 7; the project is packaged as a .jar, and project files are no longer compatible between Mule 3.x and 4.x.

OAuth Dance with Mule – Connect 2018

Here is the full presentation converted to PDF: Oauth Dance with Mule – Connect 2018.PDF

OAuth2 has become the de facto standard for REST APIs. Yet, there are still many misconceptions about OAuth2 and its relationship with REST APIs.

This session will first cover general OAuth2 topics: Is OAuth2 a protocol? What are grant types and the OAuth dance? Why the valet key metaphor? We will then use live CURL scripts to show the steps of the “OAuth dance” for each grant type, so readers can follow the steps and clearly see how OAuth2 works. Finally, we will dive into a real-world use case to highlight how to configure a complex Ping Federate cluster as the external OAuth provider for a Mule API.
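
As a flavor of those CURL scripts, here is a hedged sketch of the simplest dance, the client credentials grant (the endpoints and credentials are hypothetical):

# Step 1: exchange the client credentials for an access token
curl -X POST https://auth.example.com/oauth2/token \
  -d "grant_type=client_credentials" \
  -d "client_id=my-client-id" \
  -d "client_secret=my-client-secret"

# Step 2: call the protected API with the returned access token
curl https://api.example.com/v1/foo -H "Authorization: Bearer <access_token>"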

Join Me at My Speaking Sessions at MuleSoft Connect 2018

I’m speaking at MuleSoft CONNECT, May 8 – 10 in San Jose, CA. Join me at the premier conference for digital business, where CIOs, IT leaders and developers come together to exchange ideas and pragmatic insights on driving business transformation! www.connect.mulesoft.com/2018

If you plan to come to the speaking sessions, please note the times were just updated!

I have two speaking sessions, scheduled for Tuesday 3:15 PM – 4:00 PM and Wednesday 4:15 PM – 5:00 PM. One topic is API security; the other is API version and portal management. For details, please view the latest Connect agenda.

Check it out here: https://connect.mulesoft.com/speakers. I’m listed right next to the MuleSoft CEO 🙂 by coincidence, of course.

Upload and Download Files From AWS S3 Bucket Using Mule Connector and Access Token

For the most part, it should be straightforward to transfer files to and from AWS S3 buckets with the Mule S3 connector, which has been out there for a long time. If you just use the access key and secret, you can see an example here: http://www.dejim.com/blog/2016/03/10/amazon-s3-connector-download-bucket/

The complication comes when you also want to use a session token in addition to the access key and secret. In order to do that, you need a newer version of the S3 connector, “4.2.1.201704271749”. Once you update your connector, you will see the additional “Session Token” field. Put your token in there, and you should be all good.

As you can see, the new connector has a release date stamp of “2017-04-27”. Here is a caveat: if you download the current (March 2018) Anypoint Studio release 6.4.3, the S3 connector that comes with the release is still the old one. However, when you try to update the connector, it says everything is up to date. You are stuck between two worlds 🙁 go figure…

Until MuleSoft fixes the issue, here is a workaround. Find an older version of the studio and do an update; you most likely will see the newer S3 connector. I’m not going to dish out advice on what your best option is from here. You can either develop your special project with this old version of Studio temporarily, or figure out how to get the connector files from your older studio directory. Mine sits here: .\my.studio.path\plugins\org.mule.tooling.ui.contribution.s3.3.5.0_4.2.1.201704271749. If push comes to shove, you can skip the S3 connector entirely: just use the HTTP requester, follow the AWS SDK, and do everything “manually”. I would strongly advise against that route unless you have a compelling reason to take it.

Anyway, for the curious mind, here is the code snippet to upload and download a file to an S3 bucket:

<s3:config name="Amazon_S3__Configuration" accessKey="#[flowVars.AccessKey]" secretKey="#[flowVars.SecretKey]" sessionToken="#[flowVars.Token]" doc:name="Amazon S3: Configuration"/>

<http:request-config name="HTTP_Request_Configuration" protocol="HTTPS" host="xxx.com" port="443" doc:name="HTTP Request Configuration" responseTimeout="30000">
    <http:basic-authentication username="xxx" password="xxx" preemptive="true"/>
    <tcp:client-socket-properties connectionTimeout="30000"/>
</http:request-config>

<flow name="s3-tokenFlow">
    <http:listener config-ref="HTTP_Listener_Configuration" path="/s3" doc:name="HTTP"/>
    <!-- grab your token, whatever your setup is ... -->
    <http:request config-ref="HTTP_Request_Configuration" path="/SO/api/S3Token" method="GET" doc:name="HTTPs"/>
    <!-- the token service returns JSON; convert it to a map so the fields are easy to pick out -->
    <json:json-to-object-transformer doc:name="JSON to Object" returnClass="java.util.HashMap"/>
    <set-variable variableName="Bucket" value="#[payload.get('Bucket')]" doc:name="Bucket"/>
    <set-variable variableName="AccessKey" value="#[payload.get('AccessKeyId')]" doc:name="AccessKey"/>
    <set-variable variableName="SecretKey" value="#[payload.get('SecretAccessKey')]" doc:name="SecretKey"/>
    <set-variable variableName="Token" value="#[payload.get('Token')]" doc:name="Token"/>
    <set-payload value="This is a test file for the bucket" encoding="US-ASCII" mimeType="text/plain" doc:name="Set Payload"/>
    <logger level="INFO" doc:name="Logger" message="bucket=#[flowVars.Bucket], client=#[flowVars.AccessKey], sec=#[flowVars.SecretKey], token=#[flowVars.Token], pay=#[payload]"/>
    <!-- upload the payload as an object, then read the same object back -->
    <s3:create-object config-ref="Amazon_S3__Configuration" bucketName="#[flowVars['Bucket']]" key="yourBucketKey/test.txt" acl="PUBLIC_READ" doc:name="Amazon S3"/>
    <json:object-to-json-transformer doc:name="Object to JSON"/>
    <logger message="resp=#[message.payloadAs(java.lang.String)]" level="INFO" doc:name="Logger"/>
    <s3:get-object-content config-ref="Amazon_S3__Configuration" bucketName="#[flowVars['Bucket']]" key="yourBucketKey/test.txt" doc:name="Copy_of_Amazon S3"/>
    <object-to-byte-array-transformer doc:name="Object to Byte Array"/>
    <file:outbound-endpoint path="c:\temp\blah.txt" responseTimeout="10000" doc:name="File"/>
    <logger message="resp=#[message.payloadAs(java.lang.String)]" level="INFO" doc:name="Copy_of_Logger"/>
</flow>

Json Data Processing with Mule Transformers and Dataweave

As REST APIs take over the world, JSON has stood out and become the de facto data format for APIs. It’s important that developers are familiar with JSON data processing.

A couple of years ago I wrote a blog post discussing Mule Json transformers. Since then, I have seen many new nuances dealing with json in a Mule flow. In this new post, we’ll take another look at json data processing with Mule transformers and Dataweave.

MuleSoft DataSense

One interesting feature of a Mule flow is that it attempts to “automagically” interpret the data format on your behalf. This is called DataSense. At design time, when you drop a message processor somewhere in the flow, the Anypoint Studio IDE auto-senses the inbound and outbound payload, and handles/converts the data in the best format it can think of (the Mule runtime does the same thing). This great feature can be a double-edged sword. It helps greatly when it works correctly, since you don’t have to worry about what’s going on under the hood. However, when it doesn’t work, you are left scratching your head for a long time.

We’ll look at some examples of how DataSense works “automagically”, and when we need to step in and tell Mule what the correct data type should be.

The json transformers

Mule comes with quite a few json transformers. We’ll look at two of them closely in this post: “json to object” and “object to json”.

If you hover over the “json to object” and “object to json” transformers on the studio palette, you will see the following descriptions:

“The JSON to Object Transformer converts a JSON encoded object graph to a Java object” and “The Object to JSON Transformer converts a Java object to a JSON encoded object that can be consumed by other languages, such as JavaScript or Ruby”.

Examples

In all the examples, we assume we start with a JSON payload that looks like the one below. It’s an array with two map entries (i.e., key-value pairs):

[{"course1":"Introduction to Mule"}, {"course2":"Advanced Mule"}]

Example 1 – “JSON to Object” the default behavior

See the code snippet below. In this example:

  1. We first set the payload to the JSON string.
  2. Then we use the “json-to-object” transformer. By default, this transformer converts the JSON string to an internal JsonData object.

Please note: if the inbound payload is not a well-formed JSON string, the transformer will throw an exception.

  3. Now the payload can be accessed like:

  • JsonPath: #[json:[0]/course1] – This is a deprecated feature; it won’t be supported in 4.0.
  • JsonData object: #[payload.get(0)] – Very uncommon to do it this way. You have to look up the JsonData API to figure out what to call.
  • Dataweave: mycourse: payload[0].course1 – The payload can also be parsed by Dataweave with array-like syntax. Although the payload is JsonData, this is DataSense at work. It “knows” how to parse the data.

<set-payload value='[{"course1":"Introduction to Mule"}, {"course2":"Advanced Mule"}]' doc:name="Set Payload-json-map-text"/>
<json:json-to-object-transformer doc:name="JSON to Object-default JsonData"/>
<logger message="json:[0]/course1=#[json:[0]/course1], payload.get(0)=#[payload.get(0)], json:[1]/course2=#[json:[1]/course2]" level="INFO" doc:name="default json to obj"/>
<dw:transform-message doc:name="Transform Message">
    <dw:set-payload><![CDATA[%dw 1.0
%output application/json
---
mycourse: payload[0].course1]]></dw:set-payload>
</dw:transform-message>

Example 2 – json to object with return type “java.lang.Object” or “java.util.HashMap[]”

In this example,

  1. We first set the JSON payload as in example 1.
  2. Then we call “json to object” as in example 1, but we add returnClass=“java.util.HashMap[]”. This way, we control the resulting payload type instead of getting the default JsonData. Please note the return class type has to be correct for the input JSON; otherwise, the transformer will throw an exception.

Please note, you can also use returnClass=“java.lang.Object” and let the transformer figure out the return class. In this case, it would return a java.util.ArrayList containing HashMap entries.

  3. Now the payload can be parsed easily with MEL, like #[payload[0].course1], #[payload[0]['course1']], #[payload[0].'course1']
  4. For Dataweave, however, DataSense can’t figure out the inbound type without some human help. We need to tell Dataweave the input payload type by setting the input payload meta-data type as a custom data type: choose Java, then Collection, then java.util.HashMap

With the meta-data, Dataweave can then parse the payload as an array, like “newCourse1: payload[0].course1”

<set-payload value='[{"course1":"Introduction to Mule"}, {"course2":"Advanced Mule"}]' doc:name="Set Payload"/>
<json:json-to-object-transformer returnClass="java.util.HashMap[]" doc:name="Copy_of_JSON to Object-java.util.HashMap[]"/>
<!-- <json:json-to-object-transformer returnClass="java.lang.Object" doc:name="Copy_of_JSON to Object-java.lang.Object"/> -->
<logger message="array of map, #[payload[0].course1], #[payload[0]['course1']], #[payload[0].'course1']" level="INFO" doc:name="Logger map"/>
<dw:transform-message doc:name="DW wt input meta-data collection&lt;HashMap&gt;" metadata:id="80d9dde1-a513-4e33-a1e9-d6d905da18a4">
    <dw:input-payload mimeType="application/java"/>
    <dw:set-payload><![CDATA[%dw 1.0
%output application/json
---
newCourse1: payload[0].course1]]></dw:set-payload>
</dw:transform-message>
<logger message="payload=#[payload]" level="INFO" doc:name="Logger"/>

Example 3 – Object to Json

This example tests the default behavior of the “Object to JSON” transformer. It’s nearly the same as example 2, except that after we convert the payload to HashMap[], we call “Object to JSON” right away, and the payload is converted into a plain string with a mime type of “application/json”.

Does this example have any practical value? Maybe. If you have a long flow where at some point you need to access the payload using MEL as a Java map, and at some later stage you need to use Dataweave again, then instead of going through the meta-data manipulation of example 2, you can just convert the payload back to a JSON string (with mime type application/json).

Here is another practical use case. When you use the DB connector to query a database, the returned result is a list of HashMap. If you want to emulate the DB query later (for debugging or test purposes) without connecting to the DB, you can use this trick to capture the DB query result as a JSON array and write the array out as a string in the log. Capture that string, and you can use it to emulate your DB query, modifying the values any way you want. It can be a great trick for running different DB query test cases. A sketch of the capture step follows.
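
Here is a minimal sketch of that capture step, assuming a Mule 3 DB connector with an existing configuration (the query and the config name are illustrative):

<db:select config-ref="MySQL_Configuration" doc:name="Query courses">
    <db:parameterized-query><![CDATA[select id, name from courses]]></db:parameterized-query>
</db:select>
<!-- the DB connector returns a List of HashMap; serialize it so it can be captured from the log -->
<json:object-to-json-transformer doc:name="Object to JSON"/>
<logger message="captured-db-result=#[message.payloadAs(java.lang.String)]" level="INFO" doc:name="Log DB result as JSON"/>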

<set-payload value='[{"course1":"Introduction to Mule"}, {"course2":"Advanced Mule"}]' doc:name="Set Payload"/>
<json:json-to-object-transformer returnClass="java.util.HashMap[]" doc:name="JSON to Object-java.util.HashMap[]"/>
<logger message="array of map, #[payload[0].course1], #[payload[0]['course1']], #[payload[0].'course1']" level="INFO" doc:name="array of map"/>
<json:object-to-json-transformer doc:name="Object to JSON - result-payload-string-wt-mime-app/json"/>
<dw:transform-message doc:name="DW-same-as-#1">
    <dw:set-payload><![CDATA[%dw 1.0
%output application/json
---
course1: payload[0].course1]]></dw:set-payload>
</dw:transform-message>
<logger message="payload=#[payload]" level="INFO" doc:name="Logger payload"/>

Example 4 – what not to do

Not to beat a dead horse, but what if you convert the HashMap[] payload of example 3 directly to a string and give it an “application/json” mime type?

Unfortunately, our curiosity has gone too far this time. The resulting payload is:

{{course1=Introduction to Mule},{course2=Advanced Mule}}

Even though the payload has an “application/json” mime type and almost looks like a JSON string, if you look closely, it is NOT a valid JSON structure. In fact, it is just the Java toString() rendering of the maps: a nice-looking string for human eyes that the flow can no longer parse!

<set-payload value='[{"course1":"Introduction to Mule"}, {"course2":"Advanced Mule"}]' doc:name="Set Payload"/>
<json:json-to-object-transformer returnClass="java.util.HashMap[]" doc:name="JSON to Object-java.util.HashMap[]"/>
<set-payload value="#[message.payloadAs(java.lang.String)]" mimeType="application/json" doc:name="Set Payload-to-string-not-json"/>

Example 5 – Mime type trick

If you just want to parse that sample json data with Dataweave, here is the simplest way to do it.

  1. We set the payload to the same JSON string. However, we also set the mime type to “application/json”.
  2. Although the payload type is still string, DW now knows how to parse it. I suppose that’s the magic power of DataSense.

Please note that you cannot parse the payload with MEL, JsonPath or anything else, because technically the payload is still a plain string.

<set-payload value='[{"course1":"Introduction to Mule"}, {"course2":"Advanced Mule"}]' mimeType="application/json" doc:name="Set Payload Json-array-mime-app/json"/>
<dw:transform-message doc:name="DW-works-only-wt-mime-app-json-payload[0].course1">
    <dw:set-payload><![CDATA[%dw 1.0
%output application/json
---
course1: payload[0].course1]]></dw:set-payload>
</dw:transform-message>

Example 6 – load Json map from a text file

I’d be remiss not to show another JSON processing technique: loading JSON data from a text file. It’s fairly common to receive JSON data as a text file, or to want a JSON data file to drive emulation or tests.

We first load the data file (as a stream), convert it to a string, then “json to object” converts it to a Java object.

<set-payload value="#[Thread.currentThread().getContextClassLoader().getResourceAsStream('json-map.json')]" mimeType="application/json" doc:name="load-json-map-file"/>
<!-- convert the stream to a string -->
<object-to-string-transformer doc:name="Object to String"/>
<!-- convert the JSON string to an internal Java object -->
<json:json-to-object-transformer doc:name="JSON to Object" returnClass="java.lang.Object"/>

MuleSoft – Correlating Array and HashMap with Dataweave

In data processing, two of the most common collection data types are array (list) and map (HashMap). The key difference between a list and a map is how they are accessed. A list is accessed by an integer positional index, such as list[0]. A map, however, is accessed by a key, such as map.get("key1") or simply map.key1; the syntax may vary based on the context.

This post discusses the use case where a Mule flow needs to cross-reference a related array and map. We’ll use Dataweave to demonstrate the use case.

The subtle difference between two similar Json data sets

Because this post uses Json format to describe the sample data, let’s clarify some subtle differences between map and list in Json format.

In the following examples, both json data sets describe the same collection of courses with ID and name:

{"course1":"Introduction to Mule", "course2":"Advanced Mule"}
[{"course1":"Introduction to Mule"}, {"course2":"Advanced Mule"}]

However, if they are directly converted to Java objects inside a Mule flow, the first collection should be translated to a Java Map; a pseudo access syntax would be courseMap['course1'] or courseMap.course1. The second one should be converted to a Java List (of Map); the pseudo access syntax would be courseList[0]['course1'] or courseList[0].course1.

Now, let’s define the hypothetical problem.

Definition of the hypothetical problem

We have two input json data sets #1 and #2. We need to generate the output structure #3 in Json.

#1 – a collection of course IDs and names represented as a map:

{"course1":"Introduction to Mule", "course2":"Advanced Mule"}

#2 – a raw grade report (list) for 2018, it looks like:

[{"year":"2018", "studentName": "john", "cid": "course1", "grade": "A"},
{"year":"2018", "studentName": "john", "cid": "course2", "grade": "B"},
{"year":"2018", "studentName": "joe",   "cid": "course1", "grade":"C"}]

#3 – We want to generate a new grade report with course names and some header info like:

{"reportDate": "2018-02-15",
"schoolYear": "2018",
"schoolName": "This is a Mule school",
"grades": [
{
"student": "john",
"course": "Introduction to Mule",
"grade": "A"
},
{
"student": "john",
"course": "Advanced Mule",
"grade": "B"
},
{
"student": "joe",
"course": "Introduction to Mule",
"grade": "C"
}
]
}

Dataweave script explained

The full source code can be found at the end of this post. Let’s analyze the meat part of the project first – the Dataweave script.

Assume we store #1 in a flow variable “courseMap”, put the #2 grade list in the payload, and also add a flow variable “schoolNameVar”. Here is the DW script that will do the trick:
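
(The script below is reproduced from the complete source at the end of this post; the line numbers in the notes count “%dw 1.0” as line 1.)

%dw 1.0
%output application/json
---
{
  reportDate: now as :string {format: "yyyy-MM-dd"},
  schoolYear: payload[0].year,
  schoolName: flowVars.schoolNameVar,
  grades:
    payload map ({
      student: $.studentName,
      course: flowVars.courseMap[$.cid],
      grade: $.grade
    })

}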

Notes for the script:

Line 4 and line 15 – indicate the output will be a single element, not an array

Line 5 – grab the current time and convert it to a date string

Line 6 – if the array shares a common value in each element, you can reference the first element’s value in the output header

Line 7 – how to access flowVars

Line 8 to 14 – the output contains an array called “grades”

Line 9 – “map” is a lambda-style function that takes an input array (the payload in this case). “map” has two implied arguments, “$” and “$$”. So “map” really implies “map($, $$)”, where $ represents each element in the input array and $$ is the index of the element inside the array.

You can add explicit arguments like “map(oneGradeRec, idx)”; then lines 11 and 12 will look like:

course: flowVars.courseMap[oneGradeRec.cid],

grade: oneGradeRec.grade

Complete source code:

In this implementation, we dynamically create a HashMap, then set the payload to a JSON array as a text string, and use “json to object” to convert the payload to JsonData before invoking DW.

Please keep your eyes on the two high-level data structures while reading the source code:

  1. We first create a flow variable “courseMap”. It is a LinkedHashMap Java object (I believe a plain HashMap would work just fine for this case). We initialize the variable with the map values.
  2. Then we set the payload as a JSON string. It is logically an array, internally represented as a JsonData object after the “json-to-object” conversion.

<flow name="mainFlow">
    <http:listener config-ref="HTTP_Listener_Configuration" path="/proxy" doc:name="HTTP"/>
    <set-variable variableName="courseMap" value="#[new java.util.LinkedHashMap()]" doc:name="courseMap"/>
    <set-payload value="#[((java.util.HashMap)courseMap).put('course1', 'Introduction to Mule')]" doc:name="c1"/>
    <set-payload value="#[((java.util.HashMap)courseMap).put('course2', 'Advanced Mule')]" doc:name="c2"/>
    <logger message="c2=#[courseMap['course2']] or c2=#[courseMap.course2]" level="INFO" doc:name="show map"/>
    <set-payload value='[{"year":"2018", "studentName": "john", "cid": "course1", "grade": "A"},
        {"year":"2018", "studentName": "john", "cid": "course2", "grade": "B"},
        {"year":"2018", "studentName": "joe",  "cid": "course1", "grade":"C"}]' doc:name="Set DB Grade Records as Json string"/>
    <json:json-to-object-transformer doc:name="JSON to Object"/>
    <logger level="INFO" doc:name="by default, converts to org.mule.module.json.JsonData"/>
    <set-variable variableName="schoolNameVar" value="This is a Mule school" doc:name="schoolNameVar"/>
    <dw:transform-message doc:name="Transform Message" metadata:id="fecb7fe3-7b5f-4fba-bc3a-da5f30d3bb6e">
        <dw:set-payload><![CDATA[%dw 1.0
%output application/json
---
{
  reportDate: now as :string {format: "yyyy-MM-dd"},
  schoolYear: payload[0].year,
  schoolName: flowVars.schoolNameVar,
  grades:
    payload map ({
      student: $.studentName,
      course: flowVars.courseMap[$.cid],
      grade: $.grade
    })

}
]]></dw:set-payload>
    </dw:transform-message>
    <logger message="json input is converted by DW as hashMap=#[payload]" level="INFO" doc:name="Use DW, json input is converted as hashMap"/>
</flow>

Generating Log Entries in Mule Java Component

This is another journal post so I don’t have to re-learn the same thing many times over in the future.

There are different options if you need to use log files to debug a Java component within a Mule application.

The laziest way is to use System.out.println() inside your Java code. If you do that, just remember that the STDOUT from System.out.println() will not show up in your application log file. Instead, if you run the application inside Anypoint Studio, the output will scroll through the “console” window. If you deploy the application on a Mule server, System.out.println() will output to the logs/mule_ee.log file for an on-prem server. I didn’t test CloudHub; I suppose it will go to system.log.

This is a no-frills method to implement, but you have to check two log files: the application log file for the standard Mule logger output, and mule_ee.log for the System.out.println() output from your Java component.

The next choice is to use log4j framework. Suppose your Java class is “Foo”, then this is what you need to do to send log entries to the application log file:

import org.apache.log4j.Logger;

public class Foo {
    private static Logger log = Logger.getLogger(Foo.class);

    void testCall() {
        log.info("this is my logging entry");
    }
}

The advantage is that the output goes to the same application log file, so you don’t have to search two log files. It does take some extra coding.

Mule Flat File and Cobol Copybook Processing

A few months back, I worked on a project that involved flat file handling. I thought it was such an odd thing that people still use flat files in the 21st century. Ironically, I’m now on my 3rd project that involves flat file processing. And it is not just flat files; I am actually dealing with COBOL Copybooks and EBCDIC encoding. I guess it’s only fitting for someone like me, who started learning programming with punch cards, to deal with COBOL…

Anyway, I have learned quite a few things about Mule flat file processing that warrant some deeper discussion.

With Mule 3.8.x (currently 3.8.5), Dataweave (DW) comes with flat file support: https://docs.mulesoft.com/mule-user-guide/v/3.8/dataweave-flat-file-schemas. Part of it is very powerful, but there are also problems when it comes to more complex situations, especially with COBOL Copybooks.

I categorize Mule’s flat file function into three types: 1) the true flat file, 2) the structured flat file, and 3) the COBOL Copybook.

True Flat File

This is the first type of flat file. A “true flat file” is a file with a uniform structure for all rows. It is like a simple relational table with a uniform header that defines the size and type of each data column. “True flat file” processing is relatively simple, especially if you don’t need to deal with special zoned number encoding (I have only seen COBOL Copybooks use zoned numbers; see later).

A sample flat file schema (ffd) can look like:

form: FIXEDWIDTH
name: my-flat-file
values:
- { name: 'Row-id', type: String, length: 2 }
- { name: 'Total', type: Decimal, length: 11 }
- { name: 'Module', type: String, length: 8 }
- { name: 'Cost', type: Decimal, length: 8, format: { implicit: 2 } }
- { name: 'Program-id', type: String, length: 8 }
- { name: 'user-id', type: String, length: 8 }
- { name: 'return-sign', type: String, length: 1 }

It is quite straightforward for DW to handle a “true flat file.” I’ll skip the details here; please refer to the Mule online documentation (a wiring sketch also follows the list below). The only special thing worth mentioning here is the decimal number field:

  • In DW mapping, in order for the preview to show the mapping correctly, you must provide a default value of “0” for all “implicit” number fields. Otherwise, you will get a clueless exception: com.mulesoft.flatfile.lexical.WriteException: incompatible type for supplied value object: java.lang.Integer. However, this exception seems to matter only in the studio preview. Even if you don’t fill in the number fields, it seems to work just fine at run time.
  • For regular decimal numbers, the output will always contain a decimal point “.”.
  • For implicit decimal numbers, the mapping result will fill in the decimal places (2 decimal places in this case), and there is no decimal point “.”.
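
For reference, a minimal sketch of wiring the ffd schema above into a DW transform (the schema path is illustrative and assumes the ffd sits under src/main/resources):

<dw:transform-message doc:name="Flat file to JSON">
    <dw:input-payload mimeType="application/flatfile">
        <dw:reader-property name="schemaPath" value="schemas/my-flat-file.ffd"/>
    </dw:input-payload>
    <dw:set-payload><![CDATA[%dw 1.0
%output application/json
---
payload map ({ module: $.Module, cost: $.Cost })]]></dw:set-payload>
</dw:transform-message>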

Structured Flat file

This is the second type of flat file. Unlike the “true flat file,” the rows in a structured flat file contain different types of records. This type of file is not really flat after all. For example, it may have a sales order on one line followed by multiple sales items for the order, which may be further followed by shipping records.

The Mule online document did a great job providing a structured file example. It also showed how to use reference-id etc.

The only thing I want to point out is the “tag” field. A tag field identifies what type of record a row contains. When flat file data comes in to DW, the file processor needs to differentiate one row type from another. The only thing it can rely on is a “tag” field at the beginning of a line.

For example, tag “101” may indicate the line is an order header (order total, order number, etc.), “202” may indicate a sales item record (merchandise name, quantity, etc.), and “303” may identify a shipping record (with address info).

Please note that when DW generates flat file output, it auto-fills the “tag” value depending on the record type.

COBOL Copybook

This is the third type of flat file I have identified. It is quite complex, so I will break the Copybook processing down into three parts.

I am no Copybook expert. However, I hope the few things I have learned can help anyone who is exposed to Copybook for the first time. Mule Dataweave (DW) Copybook support is somewhat limited at this moment. Current Mule documentation is inaccurate as well.

There are three parts of Copybook processing with DW:

Part I – Generating FFD file from the Copybook file

The Mule DW documentation assumes you already have an FFD file. However, it does not tell you how to use DW to load a Copybook and generate the FFD as step one.

First of all, from a developer’s point of view, a Copybook is a data structure definition for a 32K character space. That’s how and why Copybook is related to flat file. I’m sure there is more to it. But for DW Copybook processing that’s all I care about: all we need is the structure definition of this 32K long space.

DW is very finicky about picking up a Copybook file. The Copybook file I initially used did not include a section “01” (whatever that means), so DW could not process it. I ended up adding something like “01 GM220-REC” at the top of the file just to make DW recognize it.
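
For illustration, a trimmed-down Copybook with that level-01 record added might look like the following (the field names and PIC clauses are hypothetical):

01  GM220-REC.
    05  MODULE-NAME     PIC X(8).
    05  PROGRAM-ID      PIC X(8).
    05  COST            PIC S9(5)V9(2).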

After DW accepts the Copybook file, it will spit out an “FFD” file. You can look at this step as DW translating Copybook format into FFD format. I do not understand why the online Mule document does not mention this step.

Anyway, what is maddening is that DW will not recognize the FFD file it generated itself! That is because, in my case, the generated FFD contains “zoned” number types.

I had to manually tweak the FFD file so DW could recognize the structures. That is where the story starts to get murky. Read on to the next section.

Part II – Tweaking the FFD file

If your generated FFD file contains zoned types, you need to read Part III below. But let me address a simpler FFD tweak first.

The FFD generated by DW contains quite complex structures. Each section of the Copybook is treated as a separate structure. However, if your original Copybook does not contain OCCURS, the Copybook is really just one single long flat record. In that case, you can merely flatten out the complex levels of structures in the generated FFD file: simply take all the “values” rows with the name and type definition for each field, and remove everything else. That way the FFD file becomes a “true flat file,” and your life is a whole lot easier when it comes to parsing and mapping the records in DW.

Keep in mind, if your Copybook has structures that are more complex, you will not be able to flatten the FFD file.

Part III – Zoned numbers

Finally, if your Copybook contains zoned numbers, the situation will become very complex. DW will create the FFD file with zoned type. But the current version of DW cannot read zoned types in FFD! You have to manually change “zoned” to “decimal” in the FFD file in order for DW to recognize it in the studio.

Then, after DW successfully loads the FFD file with the “decimal” type, you need to go back to the FFD file again and change “decimal” back to “zoned”. Yes, you heard me right. It appears to be a bug in DW at this moment. Until the bug is fixed, you have to flip-flop the zoned types in the FFD file.

If you really want to know, here is my limited insight into the bug: DW in the studio does not support the zoned type at design time. However, at runtime, the zoned type is supported. If you get it, that’s cool. If you don’t, never mind; just flip-flop between the zoned and decimal types and let’s move on.

We are far from done here. The “zoned” numbers need to encode the number signs within the last digit of the number value (see http://simotime.com/datazd01.htm).

Here are a few example numbers and their encoded values with EBCDIC and ASCII encoding:

COBOL format   FFD                 Original number   EBCDIC     ASCII
S9(5)          Decimal             123               0012C      00123
                                   -123              0012L      0012s
                                   12345             1234E      12345
                                   -12345            1234N      1234u
S9(5)V9(2)     Zoned, implicit 2   123               001230{    0012300
                                   -123              001230}    001230p
                                   12345             123450E    1234500
                                   -12345            123450N    123450u
                                   14.18             000141H    0001418
                                   -14.18            000141Q    000141x
S9(5)V9(3)     Zoned, implicit 3   14.18             0001418{   00014180
                                   -14.18            0001418}   0001418p

Long story short, as of now, Mule DW only supports EBCDIC encoding. If your client uses ASCII encoding (also called MicroFocus), then you are out of luck.

That’s what happened to me. So, after all the trouble of figuring out the how and what, I was unable to use DW Copybook support. Mule DW may support ASCII encoding in the future. For now, I ended up using a customized Java solution.

What is in a Mule API Version?

“What’s the version of your Mule API?” you might be asked one day.

On the surface, you would think it’s such a trivial question, but if you think again, you would know there is much more to the story.

The truth is, there are a few “versions” of the same Mule API depending on what you’re referring to.

In theory, an API version is the version of the API interface. The interface definition is a contract that declares what the API can and shall do. MuleSoft uses RAML to define the API interface, and a “version” attribute in the RAML file specifies the version of the API.

RAML is the equivalent of WSDL in the SOAP world, or IDL for Java RMI. Once it is published, it should stay as stable as possible unless you absolutely have to change it, because changes may break existing clients. However, in the real world, we do make changes to the interface. If the interface has changed, the version number should be updated.

In an ideal world, the “version” defined in the RAML would be the single version number we ever use to refer to this API interface anywhere. However, the real world is a little bit more complicated.

Let’s take a look at the life journey of an API version and its different incarnations in the Mule world.

When a RAML (which should have a version in it) is used to generate an API project, Anypoint Studio will use the RAML file to create a skeleton flow for the API project. The RAML file is placed under “src/main/api”, and the API main flow will contain an “apiRouter” flow that routes HTTP requests based on the RAML definition.

For the sake of this discussion, assume the RAML has two REST resource paths.

  version: v1
  /foo:
    get:
  /bar:
    get:

You can access the API endpoint with something like http://host/myHelloAPI/foo, http://host/myHelloAPI/bar. We will elaborate more about the “myHelloAPI” portion of the URL. Let’s continue for the time being.

What happens if you manually change the RAML file after you published your API already?

If you change the RAML like below but never bother to change your flow:

/foo:
  get:
/car:
  get:

Your project will compile and deploy fine, but when you call http://host/myHelloAPI/bar, the Mule runtime will balk and bark at you. That’s not what we want.

Let’s do a somewhat more realistic RAML change like below and you also updated your flow accordingly:

version: v2
/foo:
  get:
/bar:
  get:
/car:
  get:

After deploying your API, it all runs fine. You are happy. Your previous clients still access the API via http://hostname/foo and http://hostname/bar. People who know about your API’s change can access http://hostname/car.

From the URL, the previous clients never know the version has changed. They don’t know they can access the new “/car”. That’s not fair!

For the SOAP folks, at least by convention, you can check the WSDL (interface) by calling http://host/myHelloAPI/mysoapservice?wsdl and see which version of the interface you are dealing with. In the REST API world, there is no such thing! Why? First of all, unlike the SOAP world, there is no universal definition language for REST APIs. I hope someone starts a convention like http://host/myHelloAPI/myRestAPI?raml or something like that. The problem is that the Swagger (OpenAPI) folks would have to invent something different; of course, there are even the WADL folks. Without established conventions, the closest thing to checking the interface is to either use the RAML API Console (if available) or use API portals to check the documentation. Well, I digressed.

Let’s come back to the API version. We see that the “version” in the RAML really intends to declare the logical version of the API interface. One way to let clients know which API version they are using is to embed the version in the API URL during the runtime deployment.

For example, to differentiate v1 and v2, we can make two deployments of the API as:

http://host/myFooAPI/v1/foo and http://host/myFooAPI/v2/foo (please note for on-prem deployment, each project may decide whether to include “myFooAPI” as part of the endpoint URL)

If you deploy to CloudHub, you can do http://myFooAPI.cloudhub.io/v1/foo and http://myFooAPI.cloudhub.io/v2/foo

I think you get the point. We just need to insert “v1” or “v2” somewhere in the API endpoint URL so clients know which version they are calling. (In case you are wondering what happens when you have multiple environments, hold on to it, we’ll talk about that below).

Yeah, everything is hunky-dory now. Before you go out to celebrate, you have to be aware of one important implication of inserting “v1” or “v2” in the URL: there is nothing to prevent someone from deploying “v1” of the API and naming it “v2”. All of these conventions have to be backed by a manual or quasi-automatic process and by developer discipline to ensure “what-you-see-is-what-you-get” for the clients.

Let’s sum up what we got so far before we plunge into the other hole of API versions.

  • Each API interface has a version. That is the version declared in the RAML “version” attribute.
  • Developers should stay true to this API version either in the implementation or in the deployment URL.

Now, let’s move on to multiple environments. Once you get a RAML defined in the APIM console, you more than likely need to deploy the API implementation to more than one environment. However, unlike the Mule Runtime Manager, the Mule API Management Console does not have the concept of an environment, such as “DEV”, “QA” or “PROD”. The APIs are “managed” across all environments the same way. Therefore, to differentiate “myHelloAPI/v1/” for the DEV, QA, or PROD environments, developers need to add more “technical versions” for the same logical API version “v1”. In the Mule APIM view, the same API version may look like:

  • myFooAPI-v1 (the logical version v1 declared in RAML)
  • myFooAPI-v1-dev
  • myFooAPI-v1-qa
  • myFooAPI-v1-prod

Please note the version “myFooAPI-v1-dev” is purely for API management purposes; that’s why I call it a “technical version”. For the curious mind, this is the version that appears in the source code:

<api-platform-gw:api apiName="![p['api.name']]" version="![p['api.version']]" apikitRef="proxy-config" flowRef="proxy" doc:name="API Autodiscovery"/>

There is no compiler-level enforcement that this “technical version” stays consistent with the “logical version” in the associated RAML file. Again, developers need to be disciplined to ensure the versions (for example, “v1”) appear consistently in both places!
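
For example, the environment-specific properties backing the tag above might look like this (the key names and values are illustrative; the exact split between name and version is a project convention):

# dev.properties
api.name=myHelloAPI
api.version=myHelloAPI-v1-dev

# qa.properties
api.name=myHelloAPI
api.version=myHelloAPI-v1-qa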

Finally, keep in mind, when deploying the actual API to Cloudhub, no one can stop you from deploying the application to an endpoint like http://myBarAPI-v99.cloudhub.io. All I can recommend is please select a meaningful name like http://myFooAPI-dev.cloudhub.io/v1, so the API users will have a better idea of your true API version.

OK, it is time to lasso in all versions of the Mule API now.

A Mule API version may have three incarnations:

  • The logical API version that is declared in the RAML file with the “version” attribute. This should be the true interface version for the API.
  • The version that appears in an API service endpoint, such as http://host/myFooAPI/v1 or http://myFooAPI-dev.cloudhub.io/v1, should be consistent with the RAML logical version.
  • Each deployment environment has a “technical version”; these are strictly used for APIM purposes and are the same versions that appear in the <api-platform-gw:api> tag.
Here is a summary of the Mule API version table:

    Where        version             <api-platform-gw:api> tag   On-prem endpoint URL          Cloudhub endpoint URL
1   Design Doc   v1
2   RAML         v1
3   DEV          myHelloAPI-v1-dev   myHelloAPI-v1-dev            dev-hostname/myHelloAPI/v1    myHelloAPI-dev.cloudhub.io/v1
4   QA           myHelloAPI-v1-qa    myHelloAPI-v1-qa             qa-hostname/myHelloAPI/v1     myHelloAPI-qa.cloudhub.io/v1
5   PROD         myHelloAPI-v1-prod  myHelloAPI-v1-prod           prod-hostname/myHelloAPI/v1   myHelloAPI.cloudhub.io/v1

For the “version” column:

row #1: the version that appears in your design document. I call this the logical version.

row #2: this is the RAML version; it is the same logical version as in your design doc.

rows #3 – #5: these are what I call the technical versions; you had better name them consistently with the RAML logical version. Unfortunately, no one can prevent you from naming them “myHelloAPI-v99-dev”; just please do not do that!

For the “tag” column:

rows #3 – #5: these are the technical versions in your API source code. They are supposed to be the same as in the “version” column. You can change the “API version” in the API management console; again, please don’t do that!

For the “on-prem” and “cloudhub” columns:

row #3 – #5: they should be named consistently as in the previous columns. If you name them differently, I give up. That’s your prerogative after all. But don’t do that, please!

There can be another variation of the API version. On the API management console, you can actually add or change the versions for each platform to anything you like. Please let’s keep it the same as the technical versions. We have got too many places to mess up!

However, I would be remiss if I didn’t mention the version that appears on the API portal site. Let’s keep that one as the “logical version”. We publish official documents there and label them as “v1” to link to the official site for the logical version of the API.

Here is a screenshot of example Mule API versions.

There you go, that’s my tale on the Mule API versions.

Mule API Exception Handling Patterns

Unlike regular Mule applications, when a new RAML-based Mule API project is generated, the APIKit tool will create a global exception handler. Although this default exception handler covers some basic HTTP 400-level errors, it is only a starting point for a comprehensive error handling strategy. More can be done to enhance the error handling for the API application.

This post will explore two common patterns to enhance the exception handling for an API application.

The Default API Exception Handler

MuleSoft API exception handling is built on top of the general MuleSoft error handling framework.

When a Mule API project is generated, the Mule APIKit creates a global exception handler with a name like “apiKitGlobalExceptionMapping.” This exception handler is referenced by the API main flow, which contains the “APIKit Router.” All API calls are routed through the API main flow and the “APIKit Router.”

The following code snippet shows the structure of the API exception handler:

<apikit:mapping-exception-strategy name="api-apiKitGlobalExceptionMapping">
<apikit:mapping statusCode="404">
<apikit:exception value="org.mule.module.apikit.exception.NotFoundException" />
<set-property propertyName="Content-Type" value="application/json" doc:name="Property"/>
<set-payload value="{ &quot;message&quot;: &quot;Resource not found&quot; }" doc:name="Set Payload"/>
</apikit:mapping>
<apikit:mapping statusCode="405">
<apikit:exception value="org.mule.module.apikit.exception.MethodNotAllowedException" />
..
</apikit:mapping>
<apikit:mapping statusCode="415">
<apikit:exception value="org.mule.module.apikit.exception.UnsupportedMediaTypeException" />
...
</apikit:mapping>
<apikit:mapping statusCode="406">
<apikit:exception value="org.mule.module.apikit.exception.NotAcceptableException" />
...
</apikit:mapping>
<apikit:mapping statusCode="400">
<apikit:exception value="org.mule.module.apikit.exception.BadRequestException" />
...
</apikit:mapping>
</apikit:mapping-exception-strategy>

When an exception is triggered anywhere within the API application, two things may happen: 1) if the exception is already defined (mapped) in the default exception handler, the client will receive the predefined HTTP error code along with the message, for example, “404 / Resource not found”. 2) If the exception is not already defined in the handler, for example if a SQL server connection exception is thrown within the application, the exception will bubble up to the default handler. The default exception handler will respond with error code 500 and the #[exception.message].


This default exception handler “apiKitGlobalExceptionMapping” is a good foundation, but it has some shortcomings:

  • Since any exceptions not defined in the pre-specified exception list are lumped together as HTTP 500 error, it does not provide enough information to the client. There is no customized response per each particular exception.
  • On the other hand, any undefined exception will return #[exception.message] as payload, which may contain too much information. For example, in the case of SQL exception, the error message may contain JDBC URL connection parameter, which may include host, port, instance id, username etc. The error message may even include the actual SQL statement. This can be a security concern.

Besides setting the HTTP code and payload, the default handler does not do anything else. There is not even a logging entry by default. This can make troubleshooting the problem more difficult.

The remainder of this post will show how we can create a more comprehensive API error handling strategy on top of the default handler.

Pattern 1 – Extend the Default Exception Handler

The first pattern extends the default handler.

There are several enhancements that can be added to extend the default handler:

  • Add more exception cases. For example, developers can add

java.sql.SQLException => 4xx (or use a 5xx code)

The code snippet may look like:

<apikit:mapping statusCode="4xx">
    <apikit:exception value="java.sql.SQLException"/>
    <!-- logging code here; set the content type, payload, etc. accordingly -->
</apikit:mapping>

The 4xx or 5xx code should come from either a well-defined API design document or the API RAML.

  • Add logging entries to each exception case. At a minimum, the log entries will be helpful when developers need to troubleshoot the exception. Additionally, the logging entries allow developers to add certain keywords for each project. These keywords can be used by other data monitoring or data mining tools, such as Splunk.
  • Add more exception handling actions per each project’s requirements, such as alert emails, retry logic, etc.
  • Add a catch-all entry using java.lang.Exception.

By doing this, developers can set a different HTTP status, such as 599. Developers can also control whether to expose #[exception.message] to the API client. The following code snippet shows an example of catch-all exception handling:

<apikit:mapping statusCode="599">
    <apikit:exception value="java.lang.Exception"/>
    <!-- add logging entries, set the content type and payload, and take other actions per each project's requirements -->
</apikit:mapping>

In this case, the “599” code differentiates errors that come from the API application from errors that come from the HTTP server stack, since the HTTP 500 status code is designed for generic unknown errors from the HTTP server.

Advantages of Pattern 1:

  • The main advantage of extending the default handler is simplicity. Developers can simply edit the default API exception handler, adding more exception entries per project. The exception handling code is in one place, which can be an advantage for a project with few exception cases.

Disadvantages of Pattern 1:

  • When there are many exception cases, they are all crammed into one place. The code can become unwieldy and hard to read.
  • The exception handler and the location where the exceptions actually happen may not be in the same file. This may require the reader to jump from place to place to understand the flow and the exception handling logic.
  • Because the exception handler is global, it is hard to create localized and more elaborate exception handling. For example, when adding the exception entry for java.sql.SQLException at the global level, it is difficult to distinguish a SQL read exception from a SQL connection exception.

Pattern 2 – Add Custom Exception Handlers

Custom exception handler(s) can be added by using the standard Mule exception handling strategies. The best practice is to define a global exception handling strategy and let flows reference it. The custom exception strategy references should be added to the flows that are directly called by the “apiRouter”. These flows are generated by APIKit, such as “get:/person:api-config”. They are the “resource path flows”, because each flow represents a REST resource path as defined in the API RAML. By following this pattern, the HTTP status code and exception message will be picked up by the API main flow and returned to the client, as shown in the sketch below.
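
A minimal sketch of this pattern (the strategy name, flow names, and payload here are illustrative, not generated):

<catch-exception-strategy name="globalApiExceptionStrategy">
    <logger level="ERROR" message="#['API error: ' + exception.message]" />
    <set-property propertyName="http.status" value="500" />
    <set-property propertyName="Content-Type" value="application/json" />
    <set-payload value='{ "message": "An internal error occurred" }' />
</catch-exception-strategy>

<flow name="get:/person:api-config">
    <flow-ref name="findPersonFlow" />
    <exception-strategy ref="globalApiExceptionStrategy" />
</flow>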


When working with Mule API custom exception handling, developers need to be aware of two facts:

  1. Because this is an API project, the exception handler needs to set the HTTP error status and content type in addition to setting the error message.
  2. A Mule API project has a single API main flow, which contains the “apiRouter”. This API main flow acts like a Java main(). The main flow has an associated default exception handler, generated by APIKit when the API project was first created. All API responses are returned by this main flow or its default exception handler. This unique application structure affects the custom exception handling, as the sketch below shows.
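
The generated API main flow typically looks similar to this (the configuration names are illustrative):

<flow name="api-main">
    <http:listener config-ref="api-httpListenerConfig" path="/api/*" />
    <apikit:router config-ref="api-config" />
    <exception-strategy ref="apiKitGlobalExceptionMapping" />
</flow>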

If a custom exception handler handles an exception in a flow, the exception is considered “consumed”. Consumed exceptions are no longer visible to the calling flow. Therefore, after the custom exception handler sets the HTTP code and message, these values will be returned by the API main flow to the API client.

A custom exception handler can also re-throw an exception. When that happens, the default exception handler in the main flow will catch the exception and handle it accordingly.

If additional exception handling is needed below the “resource path flow” level, these sub-level exception handlers should not be used to set the HTTP status and the final response message, because those values can potentially be reset by intermediate flows in the invocation chain.

Finally, even with custom exception handler(s), the default global exception handler should still add the “catch-all” exception case for “java.lang.Exception”. Please reference the previous section for details.

Advantages of Pattern 2:

  • Custom exception handler(s) allow developers to modularize the exception handling instead of relying on a single global exception handler. This can make the code easier to understand and maintain.
  • They allow more elaborate exception handling actions. For example, JDBC operations can fail with DB connection errors or DB read/write errors, and each of these may throw java.sql.SQLException. A custom exception handler can try to differentiate each type of operation and add details to the exception message indicating whether the failure was a DB read, a DB write, or a DB connection. The exception handler can further set a different HTTP status code for each exception; the status code should be based on a well-thought-out API design document or the API RAML of the project. See the sketch following this list.
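
A minimal sketch of such differentiation using a choice exception strategy (the status codes and messages are illustrative and should come from the project's API design):

<choice-exception-strategy name="dbExceptionStrategy">
    <catch-exception-strategy when="#[exception.causedBy(java.net.ConnectException)]">
        <logger level="ERROR" message="#['DB connection error: ' + exception.message]" />
        <set-property propertyName="http.status" value="503" />
        <set-payload value='{ "message": "Database temporarily unavailable" }' />
    </catch-exception-strategy>
    <catch-exception-strategy when="#[exception.causedBy(java.sql.SQLException)]">
        <logger level="ERROR" message="#['DB read/write error: ' + exception.message]" />
        <set-property propertyName="http.status" value="500" />
        <set-payload value='{ "message": "Database operation failed" }' />
    </catch-exception-strategy>
</choice-exception-strategy>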

Disadvantages of Pattern 2:

  • If not used properly, it can result in too many exception handlers and cause confusion.
  • It requires a deeper understanding of how the exception handling chain works. Developers need to be disciplined about adding custom exception handler(s) at the proper level.

Going Beyond the Basics

To enhance API exception handling, a developer can go beyond the two exception handling patterns discussed in the previous sections.

Technically, developers can add any number of exception handlers at any level of the flows. However, care must be taken so that the HTTP status and error message are propagated to the API main flow and returned to the calling client as intended.

Aside from the need to set the HTTP status code, API exception handling is just like standard Mule exception handling. A custom exception handler can take any action a standard exception handler can. In theory, when an exception happens, an exception handler may take any of these potential actions:

  1. do nothing (ignore, “swallow”)
  2. logging
  3. retry – a direct technical retry (add a delay, then retry) or a business retry (which may require manually repairing the data, then retrying)
  4. alert (send a message to a messaging system, such as email, JMS, etc.)
  5. abort (terminate)
  6. re-throw (bubble up)
  7. rollback
  8. delegate (which can do any of the above #1 – 7, and more)

For example, #1 can be justified when an application makes a “best effort” to invoke a web service for notification purposes. If calling the web service causes an exception, the application can simply ignore the error and move on.
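
As another example, #8 (delegate) is often implemented by referencing a shared error-handling flow. A minimal sketch, assuming the exception context is still available in the referenced sub-flow (the flow names are illustrative):

<catch-exception-strategy name="delegatingExceptionStrategy">
    <!-- delegate all handling to a shared error flow -->
    <flow-ref name="commonErrorHandlerFlow" />
</catch-exception-strategy>

<sub-flow name="commonErrorHandlerFlow">
    <logger level="ERROR" message="#[exception.message]" />
    <set-property propertyName="http.status" value="500" />
    <set-payload value='{ "message": "An error occurred" }' />
</sub-flow>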

These additional actions go beyond the basic exception handling patterns; developers will need to follow the design requirements and add the proper actions for each project.

]]>
https://blogs.perficient.com/2017/08/03/mule-api-exception-handling-patterns/feed/ 0 196403
Mule 4 and Studio 7 Beta Release: What’s New? https://blogs.perficient.com/2017/07/21/mule-4-and-studio-7-beta-release-whats-new/ https://blogs.perficient.com/2017/07/21/mule-4-and-studio-7-beta-release-whats-new/#respond Fri, 21 Jul 2017 16:19:16 +0000 http://blogs.perficient.com/integrate/?p=4131

MuleSoft just announced the beta release of Mule 4 and Studio 7. If you have worked with any Mule products over the past few years, you will come to appreciate many of the new features in this beta release. To communicate all of the new features, MuleSoft is presenting a series of webinars. The first webinar covered general features, the Runtime, and Studio. I’m already impressed with what they have shown. I can’t wait to see what is announced for the new features on API Manager and CloudHub!

Here is my take on some of the new features:

  • GUI / Palette – I am OK with the new GUI icons. They come with blue backgrounds now. No biggie for me. Some new palette features are good, such as a “Favorites” tab containing the commonly used icons, and right-clicking an icon in the flow lets you view the corresponding XML source. I am sure I will come to appreciate these handy functions more as I get my hands on the new Studio.
  • Major exception handling update – If you worked with Mule in the past, you know Mule exception handling is almost the same as the underlying Java. However, the older exception handling left a few things to be desired. I guess they listened to the developer community and responded accordingly. For example, in the past you had to use Groovy to throw an exception; it has always been a sore point for me. Also, exception handling could previously only be applied to flows and a few other scopes. Mule 4 has stepped up “bigly:” you can add a try-catch inside a block of your flow, and you can propagate an exception up the chain (see the sketch after this list). These behaviors more closely reflect the underlying Java exception handling most developers have become accustomed to.
  • Major update on data stream handling – It’s all behind the scenes now. I absolutely love this. In the past, when data streams were involved, there were multiple steps of data conversion, hence those endless “transformers”: JSON to string, string to XML, object to string… Sometimes it was mind-boggling. I often just told my clients to use trial and error to see which conversion fit which situation. Now those transformations are automatically taken care of behind the scenes. I am eager to get my hands on it to see how it works.
  • Online design center – I think it used to be called Mozart a couple of years ago. Now it is being officially released. It is a cool online flow editor, but it’s not something for hardcore techies. It offers a subset of the Studio’s functions online.
  • Connectors can set their output to a variable directly – this saves the extra step of using a message enricher.
  • MEL – Another fundamental change is the expression language: MEL is replaced by DataWeave syntax. Any straddlers who are not totally committed to DataWeave, it’s wake-up time. DataWeave is central to Mule from here on.
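
To illustrate the exception handling point above, here is a rough sketch of the new try scope based on the beta (the flow and component names are my own, and the beta syntax may still change before the final release):

<flow name="orderFlow">
    <try>
        <flow-ref name="notifyBackendService" />
        <error-handler>
            <on-error-continue>
                <logger level="WARN" message="Notification failed; continuing" />
            </on-error-continue>
        </error-handler>
    </try>
    <logger level="INFO" message="Order processing continues" />
</flow>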

There are other miscellaneous features. Some are even “invisible;” that’s because Mule is trying to make everything simple. Some things have become so transparent that you don’t even “see” them anymore. For example, Mule 4 has self-tuning that takes care of the worker-thread tweaking you had to do manually in the past.

Two more webinars are coming up. I’m especially looking forward to seeing the new API Manager features and CloudHub functions.

]]>
https://blogs.perficient.com/2017/07/21/mule-4-and-studio-7-beta-release-whats-new/feed/ 0 196406