Perficient IBM blog


Integrating IBM Integration Bus with DataPower XC10 – (REST APIs)

Introduction

In this article we will discuss how you can integrate IBM Integration Bus (IIB) with WebSphere DataPower XC10 (XC10) using the REST API functionality of XC10 devices. Since the release of V9, IIB can connect to an external data grid cache through a WebSphere eXtreme Scale client to improve the performance of SOA services. The good thing about integrating via the XC10 REST APIs, however, is that it also works with earlier versions of WebSphere Message Broker (WMB), such as 7 and 8.

For a complete understanding of Global Cache and External Cache on IIB, please consider the following link:
http://www.ibm.com/developerworks/websphere/library/techarticles/1212_hart/1212_hart.html

If you’re looking for details on how to achieve this integration using the Java APIs MbGlobalMap, please visit:
http://www.ibm.com/developerworks/websphere/library/techarticles/1406_gupta/1406_gupta.html

Also, a detailed explanation of the Side Cache pattern in a SOA architecture is beyond the scope of this article. To learn more about the pattern, please consider the following links:

The requester side caching pattern specification, Part 1: Overview of the requester side caching pattern
http://www.ibm.com/developerworks/webservices/library/ws-rscp1/

Cache mediation pattern specification: an overview
http://www.ibm.com/developerworks/library/ws-soa-cachemed/


Assumptions

Before going ahead, let's explain some of the terms that you will encounter throughout the rest of the article:

Cache Hit

IIB will first query the XC10 cache when it receives a request for a cached service. If the requested data is found in the cache (HTTP return code 200 OK from the XC10 API), the request can be fulfilled by the cache, avoiding a backend call.

The flow stops right there by returning the data to the client; no further processing is needed, and the most expensive task, calling the backend, is avoided.

Cache Miss

Conversely, if IIB doesn't find the requested data in the cache (HTTP return code 404 from the XC10 API), it will need to invoke the backend. After the data is retrieved from the backend, it is inserted into the cache to speed up subsequent calls.

Bottom line: a Cache Hit is faster than a Cache Miss. The best performance is achieved when the majority of requests can be served by the cache. That is the goal: to maximize end-to-end performance through the side cache pattern. And it is done transparently to the client, which assumes it is still hitting the backend.
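The hit/miss logic above can be sketched in a few lines of Python. Note that the dictionary and the backend function are stand-ins for the XC10 grid and the SOAP service, not the actual APIs:

```python
# Minimal sketch of the side cache pattern described above.
cache = {}  # plays the role of the XC10 data grid

def call_backend(key):
    # Stand-in for the expensive SOAP backend call.
    return f"<customer id='{key}'/>"

def get_customer(key):
    # Cache hit: XC10 returned the data (HTTP 200), skip the backend.
    if key in cache:
        return cache[key], "hit"
    # Cache miss: XC10 returned 404, call the backend and populate the cache.
    response = call_backend(key)
    cache[key] = response  # speeds up subsequent calls for the same key
    return response, "miss"
```

The first call for a given key is a miss and pays the backend cost; every later call for the same key is a hit served from the cache.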


 

Scenario

For the scope of this article, we will cache a SOAP Web Service (WS) response "mocked" in SoapUI. To better illustrate the scenario, we add a delay to the SoapUI mock responses.

The same caching concept applies to database queries or other types of backends. The most important part is a good understanding of what type of data can be cached (static vs. dynamic data), which will depend on your environment, architecture and service usage.

Proposed Architecture

The diagram below illustrates the proposed architecture to achieve the Side Cache pattern for a SOAP WS using IIB and the XC10 REST APIs:

IIBXC10Architecture

 


 

IBM Integration Bus Architecture

Security

Before introducing the message flows and sub-flows that were used to achieve the integration, we will cover two important steps related to XC10 security on IIB.
By default, the XC10 REST APIs require HTTP Basic Authentication, so you will need to perform the following configurations on IIB:

Security profile for HTTP Basic Auth

Using the MQSI command console, issue the commands below to register the user credentials and to create a security profile:

$ mqsisetdbparms <BROKERNAME> -n <securityIdName> -u <user> -p <pass>

$ mqsicreateconfigurableservice <BROKERNAME> -c SecurityProfiles -o <securityProfileName> -n "propagation,idToPropagateToTransport,transportPropagationConfig" -v "TRUE,STATIC ID,<securityIdName>"

Attaching the securityProfile to the BAR file

Once the security profile is created, we need to attach it to the BAR file. At this point you may not have built your BAR file yet, but since it's a pretty straightforward task, we will cover it now.
Open the BAR file and, on the Manage tab, expand your application and select the message flow as shown in the picture. In the Configure properties pane that appears below, scroll down and set the security profile you just created. IIB will now add the Basic Auth header to the HTTP requests used in this flow.

IIBXC10Architecture_2

 

Overall MsgFlow

This is an overview of the complete message flow implementing the side cache pattern.

IIBXC10Architecture_3

A brief description of the flow:

The SOAP Input node exposes a Web Service interface. When it is called, the SF_CacheQuery sub-flow first checks whether the requested data is cached by hitting the XC10 API (HTTP GET method). If that succeeds, the response is returned to the client immediately and no further processing is done. Otherwise, the Invoke_BE_getCustomer node calls the SOAP Web Service. A Flow Order node first returns the response to the client and, after that, the SF_CacheInsert sub-flow inserts the response data into the XC10 cache grid (HTTP POST method).

Note that neither error handling nor retry logic for the XC10 API and SOAP backend calls was implemented. You will certainly need to add these and adapt the flow to your own needs.

Cache Query SubFlow

As described above, this sub-flow queries the XC10 cache grid by sending an HTTP GET request to the REST API, using one field of the incoming request as the key identifier.
You should consider using a short timeout on every call to XC10, because we don't want to add processing overhead for the client in case of connection problems or if XC10 is unavailable. For example, in this PoC I used a timeout of 2 seconds.
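The reason for the short timeout is that a cache outage should degrade into a cache miss rather than an error for the client. A minimal sketch of that "fail open" behavior, with hypothetical helper names standing in for the real HTTP calls:

```python
# Sketch of "fail open" cache behavior: if the XC10 call times out or fails,
# treat it as a cache miss instead of failing the client request.
# query_cache and call_backend are illustrative stand-ins, not real APIs.

CACHE_TIMEOUT_SECONDS = 2  # keep this small, as discussed above

def query_cache(key, timeout):
    # Stand-in for the HTTP GET to the XC10 REST API.
    raise TimeoutError("XC10 not reachable")  # simulate an outage here

def call_backend(key):
    return f"<customer id='{key}'/>"

def get_with_cache(key):
    try:
        return query_cache(key, timeout=CACHE_TIMEOUT_SECONDS)
    except (TimeoutError, ConnectionError):
        # Cache problems must not break the service: fall back to the backend.
        return call_backend(key)
```

With the cache simulated as down, the request still succeeds; the client only pays the small timeout penalty before the backend is called.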

IIBXC10Architecture_4

Set CacheQuery Params Compute Node – ESQL

The ESQL below contains the code used for the cache query. First we save the incoming request into an environment variable for later use.
Then InputKEY is referenced from the incoming SOAP body. This is the key that will be used to query/insert the data in XC10. As the name suggests, you need to use something that uniquely distinguishes one request from another. If needed, you can even concatenate two or more fields and use that as your key.
We are then ready to query the XC10 cache by overriding the HTTP method to GET and specifying the RequestURL in the XC10 REST API notation.
A successful lookup is acknowledged by XC10 with an HTTP 200 along with the previously cached data.

-- Storing incoming request in case of CacheMiss
SET Environment.Variable.InputMessage = InputRoot.SOAP;

-- Getting ID from request, which is the KEY for the XC10 cache
DECLARE InputKEY REFERENCE TO InputRoot.SOAP.Body.ns:getCustomerRequest.ID;

-- GET - Query Cache
SET OutputLocalEnvironment.Destination.HTTP.RequestLine.Method = 'GET';
-- XC10 URL for Query Cache
SET OutputLocalEnvironment.Destination.HTTP.RequestURL = 'http://192.168.122.1:7000/resources/datacaches/' || CACHENAME || '/' || CACHENAME || '/' || CAST(InputKEY AS CHARACTER);

Note: since we are overwriting the HTTPRequest node's properties from a previous node, make sure the Compute node property Compute mode is set to LocalEnvironment and Message:

IIBXC10Architecture_LocalEnv

 

Cache Insert SubFlow

After a Cache Miss, this sub-flow inserts the response data into the XC10 cache grid by sending an HTTP POST request to the REST API, using one field of the incoming request as the key identifier. The data to be cached is mandatory and is sent as the request payload.

As a reminder: you should consider using a short timeout on every call to XC10, because we don't want to add processing overhead for the client in case of connection problems or if XC10 is unavailable. For example, in this PoC I used a timeout of 2 seconds.

IIBXC10Architecture_5

Set CacheInsert Params Compute Node – ESQL

The ESQL below contains the code used for the cache insert. The InputKEY is referenced from the original request, which was saved in the environment tree by the query sub-flow. This is the same key that was used to query XC10; remember that it must uniquely distinguish one request from another.
We then insert into the XC10 cache by overriding the HTTP method to POST and specifying the RequestURL in the XC10 REST API notation; the data to be cached travels as the request body.
A successful insert is acknowledged by XC10 with an HTTP 200.

-- Getting ID from request, which is the KEY for the XC10 cache
DECLARE InputKEY REFERENCE TO Environment.Variable.InputMessage.Body.v1:getCustomerRequest.ID;

-- POST - Insert Cache
SET OutputLocalEnvironment.Destination.HTTP.RequestLine.Method = 'POST';
-- XC10 URL for Insert Cache
SET OutputLocalEnvironment.Destination.HTTP.RequestURL = 'http://192.168.122.1:7000/resources/datacaches/' || CACHENAME || '/' || CACHENAME || '/' || CAST(InputKEY AS CHARACTER);

Note: since we are overwriting the HTTPRequest node's properties from a previous node, make sure the Compute node property Compute mode is set to LocalEnvironment and Message:

IIBXC10Architecture_LocalEnv

 


 

XC10 REST APIs

It is beyond the scope of this article to explain everything that can be achieved with the XC10 REST APIs.
For a complete reference of the available functions, please visit:
http://www-01.ibm.com/support/knowledgecenter/SSS8GR_2.5.0/com.ibm.websphere.datapower.xc.doc/tdevrest.html

POST

You can use any HTTP client to interact with the XC10 REST APIs. HTTPRequest nodes are used on IIB, but you can also use cURL or SoapUI for testing purposes. A sample SoapUI project is included with the files available for download, and sample cURL commands are available in the Appendix.

Insert Data into the XC10 for test

HTTP method: POST
HTTP header: 'Content-type: text/xml;charset=UTF-8'
URI: /resources/datacaches/<cachename>/<cachename>/<key>

POST data:
'<xml>Sample Data</xml>'

Response:
200

GET


Retrieve (GET) Cache Data from the XC10
HTTP method: GET
URI: /resources/datacaches/<cachename>/<cachename>/<key>

Returns Data cached previously:
'<xml>Sample Data</xml>'

If the key doesn't exist, an HTTP error is returned:
404
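To see the POST/GET contract end to end without an appliance at hand, the following self-contained sketch stands up a tiny in-process HTTP server that mimics the XC10 REST behavior described above. The URI layout matches /resources/datacaches/&lt;cachename&gt;/&lt;cachename&gt;/&lt;key&gt;; the port and payload are illustrative, and Basic Auth is omitted for brevity:

```python
import threading
import urllib.request
import urllib.error
from http.server import BaseHTTPRequestHandler, HTTPServer

store = {}  # plays the role of the data grid

class FakeXC10(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        store[self.path] = body           # insert under the key in the URI
        self.send_response(200)
        self.end_headers()

    def do_GET(self):
        if self.path in store:            # hit -> 200 plus the cached data
            self.send_response(200)
            self.end_headers()
            self.wfile.write(store[self.path])
        else:                             # miss -> 404
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):         # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), FakeXC10)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}/resources/datacaches/IIB_POC/IIB_POC"

# POST (insert) then GET (query) the same key.
req = urllib.request.Request(f"{base}/1020", data=b"<xml>Sample Data</xml>",
                             headers={"Content-type": "text/xml;charset=UTF-8"},
                             method="POST")
post_status = urllib.request.urlopen(req).status
cached = urllib.request.urlopen(f"{base}/1020").read()

# GET on a missing key returns 404.
try:
    urllib.request.urlopen(f"{base}/9999")
    miss_status = 200
except urllib.error.HTTPError as e:
    miss_status = e.code
server.shutdown()
```

The same three exchanges (200 on insert, 200 plus data on a hit, 404 on a miss) are exactly what the IIB sub-flows above branch on.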

Monitoring

XC10 offers native monitoring for each data grid. In the GUI, navigate to Monitor -> Individual Data Grid Overview and click on your grid. The monitoring and performance metrics below will appear.

IIBXC10Architecture_6

 


 

Conclusion

In this article we covered how easy it is to implement the Side Cache pattern in a SOA architecture using IBM Integration Bus and WebSphere DataPower XC10 to speed up performance. This approach works with a variety of backends, not only SOAP Web Services. The XC10 REST API is a solid interface and provides all the functions needed to make the integration straightforward. In our scenario we solved a "slow SOAP Web Service backend" problem by caching the response data in the XC10 data grid.


 

Appendix

Insert (POST) Cache Data example using command utility “cURL”

curl -u <user>:<pass> -H 'Content-type: text/xml;charset=UTF-8' -X POST -d '<xml>Sample Data</xml>' http://<xc10hostname>/resources/datacaches/IIB_POC/IIB_POC/1020

Retrieve (GET) Cache Data example using command utility “cURL”

curl -u <user>:<pass> -X GET http://<xc10hostname>/resources/datacaches/IIB_POC/IIB_POC/1020

Consider using SoapUI for a friendlier GUI instead of cURL. There's a sample SoapUI project included with the files available for download.

 

Troubleshooting Tools

Consider using a troubleshooting tool such as NetTool, a web debugging proxy that can sit between IIB and XC10.
Download at: http://sourceforge.net/projects/nettool/files/

 

DOWNLOAD IBM INTEGRATION BUS PROJECT INTERCHANGE SAMPLE CODE
IIB_XC10

DOWNLOAD SOAPUI PROJECTS
IIBXC10_SoapUI

Anatomy of a Complex Fraud Scheme

With tomorrow's Counter Fraud webinar fast approaching, I came across an interesting infographic that puts this growing phenomenon into perspective.

The complexity of fraud evolves as people embrace the conveniences of an interconnected world. From the office, to the cloud, and even on private mobile devices, anyone is a target.

Cyber-crime relies on sophistication for success. International teams of hackers and cashers manipulate systems to steal credit information, drain balances, and hide the evidence before anyone notices. This graphic is a great depiction of the complexity we face today:

anatomy of CF 1

anatomy of CF 2

So what’s the moral to the infographic?


Apply an OOAD approach to the TOGAF ADM

Describing a rich EA framework and process using a standard object oriented development approach.


Recently I had an interesting conversation with a solution architect about the TOGAF framework and the ADM. I was asked the following question: "Can you provide a short but reasonable description for tailoring the framework while instantiating the ADM as a practical implementation?" This was certainly an interesting challenge given the scope and depth of the TOGAF ADM.

So I took up the task with the concept of "dogfooding": why not apply the tools and techniques described in the ADM and other methods as an approach? I started with an enterprise architecture modeling tool that supports TOGAF modeling elements and UML notation, and then applied object-oriented analysis techniques to flesh out a set of contextual models.

My objective: provide a concise and practical example of the ADM using a model-based approach for developing architectural capabilities, which is the same method used for developing a solution architecture that delivers a business capability.

TOGAF, for all practical purposes, may be thought of as a toolbox of artifacts, with the underlying ADM as the process by which those artifacts are prescribed and applied. In general this is expressed by the use of the TOGAF crop circle diagram, artifacts excluded.

While the crop circles offer a simplified, standard and popular expression of the ADM, it is a bit difficult to translate that viewpoint into a description for a practical implementation, especially for someone new to implementing the ADM. The depth and richness of the ADM add another challenge: how to provide simplified contextual viewpoints which correlate to TOGAF artifacts as deliverables.

In offering one solution to these challenges I began by creating a couple of architectural context models by:

  • Creating a work package viewpoint of high-level requirements.
  • Creating a work package viewpoint of domains used in the ADM, which are abstracted as static components.

By abstracting domains into components, I’m then able to create architectural definitions for each domain, which I will then tailor for this particular implementation. I also added component interfaces, which will provide the means for describing the interaction between the domains in the execution of the ADM.

For each domain component, an architectural definition would include:

  • The role(s) that are responsible for implementing the domain components capability
  • The interface definition in terms of the required artifact data.
  • The responsibility of the domain component which is mapped to the ADM.
  • A mapping of the architectural capability being delivered by the domain component.

By providing these static viewpoints I now have the ability to scope the work that will be required for developing new architectural capabilities. What makes this a value-added exercise is the process of moving from the descriptive aspects of the framework to a prescriptive application for implementing the artifacts that will support the ADM.

This approach also has some ancillary benefits.

  • Establishing a basic model driven capability to the architectural practice.
  • Establishing the use of industry standard modelling notations.
  • The building of an enterprise repository of architecture assets.
  • Sets the foundation for building upon a concrete implementation of a tailored ADM.

ADM Component Context View

My next, but not final, step is to create an artifact dependency viewpoint. I will use this model to reason about the artifacts as deliverables in terms of component interfaces. The essential attribute of this model is the design-by-contract approach. The intent is to use the role that is encapsulated by the domain component as the implementation for the interface. This is a bottom-up design which enables an agile approach to tailoring what is "just good enough" for each deliverable relative to the domain's interface. The implementation can then be determined by the capability of the role and the scope of the work for a sprint. In performing this exercise I now realize the architectural component definitions from the previous activity in terms of roles and responsibilities, which then delivers several outputs for the ADM:

  • Identify the key or value-added deliverables for each ADM domain.
  • Provide input to a RACI matrix for the underlying ADM process.
  • Identify the skill level that will be needed for the implementation.
  • Tailor the scope of deliverables and the process.
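As a rough illustration of the design-by-contract idea above (the class and method names are invented for the example; they are not TOGAF terms): the domain component exposes an interface, and the role encapsulated by that component provides the implementation.

```python
# Illustrative only: a domain component as a contract (abstract interface),
# with a role as the concrete implementation that delivers its artifacts.
from abc import ABC, abstractmethod

class ArchitectureDomain(ABC):
    """Contract for a domain component in a tailored ADM."""

    @abstractmethod
    def deliver_artifacts(self, inputs):
        """Consume required artifact data, return the domain's deliverables."""

class BusinessArchitect(ArchitectureDomain):
    # The role encapsulated by the domain component implements the interface.
    def deliver_artifacts(self, inputs):
        return {"capability_map": f"derived from {inputs}"}

role = BusinessArchitect()
deliverable = role.deliver_artifacts("stakeholder requirements")
```

The contract stays stable while the implementation can be tailored per sprint to what is "just good enough" for the deliverable.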

Deliverable Dependency View

Conclusion

Using some relatively straightforward object-oriented analysis techniques, I believe one can reasonably create a concise description of a complex, rich and mature framework and process such as the TOGAF ADM. And in this proposed solution I am "dogfooding" the same approach for building toward a solution architecture that delivers a business capability.

It also seems to me that this sort of exercise can yield significant value at several other levels of architectural development. For example, by decomposing the high-level requirements into user stories and adding a process model, a current architectural practice can be augmented or even assessed for improvement, which demonstrates one way to integrate an EA capability with an agile project management methodology.

Cyber Intrusions are Rapidly Reaching a Tipping Point. Have You?

This is the first year I have felt more than a little uncomfortable shopping online and even handing over my credit card while in my favorite stores. I am one of the millions that have been affected by one of the many cyber-attacks that have occurred this year. It is very clear that cyber criminals are more organized and better equipped than ever before—and they continue to evolve their strategies in order to undermine even the strongest protections. You cannot turn on the TV and not hear about another breach of security somewhere in the world. Here are some startling statistics:

• 12 cyber-crime victims per second
• 1,400 is the average number of attacks on a single organization over the course of a week
• The average cyber threat goes undetected for 8 months
• The average cyber-attack in the US can cost an organization $11 million USD
• A security breach can cost an organization millions of dollars, not to mention the effect on that organization's reputation
• 71% of customers will switch banks due to fraud
• 46% of customers are leaving/avoiding companies with a security breach

The level of angst circulating in business and government circles caused by huge financial losses from cyber intrusions suggests we are rapidly reaching a tipping point. Have you reached yours? Do you feel like you have a complete picture of the cyber threats to your organization? How effective have you been in determining your infrastructure weaknesses?


Topic Publish – Subscribe Using IBM Integration Designer 8.5

Introduction:

Business Process Execution Language (BPEL) is an XML-based language used to define enterprise business processes within Web services. BPEL extends the Web services interaction model and enables it to support business transactions.

BPEL can be developed to perform multiple activities, such as invoking a web service, publishing a message to a topic, subscribing to messages from a topic, posting a message to a queue and consuming messages from a queue. Below are the steps to publish messages to a topic and consume messages from a topic.
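Before getting into the configuration steps, the topic semantics can be illustrated with a minimal, library-free sketch. The key point it shows is that, unlike a queue (where each message goes to one consumer), a topic delivers a copy of each message to every subscriber:

```python
# Minimal sketch of topic publish/subscribe semantics. This is illustrative
# only; in IBM Integration Designer this is wired through the WebSphere
# Service Integration Bus, not coded by hand.

class Topic:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        # Every subscriber receives a copy of each published message.
        for callback in self.subscribers:
            callback(message)

received_a, received_b = [], []
topic = Topic()
topic.subscribe(received_a.append)
topic.subscribe(received_b.append)
topic.publish("order created")
```

Both subscribers end up with the same message, which is the behavior the Topic Space configured below provides to the BPEL process.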

WebSphere Application Server Configurations:

Create Topic Space

  • Click on “Buses” under “Service Integration” section in left panel. Then click on “BPM.ProcessServer.Bus”

01_IIDPubSub


5 Tips to Adopting BPM Methodology

BPM Methodology and Principles:

The BPM Methodology is an iterative framework used to effectively analyze and re-design a business process with the goal of constant process improvement. The methodology’s key objective is to foster communication between business and IT in order to establish an optimal business process. Here are 5 tips to ensure that your business partners properly adopt the BPM methodology to improve their business operations.

1. Sell! Sell! Sell!

Every opportunity you get with your business partners will be an opportunity to sell the benefits of adopting the BPM methodology. A client new to the principles of the BPM methodology might find the ideas foreign and might possibly show some initial resistance. It is important to educate and demonstrate how accepting these principles will positively impact not only business/technology operations, but also the corporate culture. Embracing a philosophy of change enables your business partners to avoid common pitfalls that lead to failed BPM projects and ultimately poor BPM adoption.

Each checkpoint during the project lifecycle should address specific business problems. You should demonstrate how the methodology’s effectiveness enabled a resolution. It is essential to illustrate to your business partners how the BPM methodology has and will continue to strengthen corporate initiatives such as product quality, customer satisfaction, and communication between business and IT.

2. Find a Champion.

The subject of change is always a tricky one. As mentioned earlier, you might receive some resistance to adopting this methodology. You will find it easier to get buy-in from your business partners if someone within the corporate structure supports you. It is essential to have a counterpart who can also promote the benefits of the methodology. Your champion does not necessarily have to be someone at the top of the food chain in the corporate structure, but it should be someone who has some influence with the other project participants.

3. Avoid Old Habits.

You will likely run into issues and roadblocks during your implementation process. When these situations arise, it is critical to continually use the principles of the BPM methodology to reach a resolution. Your business partners might find it prudent to use other techniques used in past projects to overcome these roadblocks. Stay the course!

4. Use the Right Tools.

To ensure the successful implementation of your BPM project, it is vital to use the right tools to support it. From a project management standpoint, the iterative approach of the BPM methodology will not be compatible with the tools used for a waterfall project. Using tools that can handle an iterative project cycle will help clients better understand deliverables and will put their expectations in perspective.

5. Be Patient.

During the process, you will undoubtedly be challenged. Whether it’s resistance from business partners, impending deadlines, or scope of your deliverables, you must remain patient and focused on your implementation methodology.

Building an ESB Capability

Building ESB Capability: Java EE vs. Configuring a DataPower SOA Appliance

Implementing a Java network infrastructure solution versus network appliance configuration

It's not unusual for a seasoned Java implementer, when exposed to an IBM DataPower appliance for the first time, to question the technological advantage of a configurable network device. I feel this question is best examined from an application architecture perspective.

Fundamentally, every implementation is the realization of a prescribed software architecture pattern and approach. From this viewpoint I'll use a lightweight architectural tradeoff analysis technique to analyze the suitability of a particular implementation from the perspective of two technology stacks: the Java Spring framework combined with Spring Integration extensions, and the IBM DataPower SOA appliance.

In this tradeoff analysis I will show the advantage of rapidly building and extending a bus capability using a configurable platform technology, versus Spring application framework components and the inversion of control container.

High-Level Requirements

The generic use case scenario: receive an XML message over HTTP, transform the XML input message into SOAP/XML format, and deliver the payload to a client over an MQ channel.

Proposed Solution Architecture

Solution 1

Using an EIP pattern to provide a conceptual architecture and context, let's consider the following ESB-type capability. This solution calls for a message gateway, a message format translator, and a channel adapter.

Assumptions

  1. The initial release will not address the supplemental requirements, such as logging, persistent message delivery and error back-out.
  2. This next release will be extended to include a data access feature, as well as the supplemental requirements.
  3. Message end-points, message formats, queue configurations, database access and stored procedure definitions have all been fully documented for this development life-cycle sprint.

Architectural Definition

  • To receive messages over HTTP you need to use an HTTP Inbound Channel Adapter or Gateway.
  • The Channel Adapter component is an endpoint that connects a Message Channel to some other system or transport.
    • Channel Adapters may be either inbound or outbound.
  • The Message Transformer is responsible for converting a message’s content or structure and returning or forwarding the modified message.
  • IBM MQ 7.x has been supplied as part of the messaging infrastructure capability.
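To make the message-transformer step concrete, here is a language-neutral sketch of wrapping an incoming XML payload in a SOAP envelope. In both candidate stacks this is actually done with an XSLT template (Spring's XsltPayloadTransformer or a DataPower Transform action); the element names here are illustrative.

```python
# Sketch of the Message Transformer responsibility: convert the incoming XML
# message into SOAP/XML by placing the payload inside a SOAP Body.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def to_soap(xml_payload: str) -> str:
    ET.register_namespace("soapenv", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    body.append(ET.fromstring(xml_payload))  # payload becomes the SOAP body
    return ET.tostring(envelope, encoding="unicode")

soap_message = to_soap("<order><id>42</id></order>")
```

The effort comparison below is largely about how much of this plumbing (HTTP endpoint, transformer, MQ adapter) must be coded and wired versus simply configured.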

Technology Stack Requirements

Spring / Java SE Technical Reference – Standards Information Base

  • Spring 4.0.x
  • Java SE 6 or 7
  • Spring Extension: Spring Integration Framework 4.1.x
  • Spring Extension: XML support for Spring Integration
  • Apache tomcat 7.x.x
  • Spring run-time execution environment (IoC container)
  • Eclipse for Spring IDE  Indigo 3.7 / Maven

DataPower XI/XG Appliance Technical Reference – Standards Information Base

  • Configurable Multi-protocol gateway (XG45 – 7198 or XI52 – 7199)
  • XSLT editor- XMLSpy (Optional)
  • Eclipse for Spring IDE  Indigo 3.7 (Optional)

Architecture Tradeoff – Analysis Criteria  

For the application architecture analysis I will use the following architecture "ilities":

  • Development velocity
    • In terms of code base, development task, unit testing.

Development Velocity Analysis – Design and Implementation Estimates

Assumptions

  1. Development environments, Unit Test cases / tools, have been factored into the estimates.
  2. Run-time environments must be fully provisioned
  3. Estimates based on 6.5 hour work day
  4. 2 development resources for the implementation (1 Development Lead and 1 Developer)

Java SE using Spring Framework and Spring Integration Extensions.

Java EE Spring Framework
Architecture Component | Design Component(s) | Development Task | Effort / Hr.
Message Gateway | Http Inbound Gateway | XML wiring of http Inbound Adapter | 6.5
Message Gateway | Http Namespace Support | XML wiring of Spring Component | 6.5
Message Gateway | Timeout Handling | XML wiring of Spring Component | 6.5
Http Server | Apache / Jetty | Build Web Server instance | 12
Exception Handling | Error Handling | XML wiring of Spring Component | 12
Message Transformer | XsltPayloadTransformer | XML wiring of Spring Component | 13
Message Transformer | Transformation Templates | Build XML Transformation Template | 12
Message Transformer | Results Transformer | XML wiring of Spring Component | 13
Channel Adapter (Direct Channel) | Outbound Gateway | XML wiring of Outbound Gateway | 2.5
Channel Adapter (Direct Channel) | Attribute Reference File | Build Attribute Reference File | 12
Estimation (hrs): 96
Estimated duration (days): 15

DataPower SOA appliance with standard configuration components.

DataPower Appliance
Architecture Component | Design Component(s) | Development Task | Effort / Hr.
Message Gateway | Multi-protocol Gateway | Name and Configure MPG | 3
Message Gateway | XML Manager | Name and Configure XML Manager | -
Message Transformer | Multi-Step Transform Action | Build XSLT Transformation Code | 13
Channel Adapter (Direct Channel) | MQ Manager Object | Name and Configure MQ Manager | 2
Estimation (hrs): 18
Estimated duration (days): 3

Architecture – Architectural Tradeoff Analysis

In terms of development velocity, a DataPower implementation requires roughly 80% less effort (18 vs. 96 hours). This is primarily due to DataPower's Service Component Architecture design and the forms-based WebGUI tool that is used to enable configuration features and input the required parameters for the service components.

DataPower Services

Java development velocity may be improved by adding development resources to the Java implementation; however, this will increase development cost and complexity for the overall project. Efforts around XML transformations are for the most part equal, since both the Spring framework and DataPower use XSLT templates to implement this functionality.

Use Case Description for next release

In the next development iteration, our new use case calls for additional data from a legacy business application, plus a supplemental requirement for persistent messaging with MQ back-out for undelivered messages on the channel.

Extended Solution Architecture

Solution 2

Development Extension Analysis – Design and Implementation Estimates

Assumptions

  1. Message end-points, message formats, queue configurations, database access and stored procedures have all been defined and documented for the development life-cycle.

Architectural Definition

  • Must access a stored procedure from legacy relational database.
  • Must support Message Channel to which errors can be sent for processing.

Java SE using Spring Framework and Spring Integration Extensions

Java EE Spring Framework
Architecture Component | Design Component(s) | Development Task | Effort / Hr.
SQL Data Access | JDBC Message Store | XML wiring of Spring Component | 6.5
SQL Data Access | Stored Procedure Inbound | XML wiring of Spring Component | 8
SQL Data Access | Configuration Attributes | XML wiring of Spring Component | 3
SQL Data Access | Stored Procedure parameters | XML wiring of Spring Component | 3
Process SQL | - | Validation/Processing of SQL DataSet | 9
Estimation (hrs): 28.5
Estimated duration (days): 5

DataPower SOA appliance with standard configuration components

DataPower Appliance
Architecture Component | Design Component(s) | Development Task | Effort / Hr.
SQL Data Access | SQL Resource Manager | Configure Db Resource | 2
Process SQL | XSLT Transformer – Database | Build XSLT Transformation Code | 10
Estimation (hrs): 12
Estimated duration (days): 2

Architecture Tradeoff – Analysis Criteria  

For the application architecture analysis I will use the following architecture "ilities":

  • Extensibility
    • Adding persistent messaging on the channel with back-out functionality.
    • Adding data access and stored procedure execution from legacy database.

Architecture – Architectural Tradeoff Analysis

In terms of development extensibility, the DataPower implementation requires approximately 50% less effort. This is primarily because extending DataPower for these new requirements does not require additional programming for the data access functionality.

For this additional functionality, processing the SQL stored procedure dataset requires a programming effort in both implementations. The primary difference is that Spring adds three new components, versus the configuration of a single database access component on the DataPower appliance.

In terms of adding persistent messaging with back-out functionality, DataPower's built-in queue management service only requires the implementer to enter the defined queue parameters: a net-zero programming effort.

Conclusion

Undoubtedly, the Spring framework, along with Spring Integration and the inversion of control (IoC) container, provides the Java developer with a powerful application framework whose functions are essential in messaging and event-driven architectures.

However, the DataPower appliance offers this functionality out of the box, as a purpose-built, non-disruptive network device. In short, DataPower is a concrete implementation of much of what Spring and Spring Integration offer programmatically.

As cross-cutting concerns and non-functional requirements around security and web service integration emerge, the configuration capability of the appliance becomes even more apparent.

ODM Series 1: IBM ODM Best Practices – The Performance

1. The Performance Cost

The performance cost for a Decision Service may look something like:

PerCost

2. The eXecutable Object Model and the RuleFlow

XOM type choices:

  • Java XOM: better performance
  • XML XOM
    • Dynamicity
    • Useful in the case of an XML model

Ruleflow:

  • Limit the size and complexity of the ruleflow; it is interpreted.
  • Always use the same engine algorithm to save memory.

3. The Engine Algorithm

Choose the correct engine algorithm depending on your Decision Service.

RetePlus (The default mode)

  • Stateful application
  • Rule chaining application
  • May be useful in the case of many objects

Sequential

  • Application with many rules and few objects.
  • Covers most customer cases.
  • Very efficient in a multi-threaded environment.

Fastpath

  • Application with rules implementing a decision structure and many objects.
  • May have a longer compilation time but is faster at run time.
  • Very efficient in a multi-threaded environment.

4. Decision Server Rules Tuning

  • The log level in the Rule Execution Server should be set to Severe or Warning in the production environment to increase performance.
    • This property (TraceLevel) is accessible in the resource adapter of the Rule Execution Server or in the ra.xml.
  • Tune the GC and memory size.
    • Starting configuration 64bits
    • -Xgcpolicy:gencon -Xmn2048M -Xmx4096M -Xms4096M
  • Tune the RES pool size.
    • A sizing methodology is available at: http://www-01.ibm.com/support/docview.wss?uid=swg21400803
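If the trace level is changed directly in the ra.xml rather than through the console, the entry is a standard JCA config-property. The property name below is taken from the TraceLevel bullet above; the exact name and accepted values should be checked against the documentation for your Rule Execution Server version:

```xml
<!-- Hypothetical ra.xml fragment: lower the execution unit trace level for production -->
<config-property>
    <config-property-name>traceLevel</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>SEVERE</config-property-value>
</config-property>
```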

5. Impact of the Execution Trace

Trace

6. Impact of the XOM Type

XOMType

7. Remote Web Service call vs. Local call

WSvsLocal

8. Fastpath Algorithm vs. Sequential Algorithm

FastpathvsSequential


ODM Series 1: IBM ODM Best Practices – The ABRD

I. The Agile Business Rule Development Practices (ABRD)

The Agile Business Rule Development (ABRD) methodology provides a framework that project teams may adapt to meet the needs of their specific business rules application project. The methodology supports the full rule lifecycle, from discovery to governance, using an agile, iterative approach. ABRD activities fall into the five categories described below; each category is executed multiple times as the process is followed.

TimeCost

1. Harvesting

Harvesting

1. Rule Discovery: Harvest rules, using short workshop sessions

  • Divide the decision process into smaller chunks
  • Determine the inputs, the outputs, and the error cases
  • Use concrete scenarios and pull them through the rules

2. Rule Analysis: Understand and prepare rules for implementation

  • Refine rules to be atomic
  • Look for ambiguity, contradiction, incompleteness or redundancy
  • Reconcile the rules with the object model (term-fact modeling)
  • Identify rule patterns, and rule dependencies
  • Define test scenarios against the object model
  • Assess: Rule volatility, and Rule sharing opportunity

Tools: Documentation          Roles: SME, BA

2. Prototyping

Prototyping

1. Rule Authoring Early Stage – Rule Design

  • Define rule set
  • Define the BOM
  • Define the project structure
  • Prototype rules

2. Rule Authoring

  • Develop rules
  • Develop unit tests

Tools: Documentation, Rule Designer          Roles: SME, BA, Developer

ABRD

3. Building

1. Rule Validation

  • Develop functional tests
  • Involve SME for feedback

Tools: Rule Designer, DVS          Roles: SME, BA, Developer

Building

4. Integrating

1. Rule Deployment

  • Use Rule Execution Server staging platform

Tools: DVS, Decision Center          Roles: SME, Developer

Integrating

5. Enhancing

Tools: DVS, Decision Center           Roles: SME, Developer

Enhancing

II. Rules atomicity, patterns and dependencies

1. Rules atomicity

Atomic rules

  • Cannot be simplified without losing meaning
  • Conjunction of conditions resulting in a single action

Atomic
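As a hypothetical illustration of the two bullets above, a compound rule can be split into atomic rules, each a conjunction of conditions resulting in a single action (BAL-style pseudocode):

```
Non-atomic (mixed conditions, two actions):
  if the applicant is younger than 18
     or the applicant's income is less than 20000
  then reject the application
   and send a rejection letter ;

Atomic equivalents (one action each):
  if the applicant is younger than 18 then reject the application ;
  if the applicant's income is less than 20000 then reject the application ;
  if the application is rejected then send a rejection letter ;
```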

2. Rules patterns

Rule pattern analysis helps to:

  • Select the right rule artifact (action rule, decision table, …)
  • Structure rules in packages and articulate the rule flow
  • Create rule templates

Table
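For example, several action rules sharing the same sentence structure reveal a pattern; hypothetical rules like the following are better captured as a decision table or a rule template:

```
if the customer's category is GOLD   then set the discount to 10 % ;
if the customer's category is SILVER then set the discount to 5 % ;
if the customer's category is BRONZE then set the discount to 2 % ;

Pattern: "if the customer's category is <category> then set the discount to <rate>"
Candidate: one decision table with columns Category | Discount
```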

3. Rule dependency

Rule dependency analysis helps to:

  • Structure rules in packages and articulate the rule flow

Dependency

WebSphere Portal: Custom Impersonation Portlet Invoked from Themes

This blog provides a different approach to implementing impersonation in portal applications. Impersonation, as we know, is a portlet service that lets one user (A) access the portal application as another user (B) by logging in as that user. The out-of-the-box impersonation portlet provided by WebSphere Portal lacks the flexibility and customization features specific to the requirements of the application.

There are two steps in implementing our custom impersonation portlet:

i) Creating a portlet and implementing impersonation in the action phase:

The following snippet uses Spring Portlet MVC annotations. The impersonation service provided by WebSphere Portal has two impersonate methods; we use the one whose parameters are PortletRequest, PortletResponse, and the user DN. The first two parameters are available in the action phase, while the user DN is retrieved from LDAP through the PUMA services, by passing in the user ID of the user to be impersonated.

Code1
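The action handler described above can be sketched roughly as follows. This is an illustrative sketch only: the controller, parameter, and JNDI names are hypothetical, and the PUMA and ImpersonationService lookups should be verified against the WebSphere Portal Javadoc for your release.

```java
import javax.naming.InitialContext;
import javax.portlet.ActionRequest;
import javax.portlet.ActionResponse;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.portlet.bind.annotation.ActionMapping;

import com.ibm.portal.portlet.service.PortletServiceHome;
import com.ibm.portal.portlet.service.impersonation.ImpersonationService;
import com.ibm.portal.um.PumaLocator;
import com.ibm.portal.um.PumaProfile;
import com.ibm.portal.um.User;
import com.ibm.portal.um.portletservice.PumaHome;

@Controller
@RequestMapping("VIEW")
public class ImpersonationController {

    @ActionMapping(params = "action=impersonate")
    public void impersonate(ActionRequest request, ActionResponse response,
                            @RequestParam("userId") String userId) throws Exception {

        // Resolve the target user's DN from LDAP through the PUMA services
        PumaHome pumaHome = (PumaHome) new InitialContext()
                .lookup("portletservice/com.ibm.portal.um.portletservice.PumaHome");
        PumaLocator locator = pumaHome.getLocator(request);
        PumaProfile profile = pumaHome.getProfile(request);
        User target = locator.findUsersByAttribute("uid", userId).get(0);
        String userDN = profile.getIdentifier(target);

        // Use the impersonate variant taking (PortletRequest, PortletResponse, userDN)
        PortletServiceHome home = (PortletServiceHome) new InitialContext()
                .lookup("portletservice/com.ibm.portal.portlet.service.impersonation.ImpersonationService");
        ImpersonationService service =
                (ImpersonationService) home.getPortletService(ImpersonationService.class);
        service.impersonate(request, response, userDN);
    }
}
```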

Read the rest of this post »