
Interpreting Spring form tags

Spring MVC provides tags for handling form elements when using JSP.

Each Spring form tag generates an HTML tag at runtime and supports the attribute set of its corresponding HTML tag.

This blog provides a quick reference for interpreting some of these commonly used Spring form tags and understanding the attribute-level mappings between each Spring form tag and its HTML counterpart.
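As a quick illustration (the form action, command-object name and field names below are hypothetical examples, not from the original post), a tag such as form:input renders a plain HTML input whose id and name are derived from the bound path, and whose cssClass maps to the HTML class attribute:

```jsp
<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>

<%-- Spring form tags, bound to a command object (here assumed to be "user") --%>
<form:form modelAttribute="user" action="/login" method="post">
    <form:input path="userName" cssClass="text-field" maxlength="30"/>
    <form:password path="password"/>
</form:form>

<%-- Roughly the HTML generated at runtime: "path" becomes id/name,
     and "cssClass" becomes the HTML "class" attribute --%>
<form id="user" action="/login" method="post">
    <input id="userName" name="userName" class="text-field" maxlength="30" type="text"/>
    <input id="password" name="password" type="password" value=""/>
</form>
```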

Client side Inter Portlet Communication using amplifyJS

When portlets need to communicate with each other without involving any server-side logic, client-side Inter Portlet Communication (IPC) can help provide quicker interaction.

Handling IPC on the client side provides the flexibility to refresh only the portlets or components involved in the communication, rather than refreshing the full portal page.

AmplifyJS is a JavaScript component library providing a set of components designed to solve common web application problems, including AJAX request management, client-side component communication, and client-side browser and mobile device storage.

This blog captures the details of how amplifyJS can be used to achieve a publish/subscribe mechanism between the portlets on the same page in IBM WebSphere Portal 8.0.
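The mechanism can be sketched as follows. The topic name and portlet roles are illustrative, and the inline pub/sub object is a minimal stand-in for amplify's core so the sketch is self-contained; in the real portlets you would include amplify.core.js and call amplify.publish / amplify.subscribe directly:

```javascript
// Minimal stand-in for amplify's pub/sub core (same call shape as
// amplify.subscribe(topic, callback) and amplify.publish(topic, data)).
var amplify = (function () {
  var topics = {};
  return {
    subscribe: function (topic, callback) {
      (topics[topic] = topics[topic] || []).push(callback);
    },
    publish: function (topic, data) {
      (topics[topic] || []).forEach(function (cb) { cb(data); });
    }
  };
})();

// Subscriber portlet: reacts to a selection published by another portlet
// and refreshes only its own DOM fragment, not the whole portal page.
var received = [];
amplify.subscribe("countrySelected", function (data) {
  received.push(data.country);
});

// Publisher portlet: fires the event when the user makes a selection.
amplify.publish("countrySelected", { country: "India" });
```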

Using Maven Shade Plug-in in Portal Development

The Scenario

In some portal development projects, when functionality is common to a set of portlets and custom filters, we put it in a jar file and call these jars common libraries or provider jars. This increases the modularity of the code, which in turn helps its maintainability. Usually this functionality does not require any externalization, so no .properties files are needed. Even if there are .properties files, the portlets still work fine, because the jar files can be bundled in the "WEB-INF" folder of the portlets.

But if you use these provider jars in custom filters or any other jar files, this results in class loading errors, because the class loader is not able to load the .properties files of the providers.

The Challenge

The challenge is to make the custom filters or other jar files (the consumers) work with the jars that contain .properties files (the providers).

The Solution

The proposed solution is to bundle the consumers and the providers into a single jar so that the class loading issue is resolved. For this we can use the Maven Shade plug-in. When you add the Shade plug-in to the build section of the pom file, it scans the given dependencies; if the scope of a dependency is not "provided" (i.e. <scope>provided</scope>), it bundles the classes and .properties files of that dependency jar (the provider) into the jar being built (the consumer).

At the end of the build, we have a jar file that contains the consumer classes as well as the provider classes and their .properties files. This resolves the class loading issues, because the .properties files are part of the jar itself.


Before Build

After Build

You can customize the Shade plug-in as per your requirements. If you are using Spring jars in your application, the Shade plug-in overrides your Spring handlers and schemas with its own version; to avoid this you can apply transformers, as shown in the sample code snippet.

For more information on customization, you can look into the Maven Shade plug-in documentation.


Sample code snippet:
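A minimal sketch of such a pom entry (the original snippet was not preserved; this follows the standard Shade plug-in usage, with AppendingTransformer entries that merge the Spring handler/schema files from all jars instead of letting one overwrite the others):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <transformers>
              <!-- Merge Spring handler/schema mappings from all bundled jars
                   instead of letting one jar override the others -->
              <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                <resource>META-INF/spring.handlers</resource>
              </transformer>
              <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                <resource>META-INF/spring.schemas</resource>
              </transformer>
            </transformers>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```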

Mobile Automation Testing using Selenium Webdriver

Nowadays, compatibility testing is in great demand, as it gives us the confidence to say whether an application is usable across multiple platforms.

One of the most used platforms today is mobile. So the question here is: is the application usable across different mobile platforms?

There are n mobile devices available with x resolutions and y operating systems. So practically, it's not feasible to have all n*x*y device combinations to run a compatibility test on.

That's the reason mobile simulators have come into the picture: if I have to test my application on that many devices, I can do it by simulating the required features.

Let's look at the task that needs to be performed:

Objective: To test a web application on a mobile emulator using the Selenium automation tool.

Resources: Selenium, Android SDK, Eclipse IDE

Solution: I will use Java as the scripting language and the JUnit framework. Using the Eclipse IDE, I will execute the script, which in turn runs on the mobile emulator/device.
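A minimal JUnit sketch of what such a script might look like. The hub URL, the port-forwarding step and the target page are assumptions based on the Android WebDriver setup of that era, not details from the original post:

```java
import java.net.URL;

import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

import static org.junit.Assert.assertTrue;

public class MobileEmulatorTest {

    @Test
    public void homePageLoadsOnEmulator() throws Exception {
        // Assumes the Android WebDriver app is installed and started on the
        // emulator, with its port forwarded to the workstation, e.g.:
        //   adb forward tcp:8080 tcp:8080
        WebDriver driver = new RemoteWebDriver(
                new URL("http://localhost:8080/wd/hub"),
                DesiredCapabilities.android());

        driver.get("http://www.example.com");        // placeholder URL
        assertTrue(driver.getTitle().length() > 0);  // page rendered on the device

        driver.quit();
    }
}
```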


Using GIT deploy key in Jenkins – Written By Tom Tang

This post is an introduction for those who want to use Jenkins to manage multiple projects, with GIT as version control.

Of course, we could map the GIT server and the CI server together with a single SSH key. But that would mean using the same SSH key across different projects, and anyone could run "commit" and "push" to the GIT repositories through the CI server. To avoid this, GIT provides a solution for this scenario: deploy keys. We can create a deploy key for each project and add those keys to the GIT server. Deploy keys can only run "clone" and "pull", which is enough for a CI environment such as Jenkins.

Example: How to use GIT deploy key to fetch code from GIT repository.

Step 1: Create a freestyle Jenkins job

Step 2: Select GIT and input the repository URL

You may get an error here, but don't worry, we will resolve it later; click save.

Step 3: Go to the Jenkins home directory


Step 4: Generate an SSH key:

sudo -u jenkins ssh-keygen -t rsa -f {jenkins_HOME}/.ssh/id_rsa.{projectName} -C "{Comments}"

Note: the "sudo -u jenkins" part runs the command as the "jenkins" user; otherwise we may get permission errors.


Step 5: Add the key to the GIT project as a deploy key


Open the public key (e.g. with vi).


Copy it and add it to the GIT project as a deploy key:

Note that you need to be a master of this project to do this.


Step 6: Map the deploy key to the local private key

Go to {jenkins_HOME}/.ssh, create a file named "config", and add the following:

             Host {git_home}-{project_name}

             Hostname {git_home}

             User git

             IdentityFile {jenkins_home}/.ssh/id_rsa.{project_name}


            Host gdcgit-Enable2

            Hostname gdcgit

            User git

            IdentityFile /var/lib/jenkins/.ssh/id_rsa.Enable2

Step 7: Update the repository URL

Update the host part of the repository URL to the alias defined in the config file (for example, git@gdcgit-Enable2:... instead of git@gdcgit:...).

The repository error will now be resolved.

Build the job; we can now use the deploy key to get code from the GIT repository.



Setting a cookie in the Render Phase of a JSR 286 Portlet


To accomplish one of our project requirements, a specific piece of information (CSS_Key) needs to be maintained in a browser cookie that controls the styles of the application dynamically. This key is read on the first screen of the application as a URL parameter, and its value needs to be maintained in the cookie. But the cookie did not get added to the browser when we tried to add it using response.addProperty(cookieObj) in the render phase of a custom JSR 286 portlet. If this cookie is not added to the browser, the application will not show the dynamic styles on the page based on the URL parameter received on the first page.

Root cause:

By the time the doView method runs, the response has already been committed, and hence we are unable to add the cookie using response.addProperty(cookieObj).


To avoid this issue we need to enable and use the two-phase rendering feature of JSR 286 portlets. In general, while developing a portlet we confine our scope to defining and accessing components within the portlet. By using two-phase rendering, we can also set page-level components like the page title, meta tags, cookies and so on, which do not come under the portlet scope. Two-phase rendering splits the render phase in two: the first phase gives us access to the page-level components (the doHeaders method) and the second phase renders the actual view of the portlet (the doView method). doHeaders gets called before doView, and hence before the response gets committed we can set page-level components such as the page title, meta tags and cookies.

We need to follow the two steps below to enable and use two-phase rendering.

Step 1:

Add this code to portlet.xml within the <portlet> tag. By default this option is set to false.
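The original snippet was not preserved; this is the standard JSR 286 container runtime option for two-phase rendering:

```xml
<!-- Inside the <portlet> element -->
<container-runtime-option>
    <name>javax.portlet.renderHeaders</name>
    <value>true</value>
</container-runtime-option>
```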





Step 2:

The sample code to be added to the portlet class is shown below:

protected void doHeaders(RenderRequest renderRequest,
        RenderResponse renderResponse) {
    // Set the cookie here, before the response is committed
    Cookie cssKeyCookie = new Cookie("CSS_Key", "testValue");
    renderResponse.addProperty(cssKeyCookie);
}





Approaches to automate and abstract JAXB from Portal Layer

This blog provides insight into developing portal applications that achieve loose coupling between the portal and the service layer by secluding and automating the JAXB framework. We know that portal applications are composed of bundles of portlets, and hence the composition and the complexity of each portlet are of utmost importance. The approaches elucidated here considerably reduce the complexity of the portlets and increase their cohesion by isolating them from the service layer.

Our aim is to achieve the following by bringing in loose coupling and high cohesion.


Conventionally, services are consumed with schemas/JAXB acting as the contract at every portlet (i.e. the JAXB classes are generated in every portlet project), making each portlet relatively heavy and tightly coupling the two layers: any change in the upstream service layer affects the portal layer. With the advent of continuous integration and advanced build automation tools, a proactive design that clearly isolates the layers brings a significant gain.


The illustrated solution assumes REST/SOAP communication between the portal and the back-end/business layer and uses the Maven build tool (it can be achieved in other ways as well). In the following illustration, we use REST services.
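One way to automate this (a sketch only; the module name, schema directory and package name are placeholders, and the same idea works with other JAXB plug-ins) is to generate the JAXB classes from the service schemas in one dedicated Maven module, which every portlet then consumes as an ordinary dependency:

```xml
<!-- In a separate "service-contract" module, not in each portlet project -->
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>jaxb2-maven-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>xjc</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <!-- Schemas published by the service layer act as the contract -->
        <sources>
          <source>src/main/xsd</source>
        </sources>
        <packageName>com.example.portal.contract</packageName>
      </configuration>
    </plugin>
  </plugins>
</build>
```

With this in place, a schema change is absorbed by rebuilding one module on the CI server, instead of regenerating classes in every portlet project.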


SQL SERVER – Introduction to LEAD and LAG

SQL Server 2012 introduces the new analytical functions LEAD() and LAG(). These functions access data from a subsequent row (LEAD) or a previous row (LAG) in the same result set without the use of a self-join.

The syntax for the Lead and Lag functions is:
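As documented for SQL Server 2012:

```sql
LEAD ( scalar_expression [ , offset ] [ , default ] )
    OVER ( [ partition_by_clause ] order_by_clause )

LAG ( scalar_expression [ , offset ] [ , default ] )
    OVER ( [ partition_by_clause ] order_by_clause )
```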


Note that:
−  The partition by clause is optional, the order by clause is required, and the windowing clause (ROWS|RANGE) is not supported.
−  scalar_expression is the value to be returned; this will normally be a column, but it can also be a subquery or any other expression that results in a single value.
−  offset is the number of rows before (LAG) or after (LEAD) the current row from which to obtain a value. If it is not specified, it defaults to 1.
−  default is the value to be returned if the value at the offset is NULL. If it is not specified, it defaults to NULL.


The following example uses the LAG function to compare year-to-date sales between employees. The PARTITION BY clause is specified to divide the rows in the result set by sales territory. The LAG function is applied to each partition separately and the computation restarts for each partition. The ORDER BY clause in the OVER clause orders the rows in each partition. The ORDER BY clause in the SELECT statement sorts the rows in the whole result set. Notice that because there is no lag value available for the first row of each partition, the default of zero (0) is returned.
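The query itself was not preserved; a query along these lines matches the description (it follows the AdventureWorks-based example in the SQL Server documentation, so the view and column names assume that sample database):

```sql
SELECT TerritoryName,
       BusinessEntityID,
       SalesYTD,
       LAG (SalesYTD, 1, 0)
           OVER (PARTITION BY TerritoryName
                 ORDER BY SalesYTD DESC) AS PrevRepSales
FROM   Sales.vSalesPerson
WHERE  TerritoryName IN (N'Northwest', N'Canada')
ORDER  BY TerritoryName;
```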



Posted in BI&Database

Adopting Agile in BI Requirement Gathering


There have been numerous discussions and even arguments about the best implementation strategy for BI projects. But there is no doubt that more and more teams are adopting agile processes, or at least the agile spirit, because of their value. When we talk about agile methodology or agile teams, we usually start with an already-created product (requirement) backlog and a perfect product owner (PO). Actually, in my experience, creating a good backlog is not easy: it should stand for the real requirements of the stakeholders, who can be managers, staff, CXOs and contractors; it should be clear enough for the development team to work from; and it represents a future state that is sometimes intangible for people.

Taking my recent Oracle BI project as an example, I will showcase the challenges and advantages of adopting agile into a traditional Oracle practice team. The agile strategy and techniques have brought a positive impact on the project schedule, quality, team velocity, knowledge, customer involvement, etc.

Similar to most types of IT projects, there are many challenges in business analysis. For example, the real requirements are distributed across different groups of people, and each staff member knows his or her own part well but doesn't have much sense of the big picture. The front-end system is the ERP, which comprises many functional modules, and its analysis is driven by each function track lead; the usual way is for each lead to work with the users separately, so there is a lack of integration. The BI system relies heavily on the ERP system, which means common use cases, such as the time/expense approval process, should be consistent in both systems. The business users don't have much insight into the future system state and can't present their real requirements translated into that future state; inevitably, those requirements will change as the system is implemented with the new technology and out-of-the-box processes. Too many stakeholders can make all of this complex and inefficient.

We have been implementing project management in a hybrid model: traditional processes with an agile-based analysis, design and delivery approach. The agile model adds much value to the teams. Most team members are pretty strong in the traditional delivery model (waterfall planning, budgeting, quality control, etc.) but few of them had agile experience. From what we learned, the following principles could be taken into consideration for requirement analysis on a similar project.

Start with the big picture but take small steps. In an ideal agile (scrum) team, the PO role is really important and key to success. In our mixed team, the project manager or a dedicated business analyst (BA) can act as the PO. This role works with business users in different departments to gather the as-is reports/analyses/dashboards and define their high-level properties such as priority, track, average user and delivery phase. At this stage the PO doesn't need to worry much about data flow, feasibility or user experience. With the big picture in mind, the BA (or a team of BAs) can start on one specific track and drill down into detailed requirements. The reason for picking one small part is that the analyst and the user come to understand the business clearly and deeply, step by step.

Create the initial backlog list. Creating a backlog and continually maintaining it is a major activity for agile teams. In the data warehouse and BI world, the architecture can be built upon front-end user requirements or upon enterprise process elements (refer to the Kimball model). Which one should be our backlog? I believe there are arguments either way. In our case we use the user report and analysis requirements as our backlog, as they drive the design and delivery in the next step. As mentioned in the first point, the BAs may already have gathered the as-is reports; once those lists are cleaned up and new requirements from users are added, they become our initial backlog.

Improve the analysis process continuously. From our experience, the PO has to spend a lot of time meeting different groups of people and holding discussions with them, and sometimes the users change their minds later. The PO can try to improve the process to avoid re-work and reduce cost. In the meantime, the requirements should be validated and tested once a defined track is finished. This implies that as POs work through more tracks (sprints, in scrum terms), the process improves. Not only the BA team but also the customers learn more about agile and come to cooperate in the same manner.

Manage the activities in an iterative way. Defining time-boxed sprints is not a must for requirement gathering, but teams can plan their activities, such as conversations, meetings, discussions and documentation, in an iterative way. For example, build the first track of report requirements by documenting their priority, use cases, conditions and user stories, then continually communicate with the stakeholders to reach agreement. At the end of each day, the team can get together to discuss progress and impediments. At the end of each track, the team can look back and hold a quick retrospective.

The agile model is newer to the business requirement gathering space than to the design and development phases, but its thinking is attractive, so why not give it a try?





Some problems in Agile software development practices

I certainly think that the agile development methodology has an advantage over traditional development methodologies. But I've also found some problems in my agile practice. I want to discuss them, and possible solutions, in this article.

1. Do we still need an architect?

I noticed there is no architect on many agile teams, even on some big projects. I think the possible reason is that agile doesn't advocate big design up front. But that doesn't mean there is no design at all: there are still requirements, there is still a design (though not a heavyweight one), and a solid architecture is still beneficial. I would suggest that an experienced developer still take on the role of architect.

2. Why does the development work get slower and more painful as the sprints go on?

A project usually goes well in the first several sprints: stories are implemented quickly, the system is demoed to the customer, everyone is happy. But the situation changes in later sprints. The system becomes complicated and code accumulates; even worse, every piece of code has an obvious personal imprint, so it's not easy for others to read old code. Why does this happen? Agile suggests every team member have the same responsibility, so everyone can work on the same piece of code, which is great because it improves everyone's understanding of the whole system. But what can we do to make it less painful? Good design, clean code and refactoring are the keys. Developers should be conscious of the importance of good design and clean code, especially in medium or large projects. Refactoring and unit testing are very useful for keeping your code clean. If you don't refactor code as soon as possible, it becomes much more painful as the code size increases. Management also needs to understand the cost and value of refactoring work.

3. Should we pursue the highest unit testing coverage?

Some teams are obsessed with unit testing coverage. Higher coverage is a good thing, of course, but it's not the ultimate goal of software development; it's only a useful technique for improving code quality. So test cases on important code are more useful than test cases on clear and simple code.

4. Do we need to write document anymore?

Some people may think documentation is not important any more in agile. But that's not true, especially when the development team is not stable. Believe me, it's not rare for core developers to leave the team. Of course, knowledge sharing and transfer help a lot, but a well-written document is also very useful for new starters. I know it's not easy to maintain documentation, but it's a worthwhile job.