There are various automation tools on the market, and open source tools have one clear advantage over licensed ones: commercial products always require an extra investment. If an open source tool achieves the goal reliably, why pay for a licensed one?
Here I am going to discuss an open source automation tool, "Sahi".
“Sahi is a mature, business-ready tool for automation of web application testing. Sahi is available as an Open Source free product and as Sahi Pro, the commercial version. For testing teams in product companies and captive IT units which need rapid reliable web automation, Sahi would be the best choice among web automation tools.”
Sahi is mainly used for cross-browser compatibility testing, which verifies that an application works properly across different browsers.
Let's see how to configure Sahi on a local system.
Recently our data warehouse job has been running for a long time, so we decided to optimize some of its stored procedures. After analyzing them, we came up with the following SQL optimization findings.
1. Use temporary tables to minimize disk access.
From a SQL tuning point of view, the main concern for database applications is minimizing disk access, so use a temporary table to avoid reading the same base table multiple times.
For example, one stored procedure accessed the table Gdm.SnapshotConsolidationRegionCorporateDepartment in multiple places, so we changed that part to populate a temporary table once and reuse it:
Insert into #SnapshotConsolidationRegionCD
Select … From Gdm.SnapshotConsolidationRegionCorporateDepartment
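As a rough sketch of the pattern (the fact table and its join column below are hypothetical; only the snapshot temp table comes from our procedure), every later step then joins the temporary table instead of reading the base table from disk again:
-- Later statements read the already-loaded temp table
Select F.SomeMeasure, S.ConsolidationRegionKey
From SomeFactTable F                    -- hypothetical fact table
Join #SnapshotConsolidationRegionCD S   -- loaded once above
On F.CorporateDepartmentKey = S.CorporateDepartmentKey  -- hypothetical join key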
2. Use 'select' instead of 'update'.
Update operations consume far more resources than selects, so prefer a select in these scenarios.
For example, we had to set ConsolidationRegionKey to -1 for budget records from 2010 and earlier. The original code used an update:
Update F
Set F.ConsolidationRegionKey = -1
From #TimeAllocationBudget F
Join GrReporting.dbo.Calendar C on F.CalendarKey = C.CalendarKey
Where C.CalendarYear <= 2010
It performs better to populate the rows with a select instead:
Insert into #TimeAllocationBudget (ConsolidationRegionKey, CalendarKey, …)
Select (Case when Calendar.CalendarYear <= 2010 then -1 else AllocationRegion.ConsolidationRegionKey end) as ConsolidationRegionKey,
       Calendar.CalendarKey,
       …
From GrReporting.dbo.AllocationRegion AllocationRegion
Inner join GrReporting.dbo.Calendar Calendar
On AllocationRegion.CalendarKey = Calendar.CalendarKey
Inner join …
3. Use indexes on temporary tables.
Even on temporary tables, we can create indexes to improve query performance:
Create unique clustered index IX_Clustered on #TimeAllocationBudget (ReferenceCode)
Introducing a coding dojo to a new group can be challenging. Having facilitated quite a few dojo sessions, I can say that facilitating or participating in a coding dojo is a challenging yet fun and rewarding experience if we do it the right way.
Traditionally, in order to test both Android and iOS apps, we need to write a test case document, automate it for Android (e.g. with Robotium), and automate it again for iOS (e.g. with UI Automation). As a result, we need to maintain one document and two scripts for each user story. But now there is a less painful way. All we need is to create a Cucumber test for each user story, like the following:
Feature: Rating a stand
  Scenario: Find and rate a stand from the list
    Given I am on the List
    Then I should see a "rating" button
    And I should not see "Dixie Burger & Gumbo Soup"
    And take picture
    Then I touch the "rating" button
    And I should see "Dixie Burger & Gumbo Soup"
    And take picture
    When I touch "Dixie Burger & Gumbo Soup"
    Then I should see details for "Dixie Burger & Gumbo Soup"
    When I touch the "rate_it" button
    Then I should see the rating panel
    Then I touch "star5"
    And I touch "rate"
    And take picture
And this one script can drive both the Android and iOS apps. Since the test script is more readable than Robotium or UI Automation scripts, the PO or tester can understand and modify it themselves. We create and maintain only one artifact instead of three.
Overall, Calabash is a cost-effective tool for expressing executable specifications on both the Android and iOS platforms.
So far we have seen how automation helps reduce human effort, time, cost and so on. Here I will discuss a few scenarios where automation either is not possible or is not required.
There are certain tasks which can be performed only with automation tools, such as load, endurance and scalability testing that simulates hundreds of users. However, let's look at a few tasks that cannot be automated:-
- Image reCAPTCHA
An image reCAPTCHA cannot be automated; that is the whole point of the security measure. It is simply an image of distorted letters that can be identified only by the human eye. Existing automation tools cannot read those distorted letters, and while some OCR (Optical Character Recognition) software is available on the market, it is not 100% effective. Automation scripts won't do that for you.
- Ad hoc Testing
"Ad hoc testing is a commonly used term for software testing performed without planning and documentation." This type of testing is performed to learn more about the product through random exploration, and its main goal is to find important defects quickly. Automation scripts won't do that for you.
While most automation scripts manipulate just the UI of an application, making our scripts communicate with the database lets us accomplish more complicated tasks. Here is an example:-
Suppose you have to automate an online voting system: once a vote is cast through the application, it goes to an external system where a verification process checks whether the cast vote is valid. If the vote is valid it is counted; otherwise the count remains the same.
The CASTED_VOTE column in the VOTE_STATUS table is configured in the database to hold the verification status of each vote (status 3, for example, means verification is done).
So usually our Automation script would be as follows:-
- Record the steps to cast the vote
- Wait until we receive the verification response from the external entity (unknown time)
- Go to the UI screen where the total number of votes is displayed and verify the count
The loophole in the above flow is the unknown time: how long should the script wait before executing the next command? This becomes a real problem when inputs are provided in bulk. But if my script can communicate with the database, the unknown time becomes known: I simply wait until the CASTED_VOTE status equals 3 (i.e. verification done).
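A minimal sketch of that database check, using the table and column above (the VOTE_ID key and @voteId variable are hypothetical, for illustration only):
-- Poll until the external verification has completed
Select CASTED_VOTE
From VOTE_STATUS
Where VOTE_ID = @voteId   -- hypothetical key identifying the vote just cast
-- The script re-runs this query until it returns 3 (verification done),
-- then proceeds to verify the vote count on the UI.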
Scenario: Sometimes a BPEL application needs to send an email alert with the detailed error information so that people can analyze the root cause. But if a synchronous BPEL application hits an error and throws the exception in its error handler, the global transaction is rolled back, so the email never gets sent.
Solution: To resolve this issue, create a new child BPEL process to invoke the notification service, and use the transaction property in composite.xml to give that child BPEL its own transaction context, so it will not roll back even if the global transaction fails:
<property name="bpel.config.transaction" many="false" type="xs:string">requiresNew</property>
JIRA has been a powerful and useful tracking tool for most agile projects, and it keeps evolving to provide more convenient features. In the past half year my team and I worked on a TM1 project where we managed and tracked all requirements, tasks and effort in JIRA across multiple teams.
TM1 is an IBM OLAP tool for building analytic models and cubes and creating reports for financial account analysis and various planning and forecasting cycles. The usual way to drive this kind of project is requirement gathering via proof of concept, then architecture, build, unit test and deployment, with the major tasks managed and tracked in a tool such as MS Project or Excel following a traditional approach. Since we are agile advocates, we wanted to maximize the value of agile, particularly Scrum, in our daily work. We had no prior experience applying Scrum to a BI project like TM1, but it proved a worthwhile endeavor for the team. The following sections summarize the challenges that came up in the project as well as some JIRA tips.
The requirements are not clear at the beginning, even after development starts. I believe this is common in most projects, but we should handle it flexibly to reduce the impact on the plan. A business analyst and a project manager were assigned at the client site to work with different teams to identify the customer's future vision for their financial planning and to investigate the legacy application. Typical unclear TM1 requirements include uncertain planning cycles, data feeding sources, calculation rules, and roll-up/drill-down requirements.
The requirements will likely change and affect specific designs. A requirement change here does not have to go through a formal process, and it usually relates to point 1 above. There may already be a great deal of design documentation for dimensions, attributes and cubes, but a dimension name may change, and attributes and elements may change. A baseline of all object specifications should be settled so that all stakeholders know what the design changes are across past, current and future states.
Measure the team's effectiveness and the areas to improve. A learning curve is unavoidable for each member and for the whole team: some members are skilled in the technology but need to learn the project context, while others need to ramp up their skill set. The way to gauge team effectiveness in JIRA is to look at velocity in the chart, but that still requires further analysis: is a dip caused by unclear requirements or by technical competence? The challenge is how to quickly identify the improvement areas for individuals and for the team within the agile tool.
Eliminate miscommunication across multi-shore teams. Yes, JIRA itself functions as a communication channel between teams. Even so, miscommunication still occurs because of insufficient task descriptions and differing understanding. In our case we had both onshore and offshore developers, and we had to direct their focus differently.
I don't think one hammer blow can solve all of these problems, but I would like to go over the JIRA practices that made our team more flexible, productive and manageable.
Establish lineage between requirements, design, tasks and testing evidence. This is an effective way to track completed effort at each stage, and it is necessary to keep everyone on the same page about the delivery. Option 1 is to build links in JIRA between features, tasks, test cases, issues and so on; JIRA provides bidirectional links, each with a type that binds the pair. Option 2 is to build a parent-child feature hierarchy.
Feature to sub-feature. We opted for this one because a link does not present a requirement change explicitly, while a parent-child feature does. For instance, the customer account changed many times. Initially we created a parent feature called customer account dimension, with sub-features such as load elements and load attributes. Later, a dimension name change or element change becomes a new sub-feature under that initial parent.
Bulk import all the broken-down features. JIRA provides a way to import features from a CSV/spreadsheet into your project, which saves time.
Embody review effort in JIRA. In our quality assurance model we typically conduct an internal review and a formal review. When a feature/sub-feature is completed, it must be assigned to the lead developer for review. The reviewer can leave a comment such as "Review Complete with First Pass" or "Review Complete with Reject" to trigger the next step. This makes it easy to report how many features have been reviewed out of the total, and how many passed.
Build a version hierarchy. The reason is to generate feature status at each sprint and Scrum level. With a version hierarchy, users can go to the agile planning/task/chart board to get a plain view of overall progress.
Streamline the steps in the workflow. Try to keep only the steps that are really needed, such as open, in progress, offshore review, onshore review, approved. Too many steps will definitely slow down communication.
Scenario: Sometimes a BPEL application needs to invoke a web service that, for whatever reason, cannot handle extra header information. BPEL sends its messages with a WS-Addressing header by default, so if the web service cannot accept the extra header, it throws an exception. How should we resolve this problem?
There is a property, oracle.soa.ws.outbound.omitWSA, that resolves this issue. Add it to the related web service reference in the composite.xml file, set its value to true, then compile the BPEL application and test; the invocation should now succeed.
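As a sketch, the property uses the same composite.xml property syntax as the transaction property shown earlier (placement under your web service reference is assumed):
<property name="oracle.soa.ws.outbound.omitWSA" type="xs:string" many="false">true</property>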
Duplicate records are occasionally found in source data. Due to primary key constraints on a target database, only one version of a duplicate source record should be loaded into the target. The following methods demonstrate some of the most efficient ways to deal with duplicate source data.
1. Aggregator Transformation
When to use: Any time an Aggregator transformation has no major impact on performance; the source is relational, or the source is a file and output row order does not matter.
Pros: Ease of implementation. Works with all types of sources.
Cons: Limited choice of output row. The Aggregator cache can slow performance on large loads. Duplicates are silently removed, so there is no auditing.
Use an Aggregator transformation and group by the keys on which you want to remove duplicates. To improve speed, sorted ports can be used for presorted data; make sure the first column listed in the ORDER BY clause of the Source Qualifier is the same as the Group By port in the Aggregator. If your source is not relational, add a Sorter transformation.
2. Rank Transformation
When to use: When you want a specific row from each group and your source is a flat file.
Pros: Ease of implementation. Sorts non-relational data for custom row output.
Cons: The cache can slow performance on large loads. Limited choice of output row. Duplicates are silently removed, so there is no auditing.
No modifications are needed in the Source Qualifier, which makes this most useful when the source is not relational. Set Number of Ranks to 1 and choose the Top/Bottom property.
3. Sorter Transformation
Send all the data to a Sorter and sort by all the fields on which you want to remove duplicates. On the Properties tab, select the Unique option; this passes forward only unique rows.
4. Source Qualifier
In the Source Qualifier you can enable the 'Select Distinct' option, or write your own SQL override so that only distinct rows are selected (a sample override is sketched after this section). However, this works only for relational sources. For flat file sources, you can instead deduplicate in a pre-session command:
sort abc.txt | uniq
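For the SQL override route, here is a minimal sketch (the table and column names are hypothetical):
-- Hypothetical Source Qualifier override: return each distinct row only once
Select Distinct CustomerId, CustomerName, Region
From SourceCustomer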