
Serviceability Auditing


What is application serviceability?

“Serviceability (also known as supportability) is one of the -ilities, or aspects, from IBM’s RASU (Reliability, Availability, Serviceability, and Usability). It refers to the ability of application support personnel to install, configure, and monitor an application; identify exceptions or faults; debug or isolate faults to root cause; and provide maintenance in pursuit of keeping an application current and/or solving a problem and restoring the product to service. Incorporating serviceability-facilitating features typically results in more efficient product maintenance, reduces operational costs, and maintains business continuity.”

Serviceability Levels


An application serviceability audit should be conducted with the objective of classifying every support procedure within the application as having either a low, medium, or high serviceability level.

Characteristics of low- and medium-serviceability procedures are:

  • Require manual intervention to initiate
  • Require manual intervention during processing
  • Require manual intervention to verify (complete)
  • Are overly complex
  • Are not repeatable without significant manual intervention
  • Are “tightly coupled” to other areas or procedures within the application or other applications
  • Require specific skillsets or skill levels
  • Are not documented

Characteristics of high-serviceability procedures are:

  • Are fully automated
  • Require zero or limited manual intervention
  • Are encapsulated within a distinct area of the application
  • Are easily repeated if required
  • Are documented

Application Procedure Areas

All application serviceability procedures will fall into one of the following areas:

  • Data Initialization and Manipulation
  • Administrative Maintenance
  • Setting Assumptions
  • Validation

Data Initialization and Manipulation

Data Initialization and Manipulation refers to procedures that clear data from, transform or update data within, or load data into an application. All applications absorb data. Data absorption is categorized as one of the following:

  • Generic (i.e. the regular refresh of a current period of activity)
  • Upon request (i.e. a one-time request to address a specific need)

Generic Absorption


Because generic absorption procedures are “expected” – meaning that they occur at predetermined intervals and have expected parameters – all generic absorption procedures should be able to be automated, scheduled, and documented. These procedures can then be classified as highly serviceable.


Upon-request procedures are usually undocumented and unscheduled and therefore have a low serviceability level. However, because these requests arise on a case-by-case basis and are usually performed by a systems administrator as needed, they are understandably outside the scope of an application serviceability audit.

A simple example of a data initialization and manipulation procedure to be reviewed during a typical audit might be a procedure to maintain application metadata:

Dimension Metadata File Data Load

  1. Get an updated metadata file for the application dimensions from the enterprise source system.
  2. Verify that the files are in the proper format with the correct name (Dim Internal Order.csv).
  3. Place the file in the .\MetaData folder on the server.
  4. Run the appropriate TurboIntegrator process to update the dimension.

The audit team should review this procedure and provide feedback:

Serviceability Level:

Low


The generation and loading of external data should be an automated, scheduled process to avoid the manual dependency to initiate and process, and to avoid administrative errors. An automated process will eliminate errors in file formatting, naming, and placement.
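The verification and placement steps of the dimension load procedure above could be scripted rather than performed by hand. A minimal Python sketch follows; the file name and folder come from the procedure itself, while the function name and the specific checks are illustrative assumptions:

```python
import csv
import shutil
from pathlib import Path

# From the procedure above; adjust to the actual environment.
EXPECTED_NAME = "Dim Internal Order.csv"
METADATA_DIR = Path("MetaData")

def stage_metadata_file(source: Path,
                        expected_name: str = EXPECTED_NAME,
                        target_dir: Path = METADATA_DIR) -> Path:
    """Verify an extracted dimension file and place it in the load folder.

    Raises ValueError on a wrong name, an unreadable CSV, or an empty
    file, so a scheduler can alert an administrator instead of loading
    bad data. (Hypothetical helper -- not part of the article's process.)
    """
    if source.name != expected_name:
        raise ValueError(f"unexpected file name: {source.name!r}")
    with source.open(newline="") as f:
        rows = list(csv.reader(f))   # will raise if the file is unreadable
    if not rows:
        raise ValueError("metadata file is empty")
    target_dir.mkdir(exist_ok=True)
    destination = target_dir / source.name
    shutil.copy2(source, destination)
    return destination
```

A scheduler (Windows Task Scheduler, cron, or a TI chore wrapper) could call this after the source-system extract, removing steps 2 and 3 from the administrator's plate entirely.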

Administrative Maintenance

Administrative Maintenance procedures include the management of governance mechanisms within an application to keep the application synchronized with appropriate business processes. These procedures should:

  • Be straightforward and easily understandable
  • Be well documented
  • Be easily reversible
  • Have minimal impact on system availability while the procedure runs

The following might be an example of a generic administrative maintenance procedure that would be audited:

Update Input Cubes with Actuals and Open the Forecast

Actual sales activity must be pushed to the Input cubes so users can begin forecasting. The administrator might:

  1. Set run parameters such as the last actuals year and month, and then run a TurboIntegrator process. Once the process completes, the administrator informs the users that they can now update their forecasts.

The audit team would provide feedback:

Serviceability Level:

Medium


This is a straightforward procedure; however, it is recommended that a reversal or “back out” procedure be defined. In addition, system impact should be calculated for the procedure for both current and future system states. Consider automating the update process so that users are alerted with an email or perhaps a visual cue that the forecast is now “input enabled”.
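The automation recommended above can be sketched as a thin wrapper around the process run. In this hedged sketch, `run_process` and `notify` are stand-ins for the real TurboIntegrator call and the email or visual-cue mechanism, and the process and parameter names are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ForecastOpener:
    """Run the actuals push and notify users of the outcome.

    Hypothetical wrapper: run_process would call the real TI process
    (e.g. via a scheduler or the TM1 API); notify would send the email
    or flip the visual cue.
    """
    run_process: Callable[[str, Dict], bool]   # returns True on success
    notify: Callable[[str], None]
    log: List[str] = field(default_factory=list)

    def open_forecast(self, last_actuals_year: int, last_actuals_month: int) -> bool:
        params = {"pYear": last_actuals_year, "pMonth": last_actuals_month}
        ok = self.run_process("Push Actuals To Input Cubes", params)  # name assumed
        if ok:
            self.notify("Forecast is now input enabled.")
            self.log.append("opened")
        else:
            self.notify("Actuals push failed; administrator intervention required.")
            self.log.append("failed")
        return ok
```

Either way the administrator no longer has to remember to tell users the forecast is open: the notification is a side effect of a successful run, and a failed run raises an alert instead.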

Setting Assumptions

Setting Assumptions refers to setting parameters that control how the system and/or application users should interpret information within the system. An example is the following:

Updating the “Last Actuals” Period

In this simple example, the application administrator follows a three-step process:

  1. Open file Update Last Actuals Period on the TM1 server (Applications -> Admin).
  2. Verify that the dropdown has the correct new Last Actuals Period.
  3. Click button Run to update the Last Actuals Period process.

A serviceability audit might respond with the following feedback:

Serviceability Level:

Low


Consider automating this step as part of the actuals load process.
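One way to fold this step into the actuals load, sketched under the assumption (hypothetical) that the actuals extract carries an ISO-style `YYYY-MM` `Period` column, is to derive the Last Actuals period from the data itself rather than from a manually selected dropdown:

```python
import csv
from io import StringIO

def derive_last_actuals_period(actuals_csv: str) -> str:
    """Return the latest period present in an actuals extract.

    Assumes periods are 'YYYY-MM' strings in a 'Period' column, so the
    lexicographic maximum is also the chronological maximum. Column and
    format are illustrative assumptions, not the article's spec.
    """
    reader = csv.DictReader(StringIO(actuals_csv))
    periods = {row["Period"] for row in reader}
    if not periods:
        raise ValueError("no actuals rows; cannot set Last Actuals period")
    return max(periods)
```

The load process would call this after a successful actuals load and write the result to the assumption parameter, eliminating both the manual step and the risk of the dropdown being out of sync with the data.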


Validation

Validation procedures are tasks that verify the authenticity or accuracy of the information within an application, or that parameters, structures, or assumptions are in a sufficient state of readiness to proceed.

File verification is a typical example. When data files are being manually created and loaded, file formats, names, and placements are at risk of error. Metadata files, activity (actuals) files, etc. are all at risk. A serviceability audit would respond with the following feedback:

Serviceability Level:

Low


File verification should not be required if the process of generating and loading the files is automated. The recommendation is to automate all file processing procedures to eliminate the need for manual file verifications.
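Where files must still be accepted from outside, the layout check itself can be automated as part of the load. A minimal sketch, with the expected header an assumption standing in for the real source-system layout:

```python
import csv
from pathlib import Path
from typing import List

def verify_csv_layout(path: Path, expected_header: List[str]) -> List[str]:
    """Return a list of problems found; an empty list means the file passes.

    Checks only header and field counts -- a stand-in for whatever layout
    rules the enterprise source system actually defines.
    """
    with path.open(newline="") as f:
        rows = list(csv.reader(f))
    if not rows:
        return ["file is empty"]
    problems: List[str] = []
    if rows[0] != expected_header:
        problems.append(f"header mismatch: got {rows[0]!r}")
    for line_no, row in enumerate(rows[1:], start=2):
        if len(row) != len(expected_header):
            problems.append(
                f"line {line_no}: expected {len(expected_header)} fields, got {len(row)}")
    return problems
```

Run as a pre-load checkpoint, a non-empty result would block the load and alert the administrator, rather than leaving a malformed file to be discovered after the fact.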


Even a soundly constructed application may have low serviceability. Some typical actionable items identified by a serviceability audit may be:

  • Reduce the level of administrative intervention. Most of the application service requirements should be “lights out”, automated processing.
  • Automate all data processes (i.e. file creations, placement, loads and updates to TM1).
  • All automated processes should include failure check points that automatically alert the system administrators of intervention needs.
  • Create a sequence and dependency diagram. Most of the procedures reviewed during the audit are interconnected (Actuals loading => Actuals (cube to cube) pushes => Forecast initiation => variance reporting, etc.) and may be able to be linked into single scheduled processes. In addition, this document will clarify the general operations of the system as it relates to business processes.
  • Create an administrative dashboard showing all application statuses and parameters allowing an administrator to view and adjust each through a single point of reference. This makes it easy to quickly determine the overall health of the application or where intervention may be required before it affects the availability of the application. The use of graphical status indicators is recommended.
  • Modify reports to be “As Of” reports showing current states, rather than being dependent upon a data push.
  • Develop and maintain sufficient application user guides and run books and store them in the TM1 applications folder for easy access.
  • Develop an application performance monitoring routine.
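The "single scheduled process" and "failure check points" ideas above can be combined in one sketch: run the interconnected procedures (actuals load => cube-to-cube push => forecast initiation => variance reporting) in sequence, halting and alerting on the first failure. Step names here are taken from the list; the runner itself is an illustrative assumption:

```python
from typing import Callable, List, Tuple

def run_chain(steps: List[Tuple[str, Callable[[], bool]]],
              alert: Callable[[str], None]) -> bool:
    """Run named steps in order; stop and alert at the first checkpoint failure.

    Each step returns True on success. alert stands in for whatever
    notification mechanism (email, dashboard indicator) is in place.
    """
    for name, step in steps:
        try:
            ok = step()
        except Exception as exc:  # a crash is also a checkpoint failure
            alert(f"{name} raised {exc!r}; intervention required")
            return False
        if not ok:
            alert(f"{name} reported failure; intervention required")
            return False
    return True
```

With the dependencies made explicit in one place, the sequence-and-dependency diagram recommended above falls out of the step list itself, and an administrator is paged only when a checkpoint actually trips.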
Jim Miller

Mr. Miller is an IBM certified and accomplished Senior Project Leader and Application/System Architect-Developer with over 30 years of extensive applications and system design and development experience. His current role is National FPM Practice Leader. His experience includes BI, Web architecture & design, systems analysis, GUI design and testing, Database modeling and systems analysis, design, and development of Client/Server, Web and Mainframe applications and systems utilizing: Applix TM1 (including TM1 rules, TI, TM1Web and Planning Manager), dynaSight - ArcPlan, ASP, DHTML, XML, IIS, MS Visual Basic and VBA, Visual Studio, PERL, Websuite, MS SQL Server, ORACLE, SYBASE SQL Server, etc. His Responsibilities have included all aspects of Windows and SQL solution development and design including: analysis; GUI (and Web site) design; data modeling; table, screen/form and script development; SQL (and remote stored procedures and triggers) development and testing; test preparation and management and training of programming staff. Other experience includes development of ETL infrastructure such as data transfer automation between mainframe (DB2, Lawson, Great Plains, etc.) systems and client/server SQL server and Web based applications and integration of enterprise applications and data sources. In addition, Mr. Miller has acted as Internet Applications Development Manager responsible for the design, development, QA and delivery of multiple Web Sites including online trading applications, warehouse process control and scheduling systems and administrative and control applications. Mr. Miller also was responsible for the design, development and administration of a Web based financial reporting system for a 450 million dollar organization, reporting directly to the CFO and his executive team. Mr. 
Miller has also been responsible for managing and directing multiple resources in various management roles including project and team leader, lead developer and applications development director. Specialties Include: Cognos/TM1 Design and Development, Cognos Planning, IBM SPSS and Modeler, OLAP, Visual Basic, SQL Server, Forecasting and Planning; International Application Development, Business Intelligence, Project Development. IBM Certified Developer - Cognos TM1 (perfect score 100% on exam) IBM Certified Business Analyst - Cognos TM1
