

Making The Oracle Pivot Function Work For You


Ah, the never-ending task of transposing rows into columns and columns into rows in an Oracle table! There was always the fairly standard MAX(DECODE()) workaround for this, until Oracle came up with its elegant standard PIVOT and UNPIVOT functions in 11g. Recently, I had a close encounter with these pivot functions and learned a few things.
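
For reference, here is a minimal sketch of both approaches against a hypothetical ORDER_ATTRS table with columns ORDER_ID, ATTR_NAME, and ATTR_VALUE (these names are purely illustrative, not from the project described below):

  -- Pre-11g style: conditional aggregation with MAX(DECODE(...))
  SELECT order_id,
         MAX(DECODE(attr_name, 'STATUS', attr_value)) AS status,
         MAX(DECODE(attr_name, 'REGION', attr_value)) AS region
  FROM   order_attrs
  GROUP  BY order_id;

  -- 11g and later: the PIVOT clause expresses the same transposition declaratively
  SELECT *
  FROM   (SELECT order_id, attr_name, attr_value FROM order_attrs)
  PIVOT  (MAX(attr_value) FOR attr_name IN ('STATUS' AS status, 'REGION' AS region));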

The pivot function works beautifully, as long as you know which columns you want to transpose upfront and are dealing with a manageable size of raw data. Unfortunately, neither condition was true for me.

In this particular case, the job involved retrieving raw audit data in XML format from a transactional system using a web service call, and then transposing that data to re-create the transactional system’s data tables locally. The list of tables and their structures was not fixed and had to be read upfront using a separate metadata web service call.

Since my use case involved transposing a data structure that was not fixed upfront, I didn’t know which columns to list in the pivot clause. However, building the pivot query string dynamically made this pretty easy to solve.
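
To illustrate the idea only (the table, column, and staging-table names here are hypothetical, and LISTAGG assumes 11gR2 or later), the dynamic build can look roughly like this in PL/SQL:

  DECLARE
    l_in_list VARCHAR2(4000);
    l_sql     VARCHAR2(32767);
  BEGIN
    -- Build the pivot IN list from whatever column metadata is available
    -- (assumes the attribute names are usable as plain column aliases)
    SELECT LISTAGG('''' || attr_name || ''' AS ' || attr_name, ', ')
           WITHIN GROUP (ORDER BY attr_name)
    INTO   l_in_list
    FROM   (SELECT DISTINCT attr_name FROM order_attrs);

    l_sql := 'SELECT * FROM (SELECT order_id, attr_name, attr_value FROM order_attrs) '
          || 'PIVOT (MAX(attr_value) FOR attr_name IN (' || l_in_list || '))';

    -- Run the generated statement, e.g. to populate a local staging table
    EXECUTE IMMEDIATE 'INSERT INTO orders_pivoted ' || l_sql;
  END;
  /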

Scalability was a major challenge and a surprise, too! As the amount of source data grew, the pivot queries took longer and longer, sometimes running for hours. Indexes didn’t help, and I had to resort to some creative problem-solving.

The trace was dominated by very long I/O wait times, since the query was trying to process a very large number of rows at the same time. Here’s how I went about reducing the processing time:

  1. Set a configurable parameter at the application level, driven by the overall database memory available to process queries. (DBAs have to set this, depending on their database.) Let’s say this was set to x bytes.
  2. Calculated the row byte length of the target table I was transposing to. Let’s say this was y bytes.
  3. Determined the number of rows to process in one go as x/y.
  4. Ran the pivot query in a loop until all the rows were processed.

These few steps enabled me to run multiple small, manageable chunks of pivot queries rather than a single enormous one, and ultimately brought down the overall processing time from hours to just seconds.
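
A rough sketch of that loop follows, with hypothetical names throughout: the 100 MB value stands in for the DBA-set parameter x, the row length is approximated from the data dictionary, and slicing the source by key range is just one way to cap the rows handled per pass.

  DECLARE
    l_chunk_bytes NUMBER := 100 * 1024 * 1024;  -- x: configurable, set by the DBA
    l_row_bytes   NUMBER;                       -- y: approximate byte length of one target row
    l_chunk_rows  PLS_INTEGER;                  -- rows to process per pass = x / y
    l_min_id      NUMBER;
    l_max_id      NUMBER;
  BEGIN
    -- Approximate the target row length from the data dictionary
    SELECT SUM(data_length)
    INTO   l_row_bytes
    FROM   user_tab_columns
    WHERE  table_name = 'ORDERS_PIVOTED';

    l_chunk_rows := FLOOR(l_chunk_bytes / l_row_bytes);

    SELECT MIN(order_id), MAX(order_id) INTO l_min_id, l_max_id FROM order_attrs;

    -- Pivot one manageable slice of source keys at a time
    FOR i IN 0 .. CEIL((l_max_id - l_min_id + 1) / l_chunk_rows) - 1 LOOP
      INSERT INTO orders_pivoted
      SELECT *
      FROM   (SELECT order_id, attr_name, attr_value
              FROM   order_attrs
              WHERE  order_id BETWEEN l_min_id + i * l_chunk_rows
                                  AND l_min_id + (i + 1) * l_chunk_rows - 1)
      PIVOT  (MAX(attr_value) FOR attr_name IN ('STATUS' AS status, 'REGION' AS region));
      COMMIT;
    END LOOP;
  END;
  /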

To learn how our life sciences practice can help you tackle your database challenges, feel free to send us a note.


Rohit Sharma
