On operational projects that involve heavy data processing on a daily basis, there is a need to monitor DB performance. Over time, the workload grows, causing potential issues. While there are best practices for handling the processing by adopting DBA strategies (indexing, partitioning, collecting stats, reorganizing tables/indexes, purging data, allocating bandwidth separately for ETL/DWH users, peak-time optimization, effective developer query rewrites, etc.), it is necessary to stay aware of DB performance and monitor it consistently for further action.
If admin access is not available to validate performance on Azure, building automations can help monitor the space and take the necessary steps before the DB runs into performance issues or failures.
For DB performance monitoring, an IICS (Informatica) job can be created with a data task that runs a query against the DB (SQL Server) metadata tables to check performance, and emails can be triggered once free space drops below a threshold percentage (e.g., 20%).
The IICS mapping design is shown below (scheduled to run hourly). Email alerts contain the metric percentage values.
Note: Email alerts are triggered only if the threshold limit is exceeded.
IICS ETL Design:
IICS ETL Code Details:
Query to check whether used space exceeds 80%: if used space exceeds the threshold limit (the user can set this to a specific value, such as 80%), an email alert is sent.
If Azure_SQL_Server_Performance_Info.dat has data (data is populated when CPU/IO processing exceeds 80%), the Decision task is activated and an email alert is triggered.
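For reference, a minimal sketch of the kind of metadata query the data task could run (the 80% threshold and column aliases are illustrative; the actual query used in the mapping may differ):

-- CPU / data IO utilization over roughly the last hour (Azure SQL Database)
SELECT MAX(avg_cpu_percent)     AS max_cpu_percent,
       MAX(avg_data_io_percent) AS max_data_io_percent
FROM   sys.dm_db_resource_stats
HAVING MAX(avg_cpu_percent) > 80
    OR MAX(avg_data_io_percent) > 80;

-- Allocated vs. used space within the database files
SELECT SUM(size) * 8 / 1024.0                                            AS allocated_mb,
       SUM(CAST(FILEPROPERTY(name, 'SpaceUsed') AS BIGINT)) * 8 / 1024.0 AS used_mb
FROM   sys.database_files;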
Email Alert:
During one of my projects, I had the opportunity to work with MS SQL Server, where I gained valuable knowledge working with stored procedures. A stored procedure is prepared SQL code that you can save so it can be reused repeatedly. I learned how to optimize database operations by encapsulating complex SQL logic into reusable procedures, enhancing performance and maintainability. Additionally, I improved my skills in error handling and debugging, ensuring the reliability of critical database tasks.
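As a minimal sketch of the idea (the table, column, and procedure names are hypothetical, not from an actual project):

-- Encapsulate a reusable query in a stored procedure
CREATE PROCEDURE dbo.usp_GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderId, OrderDate, TotalAmount
    FROM dbo.Orders            -- hypothetical table
    WHERE CustomerId = @CustomerId;
END;
GO

-- Reuse it anywhere with a single call
EXEC dbo.usp_GetOrdersByCustomer @CustomerId = 42;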
This experience highlighted the importance of well-structured and documented stored procedures for efficient database management, and it led me to write this blog and share my knowledge with people who are interested in learning more about them.
Overall, it was a valuable learning journey that greatly contributed to my proficiency in database development.
The SQL Server to Snowflake migration involves transferring a database from Microsoft SQL Server to Snowflake, a cloud-based data warehousing platform. This process requires converting SQL Server-specific syntax and features to their Snowflake equivalents.
For instance, SQL Server’s T-SQL queries might need to be adjusted to Snowflake’s SQL dialect, and functions like GETDATE() might be replaced with CURRENT_TIMESTAMP() in Snowflake.
Below are some of the functions that were used most frequently during the conversion:
Below are the differences between SQL Server and Snowflake MERGE statements.
In SQL Server, there are three options available in the MERGE command, as shown below. For old records that need an update, we can update the records using the “WHEN MATCHED” clause. For new records, we can insert them into the target table using the “WHEN NOT MATCHED BY TARGET” clause. Most importantly, for records that are available in the target but not in the source, we can choose to either update them as invalid or delete them from the target using the “WHEN NOT MATCHED BY SOURCE” clause.
MERGE stg.dimShipping AS target
USING tgt.ShippingCodes AS source
    ON target.shippingCode = source.ShippingCode
WHEN MATCHED AND target.ShippingPrice < source.ShippingPrice THEN
    UPDATE SET
        shippingDescription = source.ShippingCodeDesc,
        ShippingPrice = source.ShippingPrice
WHEN NOT MATCHED BY TARGET THEN
    INSERT (shippingCode, shippingDescription, ShippingPrice)
    VALUES (source.ShippingCode, source.ShippingCodeDesc, source.ShippingPrice)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
In Snowflake, there are three options available in the MERGE command, as shown below. For old records that need an update, we can update the records using the “WHEN MATCHED” clause. For new records, we can insert them into the target table using the “WHEN NOT MATCHED” clause. And for the records that must be deleted, we can use the same “WHEN MATCHED” clause along with the DELETE statement.
For records that are available in the target but not in the source, we can choose to update them (for example, marking them as invalid) prior to the MERGE statement, as Snowflake has no equivalent of the “WHEN NOT MATCHED BY SOURCE” clause.
MERGE INTO stg.dimShipping AS target
USING tgt.ShippingCodes AS source
    ON target.shippingCode = source.ShippingCode
WHEN MATCHED AND target.ShippingPrice < source.ShippingPrice THEN
    UPDATE SET
        shippingDescription = source.ShippingCodeDesc,
        ShippingPrice = source.ShippingPrice
WHEN NOT MATCHED THEN
    INSERT (shippingCode, shippingDescription, ShippingPrice)
    VALUES (source.ShippingCode, source.ShippingCodeDesc, source.ShippingPrice)
WHEN MATCHED THEN
    DELETE;
In SQL Server, the ISNULL function is commonly used to replace a NULL value with a specified alternative value. In Snowflake, the equivalent function is IFNULL. Let’s consider an example to illustrate the mapping:
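A minimal sketch of the mapping (my_table is a hypothetical table name):

-- SQL Server
SELECT ISNULL(column1, 'N/A') FROM my_table;
-- Snowflake
SELECT IFNULL(column1, 'N/A') FROM my_table;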
In this example, if column1 is NULL, it will be replaced with the string ‘N/A’ in both SQL Server and Snowflake.
SQL Server’s ISDATE function is used to check if a value is a valid date. In Snowflake, you can achieve the same functionality using the TRY_TO_DATE function. Let’s look at an example:
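A sketch, assuming date_column is a string column in a hypothetical table my_table:

-- SQL Server: returns 1 for a valid date, 0 otherwise
SELECT ISDATE(date_column) FROM my_table;
-- Snowflake: returns a DATE if the value parses, otherwise NULL
SELECT TRY_TO_DATE(date_column) FROM my_table;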
In this example, TRY_TO_DATE in Snowflake will return a non-null value if date_column is a valid date, otherwise, it will return NULL.
Both SQL Server and Snowflake support the CAST function to convert data types. However, it’s important to note that the syntax and available options may vary. Let’s consider an example:
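A sketch (my_table is hypothetical; this form works on both platforms):

SELECT CAST(column1 AS INT) FROM my_table;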
In this example, CAST is used to convert the data type of column1 to an integer.
SQL Server’s IIF function allows for inline conditional expressions. In Snowflake, you can use the IFF function to achieve the same functionality. Let’s see an example:
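A sketch against the hypothetical my_table:

-- SQL Server
SELECT IIF(column1 > 10, 'Greater', 'Smaller or Equal') FROM my_table;
-- Snowflake
SELECT IFF(column1 > 10, 'Greater', 'Smaller or Equal') FROM my_table;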
In this example, IFF in Snowflake will return ‘Greater’ if column1 is greater than 10, otherwise, it will return ‘Smaller or Equal’.
SQL Server’s SYSDATETIMEOFFSET() function returns the current date and time, including the time zone offset. In Snowflake, the equivalent function is CURRENT_TIMESTAMP(). Let’s see an example:
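A minimal sketch:

-- SQL Server: current date/time with time zone offset
SELECT SYSDATETIMEOFFSET();
-- Snowflake: current session timestamp
SELECT CURRENT_TIMESTAMP();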
In this example, both SYSDATETIMEOFFSET() in SQL Server and CURRENT_TIMESTAMP() in Snowflake will return the current date and time.
In SQL Server, the SYSTEM_USER function returns the login name of the current user. In Snowflake, you can achieve the same result using the CURRENT_USER function. Here’s an example:
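A minimal sketch:

-- SQL Server
SELECT SYSTEM_USER;
-- Snowflake
SELECT CURRENT_USER();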
In SQL Server, the STUFF function is used to delete a specified length of characters from a string and then insert another string at a specified starting position. In Snowflake, you can achieve a similar result using the INSERT function. Let’s look at an example:
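A sketch of the mapping (the literal values are illustrative only):

-- SQL Server: replace 3 characters starting at position 2
SELECT STUFF('abcdef', 2, 3, 'XYZ');    -- returns 'aXYZef'
-- Snowflake
SELECT INSERT('abcdef', 2, 3, 'XYZ');   -- returns 'aXYZef'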
In SQL Server, the ISNUMERIC function is used to check if a value can be converted to a numeric data type. In Snowflake, you can achieve a similar result using the TRY_TO_NUMERIC function. Here’s an example:
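A sketch, assuming amount_text is a string column in a hypothetical my_table:

-- SQL Server: 1 if the value can be converted to a numeric type, 0 otherwise
SELECT ISNUMERIC(amount_text) FROM my_table;
-- Snowflake: the numeric value if the conversion succeeds, otherwise NULL
SELECT TRY_TO_NUMERIC(amount_text) FROM my_table;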
In SQL Server, this code assigns a string value to the body variable. In Snowflake, you can achieve the same result using JavaScript within a Snowflake stored procedure. Here’s an example:
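A minimal sketch (the procedure name and message text are illustrative, not from the original code):

-- SQL Server (T-SQL): assign a string to a variable
DECLARE @body NVARCHAR(MAX) = 'Load completed successfully.';

-- Snowflake: the same assignment inside a JavaScript stored procedure
CREATE OR REPLACE PROCEDURE send_notification()
RETURNS STRING
LANGUAGE JAVASCRIPT
AS
$$
  var body = 'Load completed successfully.';
  return body;
$$;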
In SQL Server, this code assigns the current date in the YYYYMMDD format to the nvarchar variable. In Snowflake, you can achieve the same result using the TO_CHAR function to format the current date. Here’s an example:
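A sketch (the variable names are illustrative):

-- SQL Server: current date formatted as YYYYMMDD (style 112) in an NVARCHAR variable
DECLARE @run_date NVARCHAR(8) = CONVERT(NVARCHAR(8), GETDATE(), 112);
-- Snowflake: format the current date with TO_CHAR
SET run_date = TO_CHAR(CURRENT_DATE, 'YYYYMMDD');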
In SQL Server, this code assigns a string value to the @subject variable. In Snowflake, you can achieve the same result by concatenating the string values using the || operator. Here’s an example:
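A sketch (the subject text is illustrative):

-- SQL Server: '+' concatenation
DECLARE @subject NVARCHAR(200) = 'Daily load status: ' + CONVERT(NVARCHAR(8), GETDATE(), 112);
-- Snowflake: '||' concatenation
SET subject = 'Daily load status: ' || TO_CHAR(CURRENT_DATE, 'YYYYMMDD');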
To concatenate the year, month, and day values from a date in SQL Server, you can use arithmetic operations. In Snowflake, you can achieve the same result by casting the date to a TIMESTAMP data type and applying similar arithmetic operations. Here’s an example:
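One way this pattern might look (orders and order_date are hypothetical names):

-- Build a YYYYMMDD-style number from date parts; the same expression works in SQL Server and Snowflake
SELECT YEAR(order_date) * 10000 + MONTH(order_date) * 100 + DAY(order_date)
FROM orders;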
SQL Server’s ISDATE function is used to check if a value is a valid date. In Snowflake, you can achieve a similar result using the TRY_TO_DATE function. Here’s an example:
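A sketch using literal values:

-- SQL Server: returns 0 because February 30 is not a valid date
SELECT ISDATE('2024-02-30');
-- Snowflake: returns NULL for the same reason
SELECT TRY_TO_DATE('2024-02-30', 'YYYY-MM-DD');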
Similar to the previous example, you can use the IS_DATE function in Snowflake to check if a value is a valid date. Here’s an example:
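A sketch; note that Snowflake’s IS_DATE operates on VARIANT values:

SELECT IS_DATE(TO_VARIANT('2024-01-15'::DATE));   -- TRUE: the variant holds a date
SELECT IS_DATE(TO_VARIANT('not a date'));         -- FALSE: the variant holds a string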
In SQL Server, the SUBSTRING function is used to extract a substring from a string based on a starting position and a length. In Snowflake, you can achieve the same result using the SUBSTRING function. Here’s an example:
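A sketch using a literal string; the call is identical on both platforms:

SELECT SUBSTRING('Snowflake', 1, 4);   -- returns 'Snow'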
In SQL Server, the DATETIME datatype is used to represent both date and time values. In Snowflake, the equivalent datatype is TIMESTAMP, which also represents both date and time values. Here’s an example:
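A sketch (load_audit is a hypothetical table):

-- SQL Server
CREATE TABLE load_audit (load_ts DATETIME);
-- Snowflake (TIMESTAMP defaults to TIMESTAMP_NTZ)
CREATE TABLE load_audit (load_ts TIMESTAMP);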
In SQL Server, this code concatenates the year, month, and day values from a date and then casts the result to an integer. In Snowflake, you can achieve the same result by using the TO_CHAR function to format the date and then casting it to an integer. Here’s an example:
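One way this pattern might look (orders and order_date are hypothetical):

-- SQL Server: concatenate the zero-padded date parts, then cast
SELECT CAST(
         CAST(YEAR(order_date) AS VARCHAR(4)) +
         RIGHT('0' + CAST(MONTH(order_date) AS VARCHAR(2)), 2) +
         RIGHT('0' + CAST(DAY(order_date) AS VARCHAR(2)), 2)
       AS INT)
FROM orders;
-- Snowflake: format with TO_CHAR, then cast
SELECT TO_CHAR(order_date, 'YYYYMMDD')::INT FROM orders;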
In SQL Server, this code converts a date to a specific format (YYYYMMDD) by casting it to a DATE datatype and then converting it to a VARCHAR datatype. In Snowflake, you can achieve the same result by using the TO_CHAR function with a format specifier. Here’s an example:
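A sketch (orders and order_date are hypothetical):

-- SQL Server: style 112 produces YYYYMMDD
SELECT CONVERT(VARCHAR(8), CAST(order_date AS DATE), 112) FROM orders;
-- Snowflake
SELECT TO_CHAR(order_date::DATE, 'YYYYMMDD') FROM orders;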
In SQL Server, this code is used to manipulate strings by reversing the string, deleting the first character, and then reversing it again. In Snowflake, you can achieve the same result using the REVERSE and INSERT functions. Here’s an example:
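A sketch that strips the trailing character of a string (the input value is illustrative):

-- SQL Server: reverse, remove the first character, reverse back
SELECT REVERSE(STUFF(REVERSE('ABC123,'), 1, 1, ''));    -- returns 'ABC123'
-- Snowflake: INSERT with an empty replacement deletes the character
SELECT REVERSE(INSERT(REVERSE('ABC123,'), 1, 1, ''));   -- returns 'ABC123'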
In SQL Server, this code concatenates the @cols variable with a string that includes the COLUMN_NAME value wrapped in square brackets. If COLUMN_NAME is NULL, an empty string is used. In Snowflake, you can achieve the same result using the || operator for string concatenation. Here’s an example:
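A simplified sketch using variables (the original code read COLUMN_NAME from metadata; quoted identifiers use double quotes in Snowflake rather than square brackets):

-- SQL Server
DECLARE @cols NVARCHAR(MAX) = '', @column_name SYSNAME = 'order_id';
SET @cols = @cols + ISNULL('[' + @column_name + ']', '');
SELECT @cols;
-- Snowflake
SET (cols, column_name) = ('', 'order_id');
SET cols = $cols || IFNULL('"' || $column_name || '"', '');
SELECT $cols;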
In SQL Server, this code converts a string representation of a date ('01-' + MonthYear) to a date datatype, retrieves the last day of the month (EOMONTH), and then converts it to a specific format (style 112, i.e. YYYYMMDD). In Snowflake, you can achieve the same result using the TO_CHAR function with appropriate date functions. Here’s an example:
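A sketch, assuming MonthYear holds a value such as 'Jan-2024' (the SQL Server cast of 'dd-Mon-yyyy' strings depends on the session’s language settings):

-- SQL Server
DECLARE @MonthYear VARCHAR(10) = 'Jan-2024';
SELECT CONVERT(VARCHAR(8), EOMONTH(CAST('01-' + @MonthYear AS DATE)), 112);   -- '20240131'
-- Snowflake
SET MonthYear = 'Jan-2024';
SELECT TO_CHAR(LAST_DAY(TO_DATE('01-' || $MonthYear, 'DD-MON-YYYY')), 'YYYYMMDD');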
In SQL Server, this code casts the @loadcontrolid variable to a VARCHAR datatype. In Snowflake, you can achieve the same result by using the TO_VARCHAR function. Here’s an example:
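A sketch with an illustrative value:

-- SQL Server
DECLARE @loadcontrolid INT = 12345;
SELECT CAST(@loadcontrolid AS VARCHAR(20));
-- Snowflake
SET loadcontrolid = 12345;
SELECT TO_VARCHAR($loadcontrolid);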
In SQL Server, this code converts a day_id value to a specific date format, retrieves the last day of the month using the EOMONTH function, and then converts it to an INT datatype. In Snowflake, you can achieve the same result using the TO_CHAR function with appropriate date functions and casting to an INT. Here’s an example:
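A sketch, assuming day_id is an integer such as 20240115:

-- SQL Server
DECLARE @day_id INT = 20240115;
SELECT CAST(CONVERT(VARCHAR(8), EOMONTH(CONVERT(DATE, CAST(@day_id AS VARCHAR(8)), 112)), 112) AS INT);   -- 20240131
-- Snowflake
SET day_id = 20240115;
SELECT TO_CHAR(LAST_DAY(TO_DATE(TO_CHAR($day_id), 'YYYYMMDD')), 'YYYYMMDD')::INT;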
In SQL Server, the GETDATE() function returns the current date and time. In Snowflake, you can achieve the same result using the CURRENT_TIMESTAMP() function. Here’s an example:
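A minimal sketch:

-- SQL Server
SELECT GETDATE();
-- Snowflake
SELECT CURRENT_TIMESTAMP();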
In SQL Server, this code converts a day_id value to a specific date format, then converts it to a DATETIME datatype, and finally converts it to an INT datatype. In Snowflake, you can achieve the same result using the TO_CHAR function with appropriate date functions and casting to an INT. Here’s an example:
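A sketch, again assuming day_id is an integer such as 20240115:

-- SQL Server
DECLARE @day_id INT = 20240115;
SELECT CAST(CONVERT(VARCHAR(8), CAST(CAST(@day_id AS VARCHAR(8)) AS DATETIME), 112) AS INT);
-- Snowflake
SET day_id = 20240115;
SELECT TO_CHAR(TO_TIMESTAMP(TO_CHAR($day_id), 'YYYYMMDD'), 'YYYYMMDD')::INT;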
In SQL Server, the datetimeoffset datatype is used to store a date and time value with a time zone offset. In Snowflake, the equivalent used here is TIMESTAMP_NTZ, which stores the timestamp without time zone information (TIMESTAMP_TZ can be used if the offset must be preserved). Here’s an example:
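A sketch (event_log is a hypothetical table):

-- SQL Server: keeps the time zone offset
CREATE TABLE event_log (event_ts DATETIMEOFFSET);
-- Snowflake: TIMESTAMP_NTZ stores no offset; TIMESTAMP_TZ would preserve it
CREATE TABLE event_log (event_ts TIMESTAMP_NTZ);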
In SQL Server, the nvarchar datatype is used to store Unicode character data. In Snowflake, the equivalent datatype is VARCHAR, which also supports Unicode character data. Here’s an example:
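A sketch (customers is a hypothetical table):

-- SQL Server: Unicode string column
CREATE TABLE customers (customer_name NVARCHAR(100));
-- Snowflake: VARCHAR stores Unicode (UTF-8) by default
CREATE TABLE customers (customer_name VARCHAR(100));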
In SQL Server, the db_name() function returns the name of the current database. In Snowflake, you can achieve the same result using the CURRENT_DATABASE() function. Here’s an example:
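A minimal sketch:

-- SQL Server
SELECT DB_NAME();
-- Snowflake
SELECT CURRENT_DATABASE();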
In SQL Server, GETDATE is a system function used to retrieve the current date and time. In Snowflake, you can achieve the same result using the CURRENT_TIMESTAMP function. Here’s an example:
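A sketch showing assignment to a variable (the variable names are illustrative):

-- SQL Server
DECLARE @now DATETIME = GETDATE();
-- Snowflake
SET now_ts = CURRENT_TIMESTAMP();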
In SQL Server, the SUSER_NAME function returns the login name of the current user. In Snowflake, you can achieve the same result using the CURRENT_USER function. Here’s an example:
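A minimal sketch:

-- SQL Server
SELECT SUSER_NAME();
-- Snowflake
SELECT CURRENT_USER();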
In conclusion, the migration journey from SQL Server to Snowflake constitutes a significant step in modernizing data management strategies. By seamlessly transferring databases to the cloud-based Snowflake platform, organizations can harness enhanced scalability, flexibility, and analytical capabilities. However, this transition necessitates meticulous attention to detail, particularly in the realm of syntax and functionality conversions. Adapting SQL Server-specific elements to align with Snowflake’s SQL dialect, as demonstrated through examples such as query adjustments and function substitutions, underscores the importance of precision in ensuring a seamless and optimized migration process.
Please note that this blog post provides a general guide, and additional considerations may be required depending on your specific migration scenario. Consult Snowflake’s documentation for comprehensive information on datatypes and their usage.
We hope that this blog post has provided you with valuable insights into SQL Server to Snowflake migration. Happy migrating!
In D&A projects, building efficient SQL queries is critical to making extraction and load batch cycles complete faster and meet the desired SLAs. The observations below describe approaches that help ensure SQL queries follow best practices and facilitate performance improvements.
Before subjecting a SQL query to performance improvements, the steps below should be adopted:
After ensuring the above prerequisites are taken care of and possible bottlenecks are identified, tuning practices can be applied to the SQL query for performance improvements.
Basic Guidelines are listed below:
On a high level, below are the inferences:
This wizard is helpful for generating scripts and publishing them. To run the wizard, right-click the database and navigate to Tasks -> Generate Scripts.
The wizard will pop up and you can generate scripts either for all database objects or for a selected number of database objects. This could be used later to create an instance of the database or for publishing it. Click the Next button to proceed.
In this step, choose the objects you want to script. You can opt for the entire database and all of its objects, or select specific objects to script. Click Next to proceed to the scripting options.
Choose where to save the scripts: either in a specific location or to a web service. By clicking Advanced, you can choose advanced options for the script.
The Script Drop and Create option lets the user DROP the object and RECREATE it. You can choose to CREATE the object alone or DROP the object alone using the Script CREATE and Script DROP options, respectively.
The Script USE DATABASE option lets the user use that particular database.
The Types of data to script option allows the user to generate a script with SCHEMA only, DATA only, or both SCHEMA and DATA. DATA only scripts just the data from all the tables. SCHEMA only scripts just the data structure and the datatypes of its tables. The SCHEMA and DATA option captures both the data structure and all of its data.
By clicking the Next button, the wizard allows the user to review their selections. The target will be a single SQL script file containing the generated script.
This window displays the action list, such as preparing the script, and its result. You can save this report for logging purposes or to determine which part has failed. After clicking the Finish button, the wizard exits and the script file can be accessed.
The resulting scripts are generated in a .sql file.
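As a hypothetical excerpt, a script generated with the SCHEMA and DATA option might look roughly like this (object names and values are illustrative):

USE [SalesDb]
GO
CREATE TABLE [dbo].[Customers](
    [CustomerId]   INT NOT NULL,
    [CustomerName] NVARCHAR(100) NULL
)
GO
INSERT [dbo].[Customers] ([CustomerId], [CustomerName]) VALUES (1, N'Contoso')
GO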
MSDeploy (a.k.a. Web Deploy) is mainly known as a technology to deploy web applications, but it is much more than that. It is a platform that can be used to deploy many different applications and application components. It accomplishes this by allowing custom providers to be written. MSDeploy ships with providers that cover a wide range of deployment needs.
One common provider used is the dbDACFx provider. This is used to deploy data-tier applications (i.e. databases). The source can be an existing instance of a data-tier application or a .dacpac file, which is nothing more than a zip file that contains XML files (which is very common these days: MS Office, NuGet, etc.).
While attempting to deploy a .dacpac file, I got the following error message:
Internal Error. The database platform service with type Microsoft.Data.Tools.Schema.Sql.Sql120DatabaseSchemaProvider is not valid. You must make sure the service is loaded, or you must provide the full type name of a valid database platform service.
Pretty cryptic. The only part that made sense was the “Sql120” piece, which is the version given to SQL Server 2014. That made sense, as that was the target platform selected when the .dacpac file was created. The command line that caused the error message was something like:
msdeploy -verb:sync -source:dbDACFx="c:\myapp.dacpac" -dest:dbDACFx="Server=x;Database=y;Trusted_Connection=True"
I installed MSDeploy through the Web Platform Installer, and I made sure to select the option that included the bundled SQL support.
I verified that I was able to install the .dacpac through Management Studio on my local machine, which tells me the database server was fine, so I knew the issue had to be with the server that was hosting MSDeploy.
After many Google Bing searches, I learned that DACFx was the short name for “SQL Server Data-Tier App Framework”. Looking on the server with MSDeploy, I discovered that the SQL 2012 (aka Sql110) version was already installed. It all started coming together now: MSDeploy was happy with the dbDACFx provider since it was installed, but when it went to locate the needed version of DACFx (specified by the .dacpac file), it could not find it and errored out.
A quick search in the Web Platform Installer resulted in finding “Microsoft SQL Server Data-Tier Application Framework (DACFx) (June 2014)”. Bingo! I installed it, reran my MSDeploy command, and I got a successful deployment of the .dacpac file!
For those who would rather not run the Web Platform Installer, you can go to the Microsoft site and download it directly (please search for the appropriate version you need).
If you want to see which versions are installed, go to Programs and Features on your machine and look for “Microsoft SQL Server 20xx Data-Tier App Framework”.
One of the compelling features of Microsoft SQL Server Integration Services (SSIS) is its extensibility. Although SQL Server comes with a wide array of SSIS components (including different data sources, transformation tasks and logical flow control operations), sometimes there is a need to do something unique in your SSIS package, something that is not supported out of the box.
Luckily, this is possible and it’s not very hard to do. All you need is Visual Studio and some knowledge of the SSIS extensibility framework.
If you decide to develop your own SSIS component, there is good documentation available on MSDN, there are a few great blog posts outlining the process from end to end, and there is source code on CodePlex. In this short blog post I’m not planning to duplicate all of the above; I’m just trying to outline a few “gotchas” I went through myself while developing an SSIS component.
Azure Site Recovery now provides native support for SQL Server AlwaysOn. SQL Server Availability Groups can be added to a recovery plan along with VMs. All the capabilities of your recovery plan—including sequencing, scripting, and manual actions—can be leveraged to orchestrate the failover of a multitier application using an SQL database configured with AlwaysOn replication. Site Recovery is now available through Microsoft Operations Management Suite, the all-in-one IT management solution for Windows or Linux, across on-premises or cloud environments.
For more information, please visit the Site Recovery webpage.
Row-Level Security (RLS) for Azure SQL Database is now generally available. RLS simplifies the design and coding of security in your application. RLS enables you to implement restrictions on data row access. For example, you can ensure that workers can access only the data rows that are pertinent to their department, or restrict a customer’s data access to only the data relevant to their company.
The access restriction logic is located in the database tier rather than away from the data in another application tier. The database system applies the access restrictions every time that data access is attempted from any tier. This makes your security system more reliable and robust by reducing the surface area of your security system.
Row-level filtering of data selected from a table is enacted through a security predicate filter defined as an inline table-valued function. The function is then invoked and enforced by a security policy. The policy can restrict the rows that may be viewed (a filter predicate), but does not restrict the rows that can be inserted or updated in a table (a blocking predicate). There is no indication to the application that rows have been filtered from the result set; if all rows are filtered, then a null set will be returned.
Filter predicates are applied while reading data from the base table, and they affect all get operations: SELECT, DELETE (i.e., a user cannot delete rows that are filtered), and UPDATE (i.e., a user cannot update rows that are filtered, although it is possible to update rows in such a way that they will subsequently be filtered). Blocking predicates are not available in this version of RLS, but equivalent functionality (i.e., preventing a user from inserting or updating rows such that they will subsequently be filtered) can be implemented using check constraints or triggers.
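As a minimal sketch of the pattern (the table, schema, and predicate logic below are illustrative, not from an actual deployment):

-- Hypothetical table protected by the policy
CREATE TABLE dbo.Workers (WorkerId INT, WorkerName NVARCHAR(100), Department NVARCHAR(50));
GO
CREATE SCHEMA Security;
GO
-- Inline table-valued function acting as the filter predicate
CREATE FUNCTION Security.fn_rowFilter(@Department AS NVARCHAR(50))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_result
       WHERE @Department = USER_NAME();   -- illustrative rule: department matches the database user name
GO
-- Security policy that binds the predicate to the table
CREATE SECURITY POLICY Security.WorkerFilter
ADD FILTER PREDICATE Security.fn_rowFilter(Department) ON dbo.Workers
WITH (STATE = ON);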
This functionality will also be released with SQL Server 2016. If you have Azure though, you get the features now. Another great reason to consider the Microsoft Cloud – you don’t have to wait for a server release to get new functionality! Contact us at Perficient and one of our 28 certified Azure consultants can help plan your Azure deployment today!
Recently, Kate Tuttle, my colleague and healthcare marketing guru, wrote a post over on Perficient’s Healthcare Industry Trends blog, describing the shift from a fee-for-service based model to a value-based care model and the subsequent need for a 360-degree patient view. Many healthcare organizations are facing challenges around transforming data into meaningful information – information that outlines the population and identifies the most high-risk patients, resulting in improved management of chronic diseases and improved preventative care.
Health data has become a powerful influencer in population health management as organizations seek to analyze data and translate it into actionable, real-time insights that will lead to smarter business decisions and better patient care.
Because of the changes in the delivery model and payment reform, these organizations increasingly look to implement a centralized data warehouse that will meet the growing data and reporting needs, and provide the health system with a central data repository for clinical, financial and business data.
Kate also shared that Cadence Health, now part of Northwestern Medicine (a large Epic user) sought to leverage the native capabilities of Epic in the management of their population health initiatives and value-based care program. Cadence Health engaged us because of the work we’ve done with ProHealth Care, the first healthcare system to produce reports and data out of Epic’s Cogito data warehouse in a production environment.
By leveraging Epic’s Cogito and Healthy Planet,
Northwestern Medicine is able to track the health of their population and evaluate whether or not patients with chronic diseases are proactively getting care. They also have real-time reports that provide their physicians with a dashboard view, designed to instantly give them an overview of the performance of their patient population across all registry-based measures.
You can learn more about Northwestern Medicine’s value-based care journey in a webinar next week, on Thursday, August 27th at 1:00 PM CT.
Register below to join the live session or receive the on-demand version to hear from Rob Desautels, Senior Director of IT at Cadence Health, and Perficient healthcare experts:
One more thing… if you are an Epic user planning to attend the 2015 Epic UGM in just two weeks, we welcome you to join us for an evening event on September 3rd at the Edgewater in Madison, WI. Heidi Rozmiarek, Assistant Director of Development at UnityPoint Health, and Christine Bessler, CIO at ProHealth Care, will lead a discussion focused on how organizations are currently leveraging the data housed in Epic systems and planned initiatives to gain even further insights from their data. Register here; space is limited.
At their recent Ignite conference in Chicago, Microsoft unleashed a flood of new information about their products and services across a wide variety of functions. Business Intelligence was not left out, by any means, with announcements of exciting new cloud-based offerings such as Azure Data Warehouse and Azure Data Lake. But given all the focus on Azure and cloud lately, one has to wonder: what about good ol’ SQL Server? Well, wonder no more.
SQL Server 2016 will include a host of new features related to BI. In fact, Microsoft claims that SQL Server 2016 will be one of the largest releases in the history of the product. From hybrid architecture support to advanced analytics, the new capabilities being introduced are wide-ranging and genuinely exciting!
Providing an exhaustive list of new features and enhancements would be, well, exhausting. And the information is currently covered in good detail on the SQL Server product website. But here are a few items that caught my eye from a BI perspective.
For classic SQL Server BI components:
And on the Advanced Analytics front:
So, clearly, lots of changes and enhancements are forthcoming in SQL Server 2016. While Microsoft’s “cloud first, mobile first” initiative has left many on-premises SQL Server users feeling left out, SQL Server 2016 should bring a bright ray of hope. We should expect to see Microsoft technology developed for cloud make its way into on-premises products going forward, and SQL Server is a perfect example of that trend.
I recently blogged about my personal experiences with the first “Know it. Prove it.” challenge that ran through the month of February 2015. The “Know it. Prove it.” challenge is back! This time it’s bigger and better than ever. The new challenge is a companion to both the Build and Ignite conferences, with 11 amazing tracks for both developers and IT professionals. Also, just like the first round, this set of challenges is completely free!
Join the tour and accept a challenge today.
Whether you’re looking to learn something new or just brush up on something you’re already using, there’s definitely a challenge track for you.
Build Developer challenge
Geared towards Developers, this includes tracks focusing on: Azure, Office, Database, App Platform, Developer Tools.
Ignite IT Professional challenge
Geared towards IT Professionals, this includes tracks focusing on: Cloud, Big Data, Mobility, Security and Compliance, Unified Communications, Productivity and Collaboration.
What do the challenges consist of?
Each challenge consists of free, amazing training courses hosted on the Microsoft Virtual Academy. These are some amazing courses; it’s hard to believe they’re free. In addition to the training courses, some of the challenges include links to free eBooks from Microsoft Press.
The challenges range from approximately 15 to 30 hours of video training content. So if you take on 30 minutes to an hour a day, you’ll finish a single challenge in a month. Surely you can commit just one hour a day for a month, right?
Are you up to the challenge?
You know you are! Go on and sign up for one. When you’re done, I’ll wait for you to come back here and post a comment letting everyone know which challenge you’ve accepted.
I’ve already signed up for the Build Cloud and Azure challenge, and can’t wait to see how many of you take on a challenge as well!
Remember, the “Know it. Prove it.” challenge is FREE, so you have no excuse not to start.
Join the tour.
Companies undergoing digital transformation are creating organizational change through technologies and systems that enable them to work in ways that are in sync with the evolution of consumer demands and the state of today’s marketplace. In addition, more companies are relying on more and more data to help make business decisions.
And when it comes to consumer data – one challenge is the abundance of it. How can you turn complex data into business insight? The socially integrated world, the rise of mobile, IoT – this explosion of data can be directed and used, rather than simply managed. That’s why Big Data and advanced analytics are key components of most digital transformation strategies and serve as revolutionary ways of advancing your digital ecosystem.
Where does Microsoft fit into all of this? Recently, Microsoft has extended its data platform into this realm. SQL Server and Excel join up with new PaaS offerings to make up a dynamic and powerful Big Data/advanced analytics tool set. What’s nice about this is that you can leverage tools you already own for your digital transformation.
Join us next week, on Thursday, April 2 at 1 p.m. CT for a webinar, Transforming Business in a Digital Era with Big Data and Microsoft, to learn why you should be including Big Data and advanced analytics as components of your digital transformation and what options you have when it comes to Microsoft technology.