Azure SQL Server Performance Check Automation https://blogs.perficient.com/2024/04/11/azure-sql-server-performance-check-automation/ https://blogs.perficient.com/2024/04/11/azure-sql-server-performance-check-automation/#respond Thu, 11 Apr 2024 13:37:29 +0000 https://blogs.perficient.com/?p=361522

On operational projects that involve heavy daily data processing, there is a need to monitor database performance. Over time the workload grows, causing potential issues. While there are best practices for handling the processing through DBA strategies (indexing, partitioning, collecting statistics, reorganizing tables/indexes, purging data, allocating bandwidth separately for ETL/DWH users, peak-time optimization, effective query rewrites, etc.), it is still necessary to stay aware of database performance and monitor it consistently so that further action can be taken.

If admin access is not available to validate performance on Azure, building automations can help monitor resource usage and trigger the necessary steps before the database runs into performance issues or failures.

For database performance monitoring, an IICS (Informatica Intelligent Cloud Services) job can be created with a Data Task that queries the SQL Server metadata tables to check the performance metrics; email alerts can then be triggered once utilization crosses the threshold percentage (for example, 80%) or free capacity drops below it (for example, 20%).

The IICS mapping design is shown below (the job is scheduled to run once every hour). The email alerts contain the metric percentage values.

[Figure: IICS mapping design for the SQL Server performance check automation]

Note: email alerts are triggered only if the threshold limit is exceeded.

IICS ETL Design:

[Figure: IICS ETL design for the SQL Server performance check automation]

IICS ETL Code Details:

  1. A Data Task is used to retrieve the SQL Server performance metrics (CPU and IO percentages).

[Screenshot: query to fetch SQL Server CPU and IO usage]

The query checks whether usage exceeds 80%. If usage exceeds the threshold limit (the user can set this to a specific value such as 80%), an email alert is sent.

                                                            

[Screenshot: query applying the 80% threshold check]
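A minimal sketch of such a check is shown below. It is an assumption of what the screenshotted query does rather than a copy of it: it reads the Azure SQL Database DMV sys.dm_db_resource_stats (which retains roughly one hour of 15-second samples) and returns a row only when the average CPU or data IO utilization crosses the 80% threshold.

-- Average resource usage over the retained window; a row is returned only when the threshold is breached
SELECT AVG(avg_cpu_percent)       AS avg_cpu_percent,
       AVG(avg_data_io_percent)   AS avg_data_io_percent,
       AVG(avg_log_write_percent) AS avg_log_write_percent
FROM   sys.dm_db_resource_stats
HAVING AVG(avg_cpu_percent) > 80
    OR AVG(avg_data_io_percent) > 80;

When this query returns a row into Azure_SQL_Server_Performance_Info.dat, the downstream Decision task can route the flow to the email notification.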

If Azure_SQL_Server_Performance_Info.dat contains data (it is populated when CPU/IO processing exceeds 80%), the Decision task is activated and the email alert is triggered.

[Screenshot: performance check result output]

Email Alert:

[Screenshot: email alert]

SQL Server to Snowflake Migration – Conversions with Examples https://blogs.perficient.com/2023/10/26/sql-server-to-snowflake-migration-conversions-with-examples/ https://blogs.perficient.com/2023/10/26/sql-server-to-snowflake-migration-conversions-with-examples/#comments Thu, 26 Oct 2023 11:34:59 +0000 https://blogs.perficient.com/?p=347645

SQL Server to Snowflake Migration – Conversions with Examples

Readers’ Digest:

During one of my projects, I had the opportunity to work with MS SQL Server, where I gained valuable experience with stored procedures. A stored procedure is prepared SQL code that you can save so the code can be reused repeatedly. I learned how to optimize database operations by encapsulating complex SQL logic into reusable procedures, enhancing performance and maintainability. I also improved my skills in error handling and debugging, ensuring the reliability of critical database tasks.

This experience highlighted the importance of well-structured and documented stored procedures for efficient database management, and it led me to write this blog and share my knowledge with readers who want to know more about these conversions.

Overall, it was a valuable learning journey that greatly contributed to my proficiency in database development.

Introduction

A SQL Server to Snowflake migration involves transferring a database from Microsoft SQL Server to Snowflake, a cloud-based data warehousing platform. The process requires converting SQL Server-specific syntax and features to their Snowflake equivalents.

For instance, SQL Server’s T-SQL queries might need to be adjusted to Snowflake’s SQL dialect, and functions like GETDATE() might be replaced with CURRENT_TIMESTAMP() in Snowflake.

Below are some of the functions that came up most frequently during the conversion:

Merge

Below are the differences between SQL Server and Snowflake Merge statements.

SQL Server:

In SQL Server, there are three options available in the MERGE command, as shown in the example below. For old records that need an update, we can update them using the "WHEN MATCHED" clause. For new records, we can insert them into the target table using the "WHEN NOT MATCHED BY TARGET" clause. Most importantly, for records that exist in the target but not in the source, we can choose either to mark them as invalid via an update or to delete them from the target using the "WHEN NOT MATCHED BY SOURCE" clause.

MERGE stg.dimShipping AS target
USING tgt.ShippingCodes AS source
ON target.shippingCode = source.ShippingCode
WHEN MATCHED AND target.ShippingPrice < source.ShippingPrice THEN
    UPDATE SET
        shippingDescription = source.ShippingCodeDesc,
        ShippingPrice = source.ShippingPrice
WHEN NOT MATCHED BY TARGET THEN
    INSERT (shippingCode, shippingDescription, ShippingPrice)
    VALUES (source.ShippingCode, source.ShippingCodeDesc, source.ShippingPrice)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;


Snowflake:

In Snowflake, there are also three options available in the MERGE command, as shown in the example below. For old records that need an update, we can update them using the "WHEN MATCHED" clause. For new records, we can insert them into the target table using the "WHEN NOT MATCHED" clause. And for records that must be deleted, we can use the same "WHEN MATCHED" clause together with a DELETE action.

For records that exist in the target but not in the source, we can flag them as invalid (or otherwise update them) in a separate statement before the merge, since Snowflake's MERGE does not offer a "WHEN NOT MATCHED BY SOURCE" clause.

MERGE INTO stg.dimShipping AS target
USING tgt.ShippingCodes AS source
ON target.shippingCode = source.ShippingCode
WHEN MATCHED AND target.ShippingPrice < source.ShippingPrice THEN
    UPDATE SET
        shippingDescription = source.ShippingCodeDesc,
        ShippingPrice = source.ShippingPrice
WHEN NOT MATCHED THEN
    INSERT (shippingCode, shippingDescription, ShippingPrice)
    VALUES (source.ShippingCode, source.ShippingCodeDesc, source.ShippingPrice)
WHEN MATCHED THEN
    DELETE;


ISNULL

In SQL Server, the ISNULL function is commonly used to replace a NULL value with a specified alternative value. In Snowflake, the equivalent function is IFNULL. Let’s consider an example to illustrate the mapping:

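The screenshot is not reproduced here; a minimal sketch of the conversion, using placeholder table and column names, could look like this:

-- SQL Server
SELECT ISNULL(column1, 'N/A') AS column1 FROM my_table;

-- Snowflake
SELECT IFNULL(column1, 'N/A') AS column1 FROM my_table;

Snowflake's NVL and COALESCE can be used in the same way.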

In this example, if column1 is NULL, it will be replaced with the string ‘N/A’ in both SQL Server and Snowflake.

ISDATE([date])>0

SQL Server’s ISDATE function is used to check if a value is a valid date. In Snowflake, you can achieve the same functionality using the TRY_TO_DATE function. Let’s look at an example:

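A sketch of the conversion, assuming date_column is stored as a string (table and column names are placeholders):

-- SQL Server
SELECT * FROM my_table WHERE ISDATE(date_column) > 0;

-- Snowflake
SELECT * FROM my_table WHERE TRY_TO_DATE(date_column) IS NOT NULL;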

In this example, TRY_TO_DATE in Snowflake will return a non-null value if date_column is a valid date, otherwise, it will return NULL.

CAST

Both SQL Server and Snowflake support the CAST function to convert data types. However, it’s important to note that the syntax and available options may vary. Let’s consider an example:

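A sketch of the conversion with placeholder names; in this simple case the syntax is identical, and Snowflake additionally accepts the :: shorthand:

-- Works in both SQL Server and Snowflake
SELECT CAST(column1 AS INT) AS column1_int FROM my_table;

-- Snowflake shorthand
SELECT column1::INT AS column1_int FROM my_table;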

In this example, CAST is used to convert the data type of column1 to an integer.

IIF

SQL Server’s IIF function allows for inline conditional expressions. In Snowflake, you can use the IFF function to achieve the same functionality. Let’s see an example:

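A sketch of the conversion with placeholder names:

-- SQL Server
SELECT IIF(column1 > 10, 'Greater', 'Smaller or Equal') AS result FROM my_table;

-- Snowflake
SELECT IFF(column1 > 10, 'Greater', 'Smaller or Equal') AS result FROM my_table;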

In this example, IFF in Snowflake will return ‘Greater’ if column1 is greater than 10, otherwise, it will return ‘Smaller or Equal’.

SYSDATETIMEOFFSET()

SQL Server’s SYSDATETIMEOFFSET() function returns the current date and time, including the time zone offset. In Snowflake, the equivalent function is CURRENT_TIMESTAMP(). Let’s see an example:

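A sketch of the conversion. One caveat worth noting: SYSDATETIMEOFFSET() carries an explicit time zone offset, while Snowflake's CURRENT_TIMESTAMP() returns a TIMESTAMP_LTZ value interpreted in the session time zone.

-- SQL Server
SELECT SYSDATETIMEOFFSET() AS current_dt;

-- Snowflake
SELECT CURRENT_TIMESTAMP() AS current_dt;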

In this example, both SYSDATETIMEOFFSET() in SQL Server and CURRENT_TIMESTAMP() in Snowflake will return the current date and time.

SYSTEM_USER

In SQL Server, the SYSTEM_USER function returns the login name of the current user. In Snowflake, you can achieve the same result using the CURRENT_USER function. Here’s an example:

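A sketch of the conversion:

-- SQL Server
SELECT SYSTEM_USER AS current_login;

-- Snowflake
SELECT CURRENT_USER() AS current_login;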

STUFF(REVERSE(@cols),1,1,'')

In SQL Server, the STUFF function deletes a specified length of characters from a string and inserts another string at a specified starting position; combined with REVERSE as above, it is typically used to strip a trailing character such as a comma. In Snowflake, you can achieve a similar result using the INSERT function with a reversed string. Let's look at an example:

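A sketch of the conversion. The pattern strips the first character of the reversed string (typically to drop a trailing comma from a generated column list); the Snowflake session variable $cols below is purely an illustration, not part of the original code.

-- SQL Server
DECLARE @cols VARCHAR(1000) = 'col1,col2,col3,';
SELECT STUFF(REVERSE(@cols), 1, 1, '');

-- Snowflake (the INSERT string function plays the role of STUFF)
SET cols = 'col1,col2,col3,';
SELECT INSERT(REVERSE($cols), 1, 1, '');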

IsNumeric

This function in SQL Server is used to check if a value can be converted to a numeric data type. In Snowflake, you can achieve a similar result using the TRY_TO_NUMERIC function. Here’s an example:

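A sketch of the conversion, assuming amount_text is a string column (TRY_TO_NUMBER and TRY_TO_DECIMAL are equivalent spellings in Snowflake):

-- SQL Server
SELECT * FROM my_table WHERE ISNUMERIC(amount_text) = 1;

-- Snowflake
SELECT * FROM my_table WHERE TRY_TO_NUMERIC(amount_text) IS NOT NULL;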

body nvarchar(max) = 'Export Aging Report. See attached CSV file.' + char(10)

In SQL Server, this code assigns a string value to the body variable. In Snowflake, you can achieve the same result using JavaScript within a Snowflake stored procedure. Here’s an example:

[Example screenshot]

YYYYMMDD nvarchar(8) = CONVERT(char(8), GETDATE(), 112)

In SQL Server, this code assigns the current date in the YYYYMMDD format to the nvarchar variable. In Snowflake, you can achieve the same result using the TO_CHAR function to format the current date. Here’s an example:

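A sketch of the conversion; the Snowflake session-variable syntax below is one possible way to carry the value, shown as an assumption:

-- SQL Server
DECLARE @YYYYMMDD nvarchar(8) = CONVERT(char(8), GETDATE(), 112);

-- Snowflake
SET YYYYMMDD = TO_CHAR(CURRENT_DATE, 'YYYYMMDD');
SELECT $YYYYMMDD;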

SET @subject = N'ATLAMEDb01: [Export Claims DRCV Rpt to CSV] – Weekly Aging Claims for ' + @Division + ' run on ' + @YYYYMMDD

In SQL Server, this code assigns a string value to the @subject variable. In Snowflake, you can achieve the same result by concatenating the string values using the || operator. Here’s an example:

[Example screenshot]

YEAR([Day])*10000 + MONTH([Day])*100 + DAY([Day])

To concatenate the year, month, and day values from a date in SQL Server, you can use arithmetic operations. In Snowflake, you can achieve the same result by casting the date to a TIMESTAMP data type and applying similar arithmetic operations. Here’s an example:

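A sketch of the conversion. YEAR, MONTH and DAY exist in both systems, so the arithmetic carries over directly; the main difference is that SQL Server's bracketed identifier becomes a double-quoted (case-sensitive) identifier in Snowflake. The table name is a placeholder.

-- SQL Server
SELECT YEAR([Day]) * 10000 + MONTH([Day]) * 100 + DAY([Day]) AS day_id FROM my_table;

-- Snowflake
SELECT YEAR("Day") * 10000 + MONTH("Day") * 100 + DAY("Day") AS day_id FROM my_table;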

ISDATE([Day]) > 0

SQL Server’s ISDATE function is used to check if a value is a valid date. In Snowflake, you can achieve a similar result using the TRY_TO_DATE function. Here’s an example:

[Example screenshot]

ISDATE(stg_info.[day_id])>0

Similar to the previous example, you can use the TRY_TO_DATE function in Snowflake to check whether the value is a valid date. Here's an example:

[Example screenshot]

YEAR([date])*10000 + MONTH([date])*100 + DAY([date])

To concatenate the year, month, and day values from a date in SQL Server, you can use arithmetic operations. In Snowflake, you can achieve the same result by casting the date to a TIMESTAMP data type and applying similar arithmetic operations. Here’s an example:

[Example screenshot]

SUBSTRING([campaign], CHARINDEX('(',[campaign])+1, LEN([campaign]))

In SQL Server, the SUBSTRING function is used to extract a substring from a string based on a starting position and a length. In Snowflake, you can achieve the same result using the SUBSTRING function. Here’s an example:

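A sketch of the conversion with a placeholder table name; Snowflake also accepts CHARINDEX, but POSITION and LENGTH are the more common spellings:

-- SQL Server
SELECT SUBSTRING([campaign], CHARINDEX('(', [campaign]) + 1, LEN([campaign])) AS campaign_part FROM my_table;

-- Snowflake
SELECT SUBSTRING(campaign, POSITION('(', campaign) + 1, LENGTH(campaign)) AS campaign_part FROM my_table;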

YEAR(fact_ppr.day_campaign)*10000 + MONTH(fact_ppr.day_campaign)*100 + DAY(fact_ppr.day_campaign)

To concatenate the year, month, and day values from a date in SQL Server, you can use arithmetic operations. In Snowflake, you can achieve the same result by casting the date to a TIMESTAMP data type and applying similar arithmetic operations. Here’s an example:

[Example screenshot]

DATETIME

In SQL Server, the DATETIME datatype is used to represent both date and time values. In Snowflake, the equivalent datatype is TIMESTAMP, which also represents both date and time values. Here’s an example:

[Example screenshot]

CAST(concat(year(drop_date), format(month(drop_date), '00'), format(day(drop_date), '00')) AS int)

In SQL Server, this code concatenates the year, month, and day values from a date and then casts the result to an integer. In Snowflake, you can achieve the same result by using the TO_CHAR function to format the date and then casting it to an integer. Here’s an example:

[Example screenshot]

CONVERT(VARCHAR,CAST(posting_date AS DATE),112)

In SQL Server, this code converts a date to a specific format (YYYYMMDD) by casting it to a DATE datatype and then converting it to a VARCHAR datatype. In Snowflake, you can achieve the same result by using the TO_CHAR function with a format specifier. Here’s an example:

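A sketch of the conversion, assuming posting_date is (or can be cast to) a date:

-- SQL Server
SELECT CONVERT(VARCHAR, CAST(posting_date AS DATE), 112) AS posting_yyyymmdd FROM my_table;

-- Snowflake
SELECT TO_CHAR(posting_date::DATE, 'YYYYMMDD') AS posting_yyyymmdd FROM my_table;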

REVERSE(STUFF(REVERSE(@cols),1,1,''))

In SQL Server, this code is used to manipulate strings by reversing the string, deleting the first character, and then reversing it again. In Snowflake, you can achieve the same result using the REVERSE and INSERT functions. Here’s an example:

[Example screenshot]

@cols + '[' + ISNULL(CAST(COLUMN_NAME AS VARCHAR(100)),'') + '],'

In SQL Server, this code concatenates the @cols variable with a string that includes the COLUMN_NAME value wrapped in square brackets. If COLUMN_NAME is NULL, an empty string is used. In Snowflake, you can achieve the same result using the || operator for string concatenation. Here’s an example:

[Example screenshot]

CONVERT(VARCHAR, EOMONTH(CAST(''01-''+MonthYear AS DATE)),112) AS day_id,'+@OrderNum+

In SQL Server, this code converts a string representation of a date ('01-''+MonthYear) to a date datatype, retrieves the last day of the month (EOMONTH), and then converts it to a specific format (112). In Snowflake, you can achieve the same result using the TO_CHAR function with appropriate date functions. Here’s an example:

[Example screenshot]

'+CAST(@loadcontrolid AS VARCHAR(1000))+'

In SQL Server, this code casts the @loadcontrolid variable to a VARCHAR datatype. In Snowflake, you can achieve the same result by using the TO_VARCHAR function. Here’s an example:

[Example screenshot]

CAST(CONVERT(VARCHAR, EOMONTH(CONVERT(datetime, (CONVERT(CHAR(10), day_id, 120)))), 112) AS INT)

In SQL Server, this code converts a day_id value to a specific date format, retrieves the last day of the month using the EOMONTH function, and then converts it to an INT datatype. In Snowflake, you can achieve the same result using the TO_CHAR function with appropriate date functions and casting to an INT. Here’s an example:

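A sketch of the conversion, assuming day_id can be cast to a date. In Snowflake, LAST_DAY() returns the month-end date, so the nested CONVERT calls collapse into TO_CHAR over LAST_DAY.

-- SQL Server
SELECT CAST(CONVERT(VARCHAR, EOMONTH(CONVERT(datetime, CONVERT(CHAR(10), day_id, 120))), 112) AS INT) AS month_end_id FROM my_table;

-- Snowflake
SELECT TO_CHAR(LAST_DAY(day_id::DATE), 'YYYYMMDD')::INT AS month_end_id FROM my_table;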

GETDATE()

In SQL Server, the GETDATE() function returns the current date and time. In Snowflake, you can achieve the same result using the CURRENT_TIMESTAMP() function. Here’s an example:

[Example screenshot]

CAST(CONVERT(VARCHAR, CONVERT(DATETIME, CONVERT(CHAR(10), a.day_id, 120)), 112) AS INT)

In SQL Server, this code converts a day_id value to a specific date format, then converts it to a DATETIME datatype, and finally converts it to an INT datatype. In Snowflake, you can achieve the same result using the TO_CHAR function with appropriate date functions and casting to an INT. Here’s an example:

[Example screenshot]

datetimeoffset

In SQL Server, the datetimeoffset datatype stores a date and time value together with a time zone offset. In Snowflake, the closest equivalent is TIMESTAMP_TZ, which also carries a time zone offset (TIMESTAMP_LTZ stores values interpreted in the session time zone, and TIMESTAMP_NTZ stores no time zone information). Here's an example:

[Example screenshot]

nvarchar

In SQL Server, the nvarchar datatype is used to store Unicode character data. In Snowflake, the equivalent datatype is VARCHAR, which also supports Unicode character data. Here’s an example:

[Example screenshot]

db_name()

In SQL Server, the db_name() function returns the name of the current database. In Snowflake, you can achieve the same result using the CURRENT_DATABASE() function. Here’s an example:

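A sketch of the conversion:

-- SQL Server
SELECT db_name() AS current_db;

-- Snowflake
SELECT CURRENT_DATABASE() AS current_db;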

GETDATE

In SQL Server, GETDATE is a system function used to retrieve the current date and time. In Snowflake, you can achieve the same result using the CURRENT_TIMESTAMP function. Here’s an example:

[Example screenshot]

SUSER_NAME

In SQL Server, the SUSER_NAME function returns the name of the current user. In Snowflake, you can achieve the same result using the CURRENT_USER function. Here’s an example:

[Example screenshot]

Conclusion

In conclusion, the migration journey from SQL Server to Snowflake constitutes a significant step in modernizing data management strategies. By seamlessly transferring databases to the cloud-based Snowflake platform, organizations can harness enhanced scalability, flexibility, and analytical capabilities. However, this transition necessitates meticulous attention to detail, particularly in the realm of syntax and functionality conversions. Adapting SQL Server-specific elements to align with Snowflake’s SQL dialect, as demonstrated through examples such as query adjustments and function substitutions, underscores the importance of precision in ensuring a seamless and optimized migration process.

Please note that this blog post provides a general guide, and additional considerations may be required depending on your specific migration scenario. Consult Snowflake’s documentation for comprehensive information on datatypes and their usage.

We hope that this blog post has provided you with valuable insights into SQL Server to Snowflake migration. Happy migrating!

SQL Tuning https://blogs.perficient.com/2023/02/25/sql-tuning/ https://blogs.perficient.com/2023/02/25/sql-tuning/#comments Sat, 25 Feb 2023 13:32:23 +0000 https://blogs.perficient.com/?p=328831

In data and analytics (D&A) projects, building efficient SQL queries is critical to getting extraction and load batch cycles to complete faster and to meeting the desired SLAs. The observations below describe approaches for writing SQL queries that follow best practices and facilitate performance improvements.

Tuning Approach

Pre-Requisite Checks

Before subjecting a SQL query to performance tuning, the following steps should be taken:

  • Deep Dive into the current SQL Query
    • Complexity of the SQL (# of Tables/Joins/Functions)
    • Design of the SQL Query (Sub-Query/Correlated Sub-Query/Join/Filter Sequences)
    • Whether Best Practices followed: Is it modularized? When joined, does it contain functions and derivations?
  • Verify the As-Is Metrics of the SQL
    • Duration to return 1st record and first 100 records
    • Extract the Explain Plan Metrics
      • Cost (Resource Usage)
      • Cardinality (# of Rows returned per Task Operations)
      • Access Method (Full Table/ROWID/Index Unique/Full Index/Index Skip Scan)
      • Join Method (Hash/Nested-Loop/Sort-Merge/Outer Join)
      • Join Order (Multiple tables join sequence)
      • Partition
      • Parallel Processing (Exec on Multiple Nodes)

After ensuring the above prerequisites are taken care of and possible bottlenecks identified, tuning practices can be applied to the SQL Query for performance improvements.

Tuning Guidelines

Basic Guidelines are listed below:

  • Query Design Perspective
    • Extract only the required columns in the code via SELECT (instead of SELECT *)
    • Use Inner joins well ahead of Outer joins
    • Filters applied ahead with Inner Joins rather than at the end using WHERE clause
    • Avoid Sub-queries and Correlated Sub-queries as much as possible
    • Create TEMP tables
      • to hold Sub-Query logic
      • to Modularize Complex Logic with related Columns and Derivations
      • to hold a reference list of values (used as Joins instead of IN clause)
      • to hold Functions, Calculations, and Derivations Attributes for later JOIN with Tables
      • to hold Complex Query Logic and subsequently apply RANK()/ROW_NUMBER()
    • Create Physical tables (instead of TEMP) if high volume
    • Drop the TEMP or Physical tables after intermediate processing completes
    • Complex Query with too many LEFT joins can be broken into parts and then JOINed
    • Avoid Duplicates as early as possible before subjecting the Derived tables to JOINs
    • On MPP DBs, do not use DISTRIBUTION for Smaller tables
    • On MPP DBs, DISTRIBUTION column-based joins provide faster results
  • Functions Perspective
    • Use EXISTS instead of IN when only the existence of a row needs to be checked
    • Instead of MINUS, use a LEFT JOIN with an IS NULL condition (see the sketch after this list)
    • If DISTINCT causes slowness, try ROW_NUMBER() to select one record out of multiples (also shown below)
    • Do not apply functions to join columns
  • DBA Perspective
    • Collect STATISTICS
    • Create Indexes (Single/Multiple) (on frequently used Joins/Predicates as required)
    • Create Partitions (for Optimized Scans)
  • Space and Computing Perspective
    • Increase the DB Server storage space
    • Increase the DB Server Computing Abilities
    • Multi-Node Processing of Queries
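As an illustration of two of the guidelines above (the MINUS rewrite and the ROW_NUMBER() de-duplication), here is a minimal sketch; the tables src and tgt and the columns id and load_ts are hypothetical names:

-- Rows in src that are missing from tgt, written as an anti-join instead of MINUS
SELECT s.id
FROM src s
LEFT JOIN tgt t ON t.id = s.id
WHERE t.id IS NULL;

-- Keep one record per key instead of relying on DISTINCT
SELECT *
FROM (
    SELECT s.*, ROW_NUMBER() OVER (PARTITION BY s.id ORDER BY s.load_ts DESC) AS rn
    FROM src s
) q
WHERE q.rn = 1;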

Conclusion

On a high level, below are the inferences:

  • Check Explain Plan
  • Subject the Query to effective Design
  • Focus on DBA, Space, and Computing Abilities
  • Follow the Best Practices
Generate Scripts for Data in the Tables of SQL SERVER https://blogs.perficient.com/2016/10/18/generate-scripts-for-data-present-in-the-tables-of-sql-server/ https://blogs.perficient.com/2016/10/18/generate-scripts-for-data-present-in-the-tables-of-sql-server/#respond Tue, 18 Oct 2016 14:33:00 +0000 http://blogs.perficient.com/delivery/?p=6332

This wizard is helpful for generating scripts and publishing them. In order to run the wizard, right click on the database and navigate to Tasks -> Generate Scripts.

[Screenshot: right-click the database, then Tasks -> Generate Scripts]

The wizard will pop up and you can generate scripts either for all database objects or for a selected number of database objects. This could be used later to create an instance of the database or for  publishing it. Click the Next button to proceed.

[Screenshot: Generate Scripts wizard introduction page]

In this step choose the objects which you want to script. You can opt for entire database and all of its objects or select some specific objects to script. Click Next to go for the scripting options.

[Screenshot: Choose Objects page]

Choose where to save the scripts: either in a specific location or to a web service. By clicking Advanced, you can choose advanced options for the script.

[Screenshot: Set Scripting Options page with the Advanced button]

The Script DROP and CREATE option lets the user DROP the object and then RECREATE it. You can choose to CREATE the object alone or DROP the object alone using the Script CREATE and Script DROP options, respectively.

The Script USE DATABASE option adds a USE statement for that particular database.

The Types of data to script option allows the user to generate a script with SCHEMA only, DATA only, or both SCHEMA and DATA. DATA only scripts just the data from all the tables. SCHEMA only scripts just the table structures and data types. The SCHEMA and DATA option captures both the structure and all of the data.

[Screenshot: Advanced Scripting Options, Types of data to script]

By clicking the Next button, the wizard allows you to review your selections. The target will be a single SQL script file containing the generated script.

[Screenshot: review of selections]

This window displays the list of actions (such as preparing the script) and their results. You can save this report for logging purposes or to determine which part failed. After clicking the Finish button, the wizard exits and the script file can be accessed.

[Screenshot: script generation results]

The generated script is produced as a .sql file.

[Screenshot: the generated .sql script]
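As a rough illustration only (the actual file depends on the objects and options chosen, and the table below is made up), a SCHEMA and DATA script looks something like this:

CREATE TABLE [dbo].[Employee](
    [EmployeeID] [int] NOT NULL,
    [Name] [nvarchar](50) NULL
);
GO
INSERT [dbo].[Employee] ([EmployeeID], [Name]) VALUES (1, N'Alice');
INSERT [dbo].[Employee] ([EmployeeID], [Name]) VALUES (2, N'Bob');
GO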

“Database Platform Service” Error with MSDeploy and dbDACFx https://blogs.perficient.com/2016/08/09/database-platform-service-error-with-msdeploy-and-dbdacfx/ https://blogs.perficient.com/2016/08/09/database-platform-service-error-with-msdeploy-and-dbdacfx/#comments Tue, 09 Aug 2016 13:00:38 +0000 http://blogs.perficient.com/microsoft/?p=33146

MSDeploy (a.k.a. Web Deploy) is mainly known as a technology for deploying web applications, but it is much more than that. It is a platform that can be used to deploy many different applications and application components. It accomplishes this by allowing custom providers to be written. MSDeploy ships with providers that cover a wide range of deployment needs.
One common provider used is the dbDACFx provider. This is used to deploy data-tier applications (i.e. databases). The source can be an existing instance of a data-tier application or a .dacpac file, which is nothing more than a zip file that contains XML files (which is very common these days: MS Office, NuGet, etc.).
While attempting to deploy a dacpac file, I got the following error message

Internal Error. The database platform service with type Microsoft.Data.Tools.Schema.Sql.Sql120DatabaseSchemaProvider is not valid. You must make sure the service is loaded, or you must provide the full type name of a valid database platform service.

Pretty cryptic. The only part that made sense was the "Sql120" piece, which is the version number given to SQL Server 2014. That made sense, as that was the target platform selected when the .dacpac file was created. The command line that caused the error message was something like:

msdeploy -verb:sync -source:dbDACFx="c:\myapp.dacpac" -dest:dbDACFx="Server=x;Database=y;Trusted_Connection=True"

I installed MSDeploy through the Web Platform Installer, and I made sure to select the option that included the bundled SQL support.
I verified that I was able to install the .dacpac through Management Studio on my local machine, which tells me the database server was fine, so I knew the issue had to be with the server that was hosting MSDeploy.
After many Google Bing searches, I learned that DACFx is the short name for "SQL Server Data-Tier App Framework". Looking on the server with MSDeploy, I discovered that the SQL 2012 (aka Sql110) version was already installed. It all started coming together: MSDeploy was happy with the dbDACFx provider since it was installed, but when it went to locate the needed version of DACFx (specified by the .dacpac file), it could not find it, and it errored out.
A quick search in the Web Platform Installer resulted in finding "Microsoft SQL Server Data-Tier Application Framework (DACFx) (June 2014)". Bingo! I installed it, reran my MSDeploy command, and I got a successful deployment of the .dacpac file!
For those who would rather not run the Web Platform Installer, you can go to the Microsoft site and download it directly (please search for the appropriate version you need).
If you want to see which versions are installed, go to Programs and Features on your machine and look for “Microsoft SQL Server 20xx Data-Tier App Framework”.

Developing custom SSIS component – 9 lessons learned https://blogs.perficient.com/2016/04/27/developing-custom-ssis-component-9-lessons-learned/ https://blogs.perficient.com/2016/04/27/developing-custom-ssis-component-9-lessons-learned/#comments Wed, 27 Apr 2016 23:36:37 +0000 http://blogs.perficient.com/microsoft/?p=31746

One of the compelling features of Microsoft SQL Server Integration Services (SSIS) is its extensibility. Although SQL Server comes with a wide array of SSIS components (including different data sources, transformation tasks and logical flow control operations), sometimes there is a need to do something unique in your SSIS package, something that is not supported out of the box.
Luckily, this is possible and it's not very hard to do. All that you need is Visual Studio and some knowledge of the SSIS extensibility framework.
If you decide to develop your own SSIS component, there is good documentation available on MSDN, there are a few great blog posts outlining the process from end to end, and there is source code on Codeplex. In this short blog post I'm not planning to duplicate all of the above; I'm just trying to outline a few "gotchas" which I went through myself while developing an SSIS component.

  1. Understand what you want to do. SSIS supports two distinct types of components: tasks and data flow tasks. A "task" is … well, a task: some custom action which you would like to execute in your control flow, like sending an email when package execution fails (by the way, an email task comes standard). A "data flow task" deals with data flow, i.e. extracting, transforming and loading data (ETL). The chances are that you'll be more interested in the latter, because I think the primary reason for developing a custom SSIS component is building a direct interface with some specific system which is not supported OOB (most likely by utilizing that system's API). If that's what you want, then you'll need to implement your component logic in a class derived from Microsoft.SqlServer.Dts.Pipeline.PipelineComponent.
  2. Implement a custom property editor. Although technically all that you need for a custom data flow component is to subclass PipelineComponent, it's more convenient for the user of your component to work with a specialized UI than with the generic property interface.
  3. Implement a custom connection manager. There is a substantial chance that an SSIS package will contain multiple instances of your pipeline component. In this case it's much easier to specify the connection to your custom data source once and then reuse it between pipeline components. Note that you'll need to call the ComponentMetaData.RuntimeConnectionCollection.New() method (preferably inside ProvideComponentProperties) in order to let the SSIS engine know that your component requires a custom connection. Then, inside AcquireConnections(), you'll need to validate that ComponentMetaData.RuntimeConnectionCollection[0].ConnectionManager is set and that its InnerObject is your custom connection manager.
  4. Override the PipelineComponent.Validate() method. This gives the user a clear indication when some of the properties of your component are either not entered or invalid.
  5. Return DTSValidationStatus.VS_NEEDSNEWMETADATA from Validate() when your component properties have changed and you need to rebuild the outputs accordingly. In response, the SSIS runtime will call your component's ReinitializeMetaData() method, and inside ReinitializeMetaData() you can rebuild the outputs (or create them if none existed before).
  6. Be aware that the SSIS runtime and execution engines are .NET 4.0. Even if your component is written against 4.6.1, it will still be executed under 4.0. Usually that doesn't matter, however there are cases where the differences between 4.0 and 4.6 (or 4.6.1) could be critical. For example, I ran into a situation where I needed to call an external API which only supported TLS 1.2. .NET 4.0 doesn't use TLS 1.2 by default (unless you specifically request it). The code worked fine when I ran it outside SSIS, but was giving errors under SSIS.
  7. Be aware of incompatible versions of SQL Data Tools. Unfortunately, in your development you'll have to target a specific version of SQL Data Tools. When developing the component, you'll have to reference assemblies like Microsoft.SqlServer.Dts.Design.dll, Microsoft.SqlServer.DTSPipelineWrap.dll, Microsoft.SQLServer.ManagedDTS.dll, etc. These assemblies are found in the GAC, they are specific to the SQL Data Tools version, and they are neither forward nor backward compatible. For example, SQL Server 2012 comes with a SQL Data Tools release based on Visual Studio 2010. If you also install Visual Studio 2013 or 2015, it upgrades your version of SQL Data Tools (to the one which comes with SQL Server 2014 or 2016 correspondingly). The problem is that if your component references a different version of the SSIS DLLs, it will not show up in the SQL Data Tools toolbox.
  8. Remember to register your component in the GAC and copy it to the SQL Server folders. Pipeline components should be copied to C:\Program Files (x86)\Microsoft SQL Server\<version>\DTS\PipelineComponents and custom connection managers to C:\Program Files (x86)\Microsoft SQL Server\<version>\DTS\Connections. If you have both the pipeline component and the connection manager in the same DLL, then you should copy it to both places.
  9. SQL Server 2012 requires the pipeline component to have a custom icon. Make sure to set DtsPipelineComponentAttribute.IconResource. Other SQL Server versions don't require that.
Azure Site Recovery Integration with SQL Server AlwaysOn https://blogs.perficient.com/2015/11/10/azure-site-recovery-integration-with-sql-server-alwayson/ https://blogs.perficient.com/2015/11/10/azure-site-recovery-integration-with-sql-server-alwayson/#respond Tue, 10 Nov 2015 14:32:05 +0000 http://blogs.perficient.com/microsoft/?p=28338

Azure Site Recovery now provides native support for SQL Server AlwaysOn. SQL Server Availability Groups can be added to a recovery plan along with VMs. All the capabilities of your recovery plan—including sequencing, scripting, and manual actions—can be leveraged to orchestrate the failover of a multitier application using an SQL database configured with AlwaysOn replication. Site Recovery is now available through Microsoft Operations Management Suite, the all-in-one IT management solution for Windows or Linux, across on-premises or cloud environments.
For more information, please visit the Site Recovery webpage.

Azure SQL – Row Level Security Now Available https://blogs.perficient.com/2015/08/21/azure-sql-row-level-security-now-available/ https://blogs.perficient.com/2015/08/21/azure-sql-row-level-security-now-available/#respond Fri, 21 Aug 2015 14:59:07 +0000 http://blogs.perficient.com/microsoft/?p=27679

Row-Level Security (RLS) for Azure SQL Database is now generally available. RLS simplifies the design and coding of security in your application by enabling you to implement restrictions on data row access. For example, you can ensure that workers access only the data rows pertinent to their department, or restrict a customer's data access to only the data relevant to their company.
The access restriction logic is located in the database tier rather than away from the data in another application tier. The database system applies the access restrictions every time that data access is attempted from any tier. This makes your security system more reliable and robust by reducing the surface area of your security system.
Row-level filtering of data selected from a table is enacted through a security predicate filter defined as an inline table valued function. The function is then invoked and enforced by a security policy. The policy can restrict the rows that may be viewed (a filter predicate), but does not restrict the rows that can be inserted or updated from a table (a blocking predicate). There is no indication to the application that rows have been filtered from the result set; if all rows are filtered, then a null set will be returned.
Filter predicates are applied while reading data from the base table, and they affect all read operations: SELECT, DELETE (i.e. the user cannot delete rows that are filtered out), and UPDATE (i.e. the user cannot update rows that are filtered out, although it is possible to update rows in such a way that they will subsequently be filtered). Blocking predicates are not available in this version of RLS, but equivalent functionality (i.e. the user cannot INSERT or UPDATE rows such that they will subsequently be filtered) can be implemented using check constraints or triggers.
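As a minimal sketch of the pattern described above (the schema, table and column names are made up for illustration), a filter predicate and its security policy look like this:

CREATE SCHEMA Security;
GO
CREATE FUNCTION Security.fn_SalesFilter(@SalesRep AS nvarchar(128))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_result
       WHERE @SalesRep = USER_NAME();
GO
CREATE SECURITY POLICY Security.SalesPolicy
ADD FILTER PREDICATE Security.fn_SalesFilter(SalesRepName) ON dbo.Sales
WITH (STATE = ON);
GO

With this policy in place, each user sees only the dbo.Sales rows whose SalesRepName value matches their database user name.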
This functionality will also be released with SQL Server 2016. If you have Azure though, you get the features now. Another great reason to consider the Microsoft Cloud – you don’t have to wait for a server release to get new functionality! Contact us at Perficient and one of our 28 certified Azure consultants can help plan your Azure deployment today!

Northwestern Medicine Uses Epic to Deliver Value-Based Care https://blogs.perficient.com/2015/08/17/northwestern-medicine-uses-epic-to-deliver-value-based-care/ https://blogs.perficient.com/2015/08/17/northwestern-medicine-uses-epic-to-deliver-value-based-care/#respond Mon, 17 Aug 2015 13:56:53 +0000 http://blogs.perficient.com/microsoft/?p=27602

Recently, Kate Tuttle, my colleague and healthcare marketing guru, wrote a post over on Perficient’s Healthcare Industry Trends blog, describing the shift from a fee-for-service based model to a value-based care model and the subsequent need for a 360-degree patient view. Many healthcare organizations are facing challenges around transforming data into meaningful information – information that outlines the population and identifies the most high-risk patients, resulting in improved management of chronic diseases and improved preventative care.
Health data has become a powerful influencer in population health management as organizations seek to analyze data and translate it into actionable, real-time insights that will lead to smarter business decisions and better patient care.
Because of the changes in the delivery model and payment reform,  these organizations increasingly look to implement a centralized data warehouse that will meet the growing data and reporting needs, and provide the health system with a central data repository for clinical, financial and business data.
Kate also shared that Cadence Health, now part of Northwestern Medicine (a large Epic user) sought to leverage the native capabilities of Epic in the management of their population health initiatives and value-based care program. Cadence Health engaged us because of the work we’ve done with ProHealth Care, the first healthcare system to produce reports and data out of Epic’s Cogito data warehouse in a production environment.

By leveraging Epic's Cogito and Healthy Planet, Northwestern Medicine is able to track the health of their population and evaluate whether or not patients with chronic diseases are proactively getting care. They also have real-time reports generated that provide their physicians with a dashboard view, designed to instantly give them an overview of the performance of their patient population across all registry-based measures.

You can learn more about Northwestern Medicine's value-based care journey in a webinar next week, on Thursday, August 27th at 1:00 PM CT.
Register below to join the live session or receive the on-demand version to hear Rob Desautels, Senior Director of IT at Cadence Health, and Perficient healthcare experts:

  • Analyze how Epic’s Healthy Planet and Cogito platforms can be used to manage value-based care initiatives.
  • Examine the three steps for effective population health management: Collect data, analyze data and engage with patients.
  • Discover how access to analytics allows physicians at Northwestern Medicine to deliver enhanced preventive care and better manage chronic diseases.
  • Discuss Northwestern Medicine’s strategy to integrate data from Epic and other data sources.


One more thing… if you are an Epic user planning to attend the 2015 Epic UGM in just two weeks, we welcome you to join us for an evening event on September 3rd at the Edgewater in Madison, WI. Heidi Rozmiarek, Assistant Director of Development at UnityPoint Health, and Christine Bessler, CIO at ProHealth Care, will lead a discussion focused on how organizations are currently leveraging the data housed in Epic systems and planned initiatives to gain even further insights from their data. Register here – space is limited.

On-Premises BI Gets a Boost in SQL Server 2016 https://blogs.perficient.com/2015/05/21/on-premises-bi-gets-a-boost-in-sql-server-2016/ https://blogs.perficient.com/2015/05/21/on-premises-bi-gets-a-boost-in-sql-server-2016/#respond Thu, 21 May 2015 15:59:09 +0000 http://blogs.perficient.com/microsoft/?p=26997

At their recent Ignite conference in Chicago, Microsoft unleashed a flood of new information about their products and services across a wide variety of functions.   Business Intelligence was not left out, by any means, with announcements of exciting new cloud-based offerings such as Azure Data Warehouse and Azure Data Lake.  But given all the focus on Azure and cloud lately, one has to wonder: what about good ol’ SQL Server?  Well, wonder no more.
SQL Server 2016 will include a host of new features related to BI.  In fact, Microsoft claims that SQL Server 2016 will be one of the largest releases in the history of the product.  From hybrid architecture support to advanced analytics, the new capabilities being introduced are wide-ranging and genuinely exciting!
Providing an exhaustive list of new features and enhancements would be, well, exhausting.  And the information is currently covered in good detail on the SQL Server product website.   But here’s a few items that caught my eye from a BI perspective….
For classic SQL Server BI components:

  • SSDT/BIDS will now (finally) be unified in Visual Studio.  After the last few years of trying to get VS and SQL set up for development across various versions, this is a welcome change
  • SSAS Multidimensional is getting some attention (finally), with Netezza and Power Query being added as supported data sources.  Also expect some performance improvements, and support for DBCC.
  • SSAS Tabular is also getting some VERY welcome improvements: Power Query as a data source, support for Many-to-Many relationships (hallelujah!), additional new DAX functions, and some cool in-memory scalability enhancements
  • SSIS 2016 will also support Power Query, and will integrate with Azure in a number of very useful ways (an Azure Data Factory Data Flow task, for example), and will get some other helpful updates
  • SSRS, after being neglected for several releases, is getting a number of great improvements including additional chart types, cross-browser mobile support, improved parameter functionality, CSS support for custom report themes, and the ability to publish SSRS reports on Power BI sites!
  • Even Master Data Services (MDS) is getting some needed improvements, particularly around performance and security.

And on the Advanced Analytics front:

  • Revolution Analytics R is being integrated directly into the SQL Server relational database.  This will allow developers to access predictive analytics via T-SQL queries, and will support deploying R models as web services in the Azure Marketplace
  • PolyBase, the “secret sauce” in the PDW solution that allows T-SQL querying of both SQL Server and Hadoop data, will be available within SQL Server –WITHOUT needing an APS

So, clearly, lots of changes and enhancements are forthcoming in SQL Server 2016.  While Microsoft’s “cloud first, mobile first” initiative has left many on-premises SQL Server users feeling left out, SQL Server 2016 should bring a bright ray of hope.  We should expect to see Microsoft technology developed for cloud make its way into on-premises products going forward, and SQL Server is a perfect example of that trend.

BUILD & IGNITE Know It. Prove It. Tech Challenge Tour https://blogs.perficient.com/2015/04/17/build-ignite-know-it-prove-it-tech-challenge-tour/ https://blogs.perficient.com/2015/04/17/build-ignite-know-it-prove-it-tech-challenge-tour/#respond Fri, 17 Apr 2015 15:00:55 +0000 http://blogs.perficient.com/microsoft/?p=26536

I recently blogged about my personal experiences with the first "Know it. Prove it." challenge that ran through the month of February 2015. The "Know it. Prove it." challenge is back! This time it's bigger and better than ever. The new challenge is a companion to both the Build and Ignite conferences, with 11 amazing tracks for both Developers and IT Professionals. Also, just like the first round, this set of challenges is completely free!
Join the tour and accept a challenge today.
Whether you’re looking to learn something new or just brush up on something you’re already using, there’s definitely a challenge track for you.

Build Developer challenge
Geared towards Developers, this includes tracks focusing on: Azure, Office, Database, App Platform, Developer Tools.
Ignite IT Professional challenge
Geared towards IT Professionals, this includes tracks focusing on: Cloud, Big Data, Mobility, Security and Compliance, Unified Communications, Productivity and Collaboration.
What do the challenges consist of?
Each challenge consists of free, amazing training courses hosted on the Microsoft Virtual Academy. These are some amazing courses; it's hard to believe they're free. In addition to the training courses, some of the challenges include links to free eBooks from Microsoft Press.
The challenges range from approximately 15 to 30 hours of video training content. So if you take on 30 minutes to 1 hour a day, you'll finish a single challenge in a month. Surely you can commit just 1 hour a day for a month. Right?
Are you up to the challenge?
You know you are! Go on and sign up for one. When you're done, I'll wait for you to come back here and post a comment letting everyone know which challenge you've accepted.
I’ve already signed up for the Build Cloud and Azure challenge, and can’t wait to see how many of you take on a challenge as well!
Remember the “Know it. Prove it” challenge is FREE, so you have no excuse to start.
Join the tour.

Webinar: Big Data & Microsoft, Key to Your Digital Transformation https://blogs.perficient.com/2015/03/27/webinar-big-data-microsoft-key-to-your-digital-transformation/ https://blogs.perficient.com/2015/03/27/webinar-big-data-microsoft-key-to-your-digital-transformation/#respond Fri, 27 Mar 2015 21:18:06 +0000 http://blogs.perficient.com/microsoft/?p=26289

Companies undergoing digital transformation are creating organizational change through technologies and systems that enable them to work in ways that are in sync with the evolution of consumer demands and the state of today’s marketplace. In addition, more companies are relying on more and more data to help make business decisions.
And when it comes to consumer data – one challenge is the abundance of it. How can you turn complex data into business insight? The socially integrated world, the rise of mobile, IoT – this explosion of data can be directed and used, rather than simply managed. That’s why Big Data and advanced analytics are key components of most digital transformation strategies and serve as revolutionary ways of advancing your digital ecosystem.
Where does Microsoft fit into all of this? Recently, Microsoft has extended its data platform into this realm. SQL Server and Excel join up with new PaaS offerings to make up a dynamic and powerful Big Data/advanced analytics tool set. What’s nice about this is that you can leverage tools you already own for your digital transformation.
Join us next week, on Thursday, April 2 at 1 p.m. CT for a webinar, Transforming Business in a Digital Era with Big Data and Microsoft, to learn why you should be including Big Data and advanced analytics as components of your digital transformation and what options you have when it comes to Microsoft technology.
