SQL Database Articles / Blogs / Perficient
https://blogs.perficient.com/tag/sql-database/

From Cloud to Local: Effortlessly Import Azure SQL Databases
https://blogs.perficient.com/2025/02/26/import-azure-sql-databases/ (Wed, 26 Feb 2025)

With most systems transitioning to cloud-based environments, databases are often hosted across various cloud platforms. However, during the development cycle, there are occasions when having access to a local database environment becomes crucial, particularly for analyzing and troubleshooting issues originating in the production environment.

Sometimes, it is necessary to restore the production database to a local environment to diagnose and resolve production-related issues effectively. This allows developers to replicate and investigate issues in a controlled setting, ensuring efficient debugging and resolution.

In an Azure cloud environment, database backups are often exported as .bacpac files. To work with such a database in a local environment, the .bacpac file must be imported and restored locally.

There are several methods to achieve this, including:

  1. Using SQL Server Management Studio (SSMS).
  2. Using the SqlPackage command-line tool.

This article will explore the steps to import a .bacpac file into a local environment, focusing on practical and straightforward approaches.

The first approach—using SQL Server Management Studio (SSMS)—is straightforward and user-friendly. However, challenges arise when dealing with large database sizes, as the import process may fail due to resource limitations or timeouts.

The second approach, using the SqlPackage command-line tool, is recommended in such cases. This method offers more control over the import process, allowing for better handling of larger .bacpac files.

Steps to Import a .bacpac File Using SqlPackage

1. Download SqlPackage

  • Navigate to the official SqlPackage download page.
  • Ensure you download the .NET 6 version of the tool, as the .NET Framework version may have issues processing databases with very large tables.

2. Install the Tool

  • Follow the instructions under the “Windows (.NET 6)” header to download and extract the tool.
  • After extracting, open a terminal in the directory where you extracted SqlPackage.

3. Run SqlPackage

  • Place the .bacpac file in the extracted SqlPackage folder (e.g., C:\sqlpackage-win7-x64-en-162.1.167.1).
  • Use the following example command in the terminal to import the .bacpac file:
    SqlPackage /a:Import /tsn:"localhost" /tdn:"test" /tu:"sa" /tp:"Password1" /sf:"database-backup-filename.bacpac" /ttsc:True /p:DisableIndexesForDataPhase=False /p:PreserveIdentityLastValues=True

4. Adjust Parameters for Your Setup

  • /tsn: The server name (IP or hostname) of your SQL Server instance, optionally followed by a port (default: 1433).
  • /tdn: The name of the target database (must not already exist).
  • /tu: SQL Server username.
  • /tp: SQL Server password.
  • /sf: The path to your .bacpac file (use the full path or ensure the terminal is in the same directory).

5. Run and Wait

  • Let the tool process the import. The time taken will depend on the size of the database.

Important: Ensure the target database does not already exist, as .bacpac files can only be imported into a fresh database.

The /p:DisableIndexesForDataPhase and /p:PreserveIdentityLastValues options tune how indexes are handled during the data phase and preserve the last values of identity columns. SqlPackage provides more reliability and flexibility than SSMS, especially when dealing with larger databases. A quick post-import check is sketched below.
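Once the import finishes, a quick sanity check from the local instance confirms the database landed intact. A minimal T-SQL sketch, assuming the target database was named test as in the example command:

    -- Confirm the imported database exists and is online
    -- (assumes the target database name 'test' from the example command).
    SELECT name, state_desc, recovery_model_desc
    FROM sys.databases
    WHERE name = 'test';

    -- Rough object count to confirm the schema came across.
    SELECT COUNT(*) AS table_count
    FROM test.sys.tables;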

 

Reference:

https://learn.microsoft.com/en-us/azure/azure-sql/database/database-import?view=azuresql&tabs=azure-powershell

SQL Magic Series – Minus Sign in ORDER BY
https://blogs.perficient.com/2022/11/28/sql-magic-series/ (Mon, 28 Nov 2022)

We are beginning a new series on SQL – a Magic Series. We will look at a few of the many simple yet effective tricks and solutions that make SQL easier to use day to day. Let's begin!

So, the question we are dealing with here is:

What does the SQL minus sign (-) mean in ORDER BY -emp_no DESC; ?

Before answering that, let's see how ORDER BY works without the minus sign:

  • ORDER BY emp_no DESC ;
    If you end the relevant query this way, you will obtain an output ordered with the highest employee number on top, the lowest employee number down the list, and the null values at the end.
[Screenshot: top part of the query output]

  • ORDER BY emp_no ASC ;
    This ending of the query will do the opposite – the null values will be on top, and then the employee numbers will grow from the lowest to the highest.
[Screenshot: top part of the query output]

And now, what changes with the minus sign:

  • ORDER BY -emp_no DESC ;
    Using this code, we will first order the employees from the lowest to the highest number, and then leave the null values at the end.
[Screenshot: top part of the query output]

  • ORDER BY -emp_no ASC ;
    Following the logic explained so far, this ending lists the null values first and then orders all employees from the highest to the lowest number.
[Screenshot: top part of the query output]

Depending on the situation, we may choose among these four ORDER BY variants; the sketch below shows all four side by side.
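Here is a minimal sketch that reproduces all four variants on a throwaway table. It assumes MySQL-style NULL ordering (NULLs sort as the lowest values, so they come first in ASC and last in DESC); other engines, such as PostgreSQL, place NULLs differently:

    -- Hypothetical table with a nullable employee number.
    CREATE TABLE employees (emp_no INT NULL);
    INSERT INTO employees (emp_no) VALUES (10001), (10002), (NULL);

    SELECT emp_no FROM employees ORDER BY emp_no DESC;   -- 10002, 10001, NULL
    SELECT emp_no FROM employees ORDER BY emp_no ASC;    -- NULL, 10001, 10002
    SELECT emp_no FROM employees ORDER BY -emp_no DESC;  -- 10001, 10002, NULL
    SELECT emp_no FROM employees ORDER BY -emp_no ASC;   -- NULL, 10002, 10001

Note that the minus-sign trick only works on numeric columns; some engines instead offer explicit NULLS FIRST / NULLS LAST clauses for the same purpose.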

Why would using a minus sign in such a situation be useful at all?

Specifically, the combination used in “ORDER BY -emp_no DESC;” is a frequently used technique: it sorts the output in ascending order of emp_no while pushing the null values to the end, instead of starting with a (sometimes large) block of nulls as a plain ASC sort would.

In other words, if the user prefers to see the null values at the end of the output, using ORDER BY -emp_no DESC; is a very convenient choice.

Read my previous blogs here.

How to Create Cascading Parameters in Reporting Services (SSRS)
https://blogs.perficient.com/2022/08/01/how-to-create-cascading-parameters-in-reporting-services-ssrs/ (Mon, 01 Aug 2022)

What is SSRS?

SSRS stands for SQL Server Reporting Services. It is a reporting tool developed by Microsoft that comes free with SQL Server. It produces formatted reports containing tables of data, graphs, and charts. Reports are hosted on a server and configured to run using parameters supplied by users. When we run the reports, current data is pulled from the database, XML file, or other data source. SSRS provides security features that control who can see which reports.

What are cascading parameters?

With cascading parameters, the list of values for one parameter depends on the value chosen for a previous parameter. Cascading parameters help the user when a parameter has a long list of values: each list can be narrowed based on the previous selection.

What is a cascading report?

Cascading parameters provide a way of managing large amounts of data in a paginated report. You can define a set of related parameters so that the list of values for one parameter depends on the value chosen in another parameter.


We will see how to create a sequence of cascading drop-down lists, where selecting an option in one filter narrows the next, and so on.

The following points need to be taken care of while creating cascading drop-down lists:

  1. Understanding the database structure
  2. Planning a report (To identify which drop-down lists and which data sets you need to create)
  3. Creating Drop Down Lists Parameters
  4. Filtering Datasets

Below is the query we will be using against the ‘SSRS_Demo_Data’ table, which includes Region, Country, and other information. We are going to copy this query and create our report.

Note: Before we create a report, we need a shared or embedded data source. In our case, we are going to use a shared data source because all of our reports point to the same database.

[Screenshot: the base query]

Step 1 – To create the new report, right-click on the Reports folder, add a new item, and rename it ‘CascadeParameter’, as shown in the screenshot below.

[Screenshot: adding the ‘CascadeParameter’ report]

Step 2 – An empty report is now created. Next, we will use an embedded or shared data source: right-click on Data Sources, add a data source, and configure it as shown below.

[Screenshot: data source configuration]

Here, we have used the shared data source reference, since our shared data source points to the same database as our table. We can also rename ‘DataSource2’ to something more readable if required.

Step 3 – We must create a dataset for the detail columns of the table whose information we want to show. This will be the main dataset. Give it a suitable name, select the data source, and paste the query as below.

[Screenshot: main dataset query with parameters]

Here, we have added the parameters, as you can see in the screenshot above: the region column is filtered by @RegionName (any parameter name will do) and the country column by @CountryName. We could also add further parameters, for example State.
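For reference, the main dataset query might look roughly like this. A minimal sketch: only Region and Country are named in this walkthrough, so the other columns are illustrative placeholders for your actual schema. The IN (@Parameter) form is what lets SSRS expand multi-value parameters later on:

    SELECT Region, Country, State, SalesAmount  -- State, SalesAmount are illustrative
    FROM SSRS_Demo_Data
    WHERE Region IN (@RegionName)
      AND Country IN (@CountryName);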

Step 4 – Right-click on the design surface, choose Insert, and add a table so that we can place some columns, as you can see in the screenshot below. We can make text bold or italic and change the font, background, etc.

[Screenshots: table layout and formatting in the designer]

Now our report is ready; we can go ahead and preview it as below.

[Screenshot: report preview]

If we put Asia in region and then try to enter the country names Pak and India together, nothing is returned, because the input is treated as a single parameter value. For now, we need to enter India on its own; the report then returns the records for India, as you can see below.

[Screenshot: single-value parameter result]

Step 5 – We have to make these parameters multi-value so that we can get results for both country names. To implement this, open the parameter properties of the country parameter and select ‘Allow multiple values’; it will now accept more than one value.

[Screenshots: enabling ‘Allow multiple values’ and the multi-value result]

Step 6 – To automate this with drop-down lists, so that we don’t have to type the region or country name every time, click on the dataset and create the available values, first for the region and then for the country.

[Screenshot: region dataset]

Here, open the region parameter’s properties, go to Available Values, select ‘Get values from a query’, and configure it as below.

[Screenshot: region parameter available values]

Now we can see the region names in the drop-down list and can select any region. Next, let’s make the country a drop-down as well, but in a cascading way: if we select Asia or Europe, only the countries belonging to that region should be shown in the drop-down.

Step 7 – Right-click on the new dataset and rename it ‘DSET_Country’. Since we are selecting the country, we need to select only the countries belonging to the chosen region, as below (a sketch of the query follows the screenshot).

[Screenshot: DSET_Country dataset]
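The DSET_Country query might look roughly like this. A minimal sketch, reusing the Region and Country columns from the main query; the filter on @RegionName is what makes the country list cascade:

    SELECT DISTINCT Country
    FROM SSRS_Demo_Data
    WHERE Region IN (@RegionName);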

Step 8 – Click on the country parameter’s properties, go to Available Values, select ‘Get values from a query’, and configure it as below.

[Screenshot: country parameter available values]

Here we can see that the country drop-down is greyed out, because we have not yet selected a region value. Once we select one, for example Europe, it will start showing us the countries from that specific region.

[Screenshots: cascading country drop-down after selecting a region]

So, that’s how we can create cascading parameters in SQL Server Reporting Services.

For more such blogs, click here.

Thanks for reading!! Hope you enjoyed reading this blog.

How To Create Reports Using SQL Server Reporting Services (SSRS)
https://blogs.perficient.com/2022/07/19/create-reports-using-ssrs/ (Tue, 19 Jul 2022)

 

SQL Server Reporting Services (SSRS) is a SQL Server subsystem that enables the creation of graphical, mobile, and printed reports using SQL Server and other data sources. It is part of the Microsoft SQL Server services suite. SQL Server is a relational database management system (RDBMS) that supports transaction processing, business intelligence, and analytics applications. SSRS produces formatted reports containing tables of data, graphs, and charts. Reports are hosted on a server and can be configured to run using parameters supplied by users.

Here are the prime reasons for using the SSRS tool:

1. SSRS is an enhanced tool compared to Crystal Reports

2. Faster processing of reports on both relational and multidimensional data

3. Allows better and more accurate decision-making for users

4. Provides a web-based connection for deploying reports, so reports can be accessed over the internet

5. Allows reports to be exported in different formats, and SSRS reports can be delivered via email

6. Provides a host of security features that help you control who can access which report

How Does SSRS Work?

Report users are the people who work with the data and want insights from it. They send a request to the SSRS server.

The SSRS server finds the report’s metadata and sends a request for data to the data sources.

Data returned by the data source is merged with the report definition into a report.

When the report is generated, it is returned to the client.

Now that we know what SSRS is, let’s see how to use it to create a report.

Step 1 – First, we open Visual Studio (SSDT) and click “Create new project”.

[Screenshot: starting a new project in SSDT]

Step 2 – Select Reporting Services and enter the report project name.

[Screenshot: naming the report project]

Step 3 – This is the home page of the SSRS project.

[Screenshot: SSRS project home screen]

Step 4 – Now we check, in SQL Server, the dataset for which we have to create a report.

[Screenshot: source data in SQL Server]

Step 5 – Right-click on Shared Data Sources. First we create the data source name, select the type Microsoft SQL Server, and in the connection string connect to your local server name.

 

[Screenshot: creating the shared data source]

Step 6 – Right-click on Shared Data Sources and select Add Data Source. Create the data source name, select the type Microsoft SQL Server, and in the connection string connect to your local server and select the database containing your dataset (table).

[Screenshot: data source connection properties]

 

Step 7 – Right-click on Shared Datasets. Create the dataset name, select the data source we already created, set the query type to Text, and write a SELECT statement for the table, as sketched below the screenshot.

[Screenshot: creating the shared dataset]
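The SELECT statement for the shared dataset might look roughly like this. A minimal sketch; the table and column names are illustrative placeholders for your own data:

    SELECT ProductName, Category, SalesAmount
    FROM dbo.SalesData;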

Step 8 – Right-click on Reports and select Add, then New Item. A new window opens; select Report and it will take you to the report design page.

[Screenshot: adding a new report]

Step 9 – Right-click on the design page, choose Insert, and click the required report item type; in our case we are using a chart.

 

[Screenshot: inserting a chart]

Step 10 – The chart type window opens and we can select any required chart type. In our case, we are using one from the Shape category.

 

[Screenshot: selecting the chart type]

Step 11 – Select the shape and it will appear in our design window.

[Screenshot: chart on the design surface]

Step 12 – Right-click on the chart; a popup window appears where we can add columns per our report requirements.

[Screenshot: chart data configuration]

Step 13 – After selecting the proper columns for the report, we can preview it on the Preview page, and now we can share this report.

 

[Screenshot: final report preview]

Here we have successfully generated a report using SSRS.

Please share your thoughts and suggestions in the space below, and I’ll do my best to respond to all of them as time allows.

For more such blogs, click here.

Happy Reading!

 

 

To SQL or to NoSQL? That is the question.
https://blogs.perficient.com/2021/02/10/to-sql-or-to-nosql-that-is-the-question/ (Wed, 10 Feb 2021)

There is a lot of confusion and hype out there regarding persistence technologies. It can be difficult to choose the right one. There are many considerations, and each situation requires thoughtful investigation. In this post, we’ll take a 10,000-foot view to help orient you in this landscape. To get the most out of this discussion, you’ll need a basic understanding of relational (SQL), document, and graph databases. We’ll discuss the trade-off between flexibility and performance and identify where these technologies fall on this spectrum. And finally, I’ll provide some recommendations and other resources to help you determine the best tech for the job.

What’s in a Name? Flexibility vs. Performance

SQL stands for “Structured Query Language,” and NoSQL stands for “Not Only SQL,” but for our purposes, it’s more useful to think of it as “No Structured Query Language.” SQL databases support a rich query language, and the data is structured in a generic form to support asking a wide variety of questions. Many NoSQL databases have a limited query language, and the data is structured to answer a limited number of questions. NoSQL is faster because it is optimized to answer fewer questions.

It’s really that simple. There isn’t a secret sauce that makes NoSQL faster than SQL. Query performance is a product of how closely your data structure matches the questions being asked. If the structure of the data matches the question, then the query provides the answer quickly. But when the question asked looks different from the structure of the data, transformations are applied, and “rendering” the answer takes longer. NoSQL databases can be faster than SQL because the question and answer are pre-rendered, or baked into, the data structure. So instead of querying the data with NoSQL, we are retrieving pre-defined answers to well-known questions. The degree to which queries are baked into the data structure varies by technology. Obviously, there are many factors to consider when evaluating performance, but recognizing that performance comes at the cost of query flexibility is a good place to start.

Pre-rendering Data

Pre-rendering data or baking the question and answer into the data structure is what makes NoSQL fast. But what does it mean to pre-render the data? Pre-rendering data is applying opinionated transformations before the data is persisted. Contrast this with persisting the data in a generic form and then applying an opinionated transformation when the data is accessed.

[Figure: pre-rendering data into a NoSQL document database]

The image above shows the process of “pre-rendering” data into a NoSQL Document database. (1) The data in its “natural” form is (2) transformed into an opinionated form that matches the application’s questions. (3) The data is persisted, and when (4) accessed, no transformation is required because the data has been pre-rendered for our specific purpose. The answer to the question, “What products, line items, and metadata exist for purchase order X?” has been built into the data structure. This document can’t answer the question, “What products are in the shoe department?” For that, we need a different document.

[Figure: SQL database with limited pre-rendering]

Contrast that with the image above of a SQL database with limited pre-rendering. (1) The data in its “natural” form (2) undergoes little to no transformation before it is (3) persisted into the database in a generic form. (4) When the data is accessed, it is transformed to match the structure required for the application. This real-time transformation is more expensive, but it is more flexible as the generic data can be transformed to answer various questions. This data structure can answer both, “What products, line items, and metadata exist for purchase order X?” and “What products are in the shoe department?” but not with the same speed and scale as a document database.
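To make the contrast concrete, here is roughly what the generic relational form looks like at query time. A minimal sketch over a hypothetical schema (purchase_orders, line_items, products; all names are illustrative):

    -- Question 1: products, line items, and metadata for purchase order X,
    -- assembled by joins at read time.
    SELECT po.order_no, p.product_name, li.quantity, li.unit_price
    FROM purchase_orders po
    JOIN line_items li ON li.order_id = po.id
    JOIN products p ON p.id = li.product_id
    WHERE po.order_no = 'X';

    -- Question 2: the same generic tables answer an unrelated question.
    SELECT p.product_name
    FROM products p
    WHERE p.department = 'Shoes';

The document database answers the first question without any joins, but only because the joins were effectively performed once, at write time.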

This is powerful knowledge, because we can dramatically alter our application’s performance by changing where the data is transformed. There are many consequences and cascading effects when we do this, so the change should not be made casually. Common side effects include denormalization, data duplication, and eventual consistency, which require more complex access patterns or synchronization logic.

And Then There Were Graphs

So far, we’ve discussed two database technologies at the extremes of the flexibility and performance spectrum: relational SQL databases and NoSQL document databases. Graph databases are a relatively new offering in the NoSQL space. Some have suggested they are the ultimate database and can be used to solve any problem. This simply isn’t true. Like other NoSQL databases, graph databases are special-purpose tools that trade flexibility for performance. There are several different flavors of graph database, but the most popular, and the one we’ll discuss here, is the Labeled Vertex Graph. This is what most people are thinking of when they say graph database. On the surface, a graph database might appear to be a relational document database, combining the best of relational SQL with NoSQL documents. However, graph databases are not a superset of these technologies. Instead, they sit in between them on the spectrum of flexibility and performance. Like other NoSQL databases, a graph database’s unique value comes from “baking in” certain optimizations and limiting the types of questions we can ask quickly.

[Figure: modeling a skill’s category as edges vs. vertex properties]

Consider the graph above. We have two different approaches to modeling the relationship between a skill and its category: edges on the left and vertex properties on the right. You might assume that the left graph is preferred because it models the relationships using an edge. But if you were to ask the question, “What skills exist in the language category?” querying the data on the left would be twice as expensive as querying the data on the right, assuming you’re running this query in an Azure Cosmos DB. Like other NoSQL databases, a graph database must bake in the right query logic to perform quickly. And different vendors offer different optimizations. When modeling a graph database, it’s critically important to understand the questions asked of the data. We can then model it to answer those questions quickly by baking some of the query logic into the model’s structure.

[Figure: category stored as both an edge and a vertex property]

 

Graph databases have a rich query language and offer a lot of modeling flexibility. For example, by storing the category as an edge relationship and as a property on the vertex, we introduce additional query options, but we also introduce data duplication. Graphs are a great tool if you ask the right questions to the right model. If you’re wondering whether a graph is the right tool for your use case, here’s a simple question to help guide you: “Are the relationships more important than the data?” If they are, that’s a good indicator that a graph may be the right tool.

Summary

There is a large variety of databases in the NoSQL space. And different vendors have optimized their databases, baking in unique performance characteristics. Think of a relational SQL database as a Swiss Army knife and NoSQL databases as special-purpose, high-performance tools that must be used skillfully. Each database can perform well or poorly based on the structure of the data and the nature of the questions. Higher performance typically requires an understanding of the questions in advance, baking in or pre-rendering the answers into specialized data structures that more closely match the questions’ structure; this provides better performance at the cost of flexibility. There’s no silver bullet. Matching a database to a domain problem requires a thoughtful evaluation of the questions you’ll be asking.

Our Approach

The Perficient custom development group specializes in the development of custom solutions. So “file > new project” is commonplace. Many of our clients are breaking new ground and creating unique products. This often means that we, and they, are learning about their product domain as we go. We delay decisions about persistence technologies for as long as possible, often building out portions of the application before committing to an approach. Once we better understand the needs and performance characteristics, we can select the right technology for the job. A relational SQL database is a common launching point because it provides more flexibility and can be modeled well without knowing all of the questions we’ll ask upfront. As features that require optimization are identified, we investigate NoSQL solutions that meet those specific needs. Many modern cloud solutions rely on various persistence technologies; SQL is still an excellent tool, and NoSQL provides great new options for managing data on a global scale.

We are technologists with a deep love for connecting people with software solutions that enrich their lives. Don’t hesitate to contact us to share your vision and discuss how we can help you deliver it.

Additional Resources

Here are some additional resources to help in your exploration of NoSQL.

SQL vs. NoSQL Explained – YouTube

A Skeptics Guide to Graph Databases – David Bechberger – YouTube

 

Tackle Security Concerns for Application Modernization
https://blogs.perficient.com/2019/09/24/tackle-security-concerns-for-application-modernization/ (Tue, 24 Sep 2019)

In our previous post, Create Your Transformation Roadmap for Application Modernization, we offered guidance to prepare your organization for successful cloud adoption. Part 2 of this series addresses some of the security concerns you may stumble upon in your cloud journey. We also share some best practices for infusing security across your organization.

Questions about cloud security have been a major stumbling block in recent years. The biggest concerns include:

  • Providing IT resilience (e.g., backup, disaster recovery, high availability, and continuity planning) to protect data and ensure continued business operations in the event of a disruption
  • Securing the environment from external threats and preventing any unplanned data exposure
  • Understanding the cloud responsibility model and accounting for all IaaS, PaaS, and SaaS security considerations with your cloud provider and between application owners and platform operations teams
  • Establishing structured identity and role access that limits exposure of all resources to only appropriately privileged resources

Want to see where you stand with other companies’ app modernization? Complete this short survey and get access to a comprehensive report.

Shift in perspective about cloud and security

Despite these overarching concerns, security teams are significantly more receptive to cloud solutions. “Skepticism about cloud as a viable option has diminished. These teams now believe that cloud is largely more secure than on-premises data centers,” said Mike Porter, CRM and Data Chief Strategist. “Vendors have large teams dedicated to security. The ability to improve upon and offer additional security services is vital for these vendors’ success.”

Most large cloud vendors and many software-as-a-service (SaaS) vendors now let you manage security separate from their infrastructure. However, you also want to protect your organization with built-in quality governance and security policies as cloud adoption scales and decentralizes. Security and compliance are byproducts of proper governance policies.

Your IT organization can also prioritize security by incorporating security best practices for application development/delivery and DevOps pipelines.

Why are more healthcare organizations moving to the cloud? See how they secure data in cloud environments.


To learn more, you can download our entire guide here or below.

Create Your Transformation Roadmap for Application Modernization
https://blogs.perficient.com/2019/09/10/create-your-transformation-roadmap-for-application-modernization/ (Tue, 10 Sep 2019)

In our previous post, Multiple Paths to Cloud: Migrating Legacy Applications, we explored options that allow you to develop, deploy, and manage new applications on cloud but still continue using your existing data center. Next, we’ll provide guidance on planning and preparing your organization for successful cloud adoption.

Part 1 highlights the importance of creating a transformation roadmap. This strategic element serves as the North Star for your cloud adoption objectives, which will guide your organization to achieve its desired business outcomes.

Getting Started with Application Modernization

While there are multiple paths to cloud adoption, it may be difficult to choose a starting point. We’ve helped numerous clients identify the right emerging technologies for their applications and address the barriers presented by legacy systems. This will require operational changes in your IT organization, including cloud migration, cloud infrastructure transformation, and database migration, to name a few.

“By 2023, enterprise spending on cloud services and infrastructure will be more than $500 billion. Adopting the cloud is no longer primarily about economics and agility – it is becoming enterprises’ most critical and dependable source of sustained technology innovation.” (IDC)

Gain insight into similar companies’ business goals for application modernization. Take this quick survey to see how you compare.

Develop a Transformation Roadmap

A transformation roadmap is essential for achieving cloud adoption goals and getting your company from point A to point B. These roadmaps address a number of areas such as digital business strategy, technical architecture, and IT staff and process maturity.

For cloud adoption, a transformation roadmap typically includes:

  • Your organization’s goals and barriers
  • Journey maps for your customer experience
  • Assessment and inventory of your technology
  • Gap analysis and recommendations
  • IT goals (e.g., cloud first, agile, DevOps)
  • Technology options and reference architecture
  • Application portfolio rationalization and migration options (e.g., build versus buy)
  • Budgets, benefits, and timelines
  • Change and communication plans
  • Program governance (e.g., organization, roles and responsibilities, dashboard KPIs, accountability)
  • Digital products and marketplace offerings
  • Program execution in the context of goals (e.g., agility, innovation, DevOps)

The finished roadmap should show an evolution of your application landscape as your organization transitions from current state (legacy) to future state (modernized). The roadmap should also highlight business and technical outcomes along the way.

Additionally, the future state should consider long-term technical trends and feasibility as well as business considerations for digital product offerings and revenue streams. You must execute a cloud adoption strategy within the context of budgets and timelines, preferably with business value delivered along the way.

To learn more, you can download our entire guide here or below.

Multiple Paths to Cloud: Migrating Legacy Applications
https://blogs.perficient.com/2019/08/14/multiple-paths-to-cloud-migrating-legacy-applications/ (Wed, 14 Aug 2019)

In our previous blog post, Multiple Paths to Cloud: Taking a Cloud Native Approach, we discussed the importance of a unified plan for cloud adoption as well as the benefits of cloud native to build apps. In part two, we examine options for migrating legacy applications to the cloud. This path to cloud adoption has several intricacies and considerations that determine the best choice for your organization. We take you through some of the nuances and provide key insights to help you on your journey.

Current Options for Migrating Legacy Applications

Legacy applications and systems are like a time capsule. They represent your business model and state of business rules at a particular point in time. This embedded view is invariably outdated and likely a barrier to innovation. Legacy systems typically have dated and undesirable architecture, a cumbersome user experience, and procedural code that doesn’t work well with modern application platforms.

What are your options if you want to develop, deploy, and manage new applications on cloud but continue using your existing data center?

Hybrid cloud is a popular choice. It allows you to keep some of your data and applications on a cloud platform while other workloads (e.g., complex legacy apps and core business apps) remain on an on-premises server. Migration with hybrid cloud is gradual, which many companies prefer because they aren’t prepared to move everything at once.

“More than 75% of midsize and large organizations will have adopted either a multicloud or hybrid IT strategy by 2021.” (Gartner)

Other options for migrating applications to the cloud include a “lift-and-shift” approach (containerizing) with minor changes; refactoring applications to adhere to twelve-factor (cloud-native) app guidelines; and, finally, rewriting applications to best leverage cloud and DevOps.

With lift-and-shift, you can move legacy applications to the cloud without redesigning them. Additionally, this approach represents a low-cost migration option. Specific cloud characteristics, such as elastic scalability, are often desirable for redesigning and rewriting customer-facing and business-critical applications.

How are other companies tackling application modernization? Complete this brief survey to find out.

The Next Wave: Modernizing Core Applications

Another big unknown for well-established enterprises is modernizing core business applications – ERP, CRM, and others – that support backend operations. Forrester predicts that this scenario will prompt a second wave of cloud adoption for enterprises. These businesses will need help with designing and modernizing complex, core legacy applications that have run for decades. In fact, many of these applications will eventually move to a SaaS model and require a specific integration and migration strategy.

Companies seek “innovative development services for enterprise apps, which will drive adoption (and spending) as companies start tearing apart core business apps and modernize them with innovative analytics, machine learning, IoT, messaging, and database services created in the cloud.” (Forrester)

Considering the variety of cloud migration options, a holistic migration plan is often needed. Large-scale migrations will require taking inventory of applications, categorizing and prioritizing them, and developing a prescribed migration plan.

To learn more, you can download our entire guide here or below.

Multiple Paths to Cloud: Taking a Cloud Native Approach
https://blogs.perficient.com/2019/07/31/multiple-paths-to-cloud-taking-a-cloud-native-approach/ (Wed, 31 Jul 2019)

Cloud technologies support an increasing amount of digital transformation initiatives. While cloud isn’t the ultimate goal of transformation, it provides speed, innovation, and scale to re-imagine business in the digital age. This two-part series reveals the differences between cloud consumers within your company and the benefits of taking a cloud native approach.

Consider Your Audience

“Cloud maturity is a multilane highway. Just as you won’t engage customers via a single marketing channel, you won’t have a single cloud strategy,” states Forrester. Before making any changes, you’ve got to create a plan for implementing cloud solutions.

To develop a unified plan, first think about the needs of primary cloud consumers within your company. These are business leaders, development teams, and leaders in the IT organization – each with a different take on how cloud will help achieve their respective goals:

  • Business leaders want to boost customer experience, sales, and customer retention
  • Developers want a better way to build new things
  • Tech leaders want to improve infrastructure agility

Let’s focus on developers and tech leaders, and explore their perspectives on trends related to new and legacy applications. Determining the right approach for application modernization will depend on your company’s specific needs.

Want to see where you stand with other companies’ app modernization? Complete this short survey and get access to a comprehensive report.

Taking multiple simultaneous paths to cloud accelerates digital business

Cloud Native

With increasing demands from business leaders to modernize, tech leaders and developers often look at this situation in parallel. If your company is focused on building new applications to enhance the customer experience, then your IT organization and development teams need to address a couple of key questions:

  • Where do we build?
  • What do we do with our data center?

Taking a cloud-native approach accelerates your time to market because it’s faster and easier to build and deploy new applications. Cloud native also simplifies the process of modifying apps and deploying those enhancements and updates to customers.

For example, a financial services organization wanting to modernize its applications might opt to use public cloud platform as a service (PaaS) paired with custom microservices development. PaaS storage, SQL database as a service, and cloud-native logging and monitoring can quickly recreate the business value of the original application, which encourages innovation and enhancements that weren’t previously possible. The application and operations teams also benefit from high availability, autoscale capabilities, integrated monitoring, and end-to-end team ownership of service deployments without impacting the entire solution.

To address the question of your data center, you should think about the future of your company as it pertains to running infrastructure. You can move existing applications and data from your data center to the cloud and gain economic and scalability benefits. Some cloud-native platforms offer the ability to modernize your legacy applications to be more resilient, faster, easier to maintain, and cost-effective.

Keep in mind that the subject of modernizing legacy applications is slightly more complex and often leads to conversations around legacy migration and solutions to blend your new innovations with your current state of business.

Stay tuned for part two of this series that further explores legacy migration as an approach for application modernization.

 

Northwestern Medicine Uses Epic to Deliver Value-Based Care
https://blogs.perficient.com/2015/08/17/northwestern-medicine-uses-epic-to-deliver-value-based-care/ (Mon, 17 Aug 2015)

Recently, Kate Tuttle, my colleague and healthcare marketing guru, wrote a post over on Perficient’s Healthcare Industry Trends blog, describing the shift from a fee-for-service based model to a value-based care model and the subsequent need for a 360-degree patient view. Many healthcare organizations are facing challenges around transforming data into meaningful information – information that outlines the population and identifies the most high-risk patients, resulting in improved management of chronic diseases and improved preventative care.
Health data has become a powerful influencer in population health management as organizations seek to analyze data and translate it into actionable, real-time insights that will lead to smarter business decisions and better patient care.
Because of the changes in the delivery model and payment reform,  these organizations increasingly look to implement a centralized data warehouse that will meet the growing data and reporting needs, and provide the health system with a central data repository for clinical, financial and business data.
Kate also shared that Cadence Health, now part of Northwestern Medicine (a large Epic user), sought to leverage the native capabilities of Epic in the management of their population health initiatives and value-based care program. Cadence Health engaged us because of the work we’ve done with ProHealth Care, the first healthcare system to produce reports and data out of Epic’s Cogito data warehouse in a production environment.

By leveraging Epic’s Cogito and Healthy Planet, Northwestern Medicine is able to track the health of their population and evaluate whether or not patients with chronic diseases are proactively getting care. They also have real-time reports that provide their physicians with a dashboard view, designed to instantly give them an overview of the performance of their patient population across all registry-based measures.

You can learn more about Northwestern Medicine’s value-based care journey in a webinar next week, on Thursday, August 27th at 1:00 PM CT.
Register below to join the live session or receive the on-demand version to hear Rob Desautels, Senior Director of IT at Cadence Health, and Perficient healthcare experts:

  • Analyze how Epic’s Healthy Planet and Cogito platforms can be used to manage value-based care initiatives.
  • Examine the three steps for effective population health management: Collect data, analyze data and engage with patients.
  • Discover how access to analytics allows physicians at Northwestern Medicine to deliver enhanced preventive care and better manage chronic diseases.
  • Discuss Northwestern Medicine’s strategy to integrate data from Epic and other data sources.


One more thing… if you are an Epic user planning to attend the 2015 Epic UGM in just two weeks, we welcome you to join us for an evening event on September 3rd at the Edgewater in Madison, WI. Heidi Rozmiarek, Assistant Director of Development at UnityPoint Health, and Christine Bessler, CIO at ProHealth Care, will lead a discussion focused on how organizations are currently leveraging the data housed in Epic systems and planned initiatives to gain even further insights from their data. Register here – space is limited.

Azure: Did You Know? Hybrid Connections as VPN Alternative
https://blogs.perficient.com/2015/06/01/azure-did-you-know-hybrid-connections-as-vpn-alternative/ (Tue, 02 Jun 2015)

In real-life cloud deployment scenarios, one very common case is when only part of the application resides in the cloud. Usually it’s when there is a legacy system that can’t be migrated to the cloud and resides on premises, or when it’s not optimal to deploy the entire system to the cloud. After all, cloud is not the answer to every question. In this case, there is a need to establish a connection between the parts of the application deployed in Azure (for example, a web site) and the parts that reside on premises (for example, a mainframe).

There is more than one way of connecting Azure resources to on-premises applications. The most obvious is a dedicated private connection (Azure ExpressRoute) between the Azure cloud and the on-premises (or co-located) environment. It’s fast and solid, but not exactly cheap (see http://azure.microsoft.com/en-us/pricing/details/expressroute/).
Then there is an alternative: Azure Hybrid Connections (https://azure.microsoft.com/en-us/documentation/articles/integration-hybrid-connection-overview/), which also allow an application deployed in Azure to access applications on premises. And, unlike a VPN, they’re free. In essence, setting up an Azure Hybrid Connection requires the following steps:
– You need to set up a new BizTalk service in the Azure portal (or piggy-back on an existing BizTalk service if you have one already).
– You need to configure a new Hybrid Connection in the Azure portal. Each hybrid connection is specific to an on-premises server and port number. For example, if you have an on-premises SQL Server, you need to create a hybrid connection for that SQL Server name and port (usually 1433). Of course, the server name (or IP) can be internal to your environment.
– Finally, you need to download and install on your internal network a Hybrid Connection Listener (a Windows service). It doesn’t have to reside on the same server as the resource you are trying to access from Azure, but it should have access to it. This listener acts as a software router enabling Azure to connect to your on-premises application.
And then the magic starts to happen: your Azure application can work with your on-premises resource just as if it were on the same network. Note that the Azure application should keep addressing the on-premises resource by its internal server name or IP; Azure will take care of routing traffic through the listener.
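A quick way to verify the route end to end is to run a trivial query from the Azure application against the on-premises SQL Server, addressed by its internal name. A minimal T-SQL sketch (the internal server name in the comment is illustrative):

    -- Run from the Azure side through the hybrid connection, e.g. with a
    -- connection string pointing at the internal name: Server=internal-sql-01.
    SELECT @@SERVERNAME AS server_name, @@VERSION AS sql_version;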
[Diagram: Azure hybrid connection architecture]

Microsoft Azure Data Platform Capability Expands Yet Again
https://blogs.perficient.com/2015/05/20/microsoft-azure-data-platform-capability-expands-yet-again/ (Wed, 20 May 2015)

Yet again, Microsoft builds on their increasingly compelling Data Platform story by bringing out new offerings.
As my colleague Stan Tartinovksy wrote last week, Azure Data Warehouse is coming. But that’s not the only new piece of the Microsoft data environment.
Also announced at the Ignite 2015 conference was a new Elastic Databases feature for Azure SQL Database. This feature is ideal for developers who build SaaS applications that use large numbers of databases to scale to unpredictable resource demands. Rather than needing to overprovision in order to accommodate peak demand, developers and sysadmins will be able to use Elastic Databases to configure a database pool that shares resources across multiple databases (upwards of thousands) within a controllable budget. Microsoft will also be making tools available to help query and aggregate results, as well as to implement policies and perform transactions across the database pool.
And the other major new offering is Azure Data Lake. A Data Lake is a hyper-scale data store for big data analytic workloads, designed as a single place to store every type of data in its native format, with no fixed limits on account size or file size, and with high throughput to increase analytic performance. Azure Data Lake is Hadoop File System compatible and integrated with Azure HDInsight. It will also be integrated with Revolution R Enterprise and industry-standard Hadoop distributions like Hortonworks and Cloudera, not to mention supporting individual Hadoop projects like Storm, Spark, Kafka, Flume, etc.
Elastic Databases for Azure SQL Database is currently in preview. Azure Data Lake will be released to preview later in 2015.
 
