Shiv Singh – Perficient Blogs (https://blogs.perficient.com)

Resiliency – A Core Pillar of Any Business-Critical App in the Cloud
https://blogs.perficient.com/2019/11/01/resiliency-a-core-pillar-of-any-business-critical-application-running-on-cloud/ | Fri, 01 Nov 2019

Cloud offers undeniable benefits when it comes to building cost-effective, agile solutions. Enterprise-wide pushes to modernize application stacks have added fuel to cloud migration initiatives. As more and more monoliths are decomposed and redesigned using a distributed microservices architecture, the large portfolio of cloud-native services makes developing and deploying in the cloud faster and more economical. However, teams that build and plan for the cloud assume the high availability of cloud services as advertised by the service provider, and by extension they presume the continued availability of their solution through a disaster. To put this in perspective, cloud migration and the adoption of cloud-native services do not automatically make your application architecture resilient. An unintended consequence of this assumption is an increase in overall enterprise risk.

Resiliency Function

To better account for resiliency, the cloud itself should be treated as a separate entity whose functional and non-functional aspects are defined, assessed, and monitored separately. Most implementations account for optimal and economical use of the cloud, with little pervasive planning for resiliency.

Having a resiliency-driven enterprise function or a focused team leads to better upfront assessment and planning for the application and physical architecture. Such a function can go a long way toward striking the right balance between the agility the cloud provides and the organization’s tolerance for risk. If organizations fail to build and integrate a resiliency function into their application development lifecycle, they willingly accept the risk of unplanned downtime, especially when they are dealing with business-critical workloads.

Assessing this risk requires reliably quantifying the cost of downtime. According to some estimates, “On average, an infrastructure failure can cost $100,000 an hour and a critical application failure can cost $500,000 to $1 million per hour.”* Businesses can’t afford repeat occurrences of unpredictable downtime events. As such, a resiliency function is of paramount importance to any organization whose business-critical workloads are either developed in the cloud or are being re-platformed and migrated to it.

A resiliency function helps strengthen business resiliency while simultaneously assessing, measuring, and rectifying technical issues that pose an inadvertent risk to a business-critical application. The team tasked with this function builds hypotheses around the edge cases that can affect application availability.

At the end of this blog, I have posted links to articles detailing the consequences when companies fail to give edge cases for critical workloads the attention they deserve.

Resiliency Approach in the Cloud

A resiliency-focused team should critically examine every component tied to the application design. Failure mode and effects analysis (FMEA) is a good starting point for assessing failure intensity. The resiliency team should work with application, networking, security, and infrastructure architects to develop an interaction diagram of all components that make up the overall application stack. Each interaction point should then be assessed individually for all possible failures, and each failure scored against severity, observability, and probability.

The outcome of this exercise is a risk profile for each specific failure, built by calculating a risk priority number (RPN) from the three scores above. The highest-risk failures are those with high probability, high severity, and low observability. The resiliency team should then identify a finite set of high-RPN failures that can be assessed and replicated via PoCs, and recommend potential solutions to the different stakeholders based on observable output from those PoCs. In a more mature setup, resiliency teams become an integral part of application development teams and help build utilities and frameworks that specifically target failure points and bridge structural deficiencies in application code.
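To make the scoring concrete, here is a minimal sketch of an RPN calculation; the 1–10 scales, the example failure modes, and the scores are illustrative assumptions rather than a prescribed standard.

```python
# Illustrative FMEA scoring: rank failure modes by Risk Priority Number (RPN).
# Scales are assumed to run 1-10; higher severity/probability and higher
# detection difficulty (i.e., lower observability) drive the RPN up.
from dataclasses import dataclass

@dataclass
class FailureMode:
    interaction_point: str
    description: str
    severity: int     # 1 = negligible impact, 10 = catastrophic
    probability: int  # 1 = rare, 10 = almost certain
    detection: int    # 1 = easily observed, 10 = nearly impossible to detect

    @property
    def rpn(self) -> int:
        return self.severity * self.probability * self.detection

failure_modes = [
    FailureMode("app -> payment API", "downstream API timeout", 8, 6, 4),
    FailureMode("app -> cache", "cache cluster failover stalls", 5, 4, 7),
    FailureMode("app -> queue", "message broker loses a partition", 9, 3, 8),
]

# High-RPN failures become candidates for PoCs and chaos experiments.
for fm in sorted(failure_modes, key=lambda f: f.rpn, reverse=True):
    print(f"{fm.rpn:4d}  {fm.interaction_point:20s} {fm.description}")
```

In practice the scores come out of FMEA workshops with the application, networking, security, and infrastructure architects rather than being assigned by one person.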

Chaos engineering and cloud service verification are two other offshoots of the resiliency function. Chaos engineering builds on the failure hypotheses and carries out failure injections at all vulnerable sections of the application architecture and its associated infrastructure. The output tells you how well your application can sustain inadvertent failures and attacks, and checks the application’s built-in defenses against them.

Ideally, chaos experiments are conducted in live production environments, but if your organization is new to chaos testing, running it in a production-like environment (UAT, performance, etc.) is a good starting point.
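As a crude illustration of a failure-injection experiment, the sketch below randomly terminates one running instance that carries a hypothetical chaos=resiliency-target tag and relies on its Auto Scaling group to restore capacity; a production-grade chaos tool (e.g., Chaos Monkey) adds guardrails this sketch omits.

```python
# Bare-bones chaos experiment (assumption: instances carry the hypothetical
# tag chaos=resiliency-target and belong to an Auto Scaling group that should
# replace any terminated instance). Run only in an environment you may break.
import random
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:chaos", "Values": ["resiliency-target"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]

if instance_ids:
    victim = random.choice(instance_ids)
    print(f"Injecting failure: terminating {victim}")
    ec2.terminate_instances(InstanceIds=[victim])
    # Hypothesis: the ASG restores capacity and the application's availability
    # SLO is unaffected. Verify through your monitoring stack.
```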

Cloud service verification is a separate exercise: it runs deep, targeted experiments on the cloud-native services that are part of your application architecture. It includes understanding the end-to-end design of the service being provisioned and how it behaves under a stress event. Any failures or issues encountered during this process are fixed by building custom utilities or tuning the service configuration, and they should also be discussed in detail with the cloud provider.

Summary

An always-on world in the cloud comes with inherent risks. The complexity and impact of outages require detailed, focused attention. A typical application architecture in the cloud has many components, and the availability of such an application is an aggregate of the availability of all of them. Understanding the interaction points, building a failure mode for every possible failure scenario tied to each interaction point, testing each of these scenarios to understand the impact, and building solutions to address the gaps can add significant strength to your disaster recovery plans.

At Perficient, we continue to monitor cloud technologies and trends. We understand the challenges in embracing cloud technologies and have come up with proven cloud-based solutions, platforms, architectures, and methodologies to aid a smoother migration. If you’re interested in learning more, please reach out to one of our specialists at sales@perficient.com.

References and links

* IDC, “DevOps and the Cost of Downtime: Fortune 1000 Best Practice Metrics Quantified,” Stephen Elliot, December 2014, IDC #253155.

https://www.forbes.com/sites/donnafuscaldo/2019/10/17/chime-suffers-outage-that-prevents-customers-from-making-purchases-accessing-cash/#7e9e5c75ca3a

https://www.wired.com/story/feds-boeing-737s-better-designed-humans/

https://www.ntsb.gov/investigations/AccidentReports/Reports/ASR1901.pdf

https://www.bleepingcomputer.com/news/technology/amazon-aws-outage-shows-data-in-the-cloud-is-not-always-safe/

Anthos, building your cloud ecosystem
https://blogs.perficient.com/2019/04/10/anthos-building-your-cloud-ecosystem/ | Thu, 11 Apr 2019

Google Next 2019: 30,000 tech enthusiasts descended on the streets of San Francisco to learn about, promote, and embrace what Google Cloud has to offer. Google Cloud is steadily gaining momentum and is now the third-largest cloud services provider in the world. Google’s engineering team understands the complexities of a rigid IT setup, let alone of modernizing it in the cloud. Even as cloud computing proliferates, application teams are still asking: why is “write once, run anywhere” still a myth?

During the first keynote session of Next 2019, Googlers took an affirmative step toward busting this myth and introduced Anthos to the tech community. Anthos is Google Cloud’s new open platform that lets you run an app anywhere in a simple, flexible, and secure way. Built on the foundations of open source, of which Google itself is a pioneer, Anthos lets you run applications unmodified on on-prem hardware or on any other public cloud. Built on the Cloud Services Platform that Google announced last year, Anthos steps up the game for hybrid and multi-cloud strategy with one core focus in mind: “write once, run anywhere.” One of the interesting facts about Anthos is that it is a 100% software-based solution with open APIs, which makes its adoption standardized and easy.

So what does Anthos comprise? Anthos is an integrated platform. GKE, GKE On-Prem, Istio, and Anthos Config Management are its core building blocks. It is integrated with Stackdriver and the GCP Marketplace for rapid application development, and it has adapters to connect with existing or Google CI/CD toolsets for automated build and infrastructure deployment.

Anthos is an enabler in your app modernization roadmap. If your organization’s application stack is a distributed monolith and you are on the path to adopting a microservices architecture, Anthos components can not only manage the on-prem containerized microservices but also scale them out in a hybrid or multi-cloud setup through a unified, consistent single pane of glass.

Anthos is a right step toward building a multi-cloud ecosystem. Applications need scale to grow, and app modernization breaks apps into containerized microservices to enable that scale and abstract away infrastructure porting. Anthos provides this multi-cloud hybrid ecosystem with the tools needed to develop, build, and deploy applications in a unified, consistent, and reliable fashion. It promises to make applications simpler to operate, secure, and modernize, so you can truly “write once, run anywhere.” As the developer community looks deeper into the platform, it will proliferate through partnerships, mature, and serve broader business needs. You can learn more about Anthos on Google Cloud’s website.

We at Perficient are taking a deeper look at this platform and will continue to share our findings and best practices soon.

Developing PaaS Using Step Functions and Hashicorp Tools
https://blogs.perficient.com/2018/11/19/developing-paas-using-step-functions-and-hashicorp-tools/ | Mon, 19 Nov 2018

Introduction:

Cloud tools now let DevOps teams deliver cloud infrastructure alongside the applications that are deployed on it. Did I just say, build a PaaS solution? Commercial PaaS solutions like OpenShift and Pivotal Cloud Foundry can be expensive and require specialized skills. They do speed up development and keep your enterprise cloud adoption vendor agnostic, but adopting them calls for a strategic shift in the way your organization does application development. There is nothing wrong with that approach; it just takes time – PoC, PoV, road show, and then a decision. While PaaS solutions are great, another alternative is to use individual AWS services alongside open source tools that help provision, secure, run, and connect cloud computing infrastructure.

Operating knowledge of these tools, orchestrated into a cohesive workflow, can help your DevOps team do continuous deployment on cloud infrastructure with results similar to commercial PaaS solutions. This approach is economical and manageable without hiring specialized skill sets – because your development team already has the skills to build “castles in the cloud.” While these are conceptualized as solutions, the end result is a full-blown product with its own governance and management lifecycle. It can easily be integrated with the application delivery pipeline. Moreover, the solution provisions immutable EC2 instances that capture log information for monitoring and debugging. The underlying belief driving this approach: complete automation and seamless integration using non-commercial tools and services.

Solution:

At first, it appears that the solution lies in Elastic Beanstalk. Though Beanstalk produces immutable infrastructure, it has certain drawbacks when it comes to encrypting configuration and log data during infrastructure provisioning, which can pose a challenge to organizations that operate in a highly regulated industry. Requirements such as pushing service logs to an encrypted S3 bucket, making the AMI generation process configuration-driven, and automating the monitoring and auditing of infrastructure call for a custom, comprehensive, configuration-driven solution. Moreover, highly regulated industries like finance and healthcare require complete encryption of data in transit and of logged data.

Cloud infrastructure automation can be broken into five key processes:

  • Pre Provision
  • Bakery
  • Provision
  • Validation
  • Post Provision

Consider the processes above as individual workers, each trying to accomplish a fixed, independent task. AWS Step Functions can orchestrate the workflow among these activity workers and can be configured to build a comprehensive, configuration-driven, and dynamic infrastructure provisioning process. With Step Functions, the five processes become individual states that are executed in chronological order; a process remains in a given state until the activity worker completes its activity. A state machine passes control from state to state, and each state internally executes activity workers built using Lambda functions.
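As an illustration of how these states might be wired together, here is a minimal sketch that registers such a state machine with AWS Step Functions using boto3; the function names, account ID, role ARN, and state machine name are placeholders, not the actual solution’s resources.

```python
# Sketch: register a five-state provisioning workflow with AWS Step Functions.
# All ARNs and names below are placeholders for illustration only.
import json
import boto3

sfn = boto3.client("stepfunctions")

def task(function_name, next_state=None):
    """Build a Task state that invokes a (placeholder) Lambda activity worker."""
    state = {
        "Type": "Task",
        "Resource": f"arn:aws:lambda:us-east-1:123456789012:function:{function_name}",
    }
    if next_state:
        state["Next"] = next_state
    else:
        state["End"] = True
    return state

definition = {
    "Comment": "Configuration-driven infrastructure provisioning pipeline",
    "StartAt": "PreProvision",
    "States": {
        "PreProvision":  task("pre-provision-worker", "Bakery"),
        "Bakery":        task("bakery-worker", "Provision"),
        "Provision":     task("provision-worker", "Validation"),
        "Validation":    task("validation-worker", "PostProvision"),
        "PostProvision": task("post-provision-worker"),
    },
}

response = sfn.create_state_machine(
    name="infra-provisioning-pipeline",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
)
print(response["stateMachineArn"])
```

Each placeholder Lambda would wrap the real work described for the corresponding state below.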

A quick summary of each process/state:

  • Pre Provision – This is the first stage of the process and is triggered by the application’s CI pipeline (most enterprise CI pipelines are built using tools like Jenkins). The pipeline sends a notification to an SNS topic, and a Lambda function subscribed to that topic triggers the Step Functions execution. In this step, the activity gathers pertinent information from an application configuration file, combines it with process-specific configuration and environment-related information received from the pipeline trigger, then encrypts this information and saves it to an encrypted EC2 parameter store. The application configuration file is generated by the application development teams using a rule-based UI that restricts access to AWS services as per application needs.

 

  • Bakery – This process is the heart of the automation solution and is the next state after Pre Provision. It uses tools like Packer, InSpec, Chef, and the AWS CloudWatch agent. The state calls a Lambda activity worker that executes an SSM command, which starts a Packer build on a separate EC2 instance (a minimal sketch of this Lambda-plus-SSM pattern follows this list). Packer pulls all the relevant information required for the build from the encrypted EC2 parameter store and starts the build, using Chef to layer the application, middleware, and other dependencies onto the AMI. After the Packer build, the application-specific AMI is encrypted and shared with the application AWS account owner for provisioning.

 

  • Provision – Once the AMI is ready and shared with the application account owner, the next state in the automation process is Provision. This state calls a Lambda activity worker which executes another SSM command that runs Terraform modules to provision the following: an ALB, a launch configuration with the AMI ID baked in the previous state, and an ASG to supply elasticity. At the end of this state, the entire application AWS physical architecture is up and running, and one should be able to use the ALB DNS name to connect to the application. SSH access is removed to keep the instances immutable.

 

  • Validation – Validation is the next stage in the process. After the infrastructure is provisioned, automated InSpec validation scripts validate the OS and the services provisioned. This phase, too, is invoked by a Lambda activity worker. InSpec logs are moved to an encrypted S3 bucket, from where they are sourced by the testing team to review and log defects as necessary. These defects are then triaged and assigned to the respective teams.

 

  • Post Provision – This is the last state in the process, where the newly provisioned infrastructure undergoes a smoke test before it is delivered to the application and testing teams. This state configures the EC2-based CloudWatch logs with an encrypted S3 bucket, from which the logs are exported into a log management tool like Splunk, where the operations team can build monitoring dashboards. Moreover, in this step, all AWS services provisioned, along with the application ID, are stored in a DynamoDB table for logging and auditing purposes. Lastly, this stage also initiates blue-green deployments for a smoother transition to the new release.
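Below is the minimal sketch of the Lambda-plus-SSM pattern referenced in the Bakery state above; the builder instance tag, working directory, Packer template name, and event shape are illustrative assumptions, not part of the original solution.

```python
# Sketch of a Lambda activity worker for the Bakery state: it fires an SSM
# Run Command that kicks off a Packer build on a dedicated builder instance.
# The builder tag, working directory, and template name are placeholders.
import boto3

ssm = boto3.client("ssm")

def handler(event, context):
    response = ssm.send_command(
        Targets=[{"Key": "tag:role", "Values": ["ami-builder"]}],
        DocumentName="AWS-RunShellScript",
        Parameters={
            "commands": [
                "cd /opt/bakery",
                # Packer pulls the rest of its inputs from the encrypted
                # parameter store populated in the Pre Provision state.
                "packer build -var app_id={} app-ami.json".format(event["app_id"]),
            ]
        },
        Comment="Bakery state: bake application AMI",
    )
    return {"commandId": response["Command"]["CommandId"]}
```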

The above infrastructure automation process nukes and paves the infrastructure using AWS services. A new release or an update to the base SOE image triggers the execution of the automation process. It can significantly improve the efficiency of deploying applications on AWS, greatly reduce EC2 provisioning time, and bring down your AWS operating costs over time. Though custom, these automation solutions are complex and require deep knowledge of cloud-native services and of the tools that build infrastructure as code. Perficient’s Cloud Platform Services team is adept in such solutions and can help your organization look past the “pet EC2 instances” world. If you’re interested in learning more, reach out to one of our specialists at sales@perficient.com and download our Amazon Web Services guide for additional information.

Building Data Streaming APIs
https://blogs.perficient.com/2018/04/19/building-data-streaming-apis/ | Thu, 19 Apr 2018

The internet produces 2.5 exabytes of data every day, and data management systems will accumulate 44 zettabytes of data by the end of 2020. To put things in perspective, 1 zettabyte is equivalent to 1 trillion gigabytes. That is a lot of data, and most of it contains relevant information. Automated workflows, social media, government agencies, and IoT devices have contributed significantly to this accumulation. As big data continues to grow, so do its importance and relevance. Structured or unstructured, this data is paramount to existing operations and has been a vital source of descriptive and predictive analysis. Therefore, organizations that generate and govern these data streams are interested in sharing them with value-adding consumers.

Streaming APIs are a great way to share this data with both external and internal consumers, and leveraging cloud-based managed services or cloud IaaS helps with scaling and global reach. However, there are a few key challenges that organizations often face when sharing data streams. To name a few:

  • Security (inflight and at rest)
  • Usage
  • Latency
  • Global outreach
  • Transfer protocols (focused on realtime processing)
  • Availability, and
  • Cost

While API gateways can address many of the above challenges, building APIs that can secure, monitor, and stream data simultaneously is challenging. Listed below are a few streaming API designs that you can evaluate for such use cases.

Data Streaming Networks (DSN) – DSNs are fully managed streaming services that stream real-time data globally. They have built-in networking intelligence and, to some extent, cloud IaaS redundancy to securely transfer streaming data to a global audience. PubNub and Pusher are two large DSNs that stream trillions of messages a year with minimal latency. Moreover, they support serverless edge computing, which filters and processes data right before it is consumed. Costs to use a DSN can run high for large streaming volumes and can pose an economic challenge for smaller organizations with large streaming requirements.

API gateway with WebSocket server – API management tools like CA API Gateway support WebSocket connections. The gateway proxies a REST API call to a backend WebSocket server, which can be configured to connect to a streaming source. The API gateway secures the call, monitors the data, and handles the WebSocket protocol. It can be deployed both on premises and in the cloud; the solution team is responsible for building the infrastructure and deploying the gateway. For global reach with minimal latency, deploy the gateway in the cloud.

Cloud-based streaming services – Cloud-based streaming services like AWS Kinesis, Kinesis Data Firehose, and Azure Stream Analytics are another way to stream data globally. These are fully managed services with service-specific configurations to retrieve data from the streaming source. While these services are region specific, they can be configured to distribute streaming data globally, which reduces latency for region-specific consumers. Moreover, they can push data straight into a managed big data service like Redshift or Azure SQL Data Warehouse to perform runtime analytics.
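As an illustration of the producer side of such a service, here is a minimal sketch that writes a record to an assumed, pre-created Kinesis stream; the stream name and payload are placeholders.

```python
# Minimal producer sketch for a managed streaming service (AWS Kinesis).
# The stream name and record payload are placeholders.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

record = {"sensor_id": "device-42", "temperature_c": 21.7}

kinesis.put_record(
    StreamName="telemetry-stream",           # assumed, pre-created stream
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=record["sensor_id"],        # spreads load across shards
)
```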

Custom broker solutions – A cheaper way of building a streaming API solution, and one that gives you more control, is to use open source broker solutions. These are time-tested solutions with strong open source community support. Stable releases can be deployed in the cloud and configured behind an API gateway (preferably a cloud one) for security and monitoring purposes. Mosquitto and RabbitMQ are two such broker solutions. While Mosquitto supports only MQTT (Message Queuing Telemetry Transport), RabbitMQ supports MQTT, WebSockets, and AMQP (Advanced Message Queuing Protocol) 1.0 and 0.x. The brokers can be deployed in a scalable environment with producer libraries acting as a bridge between them and the streaming source, and they can be put behind an API gateway to address networking and access challenges. Another, simpler way is to use the broker’s API endpoints directly along with its built-in security and monitoring features.
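For the broker route, a minimal publisher sketch using the Eclipse Paho MQTT client might look like the following; the broker endpoint, topic, and payload are assumptions, and the constructor shown is the paho-mqtt 1.x style.

```python
# Minimal MQTT publisher sketch against a self-hosted broker (e.g. Mosquitto
# or RabbitMQ with the MQTT plugin). Host, topic, and payload are placeholders.
import json
import paho.mqtt.client as mqtt

# paho-mqtt 1.x constructor; 2.x additionally expects a CallbackAPIVersion.
client = mqtt.Client(client_id="stream-producer-1")
client.connect("broker.example.internal", 1883)  # assumed broker endpoint

payload = json.dumps({"order_id": "A-1001", "status": "shipped"})
client.publish("orders/status", payload, qos=1)

client.disconnect()
```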

IoT will continue to generate more data, and AI and analytics engines will continue to consume it in ways that are yet to be uncovered. Between generation and consumption lies an opportunity to control, standardize, and enrich this data. If done right, managing the data flow process can generate value for both the data producer and the data consumer. Above is a brief look at some streaming API design patterns and services that you can evaluate for governing your data streams. For a more detailed explanation and help with building streaming API solutions, please reach out to one of our sales representatives at sales@perficient.com.

 

AWS OpsWorks for Chef Automate
https://blogs.perficient.com/2018/02/23/aws-opsworks-for-chef-automate/ | Fri, 23 Feb 2018

CIOs expect to shift 21% of their company’s applications to a public cloud this year, and 46% by 2020, according to a report by Morgan Stanley.

Intro

Recently, I attended a webinar on cloud migration, a joint presentation by folks from AWS and Chef. It touched on two key areas – migrating to the cloud and developing DevOps simultaneously – and demonstrated how Chef can be used to migrate, monitor, secure, and automate application development in a hybrid environment. Why hybrid? Because that is what many smart companies do: maintain a hybrid footprint to minimize availability risk.

Large organizations are slowly but steadily evaluating cloud adoption, and architecture teams are gradually modifying their organization’s enterprise reference architecture to reflect a willingness to investigate cloud technologies. The choices rendered for migration are either infrastructure centric or application centric: economic benefits drive infrastructure migration, whereas cloud-native architectures drive application migration. This post covers a brief on Chef Automate for infrastructure-centric migration. I use the words AWS and cloud interchangeably in my posts, primarily because of my experience in the AWS space.

Infrastructure Centric Migration

As a solutions architect, the foremost question I encounter when planning for cloud migration is: how do I step into a public cloud with minimal or no impact on my existing, and rather healthy, application development process? Chef Automate in AWS OpsWorks appears to be a good answer.

Some organizations already have an on-premise Chef installation. The easiest way for them to start with infrastructure-centric cloud migration is to spin up an EC2 instance in the cloud (security and networking setup implied), bootstrap the new EC2 instance against the in-house Chef server, and attach the existing run-list of required recipes to the instance. That is it! Your native Chef server will now treat this new node like any other instance in your organization’s network and will push cookbooks and recipes to it as it has been doing for the existing ones. What did we achieve with this simple spin-up and bootstrap of an EC2 instance? Our first step onto the cloud without any impact on the existing DevOps process. Once the EC2 node is tested for stability and performance, more EC2 instances can then replace the in-house instances. Thus comes along a gradual migration to the cloud through DevOps.

For organizations that do not have an on-premise Chef installation – since one requires a specialized skill set – a simpler way is to proceed with AWS OpsWorks for Chef Automate. It is a fully managed Chef server that has all the goodies of a rich Chef installation, including but not limited to workflow automation, compliance checking, and monitoring. It takes between 5 and 10 minutes to set up the server, and you get to pick your server instance size based on the number of projected nodes. Default security credentials to log onto the Chef Automate server and a sample test Chef repository are made available through the console; the test repository has the required directory structure built into it, which spares some time for more meaningful work. Chef Automate is fully compatible with the Chef Supermarket, where most commonly used cookbooks can be found. You can download and modify them for your application’s deployment needs, or generate a new one and code accordingly (that does require some knowledge of Ruby and JSON). Once the server is up and running, you can bootstrap both on-premise and EC2 instances to it. This is a more confident and bigger step toward infrastructure-centric cloud migration. After your hybrid Chef configuration is in place, you can set up a DevOps workflow to automate your application deployment.

Compliance is another good feature that comes out of the box with Chef Automate. The CIS benchmark can be downloaded and configured with the Chef server to help evaluate each node’s security profile. The ultimate result: instance hardening. Who loves to be hacked anyway?

Summary

In short, migration to the cloud is a first step in a totally new direction. With it comes anxiety, and no matter how adept the teams are, a little professional help to mitigate risks is always helpful. At Perficient we continue to monitor cloud technologies and trends. We understand the challenges in embracing cloud technologies and have come up with proven cloud-based solutions, platforms, architectures, and methodologies to aid a smoother migration. If you’re interested in learning more, please reach out to one of our specialists at sales@perficient.com and download our Amazon Web Services guide for additional information.

Discovering AWS Data Migration Service
https://blogs.perficient.com/2018/01/22/introduction-to-aws-data-migration-service-dms/ | Mon, 22 Jan 2018

Pete – “This year we have the budget to replace our existing data replication technology.”

Harry – “Good to hear. Have you evaluated any tools or services?”

Pete – “I recently attended an Amazon Web Services (AWS) seminar hosted by our cloud services team. They showcased many managed and core compute services. Harry, you are a certified AWS developer. Do you know of any AWS offering that might come in handy? Our organization’s IT goals are geared towards cloud adoption; we had better choose a cloud-based solution and ensure that our department’s IT strategy is in alignment with that of the organization.”

Harry – “I might just know of an AWS managed service that is worth evaluating – AWS Data Migration Service (DMS).”

Brief

Like Pete and Harry, many of us face a similar dilemma when evaluating data transfer technology. Coupled with the costs associated with new databases, these discussions lead to complex, lengthy migration plans with prolonged downtime. AWS Data Migration Service (DMS) helps with quick and secure data migration with minimal downtime for the consuming application. The service can be used to migrate data from on premise to AWS and vice versa, or even between two on-premise databases. It supports both homogeneous migrations (same source and destination databases) and heterogeneous migrations (different source and destination databases). The service can also move data straight into the Redshift data warehouse, where it can be used for SQL-based data analysis.

In the case of heterogeneous migrations, the free Schema Conversion Tool (SCT) can be used to automate the schema mappings. The tool automatically converts the source schema into the destination schema; elements that cannot be mapped are flagged for manual intervention and rectification. Once configured, a DMS replication task reads data from the source and moves it to the target database, and it can be configured to continue moving new or changed data to the target database in real time even after the initial migration.
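As a hedged sketch of what configuring such a replication task through the API might look like (the endpoint and instance ARNs, task name, and table-selection rule are placeholders for resources created beforehand):

```python
# Sketch: create a DMS replication task that does a full load followed by
# ongoing change data capture (CDC). All ARNs and the table selection rule
# are placeholders for pre-created endpoints and a replication instance.
import json
import boto3

dms = boto3.client("dms")

table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-sales-schema",
        "object-locator": {"schema-name": "SALES", "table-name": "%"},
        "rule-action": "include",
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-full-load-cdc",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load-and-cdc",   # full load, then replicate changes
    TableMappings=json.dumps(table_mappings),
)
```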

As with other AWS offerings, DMS is most economical when used alongside other AWS services. If the target database is Aurora, Redshift, or DynamoDB, then in a single-AZ (Availability Zone) environment the service is free for six months, with a possible extension of another three months upon request. This points to greater cost effectiveness as the organization adopts more of the AWS ecosystem. For example, AWS Aurora, an InnoDB-based storage engine that costs roughly one tenth the price of Oracle, can save you money both in migration and in storage if chosen as the target database. DMS is a low-cost service where the consumer pays only for the underlying compute resources, the storage used to manage replication logs, and data transfer if the outbound data is not stored in the same AZ or AWS region or resides outside of AWS. For a typical use case where on-premise data is moved to AWS, DMS can cost as little as $3.00 in service charges.

Sources for DMS

Source databases can be either on-premise or EC2-based instances – Oracle, MySQL, MS SQL Server, MariaDB, PostgreSQL, MongoDB, and SAP ASE. In short, if your organization uses the most common database systems and is not stuck on versions from the prehistoric IT era, in all probability your source database is a good candidate for DMS.

DMS also supports full load when using Azure SQL database as source. Good to know if your organization decides to migrate over from Azure to AWS.

AWS RDS supported databases are Oracle, MySQL, MS SQL, MariaDB, PostgreSQL and Amazon Aurora with MySQL compatibility.

DMS supports full data load and CDC (Change Data Capture) when using S3 as source.

Please refer to the DMS documentation for the exact versions supported.

Targets for DMS

Irrespective of the source storage engine, DMS creates MySQL-compatible target tables as InnoDB tables by default.

Supported on premise and EC2 based engines are Oracle, MySQL, MS SQL Server edition, MariaDB, PostgreSQL and SAP ASE.

AWS RDS supported databases are RDS Oracle, RDS MySQL, RDS MS SQL, RDS MariaDB, RDS PostgreSQL, Amazon Aurora with both MySQL and PostgreSQL compatibility, Redshift, S3 and DynamoDB.

Please refer to the DMS documentation for the exact versions supported.

Security

DMS supports security for data in transit and at rest. AWS KMS can be used to generate and manage encryption keys, SSL is supported for connections, and access to the service requires authentication through IAM as an authorized user.

Summary

AWS DMS is a highly resilient data migration service. It continuously monitors the source and target databases along with network connectivity; any interruption that halts a replication task is automatically rectified, and the process resumes from where it stopped. For higher availability, the replication process can be deployed in a Multi-AZ environment. If your organization is looking for an AWS-based replication solution, DMS could be your answer.

If you’re interested in learning more, reach out to one of our specialists at sales@perficient.com and download our Amazon Web Services guide for additional information.

 

The “Services Soup”
https://blogs.perficient.com/2017/05/01/the-services-soup/ | Mon, 01 May 2017

As efficient software applications penetrated every aspect of business workflow, so did the information systems that delivered them. Whether small or large, they spread like a viral fever; every workflow, department, business unit, or subsidiary took advantage of their efficiency and stable record management capabilities. While their adoption grew, certain challenges became inevitable. How do you make them talk to each other? Most of them are disparate systems looking for information that is captured and stored in another. The efficiencies were there, but the scale was missing.

This gave rise to the whole new world of system integration (SI), an area within IT that is very dear to me – simply because I have spent most of my working life doing it. SI is feeling the effects of tech evolution as well.

From dedicated connected pipes to service-oriented architectures, SI has come a long way. In the recent past, terminologies like “microservices” and “API” have started to significantly influence SI design, and these two, along with SOA, at times create a solution kludge for SI architects. Hence, I’m writing this small blog to examine the three concepts and see which of them in our “services soup” are worth further evaluation.

Simplified View

  • SOA (Service Oriented Architecture) – A style of software design where services are provided to other components by application components, through a communication protocol over a network.
  • Microservices – A scaled-down version of SOA, where services are fine-grained and protocols are lightweight to improve modularity.
  • API (Application Programming Interface) – A set of specifications that aids communication between different software components.

 

A little Tech Talk

  • SOA – This name is so indiscriminately used with Enterprise Service Bus (ESB) that at this point I consider SOA synonymous with ESB. However, for a service-oriented architecture, business entities matter: they are the functional core of the services. As the underlying data model gets complex, so do the business entities it represents, which leads to the development of complex, large services. Dismantling a large service becomes an inherent challenge; the end result is a “Megatron” service with many consumer versions and a tool to manage those versions. SOA garners more business value as the technical integration strategy takes a back seat. It promotes interoperability, flexibility, and the concept of shared services. These value-adds help businesses respond actively to changing market conditions, but the added complexity, heterogeneity, and many moving parts make SOA less attractive in its current shape, especially in an agile development environment.

 

  • Microservices – Microservices are the contemporary interpretation of SOA. One might consider SOA an evolution and microservices a revolution in SI. Componentization was the need of the hour, and fine-grained, objective-driven microservices were the answer. They support custom integration and are technology agnostic. They exist to service a very specific need; if another need arises, even a similar one, you don’t modify an existing microservice – you create a new one. Adaptive and agile software development methodologies make them a great fit in the current application development environment, and they fit well in the continuous integration realm. As the underlying technologies evolve, microservices can take advantage of them. When integration architects are tasked with a new SI undertaking, they should see whether microservices can be a solution before harping on an existing SOA-driven solution.

 

  • API – APIs span a large section of the SI spectrum. They started as low-level programming interfaces; in today’s world they can be appropriated as simple HTTP interfaces, often equated with REST and JSON data formats. APIs became popular with the advent of smartphones: developers needed swift access to back-end functions and, voila, the commercial API market was born. They started as functional gateways for internal consumers, but as markets expanded, an organization’s data became a vital commercial asset, which made APIs a source gateway for external consumers. Their demand surged with the rise of data analytics. Such scale and widespread distribution of data brought forth the need for security, simple consumption, and self-service – and hence started a new era in IT capability: the world of API management.

 

The simplicity and self-administering capabilities of APIs have blurred the line between APIs and SOA. Many companies now use APIs to expose capabilities inside the company, though many still use the term “service” for internal design purposes. Microservices are an alternative architecture in the SOA space: they partition an aggregate business entity into meaningful atomic units, which significantly enhances the agility, scalability, and resilience of the SI architecture.

From a financial perspective, SOA and microservices were developed to satisfy internal integration needs, so they fit the cost-containment model of IT. APIs, on the other hand, were developed to expose enterprise data or modular functions to the outside world with the intent to generate revenue, so they fit the asset-management model of IT. Exposing a capability or data to the outside world and developing a revenue model around its consumption made APIs the economic powerhouse of this new internet age.

SOA, microservices, and APIs are modern-day integration techniques, and it is worth evaluating each of them to see which can best help meet your organization’s integration needs. From cloud to mobile, infrastructure platforms are evolving at an ever faster pace, and so are the requirements to integrate the systems that service these platforms. Sound knowledge of modern integration architectures can add agility, elasticity, fault tolerance, and adaptability to your design.

 

Serverless Computing
https://blogs.perficient.com/2017/04/23/serverless-computing/ | Mon, 24 Apr 2017

Introduction

Cost optimization, better flexibility and efficient resource management are some of the key factors fueling the growth of Cloud infrastructure. Serverless Computing is another step in this direction.

Serverless Computing, also known as Function as a Service (FaaS), is a Cloud-computing model where the Cloud provider completely manages the container that processes the service request. The name “Serverless Computing” is something of a misnomer: it does not mean that function calls run without servers, rather that the Cloud provider manages the container that runs the functions. In short, it adds another layer of abstraction to monolithic application development. I personally see this as another step in scaling down the atomic unit of computation: from virtual machines to containers, and now a step further, where even individual functions can be hosted in the Cloud.

Serverless Computing at times gets confused with microservices architecture. So how does it stand out? A microservice is a common interface that any consumer can call to perform a specific task; internally it has functions that may leverage a Serverless architecture. Developers can grab these functions on the fly to assemble microservices without provisioning any dedicated resources.

Some commonly used Serverless Computing models are:

  • AWS Lambda
  • IBM’s OpenWhisk (Open Source)
  • Microsoft’s Azure Functions
  • Google’s Firebase

Common Use Cases

Both startups and mature organizations should consider Serverless Computing architectures during application development or migration to the Cloud. Monolithic applications are primary contenders for Serverless Computing. Batch processes that run periodically may not require independent virtual servers, so they should always be evaluated for Serverless Computing.

Moreover, microservices that leverage common utility functions can also use Serverless Computing platforms. Video encoders, online tutorials, and image processors are some other use cases for Serverless Computing. “aCloud.guru”, a widely used online AWS training portal, is built entirely using AWS Lambda and S3 (Simple Storage Service).

Advantages

Serverless Computing can be highly cost effective. In most cases you pay only for the requests served and the compute time your code actually consumes. For use cases that qualify, significant savings are expected, as no dedicated virtual servers need to be provisioned.

It furthers resource optimization. Most Cloud platforms support Node.js, Swift, Java, and Python, which lets developers focus on building small, reusable functions. JSON helps with automatic serialization and deserialization of function request and response parameters, so concerns about multithreading and HTTP request processing are alleviated.
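For example, a minimal Lambda-style function in Python might look like the sketch below; the event shape and the API Gateway-style response are illustrative assumptions.

```python
# Minimal serverless function sketch (AWS Lambda, Python runtime).
# The platform handles scaling, threading, and HTTP plumbing; the function
# deals only with an already-deserialized JSON event. Event shape is assumed.
import json

def handler(event, context):
    name = event.get("name", "world")
    body = {"message": f"Hello, {name}!"}
    # When fronted by API Gateway (proxy integration), return statusCode/body.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```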

Constraints

While Serverless Computing is great for modernizing monolithic applications, not all use cases qualify. A function that sits idle most of the time may see performance degradation, because the Cloud provider may choose to shut down dormant processes; for example, a Java function may experience JVM cold-start latency if it has not been invoked for a while. Also, Serverless Computing is not the best option for compute-intensive applications, as the Cloud provider may limit the resources provisioned for Serverless functions.

Conclusion

Serverless Computing is a step up in developer efficiency. It brings the developer’s point of view to the fore: virtual infrastructure concerns melt away, and developers can do what they are expected to do – build software. Serverless Computing is still in its nascent stages, and organizations are still grappling with the effort tied to Cloud migration. However, as Cloud adoption grows stronger, the benefits of Serverless Computing will become more palpable and enticing.

 

 
