How to Use Amazon S3 and EC2 Backup and Restore
By Spark Pan, Perficient Blogs | Tue, 27 Mar 2018
https://blogs.perficient.com/2018/03/27/how-to-use-amazon-s3-and-ec2-backup-and-restore/

Amazon Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) are two major AWS services. S3 is a lightweight object storage service for storing and retrieving data. EC2, on the other hand, is a web service that provides secure, resizable compute capacity in the cloud.

Backup and recovery are increasingly important parts of disaster recovery. Backing up data, whether to guard against loss or before making a big change to the current environment, is vital in case we need to roll back to the last working state.

Below are the detailed steps for backing up and restoring with S3 and EC2.

S3 Backup and Restore

Create the S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.

2. Choose Create bucket.


3. In the Bucket name field, type a unique, DNS-compliant name for your new bucket, for example blog-demo-s3 (bucket names must be lowercase). Keep in mind:

The name must be unique across all existing bucket names in Amazon S3.

After you create the bucket, you cannot change the name.

4. Choose Create.


5. After creating the bucket, go to Permissions -> Bucket Policy.

6. Paste in your bucket policy. Make sure the Resource ARN matches the name of your new bucket, then click Save.
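As an illustration, a minimal policy granting one IAM role read/write access to the bucket might be applied like this. This is a sketch, not the policy from the original post: the account ID, role name, and the bucket name blog-demo-s3 are placeholders you would replace with your own values.

```shell
# Hypothetical example: grant a backup role read/write access to the bucket.
# All identifiers below are placeholders.
aws s3api put-bucket-policy --bucket blog-demo-s3 --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowBackupRoleReadWrite",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::123456789012:role/backup-role"},
    "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
    "Resource": [
      "arn:aws:s3:::blog-demo-s3",
      "arn:aws:s3:::blog-demo-s3/*"
    ]
  }]
}'
```

Note that the Resource list needs both the bucket ARN (for s3:ListBucket) and the `/*` object ARN (for the object-level actions).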


Sync a Backup Folder to the S3 Bucket

  1. SSH into the server (for example, with PuTTY) and navigate to the folder you want to sync.

        Backup script: aws s3 sync {folder path} s3://{bucket name}

  2. To restore the folder, rename or delete the existing local folder before executing the command.

        Restore script: aws s3 sync s3://{bucket name} {folder path}
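Concretely, a backup-and-restore round trip might look like the sketch below. The folder path and bucket name are placeholders; the --dryrun flag previews what aws s3 sync would transfer without copying anything, which is a useful safety check before a real restore.

```shell
# Back up a local folder to the bucket (sync only copies changed files).
aws s3 sync /opt/app/config s3://blog-demo-s3/config-backup

# Preview the restore first: --dryrun lists the operations without running them.
aws s3 sync s3://blog-demo-s3/config-backup /opt/app/config --dryrun

# Restore for real, after moving the stale local copy out of the way.
mv /opt/app/config /opt/app/config.old
aws s3 sync s3://blog-demo-s3/config-backup /opt/app/config
```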

EC2 – Create, Back Up, and Restore Snapshots in AWS

To create a snapshot using the console

  1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
  2. Choose Snapshots in the navigation pane.
  3. Choose Create Snapshot.
  4. In the Create Snapshot dialog box, select the volume to create a snapshot for, and then choose Create.

To restore an EBS volume from a snapshot using the console

1. Click the EC2 link to access all instances.


2. In the left navigation pane, choose Elastic Block Store > Snapshots.

3. Search for and select the snapshot you want to restore, then choose Create Volume.

4. The volume configuration fields are populated with values from the snapshot.


5. Choose Create Volume and note the volume ID.


6. Next step: after you have restored a volume from a snapshot, you can attach it to an instance.

To attach an EBS volume to an instance using the console

  1. Click the EC2 link to access all instances.


2. In the left navigation pane, choose Volumes, select a volume, and choose Actions > Attach Volume.

3. In the Attach Volume dialog box, start typing the name or ID of the instance to attach the volume to and select it from the list. You can attach a volume to a running or a stopped instance, but the volume must be in the same Availability Zone as the instance.

4. Keep the suggested device name and choose Attach.

Note: Newer Linux kernels may rename your devices to /dev/xvdf through /dev/xvdp internally, even when the device name entered here (and shown in the details) is /dev/sdf through /dev/sdp.

5. Next step: connect to your instance and make the volume available.
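For repeatable backups, the same console workflow can be scripted with the AWS CLI. The sketch below mirrors the three console procedures above; every ID, the Availability Zone, and the device name are placeholders you would substitute with your own values.

```shell
# 1. Snapshot the source volume (backup).
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --description "pre-change backup"

# 2. Restore the snapshot into a new volume. It must be created in the
#    same Availability Zone as the instance it will attach to.
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 \
    --availability-zone us-east-1a

# 3. Attach the restored volume to the instance under a device name.
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 \
    --instance-id i-0123456789abcdef0 --device /dev/sdf
```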

How to Use JIRA Service Desk to Monitor and Measure SLA
Wed, 18 Oct 2017
https://blogs.perficient.com/2017/10/18/how-to-use-jira-service-desk-to-monitor-and-measure-sla/

Service level management (SLM) is a vital process for every IT service provider organization: it is responsible for agreeing on and documenting service level targets and responsibilities within SLAs. Defining acceptable response and resolution times is a key task in producing IT service level agreements (SLAs).

SLA response time and SLA resolution time are the two major measurements. Response time usually refers to how quickly you respond to a technical issue raised via phone, email, or another channel; resolution time refers to how long it takes from the moment an issue is logged until it is fully resolved.

Response targets for alerts and requests are typically tracked by sending acknowledgment emails to stakeholders, both within your own organization and on the client side, but there is no easy way to calculate the response time for each alert or request from those emails. The JIRA Service Desk plugin is a solution for monitoring and measuring SLA response and resolution times, provided you have configured all inbound tickets and requests to generate a ticket in your JIRA project.

After you apply the JIRA Service Desk plugin to your JIRA project, it provides out-of-the-box functionality for setting up ticket workflows and metrics. For example, you can create a common issue type named Service Request, along with its workflow, under Project settings > JIRA workflows > Issue type.

[Diagram: workflow for the Service Request issue type]

The original thought is that open requests and alerts in the ticket pool need to be triaged; you don't want a ticket to sit in the Open status with no one taking care of it. You can then create a metric under Project settings > SLAs > New metric.

[Screenshot: example "Time to acknowledge" metric configuration]

By defining the start/stop conditions and SLA targets, you can monitor and measure ticket acknowledgment times. In the Goals section, JIRA tickets are filtered by priority and severity, and the expected response times are set up accordingly.
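Under the hood, a metric like this is just elapsed-time arithmetic between the start and stop conditions. A rough sketch of the calculation (assuming GNU date; the function name, timestamps, and the 1h target are illustrative, not JIRA output):

```shell
# sla_check: report whether an acknowledgment met its target.
# Arguments: created timestamp, acknowledged timestamp, target in seconds.
sla_check() {
  created_s=$(date -d "$1" +%s)   # start condition: ticket created
  acked_s=$(date -d "$2" +%s)     # stop condition: ticket acknowledged
  elapsed=$((acked_s - created_s))
  if [ "$elapsed" -le "$3" ]; then
    echo "met ($(( $3 - elapsed ))s to spare)"
  else
    echo "breached ($(( elapsed - $3 ))s over)"
  fi
}

# 59m against a 1h target, like TEST-02:
sla_check "2017-10-18 06:00:00" "2017-10-18 06:59:00" 3600   # met (60s to spare)
# 70m against a 1h target, like TEST-01:
sla_check "2017-10-18 06:00:00" "2017-10-18 07:10:00" 3600   # breached (600s over)
```

The plugin does this bookkeeping for you, pausing the clock for statuses you exclude, which is why configuring the start/stop conditions correctly matters.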

Finally, you can monitor and measure ticket status in the queue.

For example, to monitor all tickets created in the past week, you can create a filter and choose the columns you want to show in the list.

The resulting list displays time-to-acknowledge information. Both the TEST-01 and TEST-02 tickets are Medium severity and Critical priority, and the target is 1h. TEST-01 breaks the SLA (-10m of 1h) while TEST-02 meets it (59m of 1h).

A similar metric can be defined for time to resolution; in this sample, 48h is the resolution time target.

Setting SLA targets provides you with a valuable opportunity to manage your customer’s expectations and protect your business. Most importantly, it gives you a chance to present a realistic view of what can be expected of you.

Using the Service Desk plugin makes it more efficient to get an overview of all ticket statuses and to continuously improve your managed service offering.

The ITIL definition of service desk is: “The single point of contact between the service provider and the users. A typical service desk manages incidents and service requests, and handles communication with the users.”

Source: ITIL 2011 glossary

https://www.axelos.com/Corporate/media/Files/Glossaries/ITIL_2011_Glossary_GB-v1-0.pdf

Getting Started with Core Data
Thu, 13 Jun 2013
https://blogs.perficient.com/2013/06/13/getting-started-with-core-data/

Enabling persistent object mapping is a relatively straightforward process. It differs from transient object mapping in only a few ways:

  1. libRKCoreData.a must be linked into your target
  2. Apple’s CoreData.framework must be linked to your target
  3. A Data Model Resource must be added to your target and configured within Xcode
  4. The RestKit Core Data headers must be imported via #import <RestKit/CoreData/CoreData.h>
  5. An instance of RKManagedObjectStore must be configured and assigned to the object manager
  6. Persistent models inherit from RKManagedObject rather than RKObject
  7. Object mappings targeting a Core Data entity are created with RKManagedObjectMapping, specifying the entity name
  8. A primary key property must be defined via primaryKeyAttribute

An example of configuring the Core Data entities:

1. Defining the product and product description entities.

[Code screenshots 01 and 02 omitted]

2. Configuring the Core Data object mapping.

[Code screenshot 03 omitted]

The primaryKeyAttribute defines the primary key for the table 'MKG_PROD_DESC'.

[Code screenshot 04 omitted]

The primaryKeyAttribute likewise defines the primary key for the table 'MKG_PROD', and the hasOne relationship defines the relationship between product and product description.

Once these configuration changes are complete, RestKit will load and map payloads into Core Data-backed classes.

There are a couple of common gotchas and things to keep in mind when working with Core Data:

  1. You can utilize a mix of persistent and transient models within the application, even within the same JSON payload. RestKit will determine if the target object is backed by Core Data at runtime and will return managed and unmanaged objects as appropriate.
  2. RestKit expects that each instance of an object be uniquely identifiable via a single primary key that is present in the payload. This allows the mapper to differentiate between new, updated and removed objects.
  3. Apple recommends utilizing one managed object context instance per thread. When you retrieve a managed object context from RKManagedObjectStore, a new instance is created and stored onto thread local storage if the calling thread is not the main thread. You don’t need to worry about managing the life-cycle of the managed object contexts or merging changes — the object store observes these thread-local contexts and handles merging changes back into the main object context.
  4. RestKit assumes that you use an entity with the same name as your model class in the data model.
  5. There is not currently any framework-level help for working with store migrations.
