virtualization Articles / Blogs / Perficient
https://blogs.perficient.com/tag/virtualization/

Everything IBM Customers Need to Know about PVU
https://blogs.perficient.com/2016/08/18/everything-ibm-customers-need-to-know-about-pvu/ (published Thu, 18 Aug 2016)

In 2006, IBM introduced a new license metric called Processor Value Unit (PVU). PVU is a unit of measure that streamlines IBM licensing policies and contracts. I’ve found that these changes in IBM licensing jargon and calculations can be very confusing to IBM customers, IBM business partners, and IBM software sellers.

Before we dive deep into how PVU is calculated and how it affects you as an IBM customer, let’s start with some key terminology.

  • Core: A core is a logical execution unit on which programs or software run. It is also referred to as a processor core.
  • Chip: Chip refers to a physical Integrated Circuit (IC) on a computer. Multi-core chips (dual-core, quad-core) have more than one processor core on the chip.
  • Socket: A socket is a physical connector on the motherboard that accepts a single physical chip. Many motherboards have multiple sockets, each of which can accept a multi-core chip.
  • Processor: IBM defines a processor as a processor core on a chip, while many middleware vendors and some hardware vendors (such as Intel and AMD) define a processor as a chip.

In a nutshell, the socket is what the CPU chip is connected into via its pins and leads. A core is a full-blown CPU sitting on the chip. A chip in a single socket can have one, two, four, or more cores on it.

  • Processor Value Unit (PVU) is a unit of measure used to differentiate licensing of software on distributed processor technologies (defined by processor vendor, brand, type, and model number).


PVU License Types

Entitlements can be Full Capacity or Sub (Virtualization) Capacity. Before we jump into Full Capacity or Sub Capacity, it is very important to define “Activated Processor core.”

An activated processor core is a processor core that is available for use on a physical or virtual server, regardless of whether the capacity of the core can be or is limited through virtualization technologies, operating system commands, BIOS settings, or similar restrictions.

For Full Capacity licensing, the licensee must obtain PVU entitlements sufficient to cover all activated processor cores in the physical hardware environment made available to or managed by the program.

For Sub Capacity licensing, the licensee must obtain entitlements sufficient to cover all activated processor cores made available to or managed by the program, as defined according to the Virtualization Capacity Licensing Rules.

By default, IBM customers are subject to full capacity licensing, which means they need to license all activated physical processor cores, regardless of whether the IBM software actually uses all of them.

In a virtualized environment, IBM customers only need to procure PVU entitlements for those processor cores that are available to IBM software.

Calculating Processor Value Unit

The number of PVUs required is based on the processor technology and the number of processor cores available to the IBM software. The number of PVUs assigned to a processor is defined in the PVU per core table.
The following information must be available in order to determine the corresponding PVU value:

  • Server vendor; processor manufacturer, type, and model
  • Quantity of processor cores available

Assume that you’re an IBM customer trying to procure IBM MQ Series and IBM WebSphere Application Server for a Dell PowerEdge server with 2 Intel Xeon 3400 processor sockets, each with eight cores.

Processor Architecture: x86
Server Vendor & Brand: Dell PowerEdge
Processor Vendor & Brand: Intel Xeon
Processor Model No.: 3400
No. of Processor Sockets: 2
No. of Cores per Processor Socket: 8

 

The formula to calculate PVU is very simple:
No. of Processor Sockets x No. of Cores per Processor Socket x PVUs per Core (from the PVU table) = Total No. of PVUs
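
As a rough illustration only (not an official IBM tool), the formula translates into a few lines of Python. The prices per PVU are the assumed example figures used in the tables below.

```python
def pvu_license_cost(sockets, cores_per_socket, pvu_per_core, price_per_pvu,
                     licensed_cores=None):
    """Estimate PVU licensing cost.

    licensed_cores overrides the core count for sub-capacity licensing,
    where only the cores made available to the software are counted.
    It defaults to full capacity (all activated cores on the server).
    """
    cores = licensed_cores if licensed_cores is not None else sockets * cores_per_socket
    total_pvus = cores * pvu_per_core
    return total_pvus, total_pvus * price_per_pvu

# Example 1 below: full capacity on 2 sockets x 8 cores at 70 PVUs per core
print(pvu_license_cost(2, 8, 70, 50))  # MQ Series -> (1120, 56000)
print(pvu_license_cost(2, 8, 70, 75))  # WebSphere -> (1120, 84000)

# Example 2 below: sub capacity, where WAS only sees the 8 cores of VM1
print(pvu_license_cost(2, 8, 70, 75, licensed_cores=8))  # -> (560, 42000)
```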

Example 1: Full Capacity on a physical server

PVU Diagram 1

As shown in the above example, the environment is not virtualized. This results in Full Capacity licensing, which counts all activated processor cores on the server where the software is installed. Per the PVU table, this processor configuration is assigned 70 PVUs per core. The table below shows how Full Capacity licenses are calculated for MQ Series and WAS. The price per PVU is assumed for illustration.

IBM Software         IBM MQ Series              IBM WebSphere Application Server
Cores to License     16                         16
PVU per Core         70                         70
Price per PVU        $50                        $75
Total License Cost   16 x 70 x $50 = $56,000    16 x 70 x $75 = $84,000

 

Example 2: Sub Capacity on two virtual machines

PVU Diagram 2

As shown above, two virtual machines, VM1 and VM2, are deployed on a physical server that has two Intel Xeon 3400 processors, each with eight cores. VM1 has MQ and WAS installed; VM2 has only MQ. Since MQ is installed on both VMs, it has access to all 16 cores, while WAS has access to only the 8 cores of VM1. The table below shows the Sub Capacity licenses for MQ Series and WAS.

IBM Software         Sub Capacity for IBM MQ Series   Sub Capacity for IBM WebSphere Application Server
Cores to License     16                               8
PVU per Core         70                               70
Price per PVU        $50                              $75
Total License Cost   16 x 70 x $50 = $56,000          8 x 70 x $75 = $42,000

 

As you can see from the above two examples, sub-capacity licensing can significantly reduce licensing costs for IBM customers.

Note: Some of the above definitions are taken from IBM.

What Does Azure Stack Mean for the Enterprise?
https://blogs.perficient.com/2016/02/01/what-does-azure-stack-mean-for-the-enterprise/ (published Mon, 01 Feb 2016)

At the Ignite 2015 conference, Microsoft announced a few details about what Azure Stack is: at its most basic level, it is Azure services that run in the on-premises, enterprise data center. Back when Microsoft Azure launched in 2010, there was a promise of Azure in your data center, but it wasn’t talked about for a couple of years. Thankfully Microsoft has brought it back as Azure Stack, and it has proven to be quite a large effort for Microsoft to plan, implement, and integrate with Windows Server. Now that the initial Technical Preview of Azure Stack is available for download, what exactly does Azure Stack mean for the enterprise? This article outlines the details to answer that question.

Azure Services in Your Data Center

Microsoft has very aggressively grown the entire Azure platform (IaaS, PaaS, and SaaS) over the last six years into a leading cloud platform. Recent Microsoft earnings releases show evidence of both the success of Azure and how much it means for the future of Microsoft. One area that has fallen a bit behind is Windows Server and what it provides for on-premises, enterprise data centers. After all, there have only been two releases of Windows Server in the six years since Azure launched: Windows Server 2012 and 2012 R2. While enterprises are traditionally slow to deploy the latest version of Windows Server (for many reasons that make plenty of business sense), it’s still definitely time for the next Windows Server to arrive. Appropriately, Microsoft is readying Windows Server 2016 and, along with it, the all-new Azure Stack to make Azure features available to host within the on-premises, enterprise data center.
Microsoft Azure is made up of some very complicated pieces of software: from the Service Fabric and controllers to custom versions of both Windows Server and SQL Server, along with many other proprietary pieces of software built to handle all the powerful infrastructure of VM hosting, storage, and every other feature of the Azure platform. It really is no mystery why it has taken so long to get a version of Azure that can be hosted within your own data center. Thankfully, the time has come when we can all download and try out Azure Stack and begin figuring out a long-term strategy for its adoption alongside Microsoft Azure to continue building out hybrid-cloud solutions.

Business Implications

From a technology standpoint, having Microsoft Azure running within your own, on-premises data center sounds really cool. However, what are the business implications? What does this really mean for the enterprise?
Additional training will be needed to get all the members of your enterprise IT team up to speed with Azure Stack, just as many companies are currently doing with Microsoft Azure. Fortunately, there is a lot of consistency between the Management Portal, APIs, and SDKs that work with both Microsoft Azure and Azure Stack. This means very few, if any, modifications to code and deployment scripts are needed to switch applications between Microsoft Azure and Azure Stack. This reuse not only greatly reduces training, it also provides improved flexibility to run applications either on-premises or in the Azure cloud.

Write once, deploy to Azure or Azure Stack

Microsoft Azure and Azure Stack mean a hybrid cloud like no other. Your cloud, on your terms, whether in Microsoft data centers or your own enterprise data center. Easily and flexibly go from enterprise-scale (on-premises) data centers to hyper-scale (Azure cloud) data centers when business needs require.

What IT Pros and Developers Needs to Know

Just as the adoption of Microsoft Azure has provided, and continues to provide, many changes and new technologies for both developers and IT pros to learn and master, the introduction of Azure Stack will bring with it many similar challenges. Microsoft has announced that both Microsoft Azure and Azure Stack will come with consistent and compatible APIs and tools for implementation. This means there will be minimal changes necessary to deploy, host, or manage Azure services whether they’re hosted in Azure Stack or in the Microsoft Azure cloud. In addition to development tools and APIs, there is a Management Portal for the Azure Stack environment that is consistent with the Azure Management Portal, since it’s built from the same code as Azure.

“Azure and Azure Stack have a standardized architecture, including the same portal, a unified application model, and common DevOps tools.”

Here’s a list of some of the Azure Stack environment highlights for both IT Pros and Developers to understand:

  • Azure Stack has a familiar web portal, as it’s built from the same code as Azure
  • Use the tools you know; focus on solving problems rather than learning new development and deployment tools
  • Reuse automation code, powered by a consistent API for automation, development, deployment, and management capabilities
  • Deployment and configuration in a single, coordinated operation, done via the web portal or programmatically through a consistent SDK (see the sketch after this list)
  • Templated deployments across different environments such as Testing, Staging, and Production
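
To make the consistency point concrete, here is a minimal, hypothetical sketch of a deployment script that submits the same Resource Manager template to either public Azure or an Azure Stack environment just by swapping the management endpoint. The endpoint URLs, API version, and token handling are placeholder assumptions, not an official Microsoft example.

```python
import json
import requests

# Hypothetical management endpoints; the Azure Stack URL depends on your deployment.
ENDPOINTS = {
    "azure": "https://management.azure.com",
    "azure_stack": "https://management.local.azurestack.external",  # placeholder
}

def deploy_template(cloud, subscription_id, resource_group, deployment_name,
                    template, parameters, token, api_version="2015-01-01"):
    """Submit the same ARM template to Azure or Azure Stack (illustrative only)."""
    url = (f"{ENDPOINTS[cloud]}/subscriptions/{subscription_id}"
           f"/resourcegroups/{resource_group}/providers/Microsoft.Resources"
           f"/deployments/{deployment_name}?api-version={api_version}")
    body = {"properties": {"mode": "Incremental",
                           "template": template,
                           "parameters": parameters}}
    resp = requests.put(url,
                        headers={"Authorization": f"Bearer {token}",
                                 "Content-Type": "application/json"},
                        data=json.dumps(body))
    resp.raise_for_status()
    return resp.json()
```

The template, parameters, and script are reused as-is; only the endpoint and the credentials behind the token differ between the two clouds.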

The first Technical Preview of Azure Stack was made available on January 29, 2016. While this is only the first public preview, it offers the benefits of providing feedback to Microsoft and the ability to start getting familiar with the new platform and begin planning how to integrate it into your enterprise data center when General Availability (GA) arrives.

Services Supported

What Azure Stack services will be supported when Azure Stack goes to General Availability (GA)? Here’s a list of the Azure Stack services Microsoft has planned for the initial GA release of Azure Stack:

  • Compute – Virtual Machines, Service Fabric
  • Data & Storage – Blobs, Tables, Queues
  • Networking – Virtual Network, Load Balancer, VPN Gateway
  • Mgmt. & Security – Microsoft Azure Portal, Key Vault
  • Web & Mobile – App Service (Web Apps, Logic Apps, Mobile Apps, API Apps)
  • Developer Services – Azure SDK

While the above isn’t the full feature set within Microsoft Azure (which is extremely huge and diverse), it does include all of the most commonly used features that are necessary for an initial release of Azure Stack to make it useful for the enterprise.

Hardware and Deployment Requirements

Before downloading Azure Stack and setting it up on a server, you will need to ensure the machine meets the necessary requirements for setting up your own on-premises Azure environment.
Here’s a short list of the system requirements for setting up Azure Stack on a physical machine or virtual machine (VM):

  • Windows Server 2016 Datacenter Edition – with the Azure Stack TP1 release you will need Windows Server 2016 Datacenter Edition Technical Preview (TP) 4
  • Latest Windows Server 2016 updates, including KB 3124262
  • The machine does NOT need to be joined to a Domain

The above are the major requirements for setting up Azure Stack, but there is a much more detailed list of networking requirements and information available within the documentation.

Whitepaper and Other Documentation

There is a very large amount of information and documentation on Azure Stack that’s already been published by Microsoft. These resources provided the source materials for this article. To read these materials, including an Azure Stack Whitepaper and other documentation, please review the following links:

Top 5 Life Sciences Blog Posts From April 2015
https://blogs.perficient.com/2015/05/04/top-5-life-sciences-blog-posts-from-april-2015/ (published Mon, 04 May 2015)

 

Now that May is here, I thought it would be neat to look back at what our readers found most interesting last month. Below are the top five blog posts Perficient’s life sciences practice wrote in April – they’re ranked in order of popularity, with number one being the most viewed piece. 

  1. Where Did The Name Argus Safety Come From?
  2. Regulatory Report Tracking In Oracle Argus Safety
  3. The Elephant In The (Server) Room
  4. Admit It, Finding The SME At Your Organization Ain’t Easy
  5. Where’s The Amazon Dash Button For Pharma?

As always, thank you for your continued support – our team finds it very rewarding to have you as a reader.

Supercomputer in the cloud: Azure G-series VMs
https://blogs.perficient.com/2015/02/11/supercomputer-in-the-cloud-azure-g-series-vms/ (published Thu, 12 Feb 2015)

In January, Microsoft announced the general availability of a new top tier of Azure virtual machines: the G-series. These are some really powerful machines. The top configuration, G5, has an Intel Xeon E5 CPU with 32 cores, 448 GB of RAM, and a 6,144 GB SSD disk, and it costs $9.65/hr to use. Considering how powerful the machine is, the price is very reasonable. The best thing about Microsoft Azure is that you pay only for what you use: if a VM is not in use, it can be shut down, and the customer is charged only for the storage of the VM’s image (which is really minuscule).
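
As a back-of-the-envelope illustration of that pay-for-what-you-use point, using the $9.65/hr G5 rate quoted above (and ignoring storage charges), compare running the VM around the clock with shutting it down outside business hours:

```python
G5_RATE = 9.65  # USD per hour, the G5 rate quoted above

hours_always_on = 24 * 30  # roughly 720 hours in a month
hours_business = 10 * 22   # e.g. 10 hours/day, 22 working days

print(f"Always on:           ${G5_RATE * hours_always_on:,.2f} per month")  # ~$6,948
print(f"Business hours only: ${G5_RATE * hours_business:,.2f} per month")   # ~$2,123
```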
None of the usual Azure competitors (AWS and Rackspace Cloud) offer servers as powerful as G5.
Traditionally, the cloud is positioned as cheap, simple hardware that can be scaled out easily and on demand. However, there are cases when you may need plenty of raw power in a single server; for example, when you are running an application that doesn’t scale out very well, and this is the case for the majority of legacy applications.
The complete G-series lineup:

VM Size       Cores   RAM       Local SSD storage   Persistent Data Disks Max
Standard_G1   2       28 GiB    412 GB              4
Standard_G2   4       56 GiB    824 GB              8
Standard_G3   8       112 GiB   1,649 GB            16
Standard_G4   16      224 GiB   3,298 GB            32
Standard_G5   32      448 GiB   6,596 GB            64

At the moment, G-series VMs are only available in the West US and East US 2 regions.

Windows Server 2003 End of Support Looms – A Webinar Recap
https://blogs.perficient.com/2014/12/16/windows-server-2003-end-of-support-looms-a-webinar-recap/ (published Tue, 16 Dec 2014)

It’s no secret – Microsoft support for Windows Server 2003 ends on July 14, 2015.
Last week, Perficient, AppZero and Cisco teamed up for a webinar, Planning & Preparing for Windows Server 2003 End of Life. During the session, the speakers discussed the options and paths available when moving off Windows Server 2003, including the transition to a cloud model, benefits of Windows Server 2012, virtualization on Cisco UCS, and what exactly AppZero can do for your migrations.
First, Steve Andrews, a senior solutions architect at Perficient, explained exactly what end of support/end of life means: no updates, no compliance, no protection. But, the good news is, for those still on Windows Server 2003, there is the opportunity to transform your datacenter by transitioning to a hybrid cloud model, which Steve reviewed. He then showed attendees how to get started:

  1. Discover & Assess: Catalog and categorize apps and workloads
  2. Target: Identify destinations
  3. Migrate: Make the move

You have a variety of target options, from replacing the server hardware or virtualizing with Hyper-V to a new server, to relocating to a cloud service such as Azure IaaS or decommissioning if no longer in use.
Next, Andy Vigil, a consulting systems engineer at Cisco, provided background on Cisco’s Unified Computing System (UCS) and explained how the Cisco data center and fabric computing platform unifies computing, networking, storage access, and virtualization resources into one cohesive system. Andy showed how UCS Manager provides you with a single point of contact for all UCS components, and discussed how it integrates with System Center tools.
Finally, Terry Walsh, area sales director at AppZero, talked about using AppZero’s automated migration tool to accelerate migration timeframes with lower cost and less risk. Terry also shared a detailed case study showing how one pharma company had benefited, in terms of effort and time, cost, and project duration, by using AppZero versus a manual migration.
You can watch the one hour webinar replay here.

Power BI Basics Inside Office 365 – A Video Series
https://blogs.perficient.com/2014/11/13/power-bi-basics-inside-office-365-a-video-series/ (published Thu, 13 Nov 2014)

Yesterday, we were fortunate to have a customer, Heidi Rozmiarek, Assistant Director of IT Development for UnityPoint Health, speak alongside our Microsoft BI team for the webinar, “Hybrid Analytics in Healthcare: Leveraging Power BI and Office 365 to Make Smarter Business Decisions.”
It was an informative session that began by covering architectural components and functions, architecture options (including on-premises, hybrid, and cloud), and delivery considerations. Following this, we had a live Power BI demo, and last but not least, Heidi shared how her organization is using the Microsoft BI stack to provide simple solutions to complex questions. Keep an eye out for a post describing the webinar in more detail, but in the meantime, you can view the replay here.
Whether or not you attended the webinar, if you are interested in learning more about building a hybrid analytics platform with Power BI and Office 365,  I highly recommend you take a look at the following short video series.

  1. Introduction to Power BI:  The first video includes an introduction to Power BI, particularly around Power BI Sites, “My Power BI” and the Power BI Admin page.
  2. Administration and Permissions in Power BI: This video focuses on Site Admin and security basics.
  3. Data Exploration and Visualization in Power BI: The third video in the series discusses data exploration and visualization using Excel and related power tools, including Power Pivot and Power View.
  4. Data Management Gateway for Power BI: Here, we cover the steps to enable data feeds in Power BI using the Data Management Gateway.
Upcoming Webinar: Planning for a Lync 2013 on a Global Scale
https://blogs.perficient.com/2014/10/17/upcoming-webinar-planning-for-a-lync-2013-on-a-global-scale/ (published Fri, 17 Oct 2014)

At Perficient, we communicate via Lync 2013. As an end user, I can’t say enough about the ability to use it from anywhere I have internet access to take calls, instant message colleagues, customers and partners, and to hold meetings with content sharing and video. Using Lync 2013 is a simple, easy process for me, whether from my computer or my phone, but I know that’s due in part to our implementation team spending the necessary time planning the solution design and preparing to implement.
When it comes to planning for a global Lync deployment, there is a lot more to take into consideration to get your core Lync Server 2013 infrastructure ready to support voice, video and content sharing capabilities. It’s important that you understand the impacts Lync Server 2013 can have on the global IT infrastructure’s network, security, telephony and virtualization.
To understand how to get “Lync Ready,” join Perficient’s Microsoft Certified Masters Jason Sloan and Keenan Crockett on Thursday, October 30, 2014 at 1 p.m. CT for a webinar, How to Plan for a Lync Deployment on a Global Scale. They’ll cover topics like high-level server and pool design and placement, importance of the edge servers, the hardware vs. virtualized debate, and ultimately a high-level understanding of the impact Lync has on your network.
If you’d like to learn more about the topic, I recommend taking a look at a white paper that Jason recently authored, “The CIO’s Guide to a Lync Server 2013 Global Deployment.” You can download it here. In the guide, Jason addresses two key areas often overlooked by organizations during the planning stage: impact to server infrastructure and the impact to the network.
To register for the webinar, click here.
How to Plan for a Lync Deployment on a Global Scale
Thursday, October 30, 2014
1:00 p.m. CT
 
 

Insights on SQL Server 2014 Data Warehousing Edition
https://blogs.perficient.com/2014/08/13/insights-on-sql-server-2014-data-warehousing-edition/ (published Wed, 13 Aug 2014)

For anyone that is thinking about selecting the Data Warehouse edition of SQL Server 2014, I just want to highlight a few things required to install this product and get it up and running.
First off though, what is SQL Server 2014 Data Warehousing Edition? In short, it is a version of SQL Server that is available as an image on an Azure VM; the product seems to be flying a little bit under the radar. In terms of licensing and features, it is closest to Enterprise Edition and is similar to BI Edition. It houses the full stack of BI products, and it also allows for database snapshots like Enterprise Edition. The biggest single difference I can find is that it is optimized to use Azure Storage in the cloud. Interesting, no? I see its primary purpose as replacing an existing on-premises data warehouse, or serving as a starting point for a new data warehouse that will be fairly large.
I won’t go into provisioning a cloud VM in this blog, but if you want more info, here’s a link:
http://msdn.microsoft.com/library/dn387396.aspx
Ok, on to some tech points and what to expect:
First and foremost, this edition’s minimum recommended VM size is an A7. Whoa!
Pretty steep for a minimum spec: an A7 is 8 cores with 56 GB of RAM. We all know minimum specs are just that, the bare minimum, and usually we end up going larger.
If you are unfamiliar with Azure VM sizing take a look here:
http://msdn.microsoft.com/en-us/library/azure/dn197896.aspx
Second, even to do a basic install, it is going to require that you have several 1-terabyte storage locations available for it to harness in Azure Storage. Double whoa!
When you first log in to this VM, you will not be able to connect SSMS to the SQL instance. Instead, you are prompted to configure storage containers for SQL 2014 DW Edition. This can be done in the Portal, or it can be done via Azure PowerShell, and it is documented quite well here:
http://msdn.microsoft.com/library/dn387397.aspx
In a nutshell, it is quite easy to attach the disks through the Portal application on Azure: you just browse to your VM and click “Attach” at the bottom of the screen. The VM will reboot, and you can then confirm the process in the logs listed in the link above. But as I mentioned earlier, you will know when it is up and running, because you will get a login error from SSMS if it is not properly set up. One thing to keep in mind is that LUNs are numbered 0 to X, not 1 to X; I made this mistake when I first read the log, thinking the process was complete when I still needed to attach one more disk.
Once you have configured the appropriate number of storage LUNs, you must then use Disk Manager in Windows to format and label them: E:, F:, G:, etc.
Once the SQL Instance finds its required number of storage containers, it will then and only then, allow you to login via SSMS.
So what is going on here? Well, some good stuff in my opinion.

  1. It is forcing the end user to appropriate several disk locations instead of just using the default C:\ drive to store everything. This is a great optimization because it will spread the disk activity out over multiple LUNs. It also enforces separating the data files from the operating system disk and the page files. Think about how many database systems you have worked on that have this design flaw: a lot of them.
  2. It is assuming you mean business and it requires a massive amount of storage up front to even install it. Either you need this edition of SQL Server or you don’t. This is not SQL Express or a departmental application server, this is a full size enterprise application that is capable of migrating an on premise DW to Azure.

Even though one might be put off a bit that it requires 4+ terabytes of storage to install, I actually like the fact that it enforces good design and automatically gives some overhead for growth.
No hardware budget excuses this time. A very important point: even though it requires you to appropriate 4+ TB of storage, YOU ARE NOT BILLED FOR THE STORAGE YOU APPROPRIATE; you are only billed for the storage that you actually fill with data.
Once you understand that, this product starts making more sense. You can design a large storage location, with plenty of room to grow, without having to buy a large storage location. In a traditional on-premises environment, this could mean forking over some major cash. If you have never noticed, SANs are not inexpensive, and they take a long time to arrive onsite!
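
As a rough, hypothetical illustration of that billing model (the per-GB rate below is an assumed placeholder, not a published Azure price):

```python
ASSUMED_RATE_PER_GB_MONTH = 0.05  # hypothetical $/GB-month, for illustration only

provisioned_gb = 4 * 1024  # roughly 4 TB of attached disks
used_gb = 500              # data actually written so far

# You pay for what you fill, not for what you attach.
print(f"If billed on provisioned size:  ${provisioned_gb * ASSUMED_RATE_PER_GB_MONTH:,.2f} per month")
print(f"Billed on data actually stored: ${used_gb * ASSUMED_RATE_PER_GB_MONTH:,.2f} per month")
```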
In summary,  I am glad that this product is designed the way it is. It enforces good design from the beginning. It is not the correct product for a lot of different applications due to its scale, but for the person or place that wants to migrate or build a true Enterprise size data warehouse in Azure, SQL 2014 DW Edition is perfect.
 
 

Microsoft Server 2003 to 2012R2 – More than just end of Life
https://blogs.perficient.com/2014/08/11/microsoft-server-2003-to-2012r2-more-than-just-end-of-life/ (published Mon, 11 Aug 2014)

With the end of life for Microsoft Server 2003 fast approaching on July 14, 2015, it will be hard for many organizations to make the move to a new server operating system, not unlike the pain many organizations felt with the move from Microsoft Windows XP.
There are many business-related reasons that companies need to start their migration to Server 2012R2 now. For example, when customers made the move from Windows XP, many found they should have planned further in advance, because migrations can take 8 months or longer depending on the size and complexity of the environment. Security alone should be a big enough business reason to move to a supported platform: in 2013 Microsoft released 37 critical updates for Windows Server 2003, and once end of life happens, no more patches will be released. By not patching the server environment, you run the risk of malicious attacks, system bugs, and loss of PCI compliance.
The good news is that while the move might be painful, in the long run it will be worth the trouble. Microsoft Server 2012R2 offers so many enhancements and new features that once you have completed the migration and become familiar with it, you will probably wonder why you waited so long.
Microsoft Server 2012R2 offers many enhancements, including:

  • PowerShell 4.0 – PowerShell 3.0 alone has 2,300 more cmdlets than PowerShell 2.0
  • Hyper-V 3.0 – Supports 64 virtual processors and 1 TB of memory per VM. Also supports the VHDX format for large disk capacity and live migrations
  • SMB 3.02 – Server 2003 supports SMB 1.0
  • Work Folders – Brings Dropbox-like functionality to your corporate servers
  • Desired State Configuration – Lets you maintain server configuration across the board with baselines
  • Storage Tiering – Dynamically moves chunks of stored data between slower and faster drives
  • Data Deduplication – Data compression; with Server 2012R2 you can now run deduplication on virtual machines as well, which is great for VDI environments
  • Workplace Join – Allows users to register personal devices with Active Directory to gain certificate-based authentication and single sign-on to the domain

You can see from just these features how far Microsoft Server OS has come over the last 10 years. Scalability, Speed, Virtualization, Mobile Device Management and Cloud Computing have been vastly improved or were not possible with Microsoft Server 2003.
With  current trends moving towards organizations embracing a user centric environment and moving to cloud computing, Server 2012R2 is a stepping stone in the right direction.
So while the migration to Microsoft Server 2012R2 may be painful, all will be forgotten once the organization and server administrators can utilize the new features and notice the new ease of daily management activities.
 
 
 

Virtualizing SharePoint 2013 Workloads
https://blogs.perficient.com/2014/07/08/virtualizing-sharepoint-2013-workloads/ (published Tue, 08 Jul 2014)

Most new SharePoint 2013 implementations these days run on virtual machines, and the question of whether to virtualize SQL servers has long been put to rest. Indeed, with the new Windows Server 2012 R2 Hyper-V VM specs of up to 64 vCPUs, 1 TB RAM, and 64 TB of data, it is hard to make a case for physical hardware.
Both Microsoft Hyper-V and VMware have published recommendations for working with virtualized SharePoint farms. The list of recommendations is long (and somewhat tedious), so this cheat-sheet aims to summarize the most important ones and provide real-world advice for SharePoint and virtualization architects.

  • When virtualizing SharePoint 2013, Microsoft recommends a minimum of 4 and a maximum of 8 CPU cores per VM. Start low (4) and scale up as needed. With multiprocessor virtual machines, the physical host needs to ensure enough physical CPU cores are available before scheduling thread execution for that particular VM. Therefore, in theory, the higher the number of vCPUs, the longer the potential wait times for that VM. In every version starting with 4.0, VMware has made improvements to the CPU scheduling algorithm to reduce the wait time for multiprocessor VMs using relaxed co-scheduling. Still, it’s wise to consult the documentation for your particular version to see the specific limitations and recommendations.

 

  • Ensure true high availability by using affinity rules. Your SharePoint admin should tell you which VM hosts which role, and you will need to keep VMs with the same role on separate physical hosts. For example, all VMs that host the web role should not end up on the same physical host, so a typical mid-size two-tier farm should look something like this:

VMAffinity

  • When powering down the farm, start with the web layer, and work your way down to the database layer. When powering up, go in the opposite direction

 

  • Do not oversubscribe or thin-provision PROD machines; do oversubscribe and thin-provision DEV and TEST workloads

 

  • NUMA (non-uniform memory access) partition boundaries: The high-level recommendation from both Microsoft and VMware is not to cross NUMA boundaries. Different chip manufacturers have different definitions of NUMA, but the majority opinion seems to be that a NUMA node equals a physical CPU socket, not a CPU core. For example, for a physical host with 8 quad-core CPUs and 256 GB of RAM, a NUMA partition is 32 GB. Ensure that individual SharePoint VMs will fit into a single partition, i.e. will not be assigned more than 32 GB of RAM each (see the sketch below).
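
A minimal sketch of that sizing check, assuming (as the bullet above does) that a NUMA node corresponds to a physical CPU socket and that memory is divided evenly across sockets:

```python
def numa_partition_gb(total_ram_gb, cpu_sockets):
    """Approximate per-NUMA-node memory, assuming even distribution across sockets."""
    return total_ram_gb / cpu_sockets

def fits_in_numa_node(vm_ram_gb, total_ram_gb, cpu_sockets):
    """True if a VM's RAM allocation stays within a single NUMA partition."""
    return vm_ram_gb <= numa_partition_gb(total_ram_gb, cpu_sockets)

# Host from the example: 8 sockets, 256 GB RAM -> 32 GB per NUMA partition
print(numa_partition_gb(256, 8))       # 32.0
print(fits_in_numa_node(24, 256, 8))   # True  - fine for a SharePoint VM
print(fits_in_numa_node(48, 256, 8))   # False - crosses a NUMA boundary
```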

 

  • Do not use dynamic memory: Certain SharePoint components like search and distributed cache use memory-cached objects extensively and are unable to dynamically resize their cache when the available memory changes. Therefore, dynamic memory mechanisms like minimum/maximum RAM, shares, ballooning driver etc. will not work well with SharePoint 2013. Again, your SharePoint admin should provide detailed design and advise which VM hosts which particular service.

 

  • Do not save VM state at shutdown or use snapshots in PROD: SharePoint is a transactional application, and saving VM state can lead to an inconsistent topology after the VM comes back up or is reverted to a previous snapshot.

 

  • Disable time synchronization between the host and the VM: Same as previous point. All transaction events are time stamped, and latency during time synchronization can cause inconsistent topology. SharePoint VMs will use the domain synchronization mechanism to keep local clocks in sync.

 

  • Do not configure “always start machine automatically”: There may be cases where SharePoint VM is shut down for a reason, and starting it automatically after physical host reboot can cause problems.

 

  • TCP Chimney offload: Please refer to this VMware post on reasons why this setting may need to be disabled. This is not a setting unique to SharePoint and unless it is the standard practice for all web VMs or is part of the image, it should not be configured.

 

  • When configuring disaster recovery, virtualization has been a godsend for quite some time. Using VM replication to a secondary site is by far the simplest SharePoint DR scenario to configure and maintain.

 

  • Other settings that are not SharePoint-specific : things like storage host multi-pathing, storage partition alignment, physical NIC teaming, configuring shared storage for vMotion etc. hold true for all VMware implementations

 
 

Docker, mobile, and putting things in boxes
https://blogs.perficient.com/2014/07/03/docker-mobile-and-putting-things-in-boxes/ (published Thu, 03 Jul 2014)

Docker and custom mobile application development are both very hot. Recently we decided to run a small internal project to gain some ‘sleeves-up’ insight into Docker as well as how we could deliver containerized versions of applications.

This blog article, along with others to follow from both my colleagues and myself will document some of our learning. We hope they will be of value to others who may not have the time or environment to conduct a similar exercise on their own.

Nearly 5 years ago I wrote a blog entry about Perficient being the best technology school in Hangzhou. I briefly introduced our Boot Camp training program, an intensive 3 week program targeted towards university seniors interested in joining our company. We strongly believe that the best way to learn is by doing, so during the Boot Camp — in addition to lectures and labs — the participants build real software. Clearly we don’t want individuals of limited experience working on client projects, so the projects that our Boot Camp teams work on are systems that we can deploy and benefit from internally. One project developed by one of our recent Boot Camp groups is a Liferay portal based web application. The application itself is simple, but useful. Basically there are only a couple of user roles, the librarian and borrowers. The librarian can add books to the library, check books out both for themselves and on behalf of others, check books in, and mark books as unavailable in the event they are lost. The screenshots below show the key pages of the web application.

[Screenshots of the web application’s key pages]

We use this simple application to manage our own internal library which consists of about 1000 books that we have purchased or which team members have contributed over time.

It’s a simple application, and works great for our needs.

Recently we’ve also been helping some of our new team members develop their mobile application development skills, so we began creating Android and iOS mobile applications that interface with the library, providing the same functionality as the web application and then some.

One of the problems with the web application is that we either need to have a computer available near the library (where we really don’t have space for it, so there isn’t one) for people to use to check out books or they need to remember to check them out when they get back to their desk. This inconvenience results in some lost traceability as people forget to do this, so books can go missing.

The problem of inconvenience is largely solved by the prevalence and functionality available in modern smart phones. Most smart phones provide fairly good cameras, and with the ready availability of image processing libraries we are able to use the camera to capture the barcode on the book, so now checking out a book is as simple as taking a picture.
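
The barcode on a book typically encodes its ISBN-13, so once the scanning library returns the digits, a quick validity check helps catch misreads before hitting the server. The sketch below is not from our Library in a Box code base, just an illustration of the standard ISBN-13 check-digit rule:

```python
def is_valid_isbn13(isbn: str) -> bool:
    """Validate an ISBN-13 (e.g. from a scanned EAN-13 barcode) using its check digit."""
    digits = [c for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    nums = [int(d) for d in digits]
    # Weighted sum: digit positions alternate between weight 1 and weight 3.
    total = sum(n * (1 if i % 2 == 0 else 3) for i, n in enumerate(nums[:12]))
    check = (10 - total % 10) % 10
    return check == nums[12]

print(is_valid_isbn13("978-0-306-40615-7"))  # True, the canonical example ISBN
print(is_valid_isbn13("978-0-306-40615-8"))  # False, corrupted check digit
```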

Below are a couple of screen shots of the Android and iOS versions of our mobile Library apps.

Android app (screenshot captions):

  • Login screen
  • Specifying the server address
  • Sidebar menu (to select the admin option)
  • Adding a book to the library
  • The book is now available
  • We can scan the ISBN or swipe on the UI to Check Out
  • We can see the book in You Borrowed
  • And also Checked Out
  • We can scan the ISBN or swipe on the UI to Check In

iOS app (screenshot captions):

  • Login Screen
  • My Books
  • Preparing to Scan
  • Adding a Book
  • Added book available for Check Out

So now we have this nice, simple library system, and we thought: “hey, this is probably something that could be useful for others as well”. We’d also been looking for an opportunity to roll up our sleeves and get a little dirty with Docker by creating some practical containers, and so things fell in place for us to try our first “in a Box” project.

So, there you have it. Library in a Box

In some subsequent articles we’ll identify some of the issues we encountered while doing this. We also have a related sub-project going on that will help us better quantify the benefits of containerization via Docker versus virtualization via a traditional virtual machine, which we’ll also be sharing. But before doing that we wanted to share the background. We’re also investigating whether we can make this available to the broader public if there is sufficient interest.

We hope you’ll find the articles interesting and informative, and would invite your thoughts, questions, and feedback as we share our experience with you.

End Of Life For Windows XP Or Is It?
https://blogs.perficient.com/2014/05/07/end-of-life-for-windows-xp-or-is-it/ (published Wed, 07 May 2014)

Microsoft finally ended support for Windows XP; its end of life happened April 8, 2014. So what does this mean for those of us still on Windows XP? No more support, hotfixes, and patches? Well, not really: Microsoft will be creating patches and security updates for years ahead. But like everything, it has a cost.
Most who know this think, “Great, I am glad I can still get support, but how?” Microsoft has Custom Support programs that are designed for large customers. According to the information I have seen, there is an annual cost that increases each year, approximately $200 per machine for the first year. At first that does not seem too crazy, but it can get quite expensive: if you have 10,000 Windows XP machines, that would cost a company $2,000,000 for one year of support. Wow! The expert analysts are saying that patches rated Critical will be included in this support, but bugs marked as Important will come with an extra cost, and anything rated lower will not be patched at all.
Customers will receive hotfixes through a secure process; Microsoft will only make the information available to the companies that are enrolled in the Custom Support program. Typically, Microsoft will enable Custom Support agreements for up to three years after the end of life of an operating system.
What is interesting is that even though end of life has happened for Windows XP and Microsoft has the Custom Support program available, they still seem to be doing some limited support. Take, for example, the vulnerability that was exploited in IE on Windows XP machines. Microsoft decided to patch Windows XP machines outside of the Custom Support program for this vulnerability, stating that the patch was created and released because the issue occurred so close to the end of Windows XP support, as explained in this blog post released by Microsoft.
It’s great that you can still get support for your Windows XP machines, but the cost associated with a retired operating system should make any company want to make the leap to Windows 7 or 8 as soon as possible. Fortunately, Microsoft has many tools in place to make these moves so much easier than they were in the days of Windows XP. For example, with SCCM 2012 you can keep your machines current with OS, patches, antivirus, and software, just to name a few features, and it can all be automated.
If your company is still on Windows XP and you have not started to move off of it, now is the time to start moving from where you are today, to where you need to be in the future.  This starts with planning, proper infrastructure and tools. If done properly companies can stay current for many years to come.
 
 
 
