
Archive for the ‘Virtualization’ Category

Insights on SQL Server 2014 Data Warehousing Edition

For anyone who is thinking about selecting the Data Warehouse edition of SQL Server 2014, I just want to highlight a few things required to install this product and get it up and running.

First off, though, what is SQL Server 2014 Data Warehousing Edition? In short, it is a version of SQL Server that is available as an image on an Azure VM, and the product seems to be flying a bit under the radar.  In terms of licensing and features, it is closest to Enterprise Edition and is similar to BI Edition.  It houses the full stack of BI products, and it also allows for database snapshots like Enterprise Edition.  The biggest single difference I can find is that it is optimized to use Azure Storage in the cloud. Interesting, no?  I see its primary purpose as replacing an existing on-premises data warehouse, or functioning as the starting point for a new data warehouse that will be fairly large.

I won’t go into provisioning a cloud VM in this blog, but if you want more info, here’s a link:

http://msdn.microsoft.com/library/dn387396.aspx

Ok, on to some tech points and what to expect:

First and foremost, this edition’s minimum recommended VM size is an A7. Whoa!

Pretty steep for a minimum spec: an A7 is 8 cores with 56 GB of RAM.  We all know minimum specs are just that, the bare minimum, and we usually end up going larger.

If you are unfamiliar with Azure VM sizing, take a look here:

http://msdn.microsoft.com/en-us/library/azure/dn197896.aspx

Second, even to do a basic install, it is going to require that you have several 1-terabyte storage locations available for it to harness in Azure Storage. Double whoa!

When you first log in to this VM, you will not be able to connect SSMS to the SQL instance. Instead, you are prompted to configure storage containers for SQL 2014 DW Edition. This can be done in the Portal, or it can be done via Azure PowerShell, and it is documented quite well here:

http://msdn.microsoft.com/library/dn387397.aspx

In a nutshell, it is quite easy to attach the disks through the Portal application on Azure: just browse to your VM and click “Attach” at the bottom of the screen.  The VM will reboot, and you can then confirm the process in the logs listed in the link above.  As I mentioned earlier, you will know whether it is up and running, because you will get a login error from SSMS if it is not properly set up.  One thing to keep in mind is that LUNs are numbered 0-X, not 1-X; I made this mistake when I first read the log, thinking it was complete when I still needed to attach one more disk.
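If you prefer scripting to the Portal, the classic Azure (Service Management) PowerShell module can attach the data disks for you. The service, VM, and disk label names below are hypothetical placeholders, so treat this as a rough sketch rather than the exact commands from the documentation linked above:

# Hypothetical service, VM, and label names; classic Azure PowerShell module.
# Attach two new ~1 TB data disks at LUNs 0 and 1, then apply the change.
Get-AzureVM -ServiceName "MyCloudService" -Name "MyDwVm" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 -DiskLabel "SQLData0" -LUN 0 |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 -DiskLabel "SQLData1" -LUN 1 |
    Update-AzureVM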

Once you have configured the appropriate number of storage LUNs, you must then use Disk Management in Windows to format and label them: E:, F:, G:, etc.
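If you would rather not click through Disk Management for each disk, the Windows Server storage cmdlets can do the same job from an elevated PowerShell session inside the VM; a minimal sketch (the volume label is arbitrary):

# Initialize, partition, and format every raw (newly attached) disk in one pass.
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "SQLDW" -Confirm:$false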

Only once the SQL instance finds its required number of storage containers will it allow you to log in via SSMS.

So what is going on here? Well, some good stuff in my opinion.

  1. It forces the end user to appropriate several disk locations instead of just using the default C:\ drive to store everything. This is a great optimization because it spreads the disk activity out over multiple LUNs. It also enforces separating the data files from the operating system disk and the page files. Think about how many database systems you have worked on that have this design flaw; a lot of them do.
  2. It assumes you mean business, and it requires a massive amount of storage up front just to install. Either you need this edition of SQL Server or you don’t. This is not SQL Express or a departmental application server; this is a full-size enterprise application that is capable of migrating an on-premises DW to Azure.

Even though one might be put off a bit that it requires 4+ terabytes of storage to install, I actually like the fact that it enforces good design and automatically gives some overhead for growth.

No hardware budget excuses this time. A very important point is that even though it requires you to appropriate 4+ TB of storage, YOU ARE NOT BILLED FOR THE STORAGE YOU APPROPRIATE; you are only billed for the storage that you actually fill with data.

Once you understand that, this product starts making more sense. You can design a large storage location, with plenty of room to grow, without having to buy a large storage location. In a traditional on-premises environment, this could mean forking over some major cash. In case you have never noticed, SANs are not inexpensive, and they take a long time to arrive onsite!

In summary, I am glad that this product is designed the way it is. It enforces good design from the beginning. It is not the correct product for a lot of applications due to its scale, but for anyone who wants to migrate or build a true enterprise-size data warehouse in Azure, SQL 2014 DW Edition is perfect.

 

 

Windows Server 2003 to 2012 R2 – More Than Just End of Life

With end of life for Windows Server 2003 fast approaching on July 14, 2015, it will be hard for many organizations to make the move to a new server operating system, not unlike the pain many organizations felt with the move from Windows XP.

There are many business-related reasons that companies need to start their migration to Windows Server 2012 R2 now. For example, when customers made the move from Windows XP, many found they should have planned further in advance, because migrations can take eight months or longer depending on the size and complexity of the environment. Security alone should be a big enough business reason to move to a supported platform: in 2013, Microsoft released 37 critical updates for Windows Server 2003, and once end of life happens, no more patches will be released.  By not patching the server environment, you run the risk of malicious attacks, system bugs, and falling out of PCI compliance.

The good news is that while the move might be painful, in the long run it will be worth the trouble. Windows Server 2012 R2 offers so many enhancements and new features that once you have completed the migration and become familiar with it, you will probably wonder why you waited so long.

Windows Server 2012 R2 offers many enhancements, including:

  • PowerShell 4.0 – PowerShell 3.0 alone has 2300 more cmdlets than PowerShell 2.0
  • Hyper-V 3.0 – Supports 64 virtual processors and 1 TB of memory per VM. Also supports the VHDX format for large disk capacity, and live migrations
  • SMB 3.02 – Server 2003 supports SMB 1.0
  • Work Folders – Brings the functionality of Dropbox to your corporate servers
  • Desired State Configuration – Lets you maintain server configuration across the board with baselines (see the sketch after this list)
  • Storage Tiering – Dynamically moves chunks of stored data between slower and faster drives
  • Data Deduplication – Reduces storage consumption, and with Server 2012 R2 you can now run deduplication on virtual machines, which is also great for VDI environments
  • Workplace Join – Allows users to register personal devices with Active Directory to gain certificate-based authentication and single sign-on to the domain
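To give a flavor of Desired State Configuration, here is a minimal, hypothetical baseline that ensures IIS is installed and its service is running on a node named Server01; a real baseline would cover far more settings:

# A minimal DSC baseline (PowerShell 4.0); the node and feature names are examples only.
Configuration WebBaseline {
    Node "Server01" {
        WindowsFeature IIS {
            Name   = "Web-Server"
            Ensure = "Present"
        }
        Service W3SVC {
            Name        = "W3SVC"
            StartupType = "Automatic"
            State       = "Running"
            DependsOn   = "[WindowsFeature]IIS"
        }
    }
}

WebBaseline -OutputPath "C:\DSC\WebBaseline"            # compile the configuration to a MOF file
Start-DscConfiguration -Path "C:\DSC\WebBaseline" -Wait -Verbose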

You can see from just these features how far the Windows Server OS has come over the last 10 years. Scalability, speed, virtualization, mobile device management, and cloud computing have been vastly improved or simply were not possible with Windows Server 2003.

With current trends moving toward organizations embracing user-centric environments and cloud computing, Server 2012 R2 is a stepping stone in the right direction.

So while the migration to Windows Server 2012 R2 may be painful, all will be forgotten once the organization and its server administrators can utilize the new features and notice the new ease of daily management activities.

 

 

 

Virtualizing SharePoint 2013 Workloads

Most new SharePoint 2013 implementations these days run on virtual machines, and the question of whether to virtualize SQL Server has long been put to rest. Indeed, with the new Windows Server 2012 R2 Hyper-V VM specs of up to 64 vCPUs, 1 TB of RAM, and 64 TB virtual disks, it is hard to make a case for physical hardware.

Both Microsoft Hyper-V and VMware have published recommendations for working with virtualized SharePoint farms. The list of recommendations is long (and somewhat tedious), so this cheat-sheet aims to summarize the most important ones and provide real-world advice for SharePoint and virtualization architects.

  • When virtualizing SharePoint 2013, Microsoft recommends a minimum of 4 and a maximum of 8 CPU cores per VM. Start low (4) and scale up as needed. With multiprocessor virtual machines, the physical host needs to ensure enough physical CPU cores are available before scheduling that VM’s threads for execution; therefore, in theory, the higher the number of vCPUs, the longer the potential wait times for that VM. In every version since 4.0, VMware has made improvements to the CPU scheduling algorithm to reduce the wait time for multiprocessor VMs using relaxed co-scheduling. Still, it’s wise to consult the documentation for your particular version to see the specific limitations and recommendations.

 

  • Ensure true high availability by using anti-affinity rules.  Your SharePoint admin should tell you which VM hosts which role, and you will need to keep VMs with the same role on separate physical hosts.  For example, all VMs that host the web role should not end up on the same physical host, so your typical mid-size two-tier farm should look something like this:

[Diagram: a two-tier farm with web-role and database-role VMs spread across separate physical hosts]

  • When powering down the farm, start with the web layer and work your way down to the database layer. When powering up, go in the opposite direction.

 

  • Do not oversubscribe or thin-provision PROD machines; do oversubscribe and thin-provision DEV and TEST workloads.

 

  • NUMA (non-uniform memory access) partition boundaries: The high-level recommendation from both Microsoft and VMware is not to cross NUMA boundaries. Different chip manufacturers have different definitions of NUMA, but the majority opinion seems to be that a NUMA node equals a physical CPU socket, not a CPU core. For example, for a physical host with 8 quad-core CPUs and 256 GB of RAM, a NUMA partition is 32 GB. Ensure that individual SharePoint VMs will fit into a single partition, i.e., will not be assigned more than 32 GB of RAM each.

 

  • Do not use dynamic memory: Certain SharePoint components like search and distributed cache use memory-cached objects extensively and are unable to dynamically resize their caches when the available memory changes. Therefore, dynamic memory mechanisms like minimum/maximum RAM, shares, the ballooning driver, etc. will not work well with SharePoint 2013. Again, your SharePoint admin should provide a detailed design and advise which VM hosts which particular service. (A consolidated PowerShell sketch of these Hyper-V VM settings follows at the end of this list.)

 

  • Do not save VM state at shutdown or use snapshots in PROD: SharePoint is a transactional application, and saving VM state can lead to an inconsistent topology after the VM comes back up or is reverted to a previous snapshot.

 

  • Disable time synchronization between the host and the VM: Same reasoning as the previous point. All transaction events are time-stamped, and latency during time synchronization can cause an inconsistent topology. SharePoint VMs will use the domain time synchronization mechanism to keep local clocks in sync.

 

  • Do not configure “always start machine automatically”: There may be cases where a SharePoint VM is shut down for a reason, and starting it automatically after a physical host reboot can cause problems.

 

  • TCP Chimney offload: Please refer to this VMware post on reasons why this setting may need to be disabled. This setting is not unique to SharePoint, and unless it is standard practice for all web VMs or is part of the image, it should not be configured.

 

  • For disaster recovery, virtualization has been a godsend for quite some time. Using VM replication to a secondary site is by far the simplest SharePoint DR scenario to configure and maintain.

 

  • Other settings that are not SharePoint-specific: things like storage host multipathing, storage partition alignment, physical NIC teaming, and configuring shared storage for vMotion hold true for all VMware implementations.
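For Hyper-V farms, several of the bullets above (static memory, shutdown behavior, time synchronization, automatic start, and anti-affinity) can be applied with a few PowerShell commands. This is only a sketch with hypothetical VM and cluster group names, assuming Windows Server 2012 R2 Hyper-V and, for the anti-affinity piece, a failover cluster; VMware shops would use DRS anti-affinity rules instead:

# Hypothetical VM names; run on the Hyper-V host (the VM should be off when changing CPU/memory).
$vm = "SP-APP01"

# Fixed vCPU count and static (non-dynamic) memory for SharePoint workloads.
Set-VMProcessor -VMName $vm -Count 4
Set-VMMemory -VMName $vm -DynamicMemoryEnabled $false -StartupBytes 16GB

# Shut the guest down cleanly instead of saving state, and do not start it automatically after a host reboot.
Set-VM -Name $vm -AutomaticStopAction ShutDown -AutomaticStartAction Nothing

# Let the domain keep time; disable host-to-guest time synchronization.
Disable-VMIntegrationService -VMName $vm -Name "Time Synchronization"

# Anti-affinity on a failover cluster: tag the web-role VMs so they prefer separate hosts.
$class = New-Object System.Collections.Specialized.StringCollection
$class.Add("SharePointWebRole") | Out-Null
(Get-ClusterGroup -Name "SP-WEB01").AntiAffinityClassNames = $class
(Get-ClusterGroup -Name "SP-WEB02").AntiAffinityClassNames = $class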

 

 

End Of Life For Windows XP Or Is It?

Microsoft finally ended support for Windows XP; its end of life arrived on April 8, 2014. So what does this mean for those of us still on Windows XP?  No more support, hotfixes, and patches? Well, not really: Microsoft will be creating patches and security updates for years ahead. But like everything, it comes at a cost.

Most who know this think, “Great, I am glad I can still get support, but how?” Microsoft has Custom Support programs that are designed for large customers. According to the information I have seen, there is an annual cost that increases each year and is approximately $200 per machine for the first year. At first that does not seem too crazy, but it can get quite expensive: if you have 10,000 Windows XP machines, that would cost a company $2,000,000 for one year of support. Wow! Analysts are saying that patches rated Critical will be included in this support, but bugs marked Important will come with an extra cost, and anything rated lower will not be patched at all.

Customers will receive hotfixes through a secure process; Microsoft will only make the information available to the companies that are enrolled in the Custom Support program. Typically, Microsoft will offer Custom Support agreements for up to three years after the end of life of an operating system.

What is interesting is that even though end of life has arrived for Windows XP and the Custom Support Program is available, Microsoft still seems to be doing some limited support. Take, for example, the Internet Explorer vulnerability that was exploited on Windows XP machines: Microsoft decided to patch Windows XP machines outside of the Custom Support Program for this vulnerability, stating that the patch was created and released because the issue occurred so close to the end of Windows XP support, as explained in this blog post released by Microsoft.

It’s great that you can still get support for your Windows XP machines, but the cost associated with a retired operating system should make any company want to make the leap to Windows 7 or 8 as soon as possible. Fortunately, Microsoft has many tools in place to make these moves much easier than they were in the days of Windows XP. For example, with SCCM 2012 you can keep your machines current with the OS, patches, antivirus, and software, just to name a few features, and it can all be automated.

If your company is still on Windows XP and you have not started to move off of it, now is the time to start moving from where you are today to where you need to be in the future.  This starts with planning, proper infrastructure, and tools. If done properly, companies can stay current for many years to come.

If your company is on Windows XP and needs help getting the wheels in motion, check out this link to get started today.

 

 

Pervasive Data in Microsoft’s Cloud OS

As early as the beginning of this year, Microsoft began positioning its “Cloud OS” concept, based on the pillars of Windows Server and Microsoft (formerly Windows) Azure.  This perspective on the spectrum of Microsoft’s offerings casts Cloud OS as a giant next-generation operating system, with all the scope and scalability that cloud computing offers.

Complementary to Windows Server and Microsoft Azure, additional supporting technologies provide services like identity management (Active Directory and Azure AD), built-in virtualization (Hyper-V), and consolidated management (System Center).  Basically, it’s a bundle of products that can be joined more or less seamlessly to provide a completely cloud-based computing environment.  Microsoft technologies are increasingly being tailored to flesh out the Cloud OS story, and they demonstrate Microsoft’s pivot toward the “platforms and services” line.

But another critical part of the Cloud OS story is data, and that’s where SQL Server comes in.  SQL Server 2014, running on-premises in your datacenter, in the cloud on Azure VMs, or both, is your modern organization’s data backbone.  As part of the Cloud OS story, SQL Server 2014 is a bridge between on-premises and Azure cloud-based data assets.  Advanced integration with Microsoft Azure allows SQL Server to support next-generation hybrid architectures for backups, DR, replication, and data storage/delivery.

Schedule automatic offsite backups.  Pin a frequently used Data Mart in-memory for maximum query performance.  Leverage your Big Data assets against SQL Server data with an Azure HDInsight cluster.  Refresh Power BI content from a Data Warehouse hosted on an Azure VM.  All from within the cost-effective environment of the Cloud.
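As one concrete example of the hybrid story, SQL Server 2014 can back a database up straight to Azure blob storage. The sketch below uses the SQL Server PowerShell module with hypothetical server, database, credential, and storage URL values; it is illustrative rather than a step-by-step recipe:

# Hypothetical names and URL; assumes the SQL Server 2014 SQLPS module and a SQL Server
# credential ("AzureStorageCred") that holds the Azure storage account key.
Import-Module SQLPS -DisableNameChecking

Backup-SqlDatabase -ServerInstance "SQLDW01" `
    -Database "ContosoDW" `
    -BackupFile "https://contosostorage.blob.core.windows.net/backups/ContosoDW.bak" `
    -SqlCredential "AzureStorageCred" `
    -CompressionOption On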

So what does the Cloud OS story mean to users?  It means that what we’re seeing now in terms of shared experiences across our tablets, smartphones, and TVs is just the beginning.  As the era of Big Data continues to dawn, we will all be floating on a sea of data.  And the cloud is where that sea will reside.

The Cloud OS as a whole ultimately empowers consumers and users with computing capability over that sea of data, wherever they are and wherever they want it.   In terms of data, this moves toward the larger goal of giving businesses the ability to identify, gather, and use data from an endless variety of internal and external sources to unlock business insights and turn that information into action.

That’s the idea of pervasive data.  And in Microsoft’s Cloud OS story, it’s empowered by self-service BI from Office 365 and SharePoint Online, using SQL Server and Azure technologies under the covers but all accessed through an interface as familiar and comfortable as Excel.  And it’s coming soon to a device near you…

Strengthen Company Culture with Yammer enhanced by HDInsight

In a world of broadband internet connections, online collaboration tools, and the ability to work from almost anywhere, office culture can be difficult to sustain.  This especially holds true for people who live in large cities (where the commute can be problematic) or in harsh climates (like the never-ending winter in Chicago this year).   Yammer can help by enabling remote social interaction.

Yammer is an enterprise social network that aims to connect people in the office.  A few of its features are instant messaging, user profiles, a primary news feed, interest groups, recommendations for people to follow and groups to join, and a recent activity feed.  The interface is clean and well designed.  One of the great things is that once you start using Yammer, it is really easy to keep using it.

There is one area where Yammer seems to fall short: there is no clear way to bring together people who have common interests.  The users and groups that Yammer recommends to me are based on the groups I am a part of and the people I follow; it does not take into consideration any of the data in my user profile.

Perficient recently held a hack-a-thon where my team identified this shortcoming.  Social interaction via online collaboration tools wasn’t cutting it.  In an online culture, how can we leverage all of our tools to help facilitate more meaningful social gatherings?  The answer was to use the interest data that co-workers have provided through Yammer to generate meaningful recommendations.  A Yammer profile consists of many different “interest groups”: it lists categories such as Expertise, Interests, Previous Company, and Schools Attended.  All of these can be classified as conversation topics and can be used as a common social interest.

This is where HDInsight powered by Hadoop and Mahout can help.  Mahout can consume massive quantities of information and return logical connections represented within the data.  For additional reading about Hadoop and Mahout click here.

Using an HDInsight Hadoop cluster in coordination with the Mahout recommendation engine, we could provide meaningful recommendations to users based on their individual interests.  This wouldn’t just recommend topics a user might be interested in, but also groups they could create or join with other users who share those interests, similar to how Facebook suggests people you may know, groups to join, or pages you may like.
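As a rough sketch of how that might look, the Mahout item-based recommender can be submitted to an HDInsight cluster with the Azure PowerShell HDInsight cmdlets of the time. The cluster name, jar path, and input/output locations below are all hypothetical, and the Yammer interest data would first have to be exported into the user/item preference format Mahout expects:

# Hypothetical cluster name, jar location, and storage paths; Azure PowerShell HDInsight cmdlets (circa 2014).
$mahoutArgs = @(
    "--input",  "wasb:///yammer/interests.csv",
    "--output", "wasb:///yammer/recommendations",
    "--similarityClassname", "SIMILARITY_COOCCURRENCE"
)

$mahoutJob = New-AzureHDInsightMapReduceJobDefinition `
    -JarFile "wasb:///example/jars/mahout-core-0.9-job.jar" `
    -ClassName "org.apache.mahout.cf.taste.hadoop.item.RecommenderJob" `
    -Arguments $mahoutArgs

$job = Start-AzureHDInsightJob -Cluster "yammer-hdi" -JobDefinition $mahoutJob
Wait-AzureHDInsightJob -Job $job -WaitTimeoutInSeconds 3600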

Creating these logical online groups would “connect the dots” and uncover similarities between people that might otherwise remain hidden.  It could also help facilitate in-person group outings, social gatherings, or simply more friendships and camaraderie in the office.  Through this, you are creating a more meaningful environment aided by technology.

A thriving office culture can stand out in a world where telecommuting tends to be more convenient.  This may not convince everyone to come to the office. However, instead of the office feeling obligatory, implementing a solution like this can encourage more people to choose to commute in for the social camaraderie.  All of this can be done for free through the Yammer API and a Windows Azure account.

Windows Azure: Retiring Windows Server 2008 and how to upgrade

Beginning on June 2, 2014, Windows Azure will be retiring Windows Server 2008.  This means that you will no longer be able to deploy a new Cloud Service or manage your existing services on virtual machines running Windows Server 2008.

Windows Azure currently supports four different GuestOS ‘versions’:

  • GuestOS 1.x – Windows Server 2008
  • GuestOS 2.x – Windows Server 2008 R2
  • GuestOS 3.x – Windows Server 2012
  • GuestOS 4.x – Windows Server 2012 R2

If your Cloud Service has not been upgraded and is still running on Windows Server 2008, you must upgrade the servers that power your service.  How do you do that?  Isn’t the point of running a PaaS cloud service instead of IaaS that the operating system and hardware are handled for me?  The short answer is yes, but…

PaaS will take care of much of the hardware, IIS patching, and OS patching for you, but Azure will not perform a full OS upgrade for your entire service unless you tell it to.  That is because incompatibilities between cloud services and operating systems are likely to arise, which would force developers to try to fix code on the fly.  That is not only bad for uptime but could also open some very serious security holes.

Thankfully, the days of manually upgrading the server OS for your service are in the past.  Azure makes it easy to upgrade the guest OS for your service.  You can even keep your production service on Windows Server 2008 while upgrading your staging environment and deploying your service there.  This allows developers to fix any outstanding bugs that are introduced by the operating system upgrade.

How do you upgrade your staging environment?  It is pretty straightforward.  From the cloud service dashboard, select your staging environment and choose Configure.  At the bottom of the page, find the operating system section.  You will see drop-down menus for OS Family and OS Version.  Select the proper OS Family (in this case, anything but 1.x) and OS Version.  To always have the most up-to-date OS version, select Automatic; this ensures your cloud service will always be running on the latest guest OS release available.  If you do not want this, select a static OS version; this guarantees that your cloud service will keep running that OS until you upgrade it in the future.
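The same change can be made outside the portal: the guest OS family and version live in the service configuration (.cscfg) file as the osFamily and osVersion attributes on the ServiceConfiguration element, and an updated configuration can be pushed to the staging slot with the classic Azure PowerShell cmdlets. The service name and file path below are hypothetical:

# Hypothetical service name and path; classic (Service Management) Azure PowerShell.
# In the .cscfg, osFamily="4" maps to Windows Server 2012 R2 and osVersion="*" means automatic updates.
Set-AzureDeployment -Config `
    -ServiceName "MyCloudService" `
    -Slot "Staging" `
    -Configuration "C:\deploy\ServiceConfiguration.Cloud.cscfg"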

When the service is cleared for production, it is time to configure your production environment.  Upgrading your production environment can lead to some downtime for your service, but there is a way to avoid it.  Normally you would need to configure your staging and production environments independently, but you can instead swap your staging and production environments using the Swap option in the portal.  This effectively promotes your staging environment into production.  The change happens within seconds, and any downtime experienced will be minimal.
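The swap itself is also a one-liner from PowerShell if you prefer it to the portal button; again, the service name is a placeholder:

# Hypothetical service name; performs a VIP swap between the Staging and Production slots.
Move-AzureDeployment -ServiceName "MyCloudService"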

After the swap, you can rebuild and configure the former production environment, which is now your staging environment, to match your current production environment.

New Azure VMs improve SQL Server Data Warehousing in the cloud

While poking around in Azure, looking to set up a BI demo VM, I noticed that Microsoft had added a few SQL Server-oriented images to their catalog: VMs labeled “SQL Server… for Data Warehousing”!

There was one for SQL Server 2012 (SQL Server 2012 SP1 for Data Warehousing on WS 2012) and one for the current CTP2 version of SQL Server 2014  (SQL Server 2014 CTP2 Evaluation for Data Warehousing on WS 2012)!

My curiosity piqued, I ran (figuratively) to the Bings, and sure enough!  There it was:  confirmation, including some nice guidelines on configuration of VMs for DW purposes.

My favorite factoids:

  • Use an A6 VM for the SQL 2012 image, an A7 for 2014.   (This was well-timed for me because I was about to put the 2014 on an A6…)
  • Use page compression for Data Warehouses up to 400 GB (see the sketch after this list)
  • Use one file per filegroup for best throughput (this prevents multilevel disk striping), and for Data Warehouses under 1 GB you should need only one filegroup
  • However, you can look at using multiple filegroups to store staging data separately from production, to separate low-latency data from high-latency data, to run parallel data loads, and more.
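As a sketch of the page-compression guidance (the server, database, and fact table names are made up), the rebuild can be issued from PowerShell with Invoke-Sqlcmd:

# Hypothetical server, database, and table; assumes the SQLPS module is available on the VM.
Import-Module SQLPS -DisableNameChecking

$rebuild = @"
ALTER TABLE dbo.FactSales
REBUILD WITH (DATA_COMPRESSION = PAGE);
"@

Invoke-Sqlcmd -ServerInstance "localhost" -Database "ContosoDW" -Query $rebuild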

Just be aware that the SQL Server 2014 Azure image will expire at the beginning of August, as that product moves along the path to RTM.

 

Windows Azure: PaaS changing the landscape of online gaming

Titanfall is a new blockbuster game for the Xbox One.  It is being published by Electronic Arts and is due to be released in March 2014.  Titanfall is a first-person shooter that will have much of its AI hosting, physics calculations, online matchmaking, and multi-player dedicated servers hosted in Windows Azure.  This means several things:

  1. Azure’s IaaS provides dedicated servers for multi-player games, providing near-infinite bandwidth with low server pings and anti-cheat enabled
  2. Azure’s PaaS is being utilized to provide physics calculations and specialized AI to learn your style of play
  3. PaaS and dedicated servers auto-scale to provide fast, dynamic content to players around the world at a consistent level of performance

Multi-player infrastructure background

Traditionally, multi-player games have been played using a client/server paradigm.  This paradigm generally involves a computer acting as a dedicated server for the game.  The dedicated server accepts connections from a specific number of players and handles communication between the clients.  The server normally does not perform any game-relevant calculations but acts as a central repository where players send update information, which is then distributed to and consumed by every client.

Recently the game development community has moved away from the dedicated server model due to operational cost and replaced it with a player-host model.  The player-host model essentially means that one player hosts the game and every other player connects to that host.  This paradigm has several disadvantages for networked multi-player gaming but was adopted to save the cost of running dedicated servers as game hosts.  A few of the obvious disadvantages of the player-host model are:

  1. Inconsistent bandwidth and server lag of the player chosen to be the host
  2. No anti-cheat enabled on host
  3. Slower updates / increased lag due to server not being dedicated
  4. Local player receives faster updates than other players

How Azure fixes this

Depending on a cloud infrastructure for a fast-paced, reaction-driven game is a significant leap of faith.  Video games generally run in a continuous loop created by the game engine to repeatedly update all of the game data (AI, particles, physics, player movement, event handling, etc.) and then draw that data to the screen.  It takes a substantial amount of CPU and GPU power to calculate and render all of the in-game objects at the speeds necessary to hit the target of 60 frames per second.

The developer of Titanfall, Respawn Entertainment, is utilizing Azure PaaS to handle several expensive calculations normally performed by the local host (console or PC).  These calculations are typically done on the local host so the player experiences minimal lag.  With these calculations offloaded to the cloud without affecting gameplay, the developers can devote the Xbox One hardware to more graphically intense environments.  This strategy could also extend the life of the Xbox One further into the future.

Cloud computing services such as Azure have made dedicated servers economical once again.  With automatic server scaling and inexpensive virtual machines, server costs and the total hours of manpower have been significantly reduced.  The more calculations that are performed in the cloud, the more you can do with the hardware available.  Put another way, offloading more calculations to the cloud lowers the hardware entry point for other platforms.  If a developer is able to process 90% of the intensive calculations on an Azure compute cluster, then the hardware needed to play the game can be anything from a tablet to a workstation.  This has the potential to increase the install base substantially.

Games are real-time applications that depend on milliseconds and timing.  Azure is effectively performing calculations for a real-time application and delivering results to multiple parties simultaneously.  If the Titanfall launch performs well, expect hundreds of future Xbox One games to utilize Windows Azure, making the cloud (and Azure) a dominant force in multi-player gaming for years to come.

Windows Server 2012 R2 Hyper-V – Overview of Generation 2 VMs

With the release of Windows Server 2012 R2 come many great new features, including an improved virtual machine type called generation 2.

Generation 2 virtual machines provide quite a few enhancements across the spectrum of Hyper-V VM technology. Perhaps most notable is the removal of legacy emulated hardware. Removing the legacy network adapter, IDE controller, floppy controller, serial controller (COM ports), and PCI bus results in a more efficient VM. You should see faster boot times and quicker installations from .iso files. How does a VM boot without these integral components? Where necessary, they have been replaced with software-based versions. (A short PowerShell sketch for creating a generation 2 VM follows the feature list below.)

Other enhancements include:

  • Replaced BIOS with UEFI (Unified Extensible Firmware Interface)
    • Faster boot times
    • Support for boot volumes up to 64 TB (uses GPT instead of MBR)
  • Enhanced Security
    • Smaller attack surface
    • Secure Boot – Prevents unauthorized firmware, drivers and OS from running during boot.
  • Expansion of data and boot disks while VM is running. Nice!
  • Complete reliance on the VHDX file format, resulting in much better performance (VHDs are no longer supported).
  • Enhanced Session Mode
    • This allows device redirection and the ability to control display configuration when connected via the Virtual Machine Connection tool.
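To see a few of these features in one place, here is a minimal sketch of creating a generation 2 VM and later growing its boot disk while the VM is running; the names, paths, and sizes are hypothetical:

# Hypothetical names, paths, and sizes; Windows Server 2012 R2 Hyper-V module.
New-VM -Name "Gen2-Test" `
    -Generation 2 `
    -MemoryStartupBytes 4GB `
    -NewVHDPath "D:\VMs\Gen2-Test\OS.vhdx" `
    -NewVHDSizeBytes 127GB `
    -SwitchName "External"

# Generation 2 VMs boot from a SCSI-attached VHDX, so the boot disk can be expanded online.
Resize-VHD -Path "D:\VMs\Gen2-Test\OS.vhdx" -SizeBytes 256GB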

Some things to keep in mind with generation 2 machines: Read the rest of this post »