Thursday 30 July 2015

How to Efficiently Deploy Virtual Machines from VMware vSphere Content Library?

This post is the first of a series which aims to assess the performance of the VMware vSphere Content Library solution in various scenarios and provide vSphere Administrators with some ideas about how to set up a high performance Content Library environment. After providing an architectural overview of the Content Library components and inner workings, the post delves into the analysis and optimization of the most basic Content Library operation, i.e., the deployment of a virtual machine.

Introduction

The VMware vSphere Content Library empowers vSphere administrators to effectively and efficiently manage virtual machine templates, vApps, ISO images, and scripts. Specifically, an administrator can leverage Content Library to:
  • Store and manage content from a central location;
  • Share content across boundaries of vCenter Servers;
  • Deploy virtual machine templates from the Content Library directly onto a host or cluster for immediate use.
Typically, a vSphere datacenter includes a multitude of vCenter servers, ESXi servers, networks, and datastores. In such an environment it can be time-consuming to clone or deploy a virtual machine through all the ESXi servers, vCenter servers, and networks from a source datastore to a destination datastore. Moreover, this problem is compounded by the fact that the size of virtual machines and other content keeps getting larger over time. The objective of Content Library is to address these issues by transferring large amounts of data in the most efficient way.

Architectural Overview

Content Library is composed of three main components which run on a vCenter server:
  • A Content Library Service, which organizes and manages content sitting on various storage locations;
  • A Transfer Service, which oversees the transfer of content across said storage locations;
  • A database, which stores all the metadata associated with the content (e.g., type of content, date of creation, author/vendor, etc.).
The architecture diagram in Figure 1 shows how the three components interact with each other and with other vCenter components, along with the control path (depicted as thin black lines) and data path (depicted as thick red lines).
Figure 1. VMware vSphere Content Library architecture
The Content Library Service implements the control plane that manages storage and handles content operations such as deployment, upload, download, and synchronization. The Transfer Service implements the data plane that is responsible for actual data transfers between content stores, which may be datastores attached to ESXi hosts, NFS file systems mounted on the vCenter Server, or remote HTTP(S) servers.

Data Transfer

The data transfer performance varies depending on the storage type and available connectivity. The Transfer Service can transfer data in two ways: streaming mode and direct copy mode. The diagram in Figure 2 shows how the two modes work in a data transfer between datastores.
Figure 2. Content Library Data Transfer Flows
If the source and destination hosts have direct connectivity, the Transfer Service asks vCenter to instruct the source host to copy the content directly to the target host. When this is not possible (e.g., if the two hosts are connected to two different vCenter servers), streaming mode is used instead. In streaming mode the data flows through the Transfer Service itself. This involves one extra hop for the data, as well as compression/decompression of the VMDK disk files. Also, vCenter appliances are usually connected to a management network, which could become a bottleneck due to its limited bandwidth. For these reasons, direct copy mode typically performs better than streaming mode.
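To get a rough feel for how much the transfer path matters, the short calculation below estimates ideal, overhead-free transfer times for a 39GB template (the size used in the experiments later in this post) over the link speeds that appear in this series. It is only a back-of-envelope sketch: it ignores protocol overhead, storage latency, and compression, so the numbers are best-case times rather than predictions.

    # Ideal (overhead-free) transfer times for a 39 GB template over the links
    # discussed in this post. Real transfers are slower due to protocol overhead,
    # storage latency, and (in streaming mode) compression/decompression on the
    # vCenter appliance.
    TEMPLATE_BYTES = 39 * 1024**3          # 39 GB template

    LINKS_GBPS = {
        "8 Gb/s Fibre Channel (datastores on the same host)": 8,
        "1 Gb/s Ethernet (host-to-host or streaming)": 1,
        "10 Gb/s Ethernet": 10,
    }

    for name, gbps in LINKS_GBPS.items():
        seconds = TEMPLATE_BYTES * 8 / (gbps * 10**9)   # bits to move / link rate
        print(f"{name}: ~{seconds:.0f} s at line rate")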

Optimizing Virtual Machines Deployment

Having covered the Content Library architecture and transfer modes, we can now discuss how to optimize its performance, starting from the most basic operation: the deployment of a virtual machine. Deploying a virtual machine from the Content Library creates a new virtual machine by cloning it from a template. We assess the performance of deployment operations by measuring their completion time. This metric is obviously the most visible and important one from an administrator’s perspective.
The experiments discussed in this blog post demonstrate how deployment performance is impacted by the Content Library backing storage configuration and provide some guidelines to help administrators choose the most appropriate configuration based on performance and cost tradeoffs.
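Completion time is also straightforward to capture outside the UI: start a timer, kick off the deployment, and wait for the resulting vCenter task to finish. The Content Library deploy call itself belongs to the vSphere Automation SDK rather than pyVmomi, so the sketch below uses an ordinary full clone of a vCenter template as a stand-in; it exercises the same completion-time metric, and all host names, credentials, and object names are placeholders.

    # Minimal sketch (not the harness used for the experiments in this post) for
    # measuring completion time with pyVmomi: time a full clone of a template to
    # a chosen destination host and wait for the task to complete.
    import ssl
    import time

    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim


    def find_by_name(content, vimtype, name):
        """Return the first managed object of the given type with the given name."""
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        try:
            return next(obj for obj in view.view if obj.name == name)
        finally:
            view.DestroyView()


    def timed_clone(vc_host, user, pwd, template_name, dest_host_name, new_vm_name):
        ctx = ssl._create_unverified_context()              # lab use only
        si = SmartConnect(host=vc_host, user=user, pwd=pwd, sslContext=ctx)
        try:
            content = si.RetrieveContent()
            template = find_by_name(content, vim.VirtualMachine, template_name)
            dest_host = find_by_name(content, vim.HostSystem, dest_host_name)

            relocate = vim.vm.RelocateSpec(host=dest_host,
                                           pool=dest_host.parent.resourcePool)
            clone_spec = vim.vm.CloneSpec(location=relocate, powerOn=False)

            start = time.monotonic()
            WaitForTask(template.CloneVM_Task(folder=template.parent,
                                              name=new_vm_name,
                                              spec=clone_spec))
            return time.monotonic() - start                 # completion time in seconds
        finally:
            Disconnect(si)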

Experimental Testbed

We used a total of three servers, one for running the vCenter Appliance and another two to create a cluster over which virtual machines were deployed from the Content Library. The following table summarizes the hardware and software specifications of the testbed.
vCenter Server Host
    Server: Dell PowerEdge R910
        CPUs: Four 6-core Intel® Xeon® E7530 @ 1.87 GHz, Hyper-Threading enabled
        Memory: 80 GB
    Virtualization Platform: VMware vSphere 6.0 (RTM build #2494585)
        VM Configuration: 16 vCPUs, 32 GB of memory
        vCenter Appliance: VMware vCenter Server Appliance 6.0 (RTM build #2562625)

ESXi Hosts
    Servers: Two Dell PowerEdge R610
        CPUs: Two 4-core Intel® Xeon® E5530 @ 2.40 GHz, Hyper-Threading enabled
        Memory: 32 GB
    Virtualization Platform: VMware vSphere 6.0 (RTM build #2494585)
    Storage Adapter: QLogic ISP2532 dual-port 8 Gb Fibre Channel to PCI Express
    Network Adapter: Broadcom NetXtreme II BCM5709 1000Base-T (data rate: 1 Gbps)

Storage Array
    EMC VNX5700 storage array exposing two 20-disk RAID-5 LUNs with a capacity of 12 TB each
Figure 3 illustrates the experimental testbed along with the data transfer flows for the various experiments. We ran a workload that consisted of deploying a virtual machine from a Content Library item onto a cluster. All experiments used the same 39GB OVF template. We conducted various experiments, based on the possible configurations of the source content store (the storage backing the Content Library) and the destination content store (the storage where the new virtual machine was deployed), as shown in the following table.
Experiment 1: An ESXi host is connected to a VAAI-capable storage array. (VAAI stands for vStorage API for Array Integration, a technology that enables ESXi hosts to offload specific virtual machine and storage management operations to compliant storage hardware.) Both the source and destination content stores are datastores residing on said array.
Experiment 2: An ESXi host is connected to the same datastores as in Experiment 1; however, these datastores are either hosted on a non-VAAI array or on two different arrays.
Experiment 3: One ESXi host is connected to the source datastore, while a different host is connected to the destination datastore. The datastores are hosted on different arrays.
Experiment 4: The source content store is an NFS file system mounted on the vCenter server, while the destination content store is a datastore hosted on a storage array.

Figure 3. Storage configurations and data transfer flows

Experimental Results

Figure 4 shows the results of the four experiments described above in terms of deployment duration (lower is better), while the following table summarizes the main observations for each experiment.
Experiment 1: The best performance was achieved in Experiment 1 (two datastores backed by a VAAI array). This was expected, as in this scenario the actual data transfer occurs internally to the storage array, without any involvement from the ESXi host. This is obviously the most efficient scenario from a deployment perspective.
Experiment 2: In Experiment 2, although the array is not VAAI-capable (or the datastores are hosted on two separate arrays), the source and the destination datastores are connected to the same ESXi host. This means the data transfer occurs through the 8 Gb/s Fibre Channel connection. This scenario is about 20% slower than Experiment 1.
Experiment 3: The scenario of Experiment 3 is significantly slower (about three times) than Experiment 1 because the datastores are attached to two different ESXi hosts. This causes the data transfer to go through the 1 Gbps Ethernet connection. We also ran this experiment using a 10 Gbps Ethernet network, and found that the deployment duration was similar to the one measured in Experiment 2. This suggests that the 1 Gbps Ethernet connection is a significant bottleneck for this scenario.
Experiment 4: In the final scenario, Experiment 4, the template resides on an NFS file system mounted on the vCenter server. Because the template is stored in a compressed format on the NFS file system in order to save network bandwidth, its decompression on the vCenter server slows the data transfer quite noticeably. The network hops between the vCenter Server and the destination ESXi host may further slow the end-to-end data transfer. For these reasons, this scenario was about seven times slower than Experiment 1. We also ran the same experiment using a 10 Gbps network between the NFS server and the vCenter server and measured a completion time only slightly better than with the 1 Gbps network (1260 s vs. 1380 s). Given that compression and decompression are CPU-heavy operations, using a faster network may result in only a marginal performance improvement (see the bottleneck sketch after Figure 4).

Figure 4. Deployment completion time for the four storage configurations
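The small gap between the 10 Gbps and 1 Gbps results in Experiment 4 is consistent with a simple bottleneck model: in streaming mode the effective transfer rate is roughly the minimum of the network rate and the rate at which the vCenter appliance can decompress the template. The decompression rate used below is an assumption, chosen to be in the ballpark implied by the measured ~1380 s for the 39GB template; it is not a value reported by the experiments.

    # Simple min() bottleneck model for streaming mode (Experiment 4). The
    # decompression rate is an assumed figure, not a measurement from this post.
    TEMPLATE_BITS = 39 * 1024**3 * 8        # 39 GB template, in bits
    DECOMPRESS_GBPS = 0.25                  # assumed CPU-bound decompression rate

    for link_gbps in (1, 10):
        effective_gbps = min(link_gbps, DECOMPRESS_GBPS)
        seconds = TEMPLATE_BITS / (effective_gbps * 10**9)
        bottleneck = "network" if link_gbps < DECOMPRESS_GBPS else "decompression"
        print(f"{link_gbps} Gb/s link: ~{seconds:.0f} s (bottleneck: {bottleneck})")

Under these assumptions both links yield roughly the same completion time, which matches the observation that the faster network helped only marginally.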

Conclusions

This blog post explored how different Content Library backing storage configurations can affect the performance of a virtual machine deployment operation. The following guidelines may help an administrator optimize Content Library performance for this operation based on the storage options at their disposal:
  1. If no other optimizations are possible, the Content Library should at least be backed by a datastore connected to one of the ESXi hosts (the scenario of Experiment 3). Ideally a 10Gbps Ethernet connection should be employed.
  2. A better option is to have each ESXi host connected to both the source datastore (the one backing the Content Library) and the destination datastore(s) (the one(s) where the new virtual machine is being deployed). This is the scenario of Experiment 2.
  3. The best case is when all the ESXi hosts are connected to a VAAI-capable storage array and both the source and destination datastores reside on said array (Experiment 1).
Source:-
http://blogs.vmware.com/performance/2015/07/efficiently-deploy-vms-vmware-vsphere-content-library.html

Tuesday 28 July 2015

Gold Master Option of vApp Template in vCloud Director


You can see here that there is a “Gold Masters” option. There really is no difference between a “Template” and a “Gold Master” from a functional viewpoint; it is more to do with the development process. You can start off with a vApp Template being regarded as a “beta” version during a testing phase. Once the vApp author is satisfied with the template, it can be marked as a “Gold Master”, indicating that it has reached its final state.


Sunday 26 July 2015

Clipboard Copy and Paste does not work in vSphere Client 4.1 and later (1026437)

Symptoms

  • Cannot copy and paste from the virtual machine remote console to the system in which the vSphere Client is installed.
  • The Copy and Paste options are disabled.

Cause

This issue occurs because, for security reasons, the Copy and Paste options are disabled by default in vSphere Client 4.1 and later.

Resolution

To resolve this issue, you must enable the Copy and Paste options using the vSphere Client. Alternatively, you can use RDP (Remote Desktop Protocol) to connect to the Windows virtual machines.
To enable the Copy and Paste options for a specific virtual machine:

Note: This procedure enables copying and pasting content within files, but not copying the files themselves.

Note: VMware Tools must be installed for the Copy and Paste option to work. For more information, see Installing VMware Tools in a Windows virtual machine (1018377).
  1. Log in to a vCenter Server system using the vSphere Client and power off the virtual machine.
  2. Select the virtual machine and click the Summary tab.
  3. Click Edit Settings.
  4. Navigate to Options > Advanced > General and click Configuration Parameters.
  5. Click Add Row.
  6. Type these values in the Name and Value columns:

    Name                             Value
    isolation.tools.copy.disable     FALSE
    isolation.tools.paste.disable    FALSE


    Note: These options override any settings made in the VMware Tools control panel of the guest operating system.
  7. Click OK to close the Configuration Parameters dialog, and click OK again to close the Virtual Machine Properties dialog.
  8. Power on the virtual machine.
Note: If you vMotion a virtual machine to a host where isolation.tools.*="FALSE" is already set, the Copy and Paste options are automatically activated for that virtual machine.
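If you prefer to script this per-virtual-machine change instead of using the vSphere Client, the same two rows can be added through the vSphere API. The following is a minimal pyVmomi sketch (not part of the KB article); it assumes the virtual machine is already powered off as in step 1, and the connection details and VM name are placeholders.

    # Minimal pyVmomi sketch for adding the two isolation.tools.* rows to a VM's
    # configuration parameters. Assumes the VM is already powered off, per the
    # procedure above.
    import ssl

    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim


    def enable_copy_paste(vc_host, user, pwd, vm_name):
        ctx = ssl._create_unverified_context()              # lab use only
        si = SmartConnect(host=vc_host, user=user, pwd=pwd, sslContext=ctx)
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.VirtualMachine], True)
            vm = next(v for v in view.view if v.name == vm_name)
            view.DestroyView()

            spec = vim.vm.ConfigSpec(extraConfig=[
                vim.option.OptionValue(key="isolation.tools.copy.disable",  value="FALSE"),
                vim.option.OptionValue(key="isolation.tools.paste.disable", value="FALSE"),
            ])
            WaitForTask(vm.ReconfigVM_Task(spec))           # apply the new rows
        finally:
            Disconnect(si)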
To enable the Copy and Paste options for all virtual machines on an ESXi/ESX host:
  1. Log in to the ESX/ESXi host as a root user.
  2. Take a backup of the /etc/vmware/config file.
  3. Open the /etc/vmware/config file using a text editor.
  4. Add these entries to the file:

    vmx.fullpath = "/bin/vmx"
    isolation.tools.copy.disable="FALSE"
    isolation.tools.paste.disable="FALSE"

  5. Save and close the file.

    The changes take effect only after each virtual machine is restarted or resumed (or shut down and powered on again). This power cycle must be performed at the virtual machine level, not just within the guest operating system.
Note: These options do not persist after the host upgrade. If you upgrade to a newer version after enabling these options, the changes are lost and you may have to re-enable them.
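For a host-wide change across many hosts, the same edit can be scripted over SSH. The sketch below is only an illustration (it is not an official VMware tool) and assumes SSH access to the host is enabled; it backs up the file and appends the two isolation entries, so add the vmx.fullpath line from step 4 separately if your config file does not already contain it.

    # Illustrative sketch: apply steps 2-5 over SSH with paramiko instead of
    # editing /etc/vmware/config by hand. Host name and root password are
    # placeholders; SSH must be enabled on the ESXi/ESX host.
    import paramiko

    def enable_copy_paste_hostwide(esxi_host, root_password):
        """Back up /etc/vmware/config and append the two isolation entries."""
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # lab use only
        client.connect(esxi_host, username="root", password=root_password)
        try:
            cmd = (
                "cp /etc/vmware/config /etc/vmware/config.bak && "
                "echo 'isolation.tools.copy.disable=\"FALSE\"' >> /etc/vmware/config && "
                "echo 'isolation.tools.paste.disable=\"FALSE\"' >> /etc/vmware/config"
            )
            _, stdout, stderr = client.exec_command(cmd)
            if stdout.channel.recv_exit_status() != 0:      # wait for completion
                raise RuntimeError(stderr.read().decode())
        finally:
            client.close()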
For more information, see the Limiting Exposure of Sensitive Data Copied to the Clipboard section in the ESX Configuration Guide.
Source:-
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1026437&src=vmw_so_vex_ragga_1012


Tuesday 21 July 2015

Virtual NIC settings on a Windows guest are lost after a virtual hardware upgrade (1015572)

Symptoms

After upgrading a Windows virtual machine from hardware version 4 to hardware version 7, virtual NIC settings (such as static IP configuration) are lost.

Resolution

This issue occurs because the virtual network card configured in the virtual machine is moved to a different position on the virtual PCI bus, which results in the guest operating system treating the existing virtual network card as a new hardware device.

Overview of the VMUpgradeHelper Tool

To work around this issue, use the VMUpgradeHelper service before performing a virtual hardware upgrade on a virtual machine. For the updated version of the VMware Upgrade Helper, see:
 
The purpose of the VMUpgradeHelper service:
  • It allows you to save the current configuration of any virtual network cards to the registry.
  • It allows you to restore a previously saved configuration, overwriting the current configuration of the virtual network cards.

Note: The VMUpgradeHelper service is only available with the latest version of VMware Tools, so you must upgrade VMware Tools before upgrading from a hardware version 4 virtual machine to a hardware version 7 virtual machine. For more information, see:

Settings that are saved when using the VMUpgradeHelper Tool 

The VMUpgradeHelper service saves and restores the following NIC information:
  • IP
    • IPv4/v6 network addresses and subnet masks
    • Default gateways and cost metrics
  • DHCP state (enabled or disabled)
  • DNS
    • Domain name
    • Server search order list
    • The Register this connection's addresses in DNS setting

Settings that are not saved using the VMUpgradeHelper Tool

The VMUpgradeHelper service does not save the following settings:
  • WINS
    • The NetBIOS setting
  • Alternate configuration settings
  • IP filtering
    • Permitted IP protocols
    • Permitted TCP ports
    • Permitted UDP ports

Using the VMUpgradeHelper Tool

Note: In later versions of VMware Tools the VMUpgradeHelper tool may have a .bat extension. The usage is the same as the .exe version.
VMUpgradeHelper.exe / VMUpgradeHelper.bat usage:
  • /s  Saves network configuration into the registry.
  • /r  Restores network configuration from the registry.
  • /i  Installs the VMUpgradeHelper service.
  • /u  Removes the VMUpgradeHelper service.

Recommended procedure to save the network configuration before performing a virtual hardware upgrade: 
  1. Open a command prompt inside the Windows virtual machine.
  2. Navigate to the VMware Tools install directory (C:\Program Files\VMware\VMware Tools).
  3. Save the network configuration using one of the following:
    • VMUpgradeHelper.exe /s
    • VMUpgradeHelper.bat /s
  4. Power off the virtual machine and upgrade the virtual machine hardware. See Upgrading a virtual machine to the latest hardware version (1010675).
  5. Power on the virtual machine and open a command prompt inside the Windows virtual machine.
  6. Navigate to the VMware Tools install directory (C:\Program Files\VMware\VMware Tools).
  7. Restore the virtual machine network configuration using one of the following:
    • VMUpgradeHelper.exe /r
    • VMUpgradeHelper.bat /r
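If you script guest preparation, the save and restore steps can also be wrapped in a few lines. This is only a convenience sketch (not part of the KB article); it assumes the default VMware Tools install path, and the step numbers in the comments refer to the procedure above.

    # Convenience sketch: run the VMUpgradeHelper save or restore step from
    # inside the Windows guest. Adjust TOOLS_DIR if VMware Tools is installed
    # elsewhere; swap in VMUpgradeHelper.bat if your Tools version ships the
    # .bat variant instead of the .exe.
    import subprocess

    TOOLS_DIR = r"C:\Program Files\VMware\VMware Tools"

    def vm_upgrade_helper(flag):
        """flag is '/s' to save or '/r' to restore the saved NIC configuration."""
        subprocess.run(["cmd", "/c", "VMUpgradeHelper.exe", flag],
                       cwd=TOOLS_DIR, check=True)

    # Step 3 (before the hardware upgrade):   vm_upgrade_helper("/s")
    # Step 7 (after powering the VM back on): vm_upgrade_helper("/r")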
Source:-
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1015572&src=vmw_so_vex_ragga_1012