Sunday, 7 July 2013

Uploading Files to ESXi Local Datastore without vSphere Client / WebClient


In one of our remote locations, we wanted to build a DC VM. We had the ISO file for Windows Server 2008 but unfortunately no vSphere Client to upload the ISO to the datastore, and the internet speed was very poor (8 kBps), so we weren't able to download the vSphere Client. Luckily, we had SSH enabled on that host, which is a standard we follow for every newly deployed ESXi box.

Here is the procedure to upload a file to the local datastore through SFTP.

  1. Connect to the ESXi server using an SFTP client (I used WinSCP).
  2. Navigate to /vmfs/volumes/<local-datastore-name>/
  3. Drag and drop the ISO image from the local disk to the local datastore.
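
If an SFTP GUI isn't handy, the same transfer can be done from a command line over SCP, since SSH is already enabled on the host. A minimal sketch (the host name, datastore name and ISO file name below are placeholders):

  scp win2008.iso root@esxi-host:/vmfs/volumes/datastore1/
  ssh root@esxi-host ls -lh /vmfs/volumes/datastore1/win2008.iso   # verify the upload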

ESXi Keepalives


In my office, we are currently in the process of deploying ESXi hosts on drilling rigs. These are located in the middle of the ocean and use VSAT networks for communication.

The challenge was keeping those hosts connected to vCenter, which is located in our HQ, where the latency can reach up to 1.5 seconds.

ESX/ESXi hosts send heartbeats every 10 seconds, and vCenter Server has a window of 60 seconds to receive them. If no UDP heartbeat message is received within that window, vCenter Server treats the host as not responding.

To increase the timeout limit: 
  1. Open the C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\vpxd.cfg file (on Windows 2008, C:\ProgramData\VMware\VMware VirtualCenter\vpxd.cfg) using a text editor.

  2. Add this information in the <vpxd> tags:

    <heartbeat>
    <notRespondingTimeout>120</notRespondingTimeout>
    </heartbeat>

  3. Restart the VMware VirtualCenter Server service.
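
For clarity, the new element sits inside the existing <vpxd> element of vpxd.cfg; a minimal sketch of the resulting structure (other settings omitted) would look like this:

    <config>
      <vpxd>
        <!-- existing vpxd settings remain unchanged -->
        <heartbeat>
          <notRespondingTimeout>120</notRespondingTimeout>
        </heartbeat>
      </vpxd>
    </config>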

Sunday, 24 March 2013

vCloud Usage Meter - Introduction


As mentioned in the VSPP section, vCloud Usage Meter (UM) is the product used to monitor, measure, and report monthly points usage to the Aggregator. Let's examine it.

Information collected on an hourly basis includes:
  • Time at which data is collected
  • Host DNS name
  • Host RAM (physical memory)
  • vSphere license type
  • Virtual machine vCenter name
  • Virtual machine hostname
  • vRAM (allocated virtual memory)
  • Billing vRAM (calculation based on reserved virtual memory and per-virtual-machine memory cap)
  • Virtual machine CPU (count of virtual CPUs)
  • Virtual machine Instance UUID (universal unique identifier)
  • Virtual machine location in vSphere Inventory
Data collected is stored in the PostgreSQL database of the virtual appliance itself.

UM TCP Ports
Deploying UM

1. Download the UM OVA template (as of this post, UM v3.0.2 was the latest release, available from http://www.vmware.com/download/download.do?downloadGroup=UMSV3). 
2. Deploy the OVA template using vCenter or a host (during deployment, you need to configure the VA network settings). 
3. From the console, log in to the CLI using the default username/password (root/vmware). Change the root password using the passwd command.
4. If the network settings weren't properly configured during OVA deployment, you can run the following script from the console to reconfigure them: /opt/vmware/share/vami/vami_config_net
5. Next, you need to configure the Service Provider Web UI password. For this, run the script /opt/vmware/cloudusagemetering/scripts/webpass. 
6. Change the hostname to a user-friendly one using the script /opt/vmware/share/vami/vami_set_hostname <NAME>.
7. Configure your time settings from the UM console.
8. After configuring the hostname and time settings, restart the tomcat service (service tomcat restart) for the changes to take effect.
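
Putting steps 3 to 8 together, the console session looks roughly like this (um01 is a placeholder hostname):

  passwd                                            # set a new root password
  /opt/vmware/share/vami/vami_config_net            # reconfigure network settings if needed
  /opt/vmware/cloudusagemetering/scripts/webpass    # set the Service Provider Web UI password
  /opt/vmware/share/vami/vami_set_hostname um01     # give the appliance a friendly hostname
  service tomcat restart                            # restart tomcat so the changes take effect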

Once all the above steps are completed, log in to the UM Web UI using the URL https://<um-ip>:8443/um

Tuesday, 19 March 2013

VMware Service Provider Program (VSPP)


Before writing about vCloud Usage Meter, I thought I would give a brief overview of the VSPP program, which is strongly related to Usage Meter.

All the details throughout this post are based on the VSPP Product Guide (VMware Q4 2012).

This program is offered by VMware to license VMware products through a points-based model. It combines two components: points per product and points per reserved virtual RAM (vRAM).

If you subscribe to the VSPP program, a VM running in your cloud infrastructure with 5 GB of reserved vRAM can be charged 50 points. As another example, a vShield Edge VM with 3 GB of reserved vRAM can be charged 4 points for the product and 30 points for vRAM, i.e. the total for this VM will be 34 points.

How does this work?

VSPP program offers three Bundles as below:

- VMware vCloud Service Provider Bundle - Premier Plus Edition 
- VMware vCloud Service Provider Bundle - Premier Edition 
- VMware vCloud Service Provider Bundle - Standard Edition

Each bundle charges a different number of points per reserved vRAM per month. In addition, each bundle includes a different set of VMware products. Therefore, it's up to the service provider to subscribe to the suitable bundle based on their product needs.

Note: The minimum vRAM reservation to be charged is 50% of the VM's vRAM, even if no reservation is configured. The maximum reservation to be charged is 24 GB.

In addition, each VMware product will be charged some points per month independently of the bundle.

Points per product and points per reserved vRAM are summarized in the table below.
The products included in each bundle can be summarized as below:
Note that technical support is included with all plans.

For example, the total points for vFabric Suite Advanced are the points used for the infrastructure (based on one of the vCloud bundles) plus the points used for vFabric Suite Advanced itself.


The calculation for reserved vRAM points is straightforward, but it can be more complex if the reserved vRAM changes during the month. Therefore, VMware provides this formula to calculate points per month:

Net points = (vGB hours X points per 1 GB reserved RAM)/hours per month

Hours per Month can be:

28-day months = 672 hours
29-day months = 696 hours
30-day months = 720 hours
31-day months = 744 hours

Example:

During one 30-day calendar month, a Service Provider uses the vCloud Service Provider Bundle – Premier Edition to configure his or her virtual machine with 16 vGB for 15 days and 48 vGB for the remaining 15 days. The reservation level for the virtual machine is set at 75 percent for the entire month.

15 days x 24 hours x 16 vGB x 0.75 = 4,320 vGB hours
15 days x 24 hours x 24 vGB (48 vGB x 0.75 but capped at 24GB) = 8,640 vGB hours
Total vGB hours = 12,960 vGB hours
Total points = 12,960 vGB hours ÷ 720 hours/month x 7 points (for Premier) = 126 points
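
As a quick sanity check, the same calculation can be scripted; here is a minimal shell sketch using the values from the example above:

  points_per_gb=7        # Premier Edition points per 1 GB reserved vRAM
  hours_per_month=720    # 30-day month
  # first 15 days: 16 vGB at 75% reservation = 12 vGB
  # last 15 days: 48 vGB at 75% = 36 vGB, capped at 24 vGB
  vgb_hours=$(( 15*24*12 + 15*24*24 ))
  echo $(( vgb_hours * points_per_gb / hours_per_month ))   # prints 126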

All of the VSPP Products must be installed and used solely by the Service Provider on their owned or leased hardware and premises as part of a multi-tenant Hosted IT Service with the following exception:

- The Site Recovery Manager Protection licenses may be installed on a hosting customer’s premises as long as Service Provider controls all hardware and administration associated with the hosted environment.

There are multiple point packages per month offered by VMware Authorized Aggregators (360, 1,800, 3,600, 10,800, 18,000 or 30,000 points). You need to subscribe to the suitable one based on your estimated usage and plan.

Reporting

Each service provider should report the total reserved vRAM hours every month, as well as the products running in the cloud environment, to the aggregator. The aggregator will calculate the total points per month based on the above formula, and any excess points above the monthly package will be charged. Reporting to the aggregator can be done using the following methods:

- The vCloud Usage Meter is used to monitor the vCloud Service Provider Bundles, vCenter Operations Management Suite and vCloud Integration Manager, and it must be installed by the Service Provider to monitor and report usage information to their Aggregator. 
- Separate license keys must be identified by the vCloud Usage Meter in order to meter the Cloud Test Demonstration Environment to report usage information to their Aggregator. 
- SRM servers must be identified and linked to vCenter Servers in order to report on protected virtual machines. 
- All other VSPP Products must be manually reported to the Aggregator under the specific data collection process outlined by the Aggregator.

The total of these submissions will be used by the Aggregator to calculate the total point usage for the month.

Note: The Service Provider Agreement requires the Service Provider to keep records of all the points-reporting data mentioned above for a minimum of 3 years.

There are other benefits provided by VMware to VSPP partners. You may refer to this link for further details about the program.

Saturday, 16 March 2013

vCloud Connector 2.0 - Data Flow


After having your vCC environment ready with your clouds added, let's see how the data flow works. In this example, we will copy a VM from a vSphere cloud to a vCD cloud.

First, we will list the steps to copy a VM using the vCC UI.
Note: The target vCD cloud must have at least one catalog (published or unpublished). Without a catalog, the copy operation won't take place; we will see why in detail later.
Note: You need to select the target vApp network for the VM to be connected to. This vApp network will always be created in fenced mode.

Now let's see what's happening in the background, from both the vCC UI and the vSphere Client.
We will explain the vSphere Client output block by block.
1. Assuming the vCC Server isn't running SSL, the vCC UI (vSphere Client or vcloud.vmware.com) will send the copy request to the vCC Server on TCP port 80. 
2. The vCC Server will forward the request on TCP port 443 to the vCC Node (since SSL is enabled by default on the vCC Node). 
3. The vCC Node will send a request to the vCenter Server (over the vSphere API on TCP port 443) to initiate an "Export OVF Template" operation for the source VM. 
4. Accordingly, the vCenter Server will locate the VM, start the export task, and instruct the ESXi server hosting the VM that the template should be exported to the vCC Node (the path will be the vCC Node transfer storage). 

Here is a small hint about OVF templates.

OVF Template can be exported in two formats:

  1. .ovf: The template is made of multiple files, which are the components of the OVF.
  2. .ova: The template is made of one file, which combines all the components.

OVF Template is made of three components:

  1. Manifest file (.mf): This file contains SHA-1 digests of the other files and is used by vCenter to verify their integrity. This file can be protected by a certificate.
  2. OVF descriptor XML document (.ovf): This file contains information about the template such as the virtual hardware used, requirements, product details, licensing, etc.
  3. Virtual hard disk (.vmdk): This is the virtual hard disk that carries the actual data of the VM. OVF templates also support virtual hard disk formats other than the VMDK format used by VMware vSphere.
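
Since an .ova is essentially a tar archive of these components (the vCC-Node log excerpts below confirm this), you can inspect one from any Linux shell; the file name here is just an example:

  tar tvf TEST-VM.ova   # lists disk-0.vmdk, descriptor.ovf and descriptor.mf in this case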

Now let's see this operation from vCC-Node debugs (tail -f /opt/vmware/hcagent/logs/hca.log).

!!! … The vCC-Node received a request from the vCC-Server to start the copy operation. Although no connection request is listed, we can see that the request was received on the tomcat-http service, which is the communication channel between the vCC-Node and the vCC-Server, as mentioned in point 2. The connection between the vCC-Node and the vCC-Server is pre-established the moment the vCC-Node registers with the vCC-Server.

[tomcat-http--22] INFO  c.v.hc.agent.service.TaskScheduler - Enter scheduleJob()
[tomcat-http--22] INFO  c.v.hc.agent.service.TaskScheduler - Enter scheduleJob()
[tomcat-http--22] DEBUG c.v.hc.agent.service.TaskScheduler - TaskType copy
[tomcat-http--22] INFO  c.v.hc.agent.service.TaskScheduler - Exit scheduleJob()
[tomcat-http--22] INFO  c.v.hc.agent.service.TaskScheduler - Exit scheduleJob()
[tomcat-http--22] DEBUG c.vmware.hc.agent.business.task.Task - isCompleted = false

!!! ... The vCC-Node sent a request to the vCenter Server (the vCenter Server IP is 80.227.54.182) to start the Export OVF Template operation in response to the copy request.

[taskExecutor-4] DEBUG c.v.h.a.service.TaskProgressUpdater - job name: AgentGroup.copy.copy.4132570d-cb5e-4dcd-8aa9-812898536fcf
[taskExecutor-4] DEBUG c.vmware.hc.agent.business.task.Task - taskName copy, progressLabel Exporting to source node.
[taskExecutor-4] DEBUG c.vmware.hc.agent.business.task.Task - taskName copy, progressLabel Exporting to source node.
[taskExecutor-4] DEBUG c.v.hc.cloud.vsphere.VsphereUtils - ServiceInstance ServiceInstance:ServiceInstance @ https://80.227.54.182/sdk login with key 522661ed-fb14-e4c5-47ba-6be61ec99e27 to https://80.227.54.182/sdk.

!!! ... The vCenter Server responds to the request with the OVF template components. The first component is the VMDK file, which the vCC-Node will start downloading. It is important to note that the vCC-Node connects directly to the ESXi host to download the VMDK file from the datastore. Therefore, the vCC-Node should be able to reach the ESXi host over HTTPS. In our example, the ESXi host IP is 10.157.102.10. The details of the ESXi host IP and the VMDK file name/hash are provided to the vCC-Node by the vCenter Server.

[taskExecutor-4] DEBUG c.v.h.a.s.v.i.VsphereStorageAdaptorImpl - Total vmdk bytes to be written: 42949672960.
[taskExecutor-4] DEBUG c.v.h.a.s.v.i.VsphereStorageAdaptorImpl - Downloading https://10.157.102.10/nfc/5207b89e-13d3-d51a-9116-d7c19e76a242/disk-0.vmdk to disk-0.vmdk.
[taskExecutor-4] INFO  c.v.h.a.s.v.i.VsphereStorageAdaptorImpl - SHA1 digest for disk-0.vmdk : 96d36dba9aa974a83dc0eacf6a8bb3730af6029a
[taskExecutor-4] DEBUG c.v.h.a.s.v.i.VsphereStorageAdaptorImpl - Total vmdk bytes written: 72704.

!!! … The second component is the OVF descriptor, which will be copied from the vCenter Server.

[taskExecutor-4] DEBUG c.v.h.a.s.v.i.VsphereStorageAdaptorImpl - Writing ovf Descriptor: <?xml version="1.0" encoding="UTF-8"?>
<!--Generated by VMware VirtualCenter Server, User: EHDFCLOUD\baqari, UTC time: 2013-03-15T17:37:10.37785Z-->

… output omitted ...

[taskExecutor-4] DEBUG c.v.h.a.s.v.i.VsphereStorageAdaptorImpl - Adding descriptor.ovf to tar file.
[taskExecutor-4] DEBUG c.v.h.a.s.v.i.VsphereStorageAdaptorImpl - Finished adding descriptor.ovf to tar file.
[taskExecutor-4] DEBUG c.v.h.a.s.v.i.VsphereStorageAdaptorImpl - SHA1 digest for descriptor.ovf: f78f35757d37ba8caa973e70a500addb79aca7a7

!!! … The last component is the manifest, which is used for the integrity check of both the VMDK and descriptor files. In the earlier debug output, we noticed that a SHA1 value is calculated after each task (see the SHA1 digest lines for the VMDK file and the descriptor file). Each SHA1 value is now added to the manifest file.

[taskExecutor-4] DEBUG c.v.h.a.storage.StorageAdaptorUtil - Figuring out manifest file size...
[taskExecutor-4] DEBUG c.v.h.a.storage.StorageAdaptorUtil - Manifest file size: 121
[taskExecutor-4] DEBUG c.v.h.a.storage.StorageAdaptorUtil - Adding manifest to tar file: [ descriptor.mf ]...
[taskExecutor-4] DEBUG c.v.h.a.storage.StorageAdaptorUtil - SHA1(disk-0.vmdk)=96d36dba9aa974a83dc0eacf6a8bb3730af6029a
[taskExecutor-4] DEBUG c.v.h.a.storage.StorageAdaptorUtil - added hash for: disk-0.vmdk
[taskExecutor-4] DEBUG c.v.h.a.storage.StorageAdaptorUtil - SHA1(descriptor.ovf)=f78f35757d37ba8caa973e70a500addb79aca7a7
[taskExecutor-4] DEBUG c.v.h.a.storage.StorageAdaptorUtil - added hash for: descriptor.ovf
[taskExecutor-4] DEBUG c.v.h.a.storage.StorageAdaptorUtil - finished writing manifest to tar.
[taskExecutor-4] DEBUG c.v.h.a.storage.StorageAdaptorUtil - Finished adding manifest to tar file: [ descriptor.mf ].

!!! … We also noticed that the manifest and descriptor files were packaged into a tar file. This is because an OVA template is generated for the VM, i.e. a single file combining the VMDK, the OVF descriptor, and the manifest. This is the nature of OVA templates, as mentioned earlier.

!!! … Now the vCC-Node will log out from the vCenter Server, as the export operation is completed.

[taskExecutor-4] DEBUG c.v.hc.cloud.vsphere.VsphereUtils - ServiceInstance ServiceInstance:ServiceInstance @ https://80.227.54.182/sdk logged out from https://80.227.54.182/sdk.
[taskExecutor-4] INFO  c.v.h.a.s.v.i.VsphereStorageAdaptorImpl - Done exporting OVA archive.
[taskExecutor-4] INFO  com.vmware.hc.agent.jobs.CopyJob - Exported from source to archive file: 4132570d-cb5e-4dcd-8aa9-812898536fcf/TEST-VM.ova

5. The initial copy request sent from the vCC-Server to the vCC-Node includes copying the OVA template, after export, to the destination vCC-Node's transfer storage. Copying between vCC-Nodes uses checkpoint-restart, which allows the copy to resume if the transfer fails instead of restarting from zero.


!!! … In our example, the destination vCC-Node IP is 213.132.34.10

[taskExecutor-9] DEBUG c.v.hc.agent.impl.AgentAdaptorImpl - Connection to the agent: https://213.132.34.10/agent/api/v2/org/ehdf-private-cloud is expired. Creating new connection
[taskExecutor-9] DEBUG c.v.hc.agent.impl.AgentAdaptorImpl - Renewed RestTemplate for agent https://213.132.34.10/agent/api/v2/org/ehdf-private-cloud , the new expiry time is 1363430391942 Date: Sat Mar 16 14:39:51 GST 2013
[taskExecutor-9] DEBUG c.vmware.hc.agent.business.task.Task - taskName copy, progressLabel Transferring to destination node.
[taskExecutor-9] DEBUG c.vmware.hc.agent.business.task.Task - taskName copy, progressLabel Transferring to destination node.
[tomcat-http--9] DEBUG c.vmware.hc.agent.business.task.Task - isCompleted = false
[taskExecutor-9] DEBUG c.v.h.a.i.RemoteAgentStorageAdaptorImpl - Archive file transferred 1.0. Total bytes transferred: [81708, 81708]
[taskExecutor-9] DEBUG c.v.h.a.i.RemoteAgentStorageAdaptorImpl - Archive file transferred 1.0. Total bytes transferred: [81708, 81708]
[taskExecutor-9] INFO  com.vmware.hc.agent.jobs.CopyJob - Copied to destination node archive file: /817479d6-7094-446e-aceb-847924ad205b/Copy_of_TEST-VM

6. Once the copy is completed, the destination vCC-Node will connect to the vCD Server using the REST API to start importing the OVA template as a vApp Template in the Org Catalog. The next step is importing a vApp from the vApp Template. Both actions require background communication between the vCD Server and its associated vCenter Server, which we saw in the vSphere Client snapshot above. 

Note: This is why we said earlier that we need at least one Org Catalog for the copy to complete successfully.

[tomcat-http--26] DEBUG com.vmware.hc.vcd.VcdUtils - Updating vcloudclient version to V5_1.
[tomcat-http--26] DEBUG com.vmware.hc.vcd.VcdUtils - Created VcloudClient com.vmware.vcloud.sdk.VcloudClient@3ca1b989.
[tomcat-http--26] DEBUG com.vmware.hc.vcd.VcdUtils - VcloudClient com.vmware.vcloud.sdk.VcloudClient@3ca1b989 login with vcloudToken ZxbbBTI/A5GBWGv71Kh+529aWqMCnBpvRnQC13oHkYk= to https://vcd.ehdf.com.
[tomcat-http--26] DEBUG c.v.h.c.impl.CloudConnectionFactory - Created VcloudClient$$EnhancerByCGLIB$$841d9ec {}.
[tomcat-http--26] DEBUG com.vmware.hc.vcd.VcdUtils - VcloudClient com.vmware.vcloud.sdk.VcloudClient@3ca1b989 logged out from https://vcd.ehdf.com/api.
[taskExecutor-4] DEBUG c.v.h.a.service.TaskProgressUpdater - job name: AgentGroup.import.import.0668f520-3e3a-4502-a750-04810e09acf9
[taskExecutor-4] DEBUG c.v.h.a.service.TaskProgressUpdater - jobName import
[taskExecutor-4] DEBUG c.vmware.hc.agent.business.task.Task - taskName import, progressLabel The task has completed
[taskExecutor-4] DEBUG c.vmware.hc.agent.business.task.Task - taskName import, percent complete 100
[taskExecutor-4] DEBUG c.vmware.hc.agent.business.task.Task - taskName import, state COMPLETED
[taskExecutor-4] DEBUG c.vmware.hc.agent.business.task.Task - taskName import, completed true

Troubleshooting vCloud Connector

1. Test the vCC Server connections by logging on to the vCC Server.
2. Test the connection between the vCC Server and a vCloud Director cloud.
3. Test the connection between the vCC Server and a vCenter Server.
4. Test the connection between the vCC Server and a vCC Node.
5. If you are surfacing the vCloud Connector UI at vcloud.vmware.com, test the connection between the vCC Server and the website.
6. Test the vCC Node connections used in the copy path by first logging on to the vCC Node located in the vSphere internal cloud.
7. Test the connection between the vCC Node and the vCenter Server.
8. Test the connection between the vCC Node and the ESX host.
9. Test the connection between the vSphere vCC Node and a vCloud Director vCC Node outside the firewall.
10. Log on to the vCloud Director vCC Node.
11. Test the connection between the vCloud Director vCC Node and the vCloud Director cloud.
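
The exact test commands are not reproduced here. As a simple reachability check, something like the following can be run from the vCC Server or vCC Node console (the host names are placeholders, and curl is assumed to be available on the appliance):

  curl -k -I https://vcenter.example.com:443    # vCC Server/Node to vCenter Server
  curl -k -I https://vcc-node.example.com:443   # vCC Server to vCC Node
  curl -k -I https://vcd.example.com:443        # vCC Node to vCloud Director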

You can check the logs of the vCC-Server and vCC-Node using the GUI or the console. The CLI log paths are /opt/vmware/hcserver/logs/hcs.log (Server) and /opt/vmware/hcagent/logs/hca.log (Node). To restart the vCC-Node or vCC-Server service, use the command service tomcat-hcagent restart or service tomcat-hcserver restart respectively.

Thursday, 14 March 2013

Deploying vCloud Connector 2.0


vCloud Connector is a product that provides a single user interface to manage multiple clouds (public vCD cloud, private vCD cloud, vSphere cloud). This management includes moving content between clouds, powering VMs on/off, suspending/resuming VMs, transferring VMs/vApps/templates between clouds, checking performance, etc.

vCloud Connector Components

1. vCloud Connector UI 
2. vCloud Connector Server 
3. vCloud Connector Nodes
Building vCloud Connector Environment

For Cloud Service Providers

The typical deployment for service providers is one vCC Server and one vCC Node (utilizing the vCC multi-tenancy feature). In this case, the cloud provider admin will have visibility into all clouds (organizations) through a single node. Additionally, a vCC Node can be installed in the vSphere environment to move VMs between vCloud and vSphere.

For Cloud Clients

In this scenario, the client deploys a vCC Server which connects to the vCC Node deployed by the service provider. The vCC Node URL needs to be provided by the service provider so the client can configure the vCC Server, along with the organization name and credentials.

Note: Organization Credentials to be used by Client vCC Server should have 'Org Admin' privileges.

1. Install vCC Server. This can be done using two methods:


a. Install vCC Server using the vSphere Client. This is simply done by deploying the OVF template.
b. Install vCC Server using vCD (you need to have at least one organization in order to install vCC Server):

1. Add vCC Server to a vCD Catalog as a vApp Template
2. Create vCC Server from the Template

PS: Only one vCC Server is required to manage vSphere Cloud, vCD Cloud, or both.

As part of the vCC Server deployment from the template, you need to set up the vCC Server network settings. In case the IP settings are misconfigured, you can run the command /opt/vmware/share/vami/vami_config_net from the vCC Server console, which will start the network setup wizard.
PS: This command should be run as the root account. Otherwise, the script will fail due to insufficient privileges and the settings won't change.

2. Configure vCC Server.


You need to browse to the vCC Server web console using the URL https://#vCC-Server-IP#:5480. Log in with the default username (admin) and password (vmware).

From there, you need to do basic setup including:

a. Time Zone and Basic Network Settings (IP, DNS, Hostname, Proxy Servers, etc).

Note: Time-zone changes will be reflected in the logs after a reboot.

b. Change the password, configure licensing, manage vCC certificates (in case signed ones are imported), configure logging settings, and export logs.
c. Register the vCC Server with a vCenter Server (vSphere Client tab), with vcloud.vmware.com, or with both. If the vCC Server is registered with a vCenter Server, the vCC UI can be accessed using the vSphere Client. If it is registered with vcloud.vmware.com, the vCC UI can be accessed by browsing to http://vcloud.vmware.com.

PS: For registration with vcloud.vmware.com, vCC-Server should have internet reachability to http://vcloud.vmware.com.

Important

A vCC Server can register with one vCenter Server at a time. To register with another vCenter Server, unregister from the current one and register with the new one. The same applies to vcloud.vmware.com: a vCC Server can register with only one vcloud.vmware.com account at a time, and to register with another one you must unregister from the current one first.

On the other hand, only one vCC Server can register with a given vCenter Server at a time. To register a new vCC Server, select the Overwrite existing registration option while registering. However, multiple vCC Servers can register with vcloud.vmware.com using the same account; you can select the one you want to manage as shown below.
If the vCC Server is deployed behind a vSE or a physical firewall, the following communication should be allowed:

- TCP 443: For communication between vCC Server and Node and between Nodes. This port is used when SSL is enabled; when SSL is disabled, port 80 is used. 
- TCP 5480: For communication with the vCC Server Admin Web console, for example during the registration process with vcloud.vmware.com.

3. Install vCC Node

As mentioned earlier, a vCC Node is required in each entity to be managed by the vCC Server. This entity can be a vSphere environment, a vCD private cloud, or a vCD public cloud. In vCD environments, it's important to know that the vCC Node is multi-tenant aware.

The vCC Node can be installed in a similar way to the vCC Server, by deploying the vCC Node VA using either the vSphere Client or vCD. Again, the network settings should be configured during provisioning. If you need to re-run the network setup wizard, execute the command /opt/vmware/share/vami/vami_config_net from the vCC Node console using the root account.

 What is next?

After doing the basic setup, which is the same as was done for the vCC Server, here is what comes next.

a. Register the vCC-Node with its cloud. During this registration, the vCC-Node will poll all the required information about the associated cloud, which it will then manage. 

Connect to vCC-Node web console using the URL https://#vCC-Node-IP#:5480. Use the default username (admin) and password (vmware). Navigate to Node tab and configure as below.
b. Register the vCC-Node with the vCC-Server. After this registration, the vCC Server will send commands to the vCC Node, which will execute them on the associated cloud. 

Connect to vCC Server web console using the URL https://#vCC-Server-IP#:5480. Navigate to Nodes tab and configure the parameters as below.
PS: The Local Content Directory Node always appears by default. This node is for Content Sync. Do not edit this Node.

If you have a Node registered with a vCD cloud and you are trying to register this Node with the vCC Server, make sure that the REST API URL configured in vCD does not include "/cloud". Otherwise, the vCC-Node won't register with the vCC-Server, because it won't be able to connect to the vCD REST API to poll the cloud details and provide them to the vCC-Server (URL mismatch).
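
A quick way to confirm that a node can actually reach the vCD REST API with the configured base URL is to request the versions document, which needs no authentication (the host name is a placeholder, and curl is assumed to be available):

  curl -k https://vcd.example.com/api/versions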

c. Copy operations between clouds rely on temporary storage that resides on the vCC-Node before the actual data is moved to SAN/NFS. The default size of the vCC-Node transfer storage is 40 GB; you may increase it for heavier operations.
After increasing the transfer storage size, you need to connect to the vCC-Node CLI and run the following command (this is required for the new size to be reflected in the OS):

sudo /opt/vmware/hcagent/scripts/resize_disk.sh

d. Increase the number of concurrent activities from vCC-Node web console.
4. Add Clouds to vCC UI. This is the last step to start management of clouds using vCC.

a. Connect to vCC UI either through vSphere Client or http://vcloud.vmware.com.
b. Add a new cloud as shown below. The drop-down menu lists all the clouds registered with the vCC Server through their respective vCC Nodes. The vCC UI should be able to connect to the cloud IP, which can be the vCenter Server IP or the vCD Server IP, NOT the vCC Node.
c. The username/password should be valid on vCD or vCenter Server. The privileges of this account control which operations can be performed on the cloud from the vCC UI. For example, with the 'vApp User' privilege, the administrator can't perform a copy operation on a vCD cloud from the vCC UI (this privilege can't download a vApp template). To have full access, use the 'Organization Admin' privilege.

Friday, 25 January 2013

vCloud Director Service Down ... /opt Directory Full


Recently, my vCD service went down suddenly. After some troubleshooting, I identified that it was due to the /opt directory being full. Here I am briefly explaining the concept behind it.

Consider an example where you upload media to your cloud organization using vCD. This media will physically reside on the datastore of the OvDC that provides resources to this organization, and it will be counted against the storage quota of the organization. To be more precise, during the upload process the media is first written to the following path on the vCD server:

$VCLOUD_HOME/data/transfer

From this path, it is then written to the OvDC datastore.

For example:

[root@EHDF-VCLOUD-01 transfer]# du -sh /opt/vmware/vcloud-director/data/transfer/*
64M     /opt/vmware/vcloud-director/data/transfer/a73ea460-6be7-4c22-a4dd-1520c687da64    !!!... The cookie representing media name
4.0K    /opt/vmware/vcloud-director/data/transfer/cells
16K     /opt/vmware/vcloud-director/data/transfer/lost+found

Typically, $VCLOUD_HOME corresponds to /opt/vmware/vcloud-director. This can be verified by looking at the file /etc/profile.d/vcloud.sh:

[root@EHDF-VCLOUD-01 transfer]# cat  /etc/profile.d/vcloud.sh
export VCLOUD_HOME=/opt/vmware/vcloud-director
export VCLOUD_MAX_FD=65535

The problem is that vCD keeps the uploaded media in this path for 24 hours before quarantining it; it won't delete it immediately after writing it to the datastore. This is mentioned in the vCloud Director Installation and Configuration Guide:

"Uploads and downloads occupy this storage for a few hours to a day. Transferred images can be large, so allocate at least several hundred gigabytes to this volume"

Now, assume that your /opt directory is 12 GB in size and you are hosting 200 clouds (organizations), where 50 of them try to upload a 4 GB media file simultaneously. We are talking about 200 GB, which is much more than the /opt size.

This will cause the vCD service to fail, i.e. the whole cloud will be down (although it definitely won't impact running VMs).
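
A quick way to spot this condition on the cell is to check the file system usage and the size of the transfer directory, for example:

  df -h /opt
  du -sh /opt/vmware/vcloud-director/data/transfer/*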

In fact this vCD Transfer Storage is used for the following purposes:

1. Media Upload/Download 
2. Import/Upload vApp Templates

The proper solution for this is to mount the vCD transfer storage on an external NFS export or other shared storage to provide much more space. Here are the steps; a rough command sketch follows the list.

1. Add a new vDisk to the vCD server (say 100 GB) 
2. Log in to the vCD server using SSH 
3. Stop the vCD service


4. Verify the name of your new vDisk (the system already has sda and sdb, so the new one is sdc)
 
5. Format the new vDisk.
 
6. Create a file system on the new partition
 
7. Mount the new partition to vCD transfer storage
 
You can verify the mounting as follows:
 
8. Edit the /etc/fstab file and add a line for the new partition, so that the vCD server mounts it to the transfer storage at each boot.

9. Modify the permissions to allow vCD Service to write to the new location.
   
10. Start vCD Service
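
The screenshots with the exact commands are not reproduced here; a rough sketch of the whole sequence, assuming the new disk shows up as /dev/sdc and the default transfer path is used, would look like this:

  service vmware-vcd stop                                       # step 3: stop the vCD service
  fdisk -l                                                      # step 4: the new disk appears as /dev/sdc
  fdisk /dev/sdc                                                # step 5: create a partition, e.g. /dev/sdc1
  mkfs.ext3 /dev/sdc1                                           # step 6: create a file system on it
  mount /dev/sdc1 /opt/vmware/vcloud-director/data/transfer     # step 7: mount it on the transfer path
  df -h                                                         # verify the mount
  # step 8: add a line like this to /etc/fstab so the mount survives reboots
  #   /dev/sdc1  /opt/vmware/vcloud-director/data/transfer  ext3  defaults  0 0
  chown -R vcloud:vcloud /opt/vmware/vcloud-director/data/transfer   # step 9: let the vCD service write here
  service vmware-vcd start                                      # step 10: start the vCD service again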