Category Archives: VMware


Configuring VMware Capacity Planner for IP subnet discovery (2014366)

I had trouble discovering servers on a different subnet. After a little digging, I found the VMware KB article below, which did the trick.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2014366

 

After installing the VMware Capacity Planner Datamanager and configuring the system to receive new systems, you can perform these steps to discover systems on an IP subnet:

  1. Open Capacity Planner Data Manager.
  2. Navigate to Admin > Options.
  3. Go to Modules > Discover > Settings.
  4. Deselect the Discover Groups / Domains option.
  5. Under the Node Discovery tab, deselect LAN Manager Systems and Active Directory Systems.
  6. Select DNS Names and IP Address.
  7. Under the System Discovery tab, select the Test Connection to system during discovery option.
  8. Under the IP Subnets tab, click Add.
  9. Add the appropriate Subnet Range by Class or CIDR Notation.
  10. Click Apply.
  11. Click Tasks > Run Manual Tasks > Run Discover Task. The IP subnets that have been configured are pinged and systems are discovered.
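Conceptually, the Discover task expands the subnet range you added in step 9 into individual host addresses and pings each one. To sanity-check how many addresses a given CIDR range actually covers before adding it (the subnet below is just an example), Python's standard ipaddress module can enumerate the same usable hosts:

```python
import ipaddress

# Hypothetical range; substitute whatever you added under the IP Subnets tab.
subnet = ipaddress.ip_network("192.168.10.0/28")

# hosts() yields the usable addresses (network and broadcast excluded),
# i.e. the addresses a discovery sweep would ping.
hosts = [str(h) for h in subnet.hosts()]
print(len(hosts))           # 14 usable addresses in a /28
print(hosts[0], hosts[-1])  # 192.168.10.1 192.168.10.14
```

A /24 expands to 254 usable addresses the same way, so large ranges can make the discover task take noticeably longer.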

PowerShell Issues

I was recently working on a script to send an SMTP email reporting on VMs that have less than a certain amount of free guest disk space left.

It turns out that to use the ‘Connect-VIServer’ cmdlet in a PowerShell script, you have to follow two steps:

1) Install VMware PowerCLI on your machine. It registers itself with PowerShell during installation.

2) Use ‘Add-PSSnapIn VMware.VimAutomation.Core’ to add the registered PowerShell VMware Snapin to the current session.

So essentially, the first line of your script should be the ‘Add-PSSnapIn VMware.VimAutomation.Core’ entry; otherwise, native Microsoft PowerShell will fail due to unknown commands.

For those interested, the script looked like the below.

Add-PSSnapIn VMware.VimAutomation.Core

Connect-VIServer <servername>

# Report on guests with less than this much free disk space
$threshold = 10GB

$report = Get-VM | where { $_.PowerState -eq "PoweredOn" -and $_.Guest } | Get-VMGuest | %{
    $vm = $_
    $_.Disks | where { $_.FreeSpace -le $threshold } | `
        select @{N="VMName";E={$vm.VMName}},Path,@{N="DiskFreeGB";E={[math]::Round((($_.FreeSpace)/1GB),2)}}
}

$emailFrom = <from-email-addr>
$emailTo = <to-email-addr>
$subject = "Free disk space less than " + ($threshold/1GB) + "GB"
$body = $report | ft -AutoSize | Out-String
$smtpServer = <smtp-server>
$smtp = New-Object Net.Mail.SmtpClient($smtpServer)
$smtp.Send($emailFrom, $emailTo, $subject, $body)
A big thank you to LucD on the VMware Communities Forum.

Cannot view the inventory in the vSphere Web Client?

I recently encountered an issue with the vCenter Web Client not displaying any inventory from the vCenter Server.

The vSphere Client was displaying the inventory correctly: I could see the datacentre, cluster, hosts, and VMs, but the web interface saw none of this. I knew the permissions were 100% correct, as I could see the vCenter Server itself.

VMware Knowledge Base Article:

Empty Folder(s) in Datastore after Replication job runs?

This behavior occurs when Veeam is configured with a different target datastore than the one the replica VM currently resides on. Veeam creates a folder on that datastore to hold the data because it does not see anything there. Once it goes to write the data and work with the replica, it finds the correct location via the replica VM's reference ID.

The solution to this is to verify the datastore location in VMware, and then edit the replication job to match this. After doing this, you can delete the empty folders, though it is highly advised to check each folder individually before deletion to verify they are in fact all empty.

source: http://www.veeam.com/kb1793

vSphere 4.x Support

vSphere 4.x moves from the General Support phase into the Technical Guidance phase on May 21, 2014. During Technical Guidance, customers receive support but VMware does not offer new security patches or bug fixes. For information on what is provided during Technical Guidance, see the VMware support lifecycle policy. VMware has developed a vSphere 4.x Extended Support offering for customers who require triage of Severity 1 issues and new security fixes after May 21, 2014.

So your production VMFS LUNs appear in the disk management snap-in of the Veeam Backup & Replication server?

While the drive path is visible in Windows, do not try to initialize or format these LUNs within the disk management snap-in, as this could corrupt or overwrite data stored on the VMFS LUNs. Further, note that Veeam Backup & Replication v5 and above will automatically disable the Windows automount feature. Automount automatically mounts and assigns configuration to newly connected volumes. If you add a VMFS datastore to the Veeam Backup & Replication server with automount enabled, the operating system may initialize and resignature the volume, making it unrecognizable by the ESX(i) hosts.

Having these LUNs visible within disk management lets you confirm that all of the required LUNs are available to Veeam Backup & Replication, including the target and LUN IDs as presented by the storage processor. Conversely, if not all VMFS LUNs are visible, there may be a zoning issue.

Direct SAN access processing mode allows Veeam Backup & Replication to communicate directly with the storage for the highest backup job performance. Further, if the backup target supports iSCSI or Fibre Channel, direct SAN access mode also enables a completely LAN-free backup implementation.

Source: http://www.veeam.com/blog/using-the-iscsi-initiator-within-veeam-backup-replication-in-a-vm.html

http://www.jpaul.me/?p=334

VMFS LUN sizing: which is best?

VMFS LUN sizing notes
One LUN per VM
You get the best performance with a 1 VM to 1 LUN/VMFS mapping: there is no competition between machines on the same VMFS, each load is separated, and all is good.
The problem is that you end up managing an ungodly number of LUNs, may hit supported maximum limits, face headaches with VMFS resizing and migration, have underutilized resources (those few percentage points of free space on each VMFS add up), and generally create a thing that is not nice to manage.
One LUN for all VMs
The other extreme is one big VMFS designated to host everything. You get the best resource utilization that way; there is no problem deciding what to deploy where, and no issue of VMFS X being a hot spot while VMFS Y is idling. Maintenance is a problem, though, since bringing down the one and only LUN means taking down the only storage available to the VMware environment; all the eggs are in one basket, so to speak.
 
The accepted practice is to create datastores large enough to host a number of VMs, dividing the available storage space into appropriately sized chunks. The right number of VMs depends on their nature: you may want only one or two critical production databases on a VMFS, but allow three or four dozen test and development machines onto the same datastore.
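To make the middle-ground arithmetic concrete, here is a small sketch (the function name and all numbers are hypothetical, not from any VMware guideline) of how many VMs of a given size fit on a shared datastore while reserving free-space headroom for snapshots, VM swap, and logs:

```python
def vms_per_datastore(datastore_gb, vm_gb, headroom_pct=20):
    """Number of VMs of a given size that fit on a datastore
    while reserving a percentage of free-space headroom
    (snapshots, VM swap files, logs)."""
    usable_gb = datastore_gb * (1 - headroom_pct / 100)
    return int(usable_gb // vm_gb)

# A 2 TB datastore with 100 GB VMs and 20% headroom:
print(vms_per_datastore(2048, 100))  # 16
```

The same sixteen VMs on sixteen dedicated LUNs would strand that headroom on every LUN instead of pooling it, which is exactly the underutilization the one-LUN-per-VM approach suffers from.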

VMware Tools for ESXi

http://labs.vmware.com/flings/vmware-tools-for-nested-esxi

Finally, VMware Tools is available for VMware ESXi 5.0 onwards. This is great for those running nested ESXi.

This solves one of the big problems with nested ESXi: automatic guest shutdown requires VMware Tools, and since no Tools package existed for ESXi, this basic task used to be a painful, multi-step process. But not anymore! This new VMware fling delivers what many of us have been waiting for: VMware Tools for Nested ESXi (requires ESXi 5.0 or later).

This VIB package provides a VMware Tools service (vmtoolsd) for running inside a nested ESXi virtual machine. The following capabilities are exposed through VMware Tools:

  • Provides guest OS information about the nested ESXi hypervisor (e.g. IP address, configured hostname, etc.).
  • Allows the nested ESXi VM to be cleanly shut down or restarted when performing power operations with the vSphere Web/C# Client or vSphere APIs.
  • Executes scripts that help automate ESXi guest OS operations when the guest’s power state changes.
  • Supports the Guest Operations API (formerly known as the VIX API).