VMware Data Protection with Data Domain and DD Boost

VMware now provides a capable data protection suite with most editions of vSphere, so a separate backup product is not always necessary.

VMware Data Protection can be integrated with EMC Data Domain, EMC DD Boost, and EMC Data Protection Advisor to provide a central, integrated backup solution for virtual environments. Data Domain systems can also replicate backup data over WANs.

VMware Data Protection provides application-aware backups: application-level backups for Microsoft Exchange, Microsoft SharePoint, and Microsoft SQL Server are performed using a lightweight in-guest agent that ensures application-consistent backups and provides granular recovery.

vSphere Data Protection has the capability to properly back up and restore Exchange Server, SQL Server, and SharePoint application databases. SQL Server clusters and Exchange Server database availability groups are also supported. A vSphere Data Protection application agent is installed in the guest OS of each virtual machine running these applications. It is also possible to install these agents on physical machines to protect Exchange Server, SQL Server, and SharePoint application databases. Agents enable application-consistent backup and recovery and provide support for other options such as full, differential, or incremental backups; multistream backups; and database log management.

EMC Data Domain Boost for Enterprise Applications integrates seamlessly with Oracle RMAN, Microsoft SQL Server, SAP, SAP HANA, and IBM DB2 to provide application owners and database administrators with complete control of their own backups, using their native application utilities. This empowers applications owners with the control they desire and eliminates storage silos for application protection.

With vSphere Data Protection, it is possible to restore individual files, folders, and directories within a virtual machine. An FLR operation is performed using a Web-based tool called vSphere Data Protection Restore Client. The process enables end users to conduct restores on their own, without the assistance of an administrator, by selecting a restore point and browsing the file system as it looked at the time the backup was taken. They locate the item(s) to be recovered, select a destination for the restored item(s), and start the recovery. The progress of the restore job can be monitored in vSphere Data Protection Restore Client.

Data Domain enables scale beyond the 8 TB limit imposed by a VMware Data Protection appliance.


Limitations of VMware Data Protection

In many respects this is a stronger solution than Veeam – see VMware's own discussion: http://blogs.vmware.com/virtualreality/2014/04/debunking-myths-vsphere-data-protection.html

Bill of Materials:

  • VMware Data Protection
  • Data Domain Appliances
  • EMC DD Boost
  • EMC Data Protection Advisor
  • EMC Data Domain Management Center
  • EMC Data Domain Extended Retention
  • EMC Data Domain Replicator
  • Riverbed WAN Optimization
  • EMC Data Domain Retention Lock
  • EMC SourceOne

HowTo: Install Nested ESXi 5.5


  1. Enable Promiscuous Mode on the vSwitch
  2. Edit ESXi : echo 'vhv.allow = "TRUE"' >> /etc/vmware/config
  3. Use the Web Client to Create a VM:
    • 2 vCPUS
    • 16 GB RAM
    • 4 GB HD (LSI Logic Parallel)
    • 2 NICS (Driver E1000)
    • Options: CPU/MMU Virtualization (select the fourth option)
    • Guest OS : Other / VMware ESXi 5.x
    • Hardware virtualization : Expose hardware assisted virtualization to the guest OS

    • To enable virtualized HV, use the web client and navigate to the processor settings screen. Check the box next to “Expose hardware-assisted virtualization to the guest operating system.” This setting is not available under the traditional C# client.

  4. Edit the .vmx and echo 'vhv.enable = "TRUE"' >> *.vmx
  5. Check all the settings if you get the following error at ESXi install: <HARDWARE_VIRTUALIZATION WARNING: Hardware Virtualization is not a feature of the CPU, or is not enabled in the BIOS>
  6. update ESXi
  7. Install VMware Tools VIB
    • esxcli system maintenanceMode set -e true
      esxcli software vib install -v /vmfs/volumes/[VMFS-VOLUME-NAME]/esx-tools-for-esxi-9.7.0-0.0.00000.i386.vib -f
      esxcli system shutdown reboot -r "Installed VMware Tools"
    • esxcli network firewall ruleset set -e true -r httpClient
      esxcli software vib install -v http://download3.vmware.com/software/vmw-tools/esxi_tools_for_guests/esx-tools-for-esxi-9.7.0-0.0.00000.i386.vib -f
  8. Inject the VMware Tools VIBs into the ISO
  9. Clone to Template
    1. esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1
    2. Open esx.conf and delete the entire /system/uuid line
    3. Run /sbin/auto-backup.sh
    4. esxcli storage vmfs snapshot resignature -l [VMFS-VOLUME]
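The two vhv settings from steps 2 and 4 can be sketched as a small shell fragment. To keep it safe to run anywhere, the lines are appended to a scratch file here rather than the live /etc/vmware/config and the nested VM's .vmx:

```shell
#!/bin/sh
# Sketch of the two nested-virtualization settings from steps 2 and 4.
# On a real host the first line is appended to /etc/vmware/config and
# the second to the nested VM's .vmx file; a scratch file is used here
# so the snippet can be run safely on any machine.
cfg=/tmp/nested-esxi-settings.txt
: > "$cfg"                              # start with an empty file
echo 'vhv.allow = "TRUE"'  >> "$cfg"    # host-wide: allow virtualized HV
echo 'vhv.enable = "TRUE"' >> "$cfg"    # per-VM: enable virtualized HV
grep '^vhv\.' "$cfg"
```

Note that both settings append; re-running the steps blindly will duplicate the lines, so check the target files first.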




VMware SDDC Networking Features


Here is a quick summary of some recent features for VMware vSphere networking and multi-tenant security.


Cisco Nexus 1000V provides L2 functionality and integrates fabric-level network management seamlessly into the virtual layer for north-south Layer 2/3 communication. All compatible Cisco Nexus switches can be managed from the same VSM, which makes it easy for the existing networking team to manage the networking inside vSphere. VMware NSX provides granular, policy-based L4-L7 networking features integrated into the hypervisor to secure and manage east-west communication.

  • VMware vDS
  • Cisco Nexus 1000V and Cisco Nexus 1010
  • NSX
  • Network I/O Control (NIOC)
  • vShield



VM Disk Alignment scripts






Reference :- http://www.vmware.com/files/pdf/techpaper/Storage_Protocol_Comparison.pdf

FCoE ‘versus’ iSCSI – The Mystery is Solved

Reference :- http://blogs.cisco.com/datacenter/fcoe-versus-iscsi-the-mystery-is-solved/


How to Resize VDI Virtual Disks with PowerCLI Script

To resize virtual disks across multiple virtual machines, a scripted process is recommended: it makes the job quicker and more manageable.

This document is based on the following technology:

  • VMware ESX 4.1.0
  • Windows 7 Enterprise 32-bit SP0 Virtual Machine

If a different version is used, the manual and scripted processes might need to be adjusted, as some PowerCLI functions might have been deprecated.

Tools for Scripted Process

The script has been created to make the whole process quicker and less error-prone. It is developed with VMware vSphere PowerCLI, Sysinternals PsExec, and DiskPart.

There are two major steps to resizing a disk in a virtual machine:

  1. Resize the virtual disk of the Virtual Machine
  2. Extend the Guest Operating Systems’ volume

These two steps are explained in more detail in the sections below.

Resize the virtual disk of the Virtual Machine

Manually resizing disks across many virtual machines (potentially hundreds or thousands) is error-prone, not to mention time-consuming.
A script has been created for this particular task. It is based on VMware vSphere PowerCLI and Sysinternals tools: PowerCLI calls the vSphere Web Services API to resize the virtual disk on each virtual machine, and the Sysinternals PsExec utility executes DiskPart remotely on each virtual machine.

Script Requirements

To be able to execute the script, the following file(s) and application are required:

  1. vSphere PowerCLI installed on the machine that is going to be used to run the PowerCLI script
  2. PsExec.exe file, downloaded from microsoft.com
  3. A text file containing one computer name per line
  4. A text file containing the list of commands for DiskPart

Extend the Guest Operating Systems’ volume

PsExec.exe @C:\Temp\Computers.txt -u DOMAIN\Username -h diskpart /s \\computer\share\Diskpart.txt

list volume

Select Volume 2
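Putting these lines together, the Diskpart.txt script referenced above could look like the following. The volume number is an assumption; confirm it with list volume on a representative VM first:

```
rem Diskpart.txt - run non-interactively via diskpart /s
rescan
list volume
select volume 2
extend
```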

vSphere PowerCLI VDI-Extend.PS1

#Get the vCenter Server Name
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.VisualBasic') | Out-Null
[System.Reflection.Assembly]::LoadWithPartialName('System.Windows.Forms') | Out-Null
$vC = [Microsoft.VisualBasic.Interaction]::InputBox("Enter the vCenter computer name", "Computer", "$env:computername")

#Connect to vCenter
Connect-VIServer -Server $vC

#Prompt File Function
function PromptFor-File {
    param(
        [String] $Type = "Open",
        [String] $Title = "Select Computer File (One Computer Name per Line)",
        [String] $Filename = $null,
        [String[]] $FileTypes,
        [switch] $RestoreDirectory,
        [IO.DirectoryInfo] $InitialDirectory = $null
    )

    #Build the file-type filter string
    if ($FileTypes) {
        $FileTypes | ForEach-Object { $filter += $_.ToUpper() + " Files|*.$_|" }
        $filter = $filter.TrimEnd("|")
    }
    else {
        $filter = "All Files|*.*"
    }

    #Open or Save dialog
    switch ($Type) {
        "Open" {
            $dialog = New-Object System.Windows.Forms.OpenFileDialog
            $dialog.Multiselect = $false
        }
        "Save" {
            $dialog = New-Object System.Windows.Forms.SaveFileDialog
        }
    }

    $dialog.FileName = $Filename
    $dialog.Title = $Title
    $dialog.Filter = $filter
    $dialog.RestoreDirectory = $RestoreDirectory
    if ($InitialDirectory) { $dialog.InitialDirectory = $InitialDirectory.FullName }
    $dialog.ShowHelp = $true

    if ($dialog.ShowDialog() -eq [System.Windows.Forms.DialogResult]::OK) {
        return $dialog.FileName
    }
    return $null
}

#File Content
$file = PromptFor-File
$content = Get-Content $file

#Get Hard Disk Name
$hdd = [Microsoft.VisualBasic.Interaction]::InputBox("Enter the VM Disk Name", "VM Disk Name", "Hard disk 1")

#Get Hard Disk Size
$hddsGb = [Microsoft.VisualBasic.Interaction]::InputBox("Enter the Disk Size in GB", "VM Disk Size", "")
$hddsKb = [int]$hddsGb * 1024 * 1024

foreach ($c in $content) {
    #Extend the vmdk file; warn if the VM or disk is not found
    $disk = Get-VM -Name $c -ErrorAction SilentlyContinue | Get-HardDisk | Where-Object { $_.Name -eq $hdd }
    if ($disk) {
        $disk | Set-HardDisk -CapacityKB $hddsKb -Confirm:$false
    }
    else {
        Write-Host ("This VM: " + $c + " is not recognized") -ForegroundColor Red
    }
}
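As a quick sanity check of the CapacityKB arithmetic in the script (40 GB is just an example size):

```shell
#!/bin/sh
# GB -> KB is two multiplications by 1024 (GB -> MB -> KB);
# this is the value Set-HardDisk -CapacityKB expects for a 40 GB disk.
gb=40
kb=$((gb * 1024 * 1024))
echo "$kb"    # prints 41943040
```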

HP BladeSystem – EFUSE

The term e-Fuse, e-fuse, Efuse or EFUSE is used when a Blade system has to be reset virtually (without physically reseating the server).

An e-fuse reset causes the server blade to lose power momentarily as the e-fuse is tripped and reset.
CAUTION: Use this command with caution. This will result in any activity on the server operating system being lost.
To perform an e-fuse reset:

  1. Telnet to OA IP Address
  2. Login to Onboard Administrator with Administrator privileges using the OA CLI.
  3. Enter the command RESET SERVER X where [X = bay number].
  4. Confirm that you want to reset the server blade when prompted.

This will reset/restart the blade server in that particular bay.


Don’t be Scared 🙂

(Screenshot: Command Prompt - MetaFrame Presentation Server Client)

Installing Intel NIC Drivers on VMware ESXi for System x3650 M3


This procedure describes how to set up the base networking configuration for ESXi 4.1 on IBM System x3650 M3 servers:

 This is a high-level overview and reference only, not a step-by-step guide. You will need prior VMware experience to use this guide, and it should be used together with the ESXi build guides and detailed design.

Installing Intel NIC Drivers on VMware ESXi for System x3650 M3:-

Known Issue:-

 After installing VMware ESXi 4.x on IBM System x3650 M3 servers, ESXi does not detect the Intel quad-port or dual-port NIC (e.g. Intel Ethernet Dual Port Server Adapter I340-T2 for IBM System x, 49Y4230) – http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=migr-5089106

  • Download the IBM customised ESXi (.iso)


  • Download the appropriate driver (.iso) from VMware.com only (unless otherwise indicated):-



  • Download the latest ESX update: – http://www.vmware.com/patchmgr/download.portal
  • Use existing ESXi Base Build Guides and install ESXi.
  • Enable Local Tech support mode in ESXi and use ALT-F1 and ALT-F2 to Enter and Exit local console:


  • Insert NIC Driver CD or Mount NIC Driver ISO via Remote RSA/IMM (iLo/DRAC)
  • Issue the following command to ensure CD-ROM is detected:

-       ls -lash /dev/cdrom

-       esxcfg-mpath -l

-       esxcfg-mpath -b | grep "CD-ROM"

-       (You should see an mpx.vmhbaXX:C0:T0:L0-type filename (note the vmhba#) & a vml.-type filename)

  • If CD-ROM is detected, issue the following commands to load the iso9960 module and mount the CD-ROM:

-       vmkload_mod iso9660

-       vim-cmd hostsvc/maintenance_mode_enter

-       vsish -e set /vmkModules/iso9660/mount mpx.vmhba33:C0:T0:L0

-       vim-cmd hostsvc/maintenance_mode_exit

-       vim-cmd hostsvc/hostsummary|grep -i maintenance

-       (Note: the vmhba number (33 here) may differ; it is taken from the /dev/cdrom directory listing in the previous command.)

  • The CD-ROM should be mounted and available here:
-       ls -lash /vmfs/volumes/CDROM
  • Change to the offline bundle (.zip file) directory
-       cd /vmfs/volumes/CDROM/OFFLINE_ (or something similar)
  • Place the host into Maintenance Mode:
-       vim-cmd hostsvc/maintenance_mode_enter
  • Execute bundle/update:
-       esxupdate --bundle NameOfBundle.zip update
  • Issue the following command to unmount the CD-ROM:
-       vsish -e set /vmkModules/iso9660/umount mpx.vmhba33:C0:T0:L0
  • Issue the following command to fix incompatibility with ESXi 4.1 and IBM x3650 M3



-       esxcfg-advcfg -k TRUE iovDisableIR

-       esxcfg-info -c

  • Reboot the ESXi Server
  • Verify the NICs are visible to ESXi and document the wiring information to the switch ports:

-       cat /etc/vmware/esx.conf | grep "net"

-       esxcfg-nics -l

-       ethtool -i vmnic0

-       esxcfg-vswitch -l

-       ethtool -p vmnic0 10 (Start Blinking NIC LEDs for 10 seconds)

-       vm-support

-       Check /bootbank/oem/oem.tx

-       esxcli network nic up/down -n vmnicX

  • Configure IP addresses for the Management NIC and select the correct NICs:
  • LACP Configuration notes:

The switch ports will typically already be configured for LACP or EtherChannel with a load-balancing policy such as IP hash.

It is not possible to configure all required details for LACP via the command line or ESXi console GUI.

You will need to configure a management IP address on the Management NIC first and then connect to this IP via vSphere Client GUI to complete LACP and Etherchannel settings.

You can also configure an IP address on one of the NICs configured for LACP and VLAN. (Don’t select all LACP NICs, only a single NIC)

Then connect to this IP via vSphere Client GUI to complete the Networking configuration as per Build and Detailed Design.
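As a sketch of the console-side checks before moving to the GUI (the vSwitch and vmnic names below are illustrative assumptions; the IP-hash policy itself is then set through the vSphere Client as described above):

```
esxcfg-vswitch -l                    # list vSwitches and their current uplinks
esxcfg-vswitch -L vmnic1 vSwitch0    # link an additional uplink NIC to vSwitch0
```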

  • Document the Physical Port wiring:

IBM DSA on VMware ESXi

Open Port 5989
ibm_utl_dsa_dsyt85t-3.40_portable_windows_x86-64.exe -vmwesxi user:password@ip-address -v -c -d c:\temp\dsa
-f -v -f --ffdc
  • VMware portable command:
Transfer the file to /tmp via scp/SSH, WinSCP, or USB:
cp /mnt/key/ibm_utl_dsa_212p_rhel3_i386.bin /tmp
chmod +x ibm_utl_dsa_212p_rhel3_i386.bin