Wednesday, December 7, 2016

vCenter High Availability Illustrated

vCenter High Availability is a new feature of vSphere 6.5. It only works with the Linux-based vCenter Server Appliance. By the time you are done, you end up with an active node, a passive node and a witness node. In 6.5 the RTO is about 5 minutes, though it varies depending on load and hardware. File-level replication is done through Linux rsync, while native PostgreSQL replication handles the VCDB and VUMDB databases. SSH needs to be enabled on the vCenter prior to implementing vCenter HA.
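Once configured, you can check the cluster state programmatically. Below is a minimal PowerCLI sketch; the server name is an example, and the FailoverClusterManager view with its GetClusterMode/GetVchaClusterHealth methods is my reading of the 6.5 API reference, so treat those names as assumptions:

  Connect-VIServer -Server vcenter01.lab.local
  # vSphere 6.5 exposes vCenter HA through the FailoverClusterManager managed object
  $si   = Get-View ServiceInstance
  $vcha = Get-View $si.Content.FailoverClusterManager
  $vcha.GetClusterMode()        # expect "enabled" once vCenter HA is configured
  $vcha.GetVchaClusterHealth()  # overall health plus per-node runtime information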

Here is the idea illustrated:


Here are the configuration steps:

Step 1: Create a vCenter Server and a separate external Platform Services Controller (PSC). This is a must; they can't be a single VM.


Step 2: Select your vCenter, go to Configure and select vCenter HA.


Step 3: Click on Configure on the upper right corner.


Step 4: Select the heartbeat IP address for the active vCenter.


Step 5: Specify the IP addresses of the passive and witness nodes.


Step 6: Review the information and click on Next.


Step 7: Click on Finish and wait.


Step 8: Monitor the recent tasks. This deployment will take a while.


Step 9: Verify the settings of the passive vCenter appliance.


Tuesday, December 6, 2016

vSphere Auto Deploy 6.5 (Web Client Steps)

Testing the new Auto Deploy GUI with vSphere 6.5

Step 1: Log into the vCenter and go to the Home page. Notice the plugin is not showing.


Step 2: On the left side, click on System Configuration.


Step 3: On the left side, click on Services.


Step 4: Right click on the Auto Deploy service and enable/start it.


Step 5: Log out and log in again. The plugin now appears.


Step 6: Click on the Auto Deploy icon.


Step 7: Go to the Software Depot tab.


Step 8: Click on the Up Arrow icon to create an offline depot. Name it and point it to the ZIP file downloaded from the VMware site. Click on Upload.


Step 9: Create a custom depot. Give it a name.


Step 10: Select the original depot and, under Image Profiles, click on Clone.


Step 11: Name the clone, specify the vendor and point to your custom depot.


Step 12: Verify that your custom depot is using the cloned image profile.


Step 13: Go to Image Profiles, select the new one and export it to a zip file by clicking on the Down Arrow icon.


Step 14: Click on Generate Image and wait a few seconds.


Step 15: Now download the new image and click on close.


Step 16: Click on the Deploy Rules tab. Notice you have no rules yet.


Step 17: Click on New Deploy Rule. Name your rule. This rule could be for all your hosts (All hosts) or for individual hosts. In this case, it is for a future ESXi host with a particular IP.


Step 18: Select the Image Profile to use and click on Next.


Step 19: Bind this rule to a particular Host Profile.


Step 20: Verify and click on Finish.


Step 21: Congratulations, you just created your first rule. Notice the rule is inactive.


Step 22: Click on Activate/Deactivate Rules.


Step 23: Select the rule, click on activate. Then click on Next and Finish.


Step 24: Wait a few seconds and verify that the rule was activated.


Final Steps: Now configure a DHCP and TFTP server and test your new host. The whole depot-clone-rule workflow above can also be scripted; see the sketch below.
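For reference, the same workflow can be driven with the ImageBuilder and AutoDeploy PowerCLI cmdlets. This is just a sketch: the depot path, profile names, vendor and IP pattern below are made-up examples, not the values from the screenshots.

  Add-EsxSoftwareDepot C:\depots\ESXi650-offline-bundle.zip       # Step 8
  New-EsxImageProfile -CloneProfile "ESXi-6.5.0-standard" `
      -Name "ESXi65-Custom" -Vendor "HomeLab"                     # Steps 10-11
  New-DeployRule -Name "Lab-Host" `
      -Item "ESXi65-Custom", "MyHostProfile" `
      -Pattern "ipv4=10.1.1.50"                                   # Steps 17-19
  Add-DeployRule -DeployRule "Lab-Host"                           # activates it (Steps 22-23)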

Sunday, November 27, 2016

Powercli 6.5 and Virtual SAN

Storage Module Updates

The PowerCLI Storage module has been a big focus of this release. A lot of functionality has been added around vSAN, VVols, and the handling of virtual disks. The vSAN cmdlets have grown to more than a dozen, covering the entire lifecycle of a vSAN cluster. The entire vSAN cluster creation process can be automated with PowerCLI, as can running tests, updating the HCL database, and much more (see the sketch after the list below):
  • Get-VsanClusterConfiguration
  • Get-VsanDisk
  • Get-VsanDiskGroup
  • Get-VsanFaultDomain
  • Get-VsanResyncingComponent
  • Get-VsanSpaceUsage
  • New-VsanDisk
  • New-VsanDiskGroup
  • New-VsanFaultDomain
  • Remove-VsanDisk
  • Remove-VsanDiskGroup
  • Remove-VsanFaultDomain
  • Set-VsanClusterConfiguration
  • Set-VsanFaultDomain
  • Test-VsanClusterHealth
  • Test-VsanNetworkPerformance
  • Test-VsanStoragePerformance
  • Test-VsanVMCreation
  • Update-VsanHclDatabase
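As a quick example, here is a hedged sketch that inspects an existing cluster; the cluster name is made up and the parameter usage is my reading of the cmdlet help:

  $cluster = Get-Cluster -Name "VSAN-Cluster"
  Get-VsanClusterConfiguration -Cluster $cluster    # vSAN enabled? claim mode?
  Get-VsanSpaceUsage -Cluster $cluster              # capacity and usage breakdown
  Get-VsanDiskGroup -Cluster $cluster | Get-VsanDisk
  Test-VsanClusterHealth -Cluster $cluster          # same checks as the Health UI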
vSphere 6.5 introduces a new way to handle the management of virtual disks. Instead of managing a VM’s hard disks through the VM, they can now be managed independently with new PowerCLI cmdlets. This allows the handling of a virtual disk’s lifecycle to be decoupled from the lifecycle of a VM. This adds a ton of flexibility! A short sketch follows below.
  • Copy-VDisk
  • Get-VDisk
  • Move-VDisk
  • New-VDisk
  • Remove-VDisk
  • Set-VDisk
From: http://blogs.vmware.com/PowerCLI/2016/11/new-release-powercli-6-5-r1.html
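To illustrate the decoupled lifecycle described above, here is a short sketch of my own; the datastore and disk names are examples, and the parameter names are assumptions based on the cmdlet help:

  $ds   = Get-Datastore -Name "vsanDatastore"
  $disk = New-VDisk -Name "shared-data" -CapacityGB 20 -Datastore $ds
  Get-VDisk -Datastore $ds                 # the disk exists with no VM attached
  Set-VDisk -VDisk $disk -CapacityGB 40    # grow it independently of any VM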

Friday, November 25, 2016

Virtual SAN and iSCSI

How to configure iSCSI LUNs with Virtual SAN 6.5

Step 1: Connect to your vSAN cluster, select the cluster, go to Configure and then iSCSI Targets. The iSCSI service is disabled by default.


Step 2: Click on Edit and enable the Virtual SAN iSCSI service. Select the iSCSI network, the port to use and your authentication preferences (CHAP or no CHAP). Also, select the storage policy to use (FTT=0 or something else).


Step 3: Click on the Green Plus Sign to add your first iSCSI target. Notice you don't have any targets yet once iSCSI is enabled.


Step 4: Select the LUN ID (0 is the default) and the size of the LUN (10 GB in this case).


Step 5: Verify your work and test iSCSI from a different physical server. That's it.
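If the test box runs Windows, the built-in iSCSI initiator cmdlets make a quick smoke test; the portal address below is an example (use a vSAN node IP that hosts the target):

  New-IscsiTargetPortal -TargetPortalAddress 10.1.1.30
  Get-IscsiTarget                               # the vSAN target IQN should appear
  Get-IscsiTarget | Connect-IscsiTarget
  Get-Disk | Where-Object BusType -eq 'iSCSI'   # the 10 GB LUN shows up as a disk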


Thursday, November 24, 2016

How to configure a Nested Virtual SAN Cluster 6.5

How to configure a Virtual SAN cluster for your home lab

1. Install ESXi on a physical host. In this case, the server uses the IP 10.1.1.2 (10.1.1.1 had my vCenter appliance). Connect to it using the new Host Client: just type the IP or the hostname and log in as root.


2. On the physical host, create two internal standard switches. Do not connect them to an uplink. The first one will eventually be used for vMotion between the nested ESXi hosts and the second one for Virtual SAN traffic. Enable Promiscuous Mode (critical!!!) on vSwitch1 and vSwitch2.



3. On vSwitch1, create a port group called vMotion. On vSwitch2, create a port group called vsan. This is what my configuration looked like by the time I was finished. (Steps 2 and 3 can also be scripted; see the sketch below.)
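If you prefer scripting steps 2 and 3, here is a hedged PowerCLI sketch, run while connected directly to the physical host from step 1:

  Connect-VIServer -Server 10.1.1.2 -User root
  $vmhost = Get-VMHost
  $vs1 = New-VirtualSwitch -VMHost $vmhost -Name vSwitch1   # no -Nic, so no uplink
  $vs2 = New-VirtualSwitch -VMHost $vmhost -Name vSwitch2
  $vs1, $vs2 | Get-SecurityPolicy | Set-SecurityPolicy -AllowPromiscuous $true
  New-VirtualPortGroup -VirtualSwitch $vs1 -Name vMotion
  New-VirtualPortGroup -VirtualSwitch $vs2 -Name vsan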



4. Using the Host Client (or the web client), create three virtual machines with 2 vCPUs, 6 GB of memory, three vNICs and three disks (4 GB of memory is not enough for Virtual SAN). Connect the vNICs to VM Network, vMotion and vsan. Make the disks 10 GB, 5 GB and 50 GB. (I ended up raising the memory to 8 GB, though.)




5. Once you create the three future nested ESXi hosts, install them one by one. Do not clone them. Then use the DCUI to change their hostnames and IP addresses. This is what mine looked like.




6. Connect to your vCenter server using the web client (not the new HTML5 client) and create a datacenter and add the three hosts. It should look something like this by the time you finish. Some of my hosts had SSH enabled; that explains the warnings.



7. Create the VMkernel ports for Virtual SAN and vMotion on the three nested ESXi hosts. I used the 10 network for management, the 11 network for vMotion and the 12 network for vsan. Make sure to test every network with ping once you finish. (A PowerCLI sketch for one host follows.)
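Here is a minimal PowerCLI sketch for one nested host. The host name, switch names and exact IPs are examples (and assume each nested host has internal vSwitches wired to the vMotion and vsan vNICs), so adjust them to your own scheme:

  $h = Get-VMHost -Name 10.1.1.10
  New-VMHostNetworkAdapter -VMHost $h -PortGroup vMotion -VirtualSwitch vSwitch1 `
      -IP 10.1.11.10 -SubnetMask 255.255.255.0 -VMotionEnabled $true
  New-VMHostNetworkAdapter -VMHost $h -PortGroup vsan -VirtualSwitch vSwitch2 `
      -IP 10.1.12.10 -SubnetMask 255.255.255.0 -VsanTrafficEnabled $true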


8. Go to your ESXi hosts and mark the 5 GB drive as an SSD drive. (This can also be done with esxcli; see the sketch below.)
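The esxcli route, sketched through PowerCLI; the device identifier is an example, so list your devices first with esxcli storage core device list:

  $esxcli = Get-EsxCli -VMHost $h -V2
  $a = $esxcli.storage.nmp.satp.rule.add.CreateArgs()
  $a.satp   = 'VMW_SATP_LOCAL'
  $a.device = 'mpx.vmhba0:C0:T1:L0'       # example ID for the 5 GB disk
  $a.option = 'enable_ssd'
  $esxcli.storage.nmp.satp.rule.add.Invoke($a)
  $esxcli.storage.core.claiming.reclaim.Invoke(@{device = 'mpx.vmhba0:C0:T1:L0'})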


9. Right click on the datacenter and create a cluster. Name it and enable Virtual SAN. Do not enable anything else for now. Then drag and drop the three hosts in. You have the choice of Automatic or Manual disk claiming (I went with Automatic in this case); otherwise, create the disk groups manually once you finish. (See the PowerCLI equivalent below.)
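The PowerCLI equivalent, with made-up names and example host IPs:

  $dc = Get-Datacenter -Name "HomeLab"    # example datacenter name
  New-Cluster -Location $dc -Name "VSAN" -VsanEnabled -VsanDiskClaimMode Automatic
  Get-VMHost -Name 10.1.1.10, 10.1.1.20, 10.1.1.30 |
      Move-VMHost -Destination (Get-Cluster -Name "VSAN")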


10. Now take a look: if you did it right, the vsanDatastore should be around 150 GB (3 × 50 GB).
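A one-liner to check, assuming the default datastore name:

  Get-Datastore -Name vsanDatastore | Select-Object Name, CapacityGB, FreeSpaceGB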


Final Note: Here are some captures and commands (including how to log in to the RVC, the Ruby vSphere Console)