Thursday, February 9, 2017

Virtual SAN and PowerCLI

Creating a vsan cluster with manual disk claim mode:

new-cluster -name cluster_name -location datacenter_name -vsanenabled -vsandiskclaimmode manual

Viewing disk groups:

get-vsandiskgroup

Creating a disk group:

new-vsandiskgroup -vmhost hostname -ssdcanonicalname disk_name -datadiskcanonicalname disk_name,disk_name
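
To find the canonical names to pass above, Get-ScsiLun can list a host's local disks first. A minimal sketch (the host name is a placeholder, and picking the cache/capacity devices from the output is up to you):

# list the host's disks with their canonical names, sizes and SSD flag
get-scsilun -vmhost (get-vmhost esxi_name) -luntype disk | select-object canonicalname, capacitygb, isssd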

Removing a disk group:

remove-vsandiskgroup -vsandiskgroup disk_group_name -confirm:$false

Removing a disk from a disk group:

remove-vsandisk -vsandisk disk_name -confirm:$false

Adding a disk to a disk group:

new-vsandisk -canonicalname disk_name -vsandiskgroup disk_group_name

Viewing attributes of a vsan disk:

get-vsandisk -canonicalname disk_name -vmhost esxi_name

Viewing vsan storage policies:

get-spbmstoragepolicy -name policy_name

Creating a vsan storage policy:

new-spbmstoragepolicy -name policy_name -anyofrulesets rule_set
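
Since -anyofrulesets expects one or more rule set objects, a policy is normally built from rules first. A minimal sketch, assuming the vSAN capability name "VSAN.hostFailuresToTolerate" and a placeholder policy name:

# build a rule (failures to tolerate = 1), wrap it in a rule set, then create the policy
$rule = new-spbmrule -capability (get-spbmcapability -name "VSAN.hostFailuresToTolerate") -value 1
$ruleset = new-spbmruleset -allofrules $rule
new-spbmstoragepolicy -name policy_name -anyofrulesets $ruleset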

Removing a vsan storage policy:

remove-spbmstoragepolicy -storagepolicy policy_name

Exporting a vsan storage policy:

export-spbmstoragepolicy -storagepolicy policy_name -filepath C:\location -force

Importing a vsan storage policy:

import-spbmstoragepolicy -name policy_name -filepath C:\location

Viewing fault domains:

get-vsanfaultdomain -cluster cluster_name

Creating a vsan fault domain:

new-vsanfaultdomain -name domain_name -vmhost hostname1, hostname2

Removing a vsan fault domain:

remove-vsanfaultdomain -vsanfaultdomain domain_name -confirm:$false

Viewing vsan disk space usage:

get-vsanspaceusage -cluster cluster_name




Friday, January 27, 2017

Virtual SAN 6.5 Licensing


6.5 vs 6.2


http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/vsan/vmware-vsan-65-licensing-guide.pdf

Thursday, January 12, 2017

D.R.S. 6.5 Enhancements

D.R.S. Enhancements in vSphere 6.5

Predictive DRS

Disabled by default, Predictive DRS works together with vRealize Operations (vROps) to make DRS proactive instead of reactive. vROps computes and forecasts VM utilization (both CPU and RAM) based on data received from the vCenter Server. In turn, DRS receives the forecasted metrics 60 minutes ahead of time so it can balance the load before contention occurs.



VM Distribution, Memory Metric for Load Balancing, and CPU Over-Commitment

VM Distribution spreads VMs more evenly across hosts so that fewer VMs are affected by a single host failure. If DRS detects a severe performance imbalance, it will correct the performance problem at the expense of the even distribution of VMs.

Memory Metric for Load Balancing: By default, DRS uses active memory + 25% as the primary metric. This new option tells DRS to use consumed memory instead.

CPU over-commitment: Use this option to enforce a maximum vCPU to lCPU (logical CPU) ratio. Once the limit is reached, no additional VMs are allowed to power on. According to the documentation, the value is a percentage: 500 equals 5 vCPUs per 1 lCPU.
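
These three options can also be pushed to a cluster from PowerCLI as DRS advanced settings. A rough sketch, assuming the commonly referenced advanced option names (TryBalanceVmsPerHost, PercentIdleMBInMemDemand, MaxVcpusPerClusterPct) and a placeholder cluster name:

$cluster = get-cluster cluster_name
# VM Distribution: also balance the number of vms per host
new-advancedsetting -entity $cluster -type ClusterDRS -name "TryBalanceVmsPerHost" -value 1 -confirm:$false
# Memory Metric for Load Balancing: balance on consumed memory by counting idle memory as demand
new-advancedsetting -entity $cluster -type ClusterDRS -name "PercentIdleMBInMemDemand" -value 100 -confirm:$false
# CPU Over-Commitment: cap the cluster at 5 vcpus per lcpu (500%)
new-advancedsetting -entity $cluster -type ClusterDRS -name "MaxVcpusPerClusterPct" -value 500 -confirm:$false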


Sunday, December 25, 2016

vCenter Migration from Windows to Linux

Step 1:  Download the vCenter Appliance ISO and copy the software onto the Windows vCenter server. Execute the VMware-Migration-Assistant script.

Step 2:  Launch the appliance installer and double click on the Migrate icon to start Stage 1. Go through the wizard.

Step 3:  Start Stage 2 of the migration and verify the new vCenter once the operation is finished.

Friday, December 23, 2016

vCenter Appliance Partitions


From:
http://www.virtuallyghetto.com/2016/11/updates-to-vmdk-partitions-disk-resizing-in-vcsa-6-5.html

Vimtop Main Options

What is vimtop?  Vimtop is a command found on the vCenter Server Appliance that provides a wealth of performance-related information. Run vimtop without any options to use it interactively; this gives you CPU, memory, and process information.


Vimtop has an "h" option for help. Press Escape once you are done looking at these options.


Vimtop has a "k" option for disk related information. Look for Read and Write operations.


Vimtop has an "o" option for network related information. Look for dropped packets.


Final Note:

"P" option = pauses the screen
"S" option = sets the refresh rate in seconds
"Q" option = quit vimtop

Monday, December 19, 2016

VMFS-6 Improvements

Paths

ESXi hosts running version 6.5 can now support up to 2,000 paths in total. This is an increase from the 1024 paths that were supported in previous versions of vSphere.

Devices

ESXi hosts running version 6.5 can now support up to 512 devices. This is a two-fold increase from previous versions of ESXi where the number of devices supported per host was limited to 256.
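
A quick way to see how close a host is to those limits is to count its devices and paths with PowerCLI (the host name is a placeholder):

# count the storage devices and paths seen by one host
$luns = get-vmhost esxi_name | get-scsilun -luntype disk
($luns | measure-object).count
($luns | get-scsilunpath | measure-object).count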

512e Advanced Format Device Support

The storage industry is hitting capacity limits with the 512n (native) sector size currently used in rotating storage media. To address this issue, the industry has proposed new Advanced Format (AF) drives which use a 4K native sector size. These AF drives allow disk drive vendors to build high-capacity drives which also provide better performance, more efficient space utilization, and improved reliability and error correction.
Given that legacy applications and operating systems may not be able to support 4Kn drives, the storage industry has proposed an intermediate step: 4K sector size drives operating in 512-byte emulation (512e) mode. These drives have a physical sector size of 4K but a logical sector size of 512 bytes, and are called 512e drives. They are now supported on vSphere 6.5 for VMFS and RDM (Raw Device Mappings).
File Block Format:
VMFS-6 introduces two new block sizes, referred to as small file block (SFB) and large file block (LFB). While the SFB size can range from 64KB to 1MB for future use-cases, VMFS-6 in vSphere 6.5 is utilizing an SFB size of 1MB only. The LFB size is set to 512MB.
Thin disks created on VMFS-6 are initially backed with SFBs. Thick disks created on VMFS-6 are allocated LFBs as much as possible. For the portion of the thick disk which does not fit into an LFB, SFBs are allocated.
These enhancements should result in much faster file creation times. This is especially true with swap file creation so long as the swap file can be created with all LFBs. Swap files are always thickly provisioned.
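As a rough illustration of the allocation scheme (the size here is just an example): a thick-provisioned 1,025 MB VMDK on VMFS-6 would be backed by two 512 MB LFBs plus a single 1 MB SFB for the remaining 1 MB.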
VMFS Creation:
Thanks to these enhancements, the initialization and creation of a new VMFS datastore is significantly faster in ESXi 6.5. For a 32 TB volume, VMFS creation time was roughly halved: creating a 32 TB VMFS-6 volume on ESXi 6.5 takes about half the time of creating a 32 TB VMFS-5 volume on ESXi 6.0 U2.
Concurrency Improvements
This next feature introduces lock contention improvements along with improved resignaturing and scanning. Some of the lock mechanisms on VMFS were largely responsible for the biggest delays in parallel device scanning and filesystem probing on ESXi. Since vSphere 6.5 has higher limits on the number of devices and paths, a big factor in enabling this support was redesigning device discovery and filesystem probing to be highly parallel.
These improvements are significant for Site Recovery Manager, especially during a failover event, as the changes lead to improved resignaturing and rescan/device discovery.
There are also benefits to Thin provisioning operations. Previous versions of VMFS only allowed one transaction at a time per host on a given filesystem. VMFS-6 supports multiple concurrent transactions at a time per host on a given filesystem. This results in improved IOPS for multi-threaded workloads on thin files.
Upgrading to VMFS-6
Datastore filesystem upgrade from VMFS-5 (or earlier versions) to VMFS-6 is not supported. Customers upgrading from older versions of vSphere to the 6.5 release should continue to use their existing VMFS-5 (or older) datastores until they create new VMFS-6 datastores.
Since there is no direct ‘in-place’ upgrade of the filesystem, customers should use virtual machine migration techniques such as Storage vMotion to move VMs from the old datastore to a new VMFS-6 datastore.
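A minimal PowerCLI sketch of that migration path (the host, device, datastore, and VM names are placeholders, not from the original post):
# create a new VMFS-6 datastore on a free device, then storage vmotion a vm onto it
$ds6 = new-datastore -vmhost esxi_name -name vmfs6_datastore -path naa.device_name -vmfs -filesystemversion 6
move-vm -vm vm_name -datastore $ds6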
Hot Extend VMDK Beyond 2TB
Prior to ESXi 6.5, thin virtual disks could only be extended while the VM was powered on if their size was below 2TB. If the size of a VMDK was 2TB or larger, or the expand operation would cause it to exceed 2TB, the hot extend operation failed. This typically required administrators to shut down the virtual machine to expand it beyond 2TB. This behavior has been changed in vSphere 6.5, and hot extend no longer has this limitation.
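With PowerCLI, such a hot extend can be done while the VM stays powered on; a minimal sketch (the VM name, disk label, and 3TB target size are placeholders):
# grow "Hard disk 1" of a running vm to 3TB (3072GB) online - requires vSphere 6.5 or later
get-harddisk -vm vm_name | where-object { $_.name -eq "Hard disk 1" } | set-harddisk -capacitygb 3072 -confirm:$false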