Thursday, January 12, 2017

DRS Enhancements in vSphere 6.5

Predictive DRS

Disabled by default, Predictive DRS works together with vRealize Operations (vROps) to make DRS proactive instead of reactive. vROps computes and forecasts VM utilization (both CPU and memory) based on data received from the vCenter Server. In turn, DRS receives the forecasted metrics 60 minutes ahead of time so it can balance the load before contention occurs.
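The idea can be sketched in a few lines of Python (the function and data names here are illustrative, not a real vROps or DRS API): given forecast utilization for each host an hour ahead, hosts expected to hit contention can be flagged before it happens.

```python
def plan_migrations(forecast, capacity, threshold=0.9):
    """Given per-host forecast CPU demand 60 minutes out, flag hosts
    expected to exceed the contention threshold (all names hypothetical)."""
    return [host for host, demand in forecast.items()
            if demand / capacity[host] > threshold]

forecast = {"esx01": 0.95, "esx02": 0.40}  # forecast demand, fraction of host
capacity = {"esx01": 1.0, "esx02": 1.0}
print(plan_migrations(forecast, capacity))  # ['esx01']
```

Reactive DRS would only act once esx01 was already contended; the forecast lets the rebalance happen ahead of time.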



VM Distribution, Memory Metric for Load Balancing, and CPU Over-Commitment

VM Distribution spreads VMs more evenly across hosts so that fewer VMs are affected when a single host fails. If DRS detects a severe performance imbalance, it will correct the imbalance at the expense of even VM distribution.

Memory Metric for Load Balancing: By default, DRS uses active memory + 25% as its primary metric. This new option tells DRS to use consumed memory instead.
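As a rough sketch of the difference between the two metrics (the function and the way the 25% buffer is applied here are illustrative simplifications, not the actual DRS algorithm):

```python
def host_memory_load(vms, use_consumed=False):
    """Sum a simplified DRS load metric across the VMs on a host.
    Default mirrors 'active memory + 25%'; the option switches to consumed.
    Values are in GB; the formula is an illustrative simplification."""
    if use_consumed:
        return sum(vm["consumed"] for vm in vms)
    return sum(vm["active"] * 1.25 for vm in vms)

vms = [{"active": 2.0, "consumed": 8.0}, {"active": 1.0, "consumed": 4.0}]
print(host_memory_load(vms))                     # 3.75 (active-based)
print(host_memory_load(vms, use_consumed=True))  # 12.0 (consumed-based)
```

The gap between the two numbers shows why the choice matters: idle VMs with large consumed footprints barely register with the active-memory metric but dominate the consumed metric.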

CPU Over-Commitment: Use this option to enforce a maximum ratio of vCPUs to logical CPUs. Once the limit is reached, that host refuses to power on additional VMs.
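A minimal sketch of the admission check (the function name and the default ratio of 4 are hypothetical; the actual limit is whatever you configure):

```python
def can_power_on(running_vcpus, new_vm_vcpus, host_lcpus, max_ratio=4.0):
    """Allow power-on only if the resulting vCPU-to-logical-CPU ratio
    stays at or under the configured limit (illustrative sketch)."""
    return (running_vcpus + new_vm_vcpus) / host_lcpus <= max_ratio

print(can_power_on(60, 4, 16))  # True  (64/16 = 4.0, exactly at the limit)
print(can_power_on(64, 2, 16))  # False (66/16 exceeds 4.0)
```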


Sunday, December 25, 2016

vCenter Migration from Windows to Linux

Step 1: 
Download the vCenter Server Appliance ISO and copy the software onto the Windows vCenter Server. Execute the VMware-Migration-Assistant script.




Step 2: 
Launch the appliance installer and double-click the Migrate icon to start Stage 1, then go through the wizard.


Step 3: 
Start Stage 2 of the migration and verify the new vCenter once the operation is finished.

Friday, December 23, 2016

vCenter Appliance Partitions


From:
http://www.virtuallyghetto.com/2016/11/updates-to-vmdk-partitions-disk-resizing-in-vcsa-6-5.html

Vimtop Main Options


Vimtop runs interactively; just run vimtop without any options. This gives you CPU, memory, and process information.


Vimtop has an "h" option for help. Press Escape once you are done looking at these options.


Vimtop has a "k" option for disk related information. Look for Read and Write operations.


Vimtop has an "o" option for network related information. Look for dropped packets.


Final Note:

"P" option = pauses the screen
"S" option = sets the refresh rate in seconds
"Q" option = quit vimtop

Monday, December 19, 2016

VMFS-6 Improvements

Paths

ESXi hosts running version 6.5 can now support up to 2,000 paths in total. This is an increase from the 1024 paths that were supported in previous versions of vSphere.

Devices

ESXi hosts running version 6.5 can now support up to 512 devices. This is a two-fold increase from previous versions of ESXi where the number of devices supported per host was limited to 256.

512e Advanced Format Device Support

The storage industry is hitting capacity limits with the 512N (native) sector size currently used in rotating storage media. To address this issue, the storage industry has proposed new Advanced Format (AF) drives which use a 4K native sector size. These AF drives allow disk drive vendors to build high-capacity drives that also provide better performance, more efficient space utilization, and improved reliability and error correction.
Given that legacy applications and operating systems may not be able to support 4KN drives, the storage industry has proposed an intermediate step to support legacy applications by providing 4K-sector drives in 512-emulation (512e) mode. These drives have a physical sector size of 4K but a logical sector size of 512 bytes and are called 512e drives. These drives are now supported in vSphere 6.5 for VMFS and RDM (Raw Device Mappings).
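The performance cost of 512e comes from misaligned I/O: a write smaller than, or not aligned to, the 4K physical sector forces the drive to read the sector, modify it, and write it back. A small illustrative check:

```python
PHYS = 4096  # 512e physical sector size in bytes
LOG = 512    # logical sector size presented to the host

def needs_rmw(offset_bytes, length_bytes):
    """A write that is not aligned to the 4K physical sector boundary
    forces the drive into a read-modify-write cycle (illustrative)."""
    return offset_bytes % PHYS != 0 or length_bytes % PHYS != 0

print(needs_rmw(512, 512))    # True  (sub-sector write to a 512e drive)
print(needs_rmw(8192, 4096))  # False (aligned to physical sectors)
```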
File Block Format:
VMFS-6 introduces two new block sizes, referred to as small file block (SFB) and large file block (LFB). While the SFB size can range from 64KB to 1MB for future use-cases, VMFS-6 in vSphere 6.5 is utilizing an SFB size of 1MB only. The LFB size is set to 512MB.
Thin disks created on VMFS-6 are initially backed with SFBs. Thick disks created on VMFS-6 are allocated LFBs as much as possible. For the portion of the thick disk which does not fit into an LFB, SFBs are allocated.
These enhancements should result in much faster file creation times. This is especially true with swap file creation so long as the swap file can be created with all LFBs. Swap files are always thickly provisioned.
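The allocation rule for thick disks can be illustrated with a short sketch (the helper function is hypothetical; the block sizes match the values above):

```python
SFB = 1 * 1024**2    # small file block: 1 MB in vSphere 6.5
LFB = 512 * 1024**2  # large file block: 512 MB

def thick_disk_blocks(size_bytes):
    """Thick disks take as many LFBs as fit; the tail that does not
    fill an LFB is backed by SFBs (illustrative sketch of the rule)."""
    lfbs, rest = divmod(size_bytes, LFB)
    sfbs = -(-rest // SFB)  # ceiling division for the remainder
    return lfbs, sfbs

# A 1.2 GB thick disk: two 512 MB LFBs plus SFBs for the remaining ~205 MB.
print(thick_disk_blocks(int(1.2 * 1024**3)))  # (2, 205)
```

A thin disk, by contrast, starts out backed entirely by SFBs and grows on demand.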
VMFS Creation:
Using these new enhancements, the initialization and creation of a new VMFS datastore has been significantly improved in ESXi 6.5. For a 32 TB volume, VMFS creation time was halved: creating a 32 TB VMFS-6 volume on ESXi 6.5 takes only half the time of creating a 32 TB VMFS-5 volume on ESXi 6.0 U2.
Concurrency Improvements
This next feature introduces lock contention improvements along with improved resignaturing and scanning. Some of the locking mechanisms on VMFS were largely responsible for the biggest delays in parallel device scanning and filesystem probing on ESXi. Since vSphere 6.5 raises the limits on the number of devices and paths, a big factor in enabling this support was redesigning device discovery and filesystem probing to be highly parallel.
These improvements are significant for Site Recovery Manager, especially during a failover event, as these changes lead to improved resignaturing and rescan/device discovery.
There are also benefits to Thin provisioning operations. Previous versions of VMFS only allowed one transaction at a time per host on a given filesystem. VMFS-6 supports multiple concurrent transactions at a time per host on a given filesystem. This results in improved IOPS for multi-threaded workloads on thin files.
Upgrading to VMFS-6
Datastore filesystem upgrade from VMFS-5 (or previous versions) to VMFS-6 is not supported. Customers upgrading from older versions of vSphere to the 6.5 release should continue to use their existing VMFS-5 (or older) datastores until they can create new VMFS-6 datastores. 
Since no direct in-place filesystem upgrade is supported, customers should use VM migration techniques such as Storage vMotion to move VMs from the old datastore to a new VMFS-6 datastore.
Hot Extend VMDK Beyond 2TB
Prior to ESXi 6.5, thin virtual disks could only be extended if their size was below 2TB when the VM was powered on. If the size of a VMDK was 2TB or larger, or the expand operation caused it to exceed 2TB, the hot extend operation would fail. This required administrators to typically shut down the virtual machine to expand it beyond 2TB. The behavior has been changed in vSphere 6.5 and hot extend no longer has this limitation.
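The old restriction can be expressed as a simple predicate (the function is an illustrative sketch of the rule, not an actual API):

```python
TWO_TB = 2 * 1024**4  # 2 TB in bytes

def hot_extend_allowed(current_bytes, new_bytes, esxi_65=False):
    """Pre-6.5 rule: hot extend fails if the disk is already 2 TB or
    larger, or the grow would cross 2 TB; 6.5 lifts the restriction."""
    if esxi_65:
        return new_bytes > current_bytes  # any grow is fine while powered on
    return current_bytes < TWO_TB and new_bytes <= TWO_TB

print(hot_extend_allowed(3 * 1024**4, 4 * 1024**4))               # False (6.0)
print(hot_extend_allowed(3 * 1024**4, 4 * 1024**4, esxi_65=True)) # True  (6.5)
```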

vCenter Appliance Backup

Backing Up and Restoring the vCenter Appliance

Step 1: 

Deploy the new vCenter Appliance and connect to port 5480. Backing up the appliance is fairly simple.  Log in as root and click on Backup. Here is a capture.


Step 2: 

Select the protocol to use. As you can see, you can use ftp, scp and http among others. Specify a user and location for the backup.


Step 3: 

To restore, simply deploy a new one using the iso. Towards the bottom, you will see the restore choice.


Step 4: 

Once the restore is complete, connect to the new vCenter server and verify functionality.


Saturday, December 10, 2016

VMFSsparse vs SEsparse

VMFSsparse is a virtual disk format used when a VM snapshot is taken or when linked clones are created off the VM. VMFSsparse is implemented on top of VMFS, and I/Os issued to a snapshot VM are processed by the VMFSsparse layer. VMFSsparse is essentially a redo-log that grows from empty (immediately after a VM snapshot is taken) up to the size of its base VMDK (when the entire VMDK is re-written with new data after the snapshot). This redo-log is just another file in the VMFS namespace, and upon snapshot creation the base VMDK attached to the VM is replaced by the newly created sparse VMDK.

Because VMFSsparse is implemented above the VMFS layer, it maintains its own metadata structures in order to address the data blocks contained in the redo-log. The block size of a redo-log is one sector size (512 bytes). Therefore the granularity of read and write from redo-logs can be as small as one sector. When I/O is issued from a VM snapshot, vSphere determines whether the data resides in the base VMDK (if it was never written after a VM snapshot) or if it resides in the redo-log (if it was written after the VM snapshot operation) and the I/O is serviced from the right place. The I/O performance depends on various factors, such as I/O type (read vs. write), whether the data exists in the redo-log or the base VMDK, snapshot level, redo-log size, and type of base VMDK.

I/O type: After a VM snapshot takes place, if a read I/O is issued, it is either serviced by the base VMDK or the redo-log, depending on where the latest data resides. For write I/Os, if it is the first write to the block after the snapshot operation, new blocks are allocated in the redo-log file, and data is written after updating the redo-log metadata about the existence of the data in the redo-log and its physical location. If the write I/O is issued to a block that is already available in the redo-log, then it is re-written with new data.

Snapshot depth: When a VM snapshot is created for the first time, the snapshot depth is 1. If another snapshot is created for the same VM, the depth becomes 2, and the base virtual disks for snapshot depth 2 become the sparse virtual disks of snapshot depth 1. As the snapshot depth increases, performance decreases because of the need to traverse multiple levels of metadata to locate the latest version of a data block.
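The lookup that makes deeper chains slower can be sketched as a walk from the newest redo-log down to the base VMDK (an illustrative model, with dicts standing in for redo-log metadata):

```python
def read_block(block, redo_logs, base):
    """Walk the snapshot chain from the newest redo-log down to the
    base VMDK and return the most recent copy of a block.
    redo_logs is ordered newest-first; each level maps block -> data."""
    for log in redo_logs:
        if block in log:
            return log[block]
    return base.get(block)

base = {0: "base-0", 1: "base-1"}
chain = [{1: "snap2-1"}, {0: "snap1-0"}]  # depth 2: snap2 is newest
print(read_block(0, chain, base))  # 'snap1-0' (written after snapshot 1)
print(read_block(1, chain, base))  # 'snap2-1' (written after snapshot 2)
```

Every extra snapshot adds one more level this lookup may have to visit, which is exactly why performance drops as depth grows.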

I/O access pattern and physical location of data: The physical location of data is also a significant criterion for snapshot performance. For a sequential I/O access, having the entire data available in a single VMDK file would perform better compared to aggregating data from multiple levels of snapshots such as the base VMDK and the sparse VMDK from one or more levels.

Base VMDK type: Base VMDK type impacts the performance of certain I/O operations. After a snapshot, if the base VMDK is thin format [4], and if the VMDK hasn’t fully inflated yet, writes to an unallocated block in the base thin VMDK would lead to two operations (1) allocate and zero the blocks in the base, thin VMDK and (2) allocate and write the actual data in the snapshot VMDK. There will be performance degradation during these relatively rare scenarios.

SEsparse is a new virtual disk format that is similar to VMFSsparse (redo-logs) with some enhancements and new functionality. One of the differences of SEsparse with respect to VMFSsparse is the block size: 4KB for SEsparse compared to 512 bytes for VMFSsparse. Most of the performance aspects of VMFSsparse discussed above—impact of I/O type, snapshot depth, physical location of data, base VMDK type, etc.—apply to the SEsparse format as well.
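The effect of the larger block size can be illustrated with a small grain-counting sketch (the function is hypothetical):

```python
def grains_touched(offset, length, grain=4096):
    """Number of allocation units a write touches at a given block size.
    SEsparse uses 4 KB blocks; VMFSsparse uses 512-byte sectors."""
    first = offset // grain
    last = (offset + length - 1) // grain
    return last - first + 1

# A 1 KB write lands in a single 4 KB SEsparse block
# but spans two 512-byte VMFSsparse sectors.
print(grains_touched(0, 1024))             # 1 (SEsparse)
print(grains_touched(0, 1024, grain=512))  # 2 (VMFSsparse)
```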

In addition to a change in the block size, the main distinction of the SEsparse virtual disk format is space efficiency. With support from VMware Tools running in the guest operating system, blocks that are deleted by the guest file system are marked and commands are issued to the SEsparse layer in the hypervisor to unmap those blocks. This helps to reclaim space allocated by SEsparse once the guest operating system has deleted that data. SEsparse has some optimizations in vSphere 5.5, like coalescing of I/Os, that improves its performance of certain operations compared to VMFSsparse.

http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/sesparse-vsphere55-perf-white-paper.pdf