Friday, September 23, 2016

Installing a Standalone Platform Services Controller (PSC)

1. If not installed already, install the Client Integration Plugin.

2. Double-click vcsa-setup.

3. Double-click Install.

4. Accept the EULA.

5. Specify the name of the ESXi host that will be used for the PSC installation.

6. Give the appliance a VM name.

7. Select the PSC option only.

8. Specify the administrator password.

9. Verify and proceed.

10. Select the datastore to be used.

11. Give it a hostname (needs to be resolvable) and an IP.

12. Click Finish and wait a few minutes for the installation to complete.

Wednesday, September 7, 2016

Project Photon: Installation and Configuration

What is Project Photon:

Photon OS is a tech preview of an open source, Linux container host runtime optimized for vSphere. Photon OS is extensible, lightweight, and supports the most common container formats including Docker, Rocket (rkt) and Garden. Photon OS includes a small-footprint, yum-compatible, package-based lifecycle management system called "tdnf" and, alternatively, supports rpm-ostree image-based system versioning. When used with development tools and environments such as VMware Fusion, VMware Workstation, HashiCorp (Vagrant and Atlas) and production runtime environments (vSphere, vCloud Air), Photon OS allows seamless migration of container-based apps from development to production.

Installation and Configuration Steps:

1. Download the ova.

2. Deploy from ovf.

3. Power on the VM.

4. Log in as root with the default password changeme.

5. Initialize the Docker engine and enable it across reboots.

6. Lastly, pull and start the container of your choice.

7. There are many options, as shown in the URL listed below.
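Steps 5 and 6 above can be sketched from the Photon OS shell. This is a minimal example, assuming a systemd-based Photon install with the Docker package already present; the nginx image is just an illustrative choice, not something the original post specifies:

```shell
# Step 5: start the Docker engine now and enable it across reboots
systemctl start docker
systemctl enable docker

# Step 6: pull and run a container of your choice (nginx used here as an example)
docker run -d --name web nginx

# Confirm the container is running
docker ps
```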

Tuesday, September 6, 2016

vSphere Storage DRS illustrated

Storage DRS:

Back in vSphere 5.0, VMware introduced Storage DRS. If DRS can be considered the automation of vMotion, Storage DRS can be considered the automation of Storage vMotion.

In 6.x, Storage DRS can be configured with up to 64 datastores per datastore cluster, and up to 256 Storage DRS datastore clusters per vCenter Server. A single Storage DRS cluster has a maximum of 9,000 virtual disks.

Default Values:

Manual Mode (default) versus Fully Automated.
80% space utilization triggers Storage vMotion recommendations.
15 ms latency values trigger recommendations.
5-minute intervals used by default to check space utilization (can't be modified).
8-hour intervals used by default to check latency results (can be modified).

New Features in 6.0:

1. vSphere Replication integration.
2. Site Recovery Manager integration.
3. Virtual Machine Storage Policy integration.
4. VASA integration.

Great information about the new features can be found in these links:

Configuration Steps:

1. Go to the Storage view and create a few VMFS datastores. This also works with NFS, but VMFS and NFS cannot be combined in the same cluster. Four identical datastores were created in this example, named d1, d2, d3 and d4. Each was 10 GB.

2. Right-click on the Datacenter and create a datastore cluster.

3. Take a look at the settings. All four datastores are currently empty. Notice that there are no recommendations at this time.

4. Create a couple of VMs; by default they will be placed in different datastores to balance space. Notice that they were placed in d3 and d4.

5. Storage vMotion one of the VMs so that both reside on the same datastore. If, combined, they exceed 80% utilization, an alarm will be triggered and a recommendation will appear.

6. If Automation is changed to Fully Automated, vCenter will automatically migrate one of the VMs to balance space utilization and the recommendation will vanish.

NFS version 4.1 Server Configuration

NFS v4.1 Server Configuration Steps:

1. Deploy a generic Linux VM or physical server. Lubuntu was used in this case; a generic installation will do. If you have never installed Linux, YouTube is your friend :)

2. Create a folder to share. /nfs1 will be used in this case.

3. Optionally, change the permissions to 777 so everyone can write to it. Notice that the directory is currently empty.

4. Install the nfs-kernel-server package.
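Steps 2 through 4 can be sketched as the following commands, assuming an Ubuntu-family distribution such as the Lubuntu install used here:

```shell
# Steps 2-3: create the directory to share and open up its permissions
sudo mkdir -p /nfs1
sudo chmod 777 /nfs1

# Step 4: install the NFS server package
sudo apt-get update
sudo apt-get install -y nfs-kernel-server
```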

5. Install the vi editor (sudo apt-get install vim) and run sudo vi /etc/exports. The file should look like this by the time you are done.
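The screenshot of the finished file is not shown here. Assuming the goal is simply to export /nfs1 read-write, a minimal /etc/exports entry could look like this; the wildcard client is an assumption, and you can replace it with your ESXi host's IP to restrict access:

```shell
# /etc/exports - export /nfs1 read-write to any client
# (replace * with the ESXi host's IP address to limit who can mount it)
/nfs1 *(rw,sync,no_subtree_check)
```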

6. Start the nfs services by running the /etc/init.d/nfs-kernel-server start command.

7. Run the showmount -e command to see which directories are being exported.

8. Since this NFS server VM was built with VMware Player, I changed the IP of the Lubuntu server to a static address to be able to interact with the ESXi host. To do so, edit the /etc/network/interfaces file, shut down the VM and change the network settings from NAT to Bridged. Lubuntu defaults to DHCP, so this may or may not be needed in your environment.
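A static-address stanza in /etc/network/interfaces might look like the following. The interface name and all 192.168.1.x values are placeholders; substitute the addresses that fit your network:

```shell
# /etc/network/interfaces - static IP for the NFS server
# (eth0 and the 192.168.1.x values below are placeholders)
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
```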

9. Go to your ESXi host and mount the directory. This can be done with the Web Client or the CLI.
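From the ESXi CLI, the mount can be sketched with esxcli; the server IP and datastore name below are placeholders for this example:

```shell
# Mount the NFS 4.1 export as a datastore
# (192.168.1.50 and nfs41ds are placeholders for your server IP and datastore name)
esxcli storage nfs41 add -H 192.168.1.50 -s /nfs1 -v nfs41ds

# Verify the datastore is mounted
esxcli storage nfs41 list
```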

10. Make sure that you can store data in the datastore.