Upgrading / Migrating from vSphere 5.x to 6.x (6.5, 6.7) best practices & Approach

Migrating and upgrading physical systems is a nightmare. With VMware vSphere, however, it is not that complicated: proper planning will lead to a successful migration from vSphere 5.x to 6.x without any downtime for the VMs. Some cases do need VM downtime, and those are described in this post. I recommend going through the complete post, as the process and approach are the same for all vSphere migrations irrespective of version.

vSphere 5.0 and 5.1 support already ended on 24 August 2016, and vSphere 5.5 support ends on 19 September 2018. Yet many environments are still running 5.0 and 5.1. It is high time to upgrade to the latest vSphere 6.x. This post details every step to consider for a successful migration.

Know the Existing Environment

For a successful migration we need to know the existing environment completely, so gathering the existing vSphere environment details is very important. Thanks to RVTools, this takes just a couple of minutes. After exporting the information from RVTools, analyze it and gather the details below.

  • Existing vCenter version and build number
  • Existing ESXi host versions and build numbers
  • Existing server hardware make and model, along with NIC and HBA card details (important if the same hardware will be reused for the upgrade)
  • Standard or distributed switches in use, with their port groups, uplinks, and VLAN details
  • Cluster information, including HA and DRS rules
  • EVC mode information and the maximum EVC mode each server supports
  • Every VM's name, OS, IP, port group, and datastore details
  • Hardware version of all VMs and VMware Tools installation status
  • VM RDM LUN details: the SCSI ID mapping for each LUN, the RDM type (physical/virtual), and the pointer file location
  • USB device mappings for VMs, if any
  • Integrations with the vCenter Server, such as backup, SRM, and others

Verify compatibility and upgrade matrix

Verifying VMware product compatibility and hardware compatibility is very important.

VMware product compatibility can be verified here.

Hardware compatibility can be verified here.

Hardware & Storage compatibility

It is very important to check server, storage, NIC card, and HBA card compatibility before planning the upgrade or implementation of ESXi. NIC and HBA compatibility is covered in the drivers section below.

Verifying Hardware compatibility & BIOS firmware

Open the VMware Compatibility Guide, select the Partner Name (vendor), enter the server model in the Keyword field, then click Update and View Results.

Search for the exact server model with CPU as shown below and check whether the desired ESXi version is compatible. In the example below, one model is supported up to 6.5 U1 while the other supports 6.7 as well.

Click on the ESXi version to find the supported hardware firmware details.

The recommended BIOS and hardware firmware details will be shown as below. Install them before installing ESXi.

Verifying Storage and SAN devices compatibility and Drivers

Open the VMware Compatibility Guide and select Storage/SAN as shown below.

Select the vendor, provide the storage model in Keywords, and click Update.

Search for the exact storage model, select the storage, and click on the ESXi version.

Note: if an ESXi version is not shown in the list, it is either not supported or not yet validated by VMware.

All the supported drivers and firmware for the storage will be listed.

vCenter to ESXi compatibility

vCenter Server support for the ESXi host versions is very important when it comes to migration, as in most cases the migration happens while VMs are up and running on the existing ESXi hosts.

  • vCenter Server 6.0 can manage ESXi hosts running 5.0 and later.
  • vCenter Server 6.5 can manage ESXi hosts running 5.5 and later.
  • vCenter Server 6.7 can manage ESXi hosts running 6.0 and later.
  • Upgrade vCenter first, and verify the new vCenter version supports every ESXi version still in the environment before touching the hosts.

Supported vCenter Upgrade Path

  • vCenter 5.0 or 5.1 (with updates) cannot be directly upgraded to 6.5; an intermediate upgrade to 5.5 or 6.0 is required.
  • vCenter 5.5 or later can be directly upgraded to 6.5 or 6.5 U1.
  • vCenter 5.x cannot be directly upgraded to 6.7; an intermediate upgrade to 6.0 is required.
  • vCenter 6.0 or later can be directly upgraded to 6.7.

Supported ESXi Upgrade Path

  • ESXi 5.0 or 5.1 (with updates) cannot be directly upgraded to 6.5; an intermediate upgrade to 5.5 or 6.0 is required.
  • ESXi 5.5 or later can be directly upgraded to 6.5 or 6.5 U1.
  • ESXi 5.x cannot be directly upgraded to 6.7; an intermediate upgrade to 6.0 is required.
  • ESXi 6.0 or later can be directly upgraded to 6.7.

Decide the vSphere 6.x version to be upgraded to

Based on the available hardware (new or reused) and its compatibility as verified above, decide the new vSphere 6.x version and build number. For example, if the hardware is compatible with ESXi 6.5 U1, then the vCenter and ESXi upgrades need to be planned for vSphere 6.5 U1; the detailed steps are listed below.

Note that not just the server itself but also NIC and HBA card compatibility is very important, whether you are reusing existing hardware or buying new.

vSphere 6.x License and Support

VMware vSphere licenses for ESXi and vCenter are version based: if you are an existing customer with ESXi 5.x and vCenter 5.x licenses, they cover only 5.x (5.0, 5.1, 5.5). However, if you have an active support agreement, ESXi and vCenter 5 licenses can be upgraded to 6.x; contact your local VMware partner for support.

Supported Drivers & Firmware for Hardware

Once the vSphere version for the available hardware is decided, identify all the necessary drivers for the NIC, FCoE, and FC (HBA) cards, as well as multipathing.

Here we will explain how to find the exact driver and download it for the targeted ESXi version.

Support and Driver for Network NIC Cards

Step 1: Run below command to get all NIC cards available on the Host.

esxcli network nic list

Example:
[root@localhost:~] esxcli network nic list
Name    PCI Device    Driver  Admin Status  Link Status  Speed  Duplex  MAC Address        MTU   Description
vmnic0  0000:01:00.0  bnx2    Up            Down         0     Half    a4:ba:db:0e:cc:9c  1500  QLogic Corporation QLogic NetXtreme II BCM5716 1000Base-T
vmnic1  0000:01:00.1  bnx2    Up            Up           1000  Full    a4:ba:db:0e:cc:9d  1500  QLogic Corporation QLogic NetXtreme II BCM5716 1000Base-T

Step 2: Get the Vendor ID (VID), Device ID (DID), Sub-Vendor ID (SVID), and Sub-Device ID (SDID) using the vmkchdev command:
vmkchdev -l |grep vmnic#

Example:
[root@localhost:~] vmkchdev -l |grep vmnic0
0000:01:00.0 8086:10fb 103c:17d3 vmkernel vmnic0
[root@localhost:~]

Vendor ID (VID) = 8086
Device ID (DID) = 10fb
Sub-Vendor ID (SVID)= 103c
Sub-Device ID (SDID) = 17d3
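The four ID values can be extracted from the vmkchdev output mechanically. Below is a minimal sketch in plain shell/awk that parses the sample line captured above; on a live host you would pipe the vmkchdev output in directly instead of using the hard-coded sample:

```shell
# Parse the VID:DID and SVID:SDID pairs out of a captured "vmkchdev -l"
# line. The sample line is the example above; on a live host pipe in
# "vmkchdev -l | grep vmnic0" instead.
line='0000:01:00.0 8086:10fb 103c:17d3 vmkernel vmnic0'

echo "$line" | awk '{
    split($2, id, ":")    # second field is VID:DID
    split($3, sid, ":")   # third field is SVID:SDID
    printf "VID=%s DID=%s SVID=%s SDID=%s\n", id[1], id[2], sid[1], sid[2]
}'
# → VID=8086 DID=10fb SVID=103c SDID=17d3
```

The four values printed are exactly what the IO Devices search form in the compatibility guide asks for in step 4.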

Step 3: Get the driver and firmware in use for NIC card
esxcli network nic get -n vmnic#

Example:
[root@localhost:~] esxcli network nic get -n vmnic0
Advertised Auto Negotiation: true
Advertised Link Modes: 10BaseT/Half, 10BaseT/Full, 100BaseT/Half, 100BaseT/Full, 1000BaseT/Full
Auto Negotiation: true
Cable Type: Twisted Pair
Current Message Level: 0
Driver Info:
Bus Info: 0000:01:00.0
Driver: bnx2
Firmware Version: 5.0.13 bc 5.0.11 NCSI 2.0.5
Version: 2.2.4f.v60.10
Link Detected: false
Link Status: Down
Name: vmnic0
PHYAddress: 1
Pause Autonegotiate: true
Pause RX: false
Pause TX: false
Supported Ports: TP
Supports Auto Negotiation: true
Supports Pause: true
Supports Wakeon: true
Transceiver: internal
Virtual Address: 00:50:56:56:6f:75
Wakeon: MagicPacket(tm)
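Only three lines of that long output matter for the compatibility check: the driver name, the driver version, and the firmware version. A small sketch that filters a captured copy of the output (sample values taken from the example above; real output is indented, which the pattern allows for):

```shell
# Filter a captured "esxcli network nic get -n vmnicX" output down to the
# three lines that matter for the HCL check. Sample values are from the
# example above; on a live host pipe the esxcli output in directly.
nic_info='Driver: bnx2
Firmware Version: 5.0.13 bc 5.0.11 NCSI 2.0.5
Version: 2.2.4f.v60.10'

echo "$nic_info" | sed -n '
    s/^[[:space:]]*Driver: /driver=/p
    s/^[[:space:]]*Version: /driver_version=/p
    s/^[[:space:]]*Firmware Version: /firmware=/p'
# → driver=bnx2
# → firmware=5.0.13 bc 5.0.11 NCSI 2.0.5
# → driver_version=2.2.4f.v60.10
```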

Step 4: Find supported driver and download driver

Select IO Devices in the VMware Compatibility Guide, then provide the VID, DID, SVID, and SDID obtained in step 2, and click Update and View Results. All supported ESXi versions for the NIC card will be shown as below.

Vendor ID (VID) = 8086
Device ID (DID) = 10fb
Sub-Vendor ID (SVID)= 103c
Sub-Device ID (SDID) = 17d3

Click on the required ESXi version, say 6.5 U1, shown beside the NIC driver.

Expand the driver version and the link to download the driver will be shown as below.

 

Support and Driver for Storage HBA Cards

Step 1: Get Host Bus Adapter Driver currently in use

# esxcfg-scsidevs -a
The output will show the adapter and the driver it is using, for example "vmhba0 mptspi" or "vmhba1 lpfc".

Step 2: Get the HBA driver version currently in use
# vmkload_mod -s HBADriver |grep Version

For example, run this command to check the mptspi driver:
# vmkload_mod -s mptspi |grep Version
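Steps 1 and 2 can be combined into a small loop over the adapter/driver pairs that esxcfg-scsidevs -a prints. The sketch below is a dry run over the sample output shown above: it only prints the vmkload_mod command for each adapter rather than executing it.

```shell
# Dry run combining steps 1 and 2: for each adapter/driver pair that
# "esxcfg-scsidevs -a" reports, print the vmkload_mod command that would
# show the driver version. Sample pairs are from the example above; on a
# live host replace the echo of $scsidevs with: esxcfg-scsidevs -a
scsidevs='vmhba0 mptspi
vmhba1 lpfc'

echo "$scsidevs" | while read -r adapter driver _; do
    echo "vmkload_mod -s $driver | grep Version   # $adapter"
done
```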

Step 3: Get HBA Vendor ID (VID), Device ID (DID), Sub-Vendor ID (SVID), and Sub-Device ID (SDID) using the vmkchdev command:
vmkchdev -l |grep vmhba#
Example:
[root@localhost:~] vmkchdev -l |grep vmhba0
0000:01:00.0 1077:2031 0000:0000 vmkernel vmhba0
[root@localhost:~]

Vendor ID (VID) = 1077
Device ID (DID) = 2031
Sub-Vendor ID (SVID)= 0000
Sub-Device ID (SDID) = 0000

Step 4: Find supported driver and download driver

Select IO Devices in the VMware Compatibility Guide, then provide the VID, DID, SVID, and SDID obtained in step 3, and click Update and View Results. All supported ESXi versions for the HBA card will be shown as below.

Vendor ID (VID) = 1077
Device ID (DID) = 2031
Sub-Vendor ID (SVID)= 0000
Sub-Device ID (SDID) = 0000

Click on the required ESXi version, say 6.5 U1, shown beside the HBA driver.

Verify the VID and other IDs, select the ESXi version, and expand the driver; the download link will be shown as below.

How to install/Update the Driver on ESXi

Upload the driver to the ESXi host. Use the commands below to install the driver if it is not present, or update it if an older version is present. In some cases you may need to remove the old driver first, if the host already has a higher driver version than is supported. All the commands are given below.

Remove the existing VIB:

Find the VIB name with the command below:

esxcli software vib list

Remove the VIB using the name obtained above:

esxcli software vib remove --vibname=<name-of-vib>

Update the VIB driver using the command below:

esxcli software vib update -d "/vmfs/volumes/Datastore/DirectoryName/PatchName_VIBname.zip"

Install the VIB driver using the command below:

esxcli software vib install -d "/vmfs/volumes/Datastore/DirectoryName/PatchName_VIBname.zip"
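Put together, the remove/update/install flow looks like the sketch below. It is deliberately a dry run: the bundle path and VIB name are hypothetical placeholders, and the run helper only echoes each esxcli command instead of executing it, so adapt it before use on a real host.

```shell
# Dry-run sketch of the driver VIB workflow. The bundle path and VIB name
# below are hypothetical placeholders, and run() only echoes each command;
# swap it for direct execution on a real host after verifying the names.
run() { echo "would run: $*"; }

bundle="/vmfs/volumes/Datastore/drivers/net-driver-bundle.zip"  # placeholder
old_vib="net-bnx2"                                              # placeholder

run esxcli software vib list                         # 1. list installed VIBs to find the driver
run esxcli software vib remove --vibname="$old_vib"  # 2. only if a newer-than-supported driver is installed
run esxcli software vib update -d "$bundle"          # 3. update an existing driver
run esxcli software vib install -d "$bundle"         # 4. or install a driver that is missing
```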

Migration Approach & Steps

vCenter Upgrade / Install

The vCenter appliance has come a long way and is very stable now, so there is no need to rely on the Windows-based vCenter any more; the appliance can be used without any doubt. However, the right vCenter topology differs based on the environment, its size, and integrations with other VMware products. All supported vCenter topologies can be found here; the three most commonly used are highlighted below.

vCenter Topologies

Standard Topology 1: For small deployments with 5-10 hosts and no integrations with other VMware products such as NSX or vRA, the embedded topology is the best choice.

1 Single Sign-On domain
1 Single Sign-On site
1 vCenter Server with the Platform Services Controller on the same machine
Limitations:
Does not support Enhanced Linked Mode
Does not support Platform Services Controller replication

Standard Topology 2: For medium to large deployments with integrations with other VMware products, or multiple vCenter Servers for different purposes (for example, one vCenter for production hosts and another for VDI hosts), the topology below is the best choice.

1 Single Sign-On domain
1 Single Sign-On site
2 or more external Platform Services Controllers
1 or more vCenter Servers connected to Platform Services Controllers using 1 third-party load balancer

Standard Topology 3: For medium to large deployments with DR, integrations with other VMware products, and multiple vCenter Servers, the topology below is the best and recommended choice.

1 vSphere Single Sign-On domain
2 vSphere Single Sign-On sites (Prod and DR)
2 or more external Platform Services Controllers per Single Sign-On site (2 in Prod, 2 in DR)
1 or more vCenter Servers with external Platform Services Controllers
1 third-party load balancer per site

vCenter Upgrade Paths

The first thing to migrate or upgrade is the vCenter Server; only after that are the ESXi hosts and VMs migrated. Once the topology is decided, the next thing to think about is the upgrade path from the existing vCenter Server to the new one.

Upgrading from a Windows-based vCenter to the appliance is supported, but for a small or medium environment it is suggested to build a fresh vCenter appliance based on the topology best suited to your infrastructure, with the same configuration for clusters and standard or distributed switches. For large environments with lots of distributed switches and port groups, consider an upgrade instead.

Upgrading vCenter server from 5.0 or 5.1 to 6.5 or 6.5 U1

Direct upgrade is not supported, hence an intermediate upgrade to 5.5 or 6.0 (any update) is required. During the upgrade, keep in mind that all existing ESXi hosts must remain supported by the vCenter Server version in use.

Upgrading vCenter server from 5.5 or later to 6.5 or 6.5 U1

Direct upgrade is supported, so the upgrade option can be used while deploying the new vCenter Server. It requires a temporary IP address while copying the data from the old vCenter to the new one.

Upgrading vCenter server from 5.x to 6.7

Direct upgrade is not supported, hence an intermediate upgrade to 6.0 (any update) is required. During the upgrade, keep in mind that all existing ESXi hosts must remain supported by the vCenter Server version in use.

Upgrading vCenter server from 6.0 or later to 6.7

Direct upgrade is supported, so the upgrade option can be used while deploying the new vCenter Server. It requires a temporary IP address while copying the data from the old vCenter to the new one.

Don't forget to notify your backup team and check the compatibility of your backup products, as most backup vendors integrate with the vCenter Server.

ESXi Upgrade / Install

Based on the upgrade path and hardware compatibility, an ESXi upgrade or fresh installation can be done. An in-place upgrade is always preferable for existing servers, as there is no need to redo configuration such as vMotion, DNS, and IPs. For new hardware, a fresh installation is of course the only approach.

  • ESXi 5.0 or 5.1 (with updates) cannot be directly upgraded to 6.5; an intermediate upgrade to 5.5 or 6.0 is required.
  • ESXi 5.5 or later can be directly upgraded to 6.5 or 6.5 U1.
  • ESXi 5.x cannot be directly upgraded to 6.7; an intermediate upgrade to 6.0 is required.
  • ESXi 6.0 or later can be directly upgraded to 6.7.

New servers: always use the latest OEM-provided custom ESXi image, as it includes all the drivers necessary for the server. Minor updates and patches can be installed manually after installing ESXi from the OEM custom image.

Old servers being upgraded: it may seem better to use the OEM custom image for the ESXi update, but in my experience it is safer to upgrade using the stock ESXi image from VMware (not the OEM one) and install the necessary supported drivers, updates, and patches manually afterwards. The OEM image always installs the latest drivers available, which much of the older hardware may not support according to the VMware compatibility matrices.

So the Standard rules are below:

  1. Verify the compatibility of the NIC, HBA, and other devices for the target vSphere version.
  2. Install the supported BIOS and firmware versions for the hardware.
  3. Install/upgrade ESXi with the OEM custom image or the stock ESXi image from VMware.
  4. Check that the correct driver versions are present ( esxcli software vib list | grep -i driver_name ).
  5. Install/update the necessary compatible hardware drivers ( esxcli software vib install/update -d path_of_driver ).
  6. Install the necessary updates and security patches; I recommend using the CLI rather than Update Manager, as the CLI takes only a few seconds.
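For step 6, patching from the CLI uses the esxcli software profile update command against an offline bundle. The sketch below is again a dry run with illustrative placeholder values; list the real profile names inside your bundle first with esxcli software sources profile list.

```shell
# Dry-run sketch of CLI patching (step 6). The bundle path and profile
# name are illustrative placeholders; check the real profile names with:
#   esxcli software sources profile list -d <bundle.zip>
run() { echo "would run: $*"; }

bundle="/vmfs/volumes/Datastore/patches/ESXi650-update01.zip"  # placeholder
profile="ESXi-6.5.0-20170702001-standard"                      # placeholder

run vim-cmd hostsvc/maintenance_mode_enter                     # host must be in maintenance mode
run esxcli software profile update -d "$bundle" -p "$profile"  # apply the patch bundle
run reboot                                                     # reboot to complete the update
```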

Distributed switch migration

The distributed switch is a major thing to consider when it comes to migration, as most migrations must happen without downtime.

Fresh installation of vCenter without upgrade

In this case we first need to configure the distributed switch in the new vCenter Server. There is an export/import option for distributed switches that makes this easy: export the switch from the old vCenter and import it into the new one, and all the port groups and configuration will appear as-is; otherwise a manual configuration is required.

While migrating the ESXi hosts from the old vCenter Server to the new one, consider the points below.

  1. Select a jump ESXi host residing in the old vCenter Server (this will be used for all VM migrations).
  2. Remove one physical uplink from the distributed switch on each ESXi host, or at least on the jump ESXi host, in the old vCenter.
  3. Create a standard switch with port groups carrying the same VLAN info as the distributed switch, and add the removed physical uplink to it.
  4. Migrate the VMs to the jump ESXi host using vMotion (old vCenter).
  5. Migrate the VMs from the distributed to the standard switch on the jump ESXi host (old vCenter).
  6. Remove the jump host from the distributed switch in the old vCenter.
  7. Add the jump ESXi host (say ESXi 5.5), with VMs running on it, to the new vCenter Server (say 6.5 U1).
  8. The jump host will now be connected to the new vCenter (say 6.5 U1) and will show as disconnected in the old vCenter Server (say 5.5).
  9. Add the jump host to the distributed switch in the new vCenter.
  10. Migrate all VMs on the jump host from the standard to the distributed switch.
  11. Move all VMs from the jump ESXi host (say 5.5) to the new ESXi 6.5 hosts (already added to the distributed switch) in the new vCenter Server.
  12. Disconnect the jump host from the new vCenter and add it back to the old vCenter; repeat this cycle until all VMs are moved.

vCenter Server upgrade from an existing VC 5.x

In this case there is not much to worry about, as all the existing vCenter Server 5.x information and configuration, including the ESXi hosts, carries over to the new vCenter Server 6.x. The only thing to ensure is that both the old and new ESXi hosts are added to the distributed switch after the vCenter upgrade.

VM Migration

The important part during the migration is to make sure the VMs stay up and running; everyone expects zero downtime, and thanks to VMware, migrations are easy. During the migration, in most cases either or both of the scenarios below will apply.

Migrating VMs from an old vCenter to a new vCenter (no VC upgrade)

This is the case when a new vCenter Server is built (not upgraded) and the old vCenter still hosts the ESXi hosts and VMs. As long as the new vCenter supports the ESXi version, we can add an ESXi host, with VMs running on it, to the new vCenter Server. This automatically disconnects the host from the old vCenter and adds it to the new one without interrupting the running VMs.

If a distributed switch is in use, move the VMs from the distributed switch to a standard switch first; this part is covered in the distributed switch section.

Note: if an ESXi host being moved to the new vCenter has VMs with RDMs that are clustered with VMs on another host, move both hosts hosting the two clustered-RDM VMs one after another, and keep them on the same vCenter Server. Do not leave one clustered-RDM VM on the old vCenter and the other on the new one; this will cause storage path flapping issues.

Migrating VMs between ESXi hosts of different versions under the same vCenter Server

As long as both the old-version and new-version ESXi hosts are under the same vCenter Server, both hosts can access all the VM resources: storage, RDM LUNs, network port groups, and so on. It is then simply a matter of vMotion from the old ESXi host to the new one, provided vMotion is configured. This way the migration is possible without even a dropped ping.

If a distributed switch is in use, both the old and new ESXi hosts need to be added to it. If EVC mode is configured on the cluster hosting the old ESXi host, make sure the same EVC mode is configured for the new ESXi host if you want to vMotion across them.

Things to verify:

  • The old ESXi host (say ESXi 5.5) and the new ESXi host (say 6.5 U1) have visibility to all datastores and RDM LUNs where the VMs are hosted.
  • The same standard switch port groups are available on the old and new ESXi hosts.
  • The old and new ESXi hosts are joined to the distributed switches the VMs are using.
  • The same EVC mode is configured at the cluster level on the old and new ESXi hosts for live vMotion.

Migrating VMs with RDMs (physical / virtual)

VMs with RDMs can also be migrated without downtime, provided the destination host can see the RDM LUNs as well as the LUNs where the RDM pointer files are stored.

However, note that if a SCSI controller is shared by multiple RDM LUNs, vMotion is not possible. That is, if RDM LUN 1 is on SCSI 1:0 and RDM LUN 2 is on SCSI 1:1, vMotion is not possible. This is why it is always recommended to map each RDM LUN to a different SCSI controller, for example RDM LUN 1 on SCSI 1:0 and RDM LUN 2 on SCSI 2:0.

Hope this post is useful, leave your suggestions and comments below.

Siva Sankar

Siva Sankar works as a Solution Architect in Abu Dhabi, with a primary focus on SDDC, automation, network virtualization, digital workspace, VDI, HCI, and virtualization products from VMware, Citrix, and Microsoft.

63 thoughts on “Upgrading / Migrating from vSphere 5.x to 6.x (6.5, 6.7) best practices & Approach”

  • May 2, 2018 at 9:17 pm

    Hi Siva, Appreciate your efforts.
    You have added bunch of information in a single Article, it is difficult to read/ find. I would recommend to split the topics individually like vCenter upgrade approach, ESXi upgrade approach etc., there you can talk about that particular product/ appliance that will help us to read/ follow.

    All the best to write more. Thanks
  • May 4, 2018 at 8:16 am

    Much needed steps for vsphere upgrade…Appreciate your knowledge sharing!!

  • May 23, 2018 at 5:22 am

    Good work. Thanks for sharing the knowledge.

  • July 31, 2018 at 8:07 am

    This article helps me alot

  • August 3, 2018 at 6:21 pm

    thanks for sharing

  • August 24, 2018 at 7:52 pm

    Thank you Very much… Good Article

  • August 27, 2018 at 3:45 am

    Thanks for your article. Why do you have instructions for upgrading NIC drivers, doesn’t the ESXi upgrade upgrade relevant drivers?

    • August 28, 2018 at 7:44 pm

      Dear

      In most of the cases it wont upgrade to right driver version for nic and hba cards if the hardware is old. I have seen many cases with this regard. while in production after upgrade network related issues will come.

      thanks,
      siva sankar

  • August 30, 2018 at 12:44 am

    Thanks so much for this article. It will really help me get my system upgraded.
  • September 19, 2018 at 12:56 am

    Thank you for sharing
    If we have vCenter server 6.0 GA and it is supported to migrate to vCenter Appliance 6.5
    Would you recommend to patch the existed vCenter server 6.0 (for example to update 3) then migrate
    or migrate directly from GA

    Thank you in advance

  • September 28, 2018 at 4:42 pm

    Great article. Thanks for sharing.

  • October 23, 2018 at 9:01 pm

    Great work Siva! Much appreciated.
  • November 9, 2018 at 12:37 am

    very very good documents excellent …I was impressed

  • November 21, 2018 at 11:35 am

    Really good Article .
    explain everything in the best practice way!
    Well done

  • December 5, 2018 at 6:28 pm

    great article. Thanks for sharing
  • January 17, 2019 at 9:20 pm

    Great share, for esxi 5.5-6.5 direct upgrade, do I need to shut or move out the VMs in the host prior to in place upgrade?

    • January 18, 2019 at 12:36 am

      Dear
      While upgrading host from 5.5 to 6.5 we need to move VM’s from that host using vMotion and put in maintenance mode.
      Before doing ESXi upgrade , vCenter upgrade needs to be done.
      Then one after another ESXi hosts can be upgraded and VM’s can be moved across the 5.5 and 6.5 hosts using vMotion without any downtime.
      Hope i had answered your question.
      Thanks,
      Siva
  • January 21, 2019 at 10:19 pm

    You have explained it very well,
    I just have a question , I have come across a scenario where NIC type is e1000 (Networking), which obviously need to be changed to VMXNET3 (in case of upgrading from 5.5 to 6.0). So as this logical change and will have temporary disruptions to the IOPS . Is there any best practice to make it completely non-disruptive to the apps (may be over a weekend with minimal IOPS period) or does it require an outage ?

    • January 26, 2019 at 10:25 pm

      Dear
      There will be service disruption while changing from e1000 to VMXNET3. The time depends on how quick you can do it. As per my experience minimum 1-2 min per VM.
      1. Install VMware Tools, as VMXNET3 will work only if tools are installed.
      2. Add new VMXNET3 adapter to VM.
      3. Login to console, swap IP’s from old NIC to new VMXNET3 NIC and disable e1000 nic.
      4. Once the testing is done, remove the e1000 NIC.

      Note: some apps rely on MAC address, in that case note IP, MAC details, remove NIC and add VMXNET3 with static MAC.

      There are ways to play around with .vmx file, but it will complicate things rather than simplifying it.

      Hope it helps,
      Siva Sankar
  • January 25, 2019 at 6:58 pm

    Hi,
    Great article, thanks for sharing.
    1 topic needs a little more attention: Distributed switch migration with fresh install of new vCenter.
    When you freshly install a new vCenter you need to pay attention to the version of distributed switch you create. If you bring in an old 5.5 host you cannot connect it to the distributed switch when its version is set too high (let’s say DS version 6.5 under vCenter 6.5)
    2 possible solutions:
    – You create under the new vCenter distributed switches with a version compatible to the lowest version of host you want to use as jump host. After the migration you can upgrade the distributed switch version.
    Or:
    – Under the new vCenter you can configure 1 host as a ‘second’ jumphost: this host is running for example esxi 6.5 and 1 side of the nics are part of the distributed switch, the other are configured into standard switches. When you bring in the old host with standard switches you can vmotion the VM’s to the second jumphost over standard switches and after this you can migrate virtual machine networking from standard to distributed switch.
    When migration is finished you can reassign the nics on the second jumphost from the standard switch into the distributed switch and delete the standard switches.

  • January 31, 2019 at 8:25 am

    Good work, Please keep it up and share more info about VRA/VRO.
  • February 12, 2019 at 8:38 pm

    Wow, very nice to see the detailed information in one place. I would appreciate if you can document the process of Vcenter upgrade from 5.5 to 6.0 or 6.5 if old vcenter is running in Linked mode.

    Thanks again for such a nice article shared.

    Regards
    Mohammad Mustafa

  • February 19, 2019 at 1:08 pm

    very Helpful article.
  • February 25, 2019 at 5:08 am

    Siva,

    Great Job!! Thank you for including the commands to find NIC and HBA driver and firmware versions using the examples that you provided. I have B200 M3, M4 and M5 Blades in our environment, but in our case, we’re not going to a vDS right after adding an ESXi 5.5 U3 host into vCenter 6.5 U1 but will do the vDS after building existing hardware with VMware custom image for UCS 6.5 U1. After all hosts are rebuilt by putting each one in maintenance mode and then in-place fresh install of 6.5 U1 on all hosts in the same cluster, we also have Intel EVC on for our clusters, we will then add each host to the vDS for version 6.5 and will leave all the ESXi 5.5 hosts with vSS on Switch 0 with both uplink NIC’s and port groups defined. Else we would have had to create a vDS for 5.5 for our ESXi 5.5 hosts; since we’re rebuilding our M3’s, M4’s and M5’s with ESXi 6.5 U1, I will then join to vDS 6.5 U1 and after all this is done, onto 6.7 U1 upgrade and I will do a green field deployment of vCSA 6.7 U1 and make sure our UCS M3, M4 and M5’s are good with the VMware HCL for ESXi 6.7 U1. Again, you did a great job but a vSS can be used when migrating to vCenter 6.5 U1 without having to go to a vDS immediately. I really appreciate your time and effort on this awesome article!
  • February 26, 2019 at 2:08 am

    Hi Siva Sankar,

    For the “How to install/Update the Driver on ESXi” section, is this step performed after upgrading the VMware environment (vCenter & ESXi) or before? Or is it possible both ways?

    Regards;

    Jovian

    • March 4, 2019 at 10:17 am

      Dear
      Drivers need to be upgraded after upgrading the ESXi. We need to check the new ESXi version compatibility with the hardware and install necessary drivers using the command below. Download the drivers and upload them to a datastore.
      esxcli software vib install -d “/vmfs/volumes/Datastore/DirectoryName/PatchName.zip”

      Thanks,
      Siva
  • April 2, 2019 at 2:43 pm

    Hi Siva Sankar,
    very useful document. Thanks Mr. Siva Sankar.

    Can you please assist with the migration process from a 5-node AMD VMware cluster 5.5 with vCenter 5.5 to a 3-node Intel VMware cluster 6.0 with vCenter 6.5?

    We have a 5-node VMware cluster, all servers are AMD, and we want to replace 3 nodes from HPE BL465c G7 (AMD processor) to DL380 Gen10 (Intel processor).
    We bought 3 DL380 Gen10 (Intel CPU) servers including VMware vSphere & vCenter Standard. I want to remove the 3 BL465 G7 from the cluster and add the DL380 Gen10. Since Intel and AMD are two different CPU families, two kinds of clusters will be created: one Intel cluster for the newly bought 3 x DL380 Gen10 servers and a second AMD cluster for the 2 x DL385p Gen8 servers.

    The question is how the VMs will be migrated from the AMD-based cluster to the Intel-based cluster. 62 VMs are running across different datastores.

    Thanks,
    Salman

    • April 4, 2019 at 1:12 pm

      Dear Salman,

      1. create new cluster for intel based servers and map the datastores where vm are residing to these new hosts.
      2. shutdown VM – edit settings – expand CPU – CPUID – advanced – reset all.
      3. vmotion the vm to new intel host, power on. (if any cpu related error appears do the CPUID reset all as above on intel server and power on)

      Note: live migration not possible.

      Thanks
      siva
  • April 24, 2019 at 5:58 am
    Permalink

    Thanks a lot, much appreciated

    Reply
  • May 9, 2019 at 10:43 pm
    Permalink

    We’re moving from old Vcenter 6.0 with esxi host 5.0 to a new VXrail 6.7+. So how would you recommend with minimum downtime (would like to vmotion to new Vcenter/esxi host if possible). Old systems stored on fiber channel VNX SAN, the new system doesn’t have fiber channel. I didn’t see that as an option.

    Reply
    • May 10, 2019 at 12:59 pm
      Permalink

      Dear

      As your old environment runs ESXi 5.0, it is better to create a jump ESXi host on 6.0, move the VMs from the old hosts to it, and onto a standard switch.
      After this, you have two options to move from the jump host to the new VxRail:
      1. Provide FC connectivity to one node in the VxRail and do a combined compute and storage vMotion (this is a rare case and requires some manipulation of the hardware).
      2. Use compute and storage vMotion together, which is supported for normal VMs without RDM disks, from the jump server to the VxRail nodes.

      The only drawback is moving the jump server back and forth between the old and new vCenter, and the extra vMotions required, but option 2 is safe.

      Thanks
      siva

  • May 16, 2019 at 5:48 am

    Thank you so much for writing! We have 5 ESXi 5.5 hosts on a Windows vCenter Server 5.5 with only standard switches and only 26 VMs. All hosts are on common Fibre Channel storage, and all VMs can be vMotioned. We need to upgrade to vSphere 6.5 and move to a different network. Should we deploy a new VCSA 6.5 on an ESXi 5.5 host before upgrading any of the ESXi hosts? If yes, should we remove that ESXi 5.5 host from vCenter 5.5 before installing the new VCSA 6.5 on it? Also, do you think it is easier to change the vSphere 5.5 network to the other subnet prior to deploying the VCSA and upgrading the ESXi hosts, or to deploy/upgrade first before changing IPs/subnets?

    • May 20, 2019 at 7:00 am

      Dear

      I assume you are changing only the ESXi and vCenter network, not the VMs’. To avoid downtime, you can create two jump environments, one on 6.0 and another on 6.5, as a direct move from 5.5 to 6.5 is not possible.

      The second option would be as below:

      1. Take one host out, upgrade its ESXi to 6.5, and change its network.
      2. Deploy vCenter 6.5 on this new host and add the host to that vCenter.
      3. Shut down the VM in the 5.5 vCenter and unregister it (optional).
      4. Browse to the VM in the datastore from the 6.5 host, register the VM, power it on, and upgrade VMware Tools.

      Do this for all VMs, and move the hosts over one by one to provide compute capacity.
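Step 4 can also be done from the 6.5 host's shell instead of the datastore browser. This is a sketch only; the datastore name, VM name, and Vmid below are hypothetical examples.

```shell
# Register a VM directly from its .vmx file on the shared datastore
# ("datastore1" and "web01" are example names).
VMX="/vmfs/volumes/datastore1/web01/web01.vmx"
vim-cmd solo/registervm "$VMX"   # prints the new Vmid on success

# Power it on using the Vmid returned above (10 is a placeholder),
# then upgrade VMware Tools from the vSphere client.
vim-cmd vmsvc/power.on 10
```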

      Thanks
      siva

  • May 16, 2019 at 8:05 am

    You have written about all versions.
    If we talk about the vSphere 5.5 to 6.0 upgrade process, what are the pre-checks, post-checks, and steps?

    • May 20, 2019 at 6:42 am

      Dear

      From 5.5 to 6.0 you can upgrade directly: first vCenter, and then the ESXi hosts.
      Please check the compatibility guide, which is detailed in the blog.
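If you prefer the command line over Update Manager, a host can be upgraded from the ESXi shell with an offline bundle. This is a sketch only: the depot path and image-profile name below are illustrative, so check what your actual bundle contains first.

```shell
# Put the host in maintenance mode before upgrading.
esxcli system maintenanceMode set --enable true

# List the image profiles the offline bundle offers (path is an example).
esxcli software sources profile list \
    -d /vmfs/volumes/datastore1/update-from-esxi6.0-6.0_update03.zip

# Upgrade to the chosen profile (name is an example taken from the listing).
# "update" preserves third-party VIBs, unlike "install", which replaces everything.
esxcli software profile update \
    -d /vmfs/volumes/datastore1/update-from-esxi6.0-6.0_update03.zip \
    -p ESXi-6.0.0-20170304101-standard

reboot
```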

      Thanks
      siva

  • May 17, 2019 at 5:18 am

    Thank you for a great comprehensive article.

    I’ve inherited a 5.5 VM farm and I’m in the process of identifying the steps needed to upgrade to version 6.x.
    In documenting the environment, I’ve discovered that many of the existing old Dell PowerEdge ESXi 5.5 hosts have two or more NIC cards with no driver available when searching at https://www.vmware.com/resources/compatibility/search.php?deviceCategory=io&details=1&VID=14e4&DID=163a&SVID=1028&SSID=045f&page=1&display_interval=10&sortColumn=Partner&sortOrder=Asc
    Here’s a typical example:
    vmkchdev -L | grep vmnic | sort -k 5
    0000:02:00.0 14e4:163a 1028:045f vmkernel vmnic0 XXX
    0000:02:00.1 14e4:163a 1028:045f vmkernel vmnic1 XXX
    0000:04:00.0 14e4:164f 14e4:1123 vmkernel vmnic2
    0000:04:00.1 14e4:164f 14e4:1123 vmkernel vmnic3
    vmnic0 & 1 have no driver; vmnic2 & 3 do have a driver available.

    Should I interpret this to mean that we need to upgrade our hardware in order to move up from 5.5?

    many thanks,
    oliver

    • May 20, 2019 at 7:04 am

      Dear

      If your host is on the supported list but the vmnic driver is not available online, try downloading an OEM image from Dell and installing it on the ESXi host; 99% of the time it will pick up the right drivers. As long as the NIC card is also on the compatibility list, there are no issues.
      A hardware upgrade is not required. Dell should have the vmnic drivers and will give them to you if you raise a support case with them; this is what I do most of the time.
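Before and after applying the Dell OEM image, you can confirm from the ESXi shell which driver (if any) each NIC is bound to. A quick sketch; the `bnx2` module name is only an assumption for this Broadcom 14e4:163a device, so verify the expected driver against the HCL entry for your card.

```shell
# Show all NICs along with the driver module each one is bound to.
esxcli network nic list

# Driver name, driver version, and firmware version for a single NIC.
esxcli network nic get -n vmnic0

# Check whether the expected module is actually loaded
# (bnx2 is an assumed driver for this Broadcom device, not confirmed).
vmkload_mod -l | grep -i bnx2
```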

      Thanks
      siva


  • June 14, 2019 at 6:21 pm

    Excellent article! If I upgrade ESXi 5.1 to 6.0 on a host with forgotten root password, will the upgrade process give me an opportunity to set a new root password?

    • September 7, 2019 at 8:24 pm

      Hi,

      You will be able to reset the root password; please follow the steps below (this works as long as you can still access the host from vCenter):

      1. Create an “ESX Admins” security group in AD and add your user to this group.
      2. Log in to vCenter (Web Client) and join the host to AD.
      3. Open https://FQDN-or-IP-of-host in a web browser (Host Client).
      4. Log in with the AD credentials and reset the root password.

      Hope this helps.

      • September 18, 2019 at 1:41 pm

        Thanks for the help

  • July 17, 2019 at 12:10 am

    Dear Siva

    Thank you very much; I really value your expertise and input on this document.

  • July 26, 2019 at 6:30 am

    Siva, a great article on VMware.
    My question is: I have 11 sites with all different flavors of VMware, ranging from 5.1 to 6.0.
    My goal is to upgrade them all to the latest 6.7 U3 version (of course I will have to check the hardware compatibility).
    All sites are standalone in their own locations; 3 out of 11 have their own vCenter at the site.
    My goal is to have one vCenter for all, and then a second vCenter for redundancy.
    Which path would you suggest?
    There are no distributed switches.

    • July 27, 2019 at 4:04 pm

      Dear

      It’s a good idea to manage all the locations from one vCenter, provided you don’t have much latency between the vCenter and the ESXi hosts and have decent bandwidth across sites.

      An ESXi host can be managed by only one vCenter Server, so a primary/redundant vCenter scenario is not possible. Back up your vCenter Server and recover it in the event of a failure.

      Please refer to this blog even though this is very old, it helps. https://communities.vmware.com/docs/DOC-11492

      Thanks,
      Siva

      • July 31, 2019 at 4:54 am

        Siva, thanks for the info. With vCenter I can have one vCenter per region and then configure Linked Mode to see them all within one console.

  • September 4, 2019 at 10:45 am

    It is a very good document to start with. Every minute detail is included. Thanks for sharing.

  • November 1, 2019 at 8:11 pm

    Thanks Siva, great document. Very detailed information in a single place.

  • November 5, 2019 at 2:07 pm

    Hello Shiva,
    I just read many of your articles, and they make the concepts pretty simple to understand.
    I have one point of confusion regarding storage IOPS:
    Suppose we have multiple virtual machines running in a cluster, and the cluster’s resources are well provisioned.
    If we have a storage issue, how will it be identified?
    What values do I need to check, and where, to identify the storage issue?

