VMware vSAN 7 Design, Implementation and Configuration
VMware vSAN is a leading HCI platform that can be implemented with a couple of clicks and is driven entirely by the policies we define. vSphere supports various storage options and functionalities in traditional and software-defined storage environments. A high-level overview of vSphere storage elements and aspects helps you plan a proper storage strategy for your virtual data center. In addition to abstracting underlying storage capacity from VMs, as traditional storage models do, software-defined storage also abstracts storage capabilities.
Contents of the Post
New Features in vSAN 7 Update 1
vSAN Data Persistence platform: The vSAN Data Persistence platform provides a framework for providers of modern stateful services, such as object storage and NoSQL databases, to build deep integration with the underlying virtual infrastructure. It leverages the Kubernetes operator method and the vSphere Pod Service, allowing you to run modern stateful applications with lower TCO and simplified operations.
HCI Mesh: Enables a unique, software-based approach to disaggregating compute and storage resources. This native, cross-cluster architecture brings together multiple independent vSAN clusters to disaggregate resources and make use of stranded capacity.
Capacity Optimizations: Reduce the reserved capacity (or “slack space”) required for cluster operations by up to 50%. Clusters with only eight nodes can unlock up to 7% of total capacity, while the largest clusters (48 nodes or greater) see the greatest improvement.
Shared Witness for Two-Node vSAN Deployments: Enables multiple two-node vSAN deployments to share a common witness instance, with up to 64 clusters per shared witness host.
vSphere Lifecycle Manager (vLCM): With support for NSX-T updates, you can now update vSphere, vSAN and NSX-T with a single tool. vLCM will monitor for desired image compliance continuously and enable simple remediation in the event of any compliance drift.
File Services: Avoid the expense and complexity of purpose-built filers and adopt an enterprise-ready solution using the most common NFS and SMB protocols, with SMB v3 and v2.1 now added to native file services.
Note: These features are summarized from the VMware website; please refer to the website for more details.
VMWare vSAN Design
Please refer to the official VMware vSAN 7 Design Guide.
Pre-requisites
This section lists the prerequisites for deploying VMware vSAN.
- Physical servers, each with one SSD for the cache tier and two or more capacity disks, sized to your storage requirement. (For vSAN in a lab, later in this post I show how to mark a disk as flash.)
- A separate, dedicated VLAN for vSAN traffic. Although vSAN can work over L2, multi-site or stretched-cluster deployments need a gateway so that vSAN traffic can be routed between clusters.
- A separate port group for vSAN traffic.
- A VMkernel adapter on each ESXi host with the vSAN service enabled (covered later in this post).
- vCenter Server deployed, ESXi hosts added, and the necessary vSAN and vSphere licenses applied in vCenter.
Additional pre-reqs for Nested ESXi and Labs
The following two steps apply only to nested ESXi and lab environments, not to production.
Step 1: Add advanced Options
- Log in to the parent ESXi host, shut down the nested ESXi VM, then go to Edit Settings – Options – Advanced Configuration, click Add, and enter the name and value below.
- Name: scsi0:1.virtualSSD, Value: 1 (1 means enabled)
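If you prefer the command line, the same setting can be added from the parent ESXi shell by appending the parameter to the VM's .vmx file while the VM is powered off. This is a hedged sketch: the datastore path and VM name below are lab examples, not values from this post, so substitute your own.

```shell
# Assumption: the nested ESXi VM lives at this path - adjust to your lab.
VMX=/vmfs/volumes/datastore1/NestedESXi01/NestedESXi01.vmx

# Mark the virtual disk on scsi0:1 as an SSD (1 = enabled).
echo 'scsi0:1.virtualSSD = "1"' >> "$VMX"

# Reload the VM so the setting takes effect (look up the VM ID first).
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/reload <vmid>
```

Replace `<vmid>` with the ID shown by `getallvms` for your nested ESXi VM.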
Step 2: Marking an HDD as flash for the cache tier (for labs)
- Browse to the host in the vSphere Web Client object navigator.
- Click the Configure tab.
- Under Storage, click Storage Devices.
- From the list of storage devices, select one or more HDD devices to mark as flash devices and click the Mark as Flash Disks icon.
- Click Yes to save your changes.
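The same tagging can be done from the ESXi shell with a SATP claim rule. This is a sketch under assumptions: the device identifier below is an example, so list your devices first and substitute the real name.

```shell
# List devices to find the HDD you want to tag (the name below is an example).
esxcli storage core device list

# Add a SATP claim rule that tags the device as SSD, then reclaim the device.
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL \
    --device=mpx.vmhba0:C0:T1:L0 --option="enable_ssd"
esxcli storage core claiming reclaim --device=mpx.vmhba0:C0:T1:L0

# Verify: the device should now report "Is SSD: true".
esxcli storage core device list --device=mpx.vmhba0:C0:T1:L0
```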
VMWare vSAN Cluster configuration
Verify that vSAN is enabled on the VMkernel adapter. In my case I am using the management network for vSAN as well; for production it is recommended to have a separate VLAN and VMkernel adapter for vSAN.
Enable the vSAN service and verify the MTU. If your top-of-rack switches and hardware NICs support jumbo frames, use 9000. Then click IP Settings.
Provide the vSAN VMkernel IP details and click OK.
Verify that you can see the SSD (flash) for the cache tier and the HDDs for the capacity tier.
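The VMkernel steps above can also be done per host from the ESXi shell. This is a hedged sketch: the switch name, port group name, and IP addresses are lab assumptions, so substitute your own values.

```shell
# Assumptions: standard switch vSwitch1, port group "vSAN", and the
# 192.168.50.0/24 range are examples - use your own names and addresses.

# Raise the MTU for jumbo frames if the physical network supports it.
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Create the vSAN VMkernel adapter and assign a static IP.
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vSAN
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static \
    --ipv4=192.168.50.11 --netmask=255.255.255.0

# Tag the interface for vSAN traffic.
esxcli vsan network ip add --interface-name=vmk1

# Validate jumbo frames end to end (8972 = 9000 minus IP/ICMP headers).
vmkping -I vmk1 -d -s 8972 192.168.50.12
```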
Keep the servers in maintenance mode, then select the cluster and go to Configure – vSAN – Click Configure.
Select Single site or two host or stretched cluster as per your requirement.
Select deduplication, compression, and encryption if required.
Select the disks and claim them for the cache tier.
Select the remaining disks and claim them for the capacity tier.
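Disk claiming can also be scripted per host with esxcli. A minimal sketch, assuming one cache SSD and two capacity HDDs; the device names are examples, so list your own devices first.

```shell
# List devices already claimed by vSAN and all eligible disks.
esxcli vsan storage list
esxcli storage core device list

# Create a disk group: one cache SSD (-s) plus one or more capacity disks (-d).
# Device names below are examples - substitute your own.
esxcli vsan storage add -s mpx.vmhba0:C0:T1:L0 \
    -d mpx.vmhba0:C0:T2:L0 -d mpx.vmhba0:C0:T3:L0
```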
The claimed disks are displayed as shown below.
Review the settings and click Next; this configuration applies to all three ESXi hosts.
Review and click Finish.
Once vSAN is enabled, remove the servers from maintenance mode.
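Exiting maintenance mode and checking cluster membership can also be done from each host's shell, a small sketch of the equivalent commands:

```shell
# Exit maintenance mode on each host once vSAN is enabled.
esxcli system maintenanceMode set --enable false

# Confirm the host's vSAN cluster membership (UUID, master/backup role, member count).
esxcli vsan cluster get
```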
Click Datastores and verify that the vSAN datastore has been created with the desired capacity.
Now try to deploy a VM; you will see the vSAN datastore and the default storage policy. We will create a separate blog/video explaining vSAN policies.
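The datastore check can be confirmed from any host's shell as well; a sketch of the verification commands:

```shell
# The vsanDatastore should appear in the filesystem list with its aggregate capacity.
esxcli storage filesystem list

# Per-host view of the claimed cache and capacity disks.
esxcli vsan storage list

# List the vSAN health checks known to this host.
esxcli vsan health cluster list
```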
Test VM is up and running on the vSAN cluster.
Hope this post is helpful. It gives only a basic idea of vSAN; the real power of vSAN lies in its storage policies.
Thanks for going through this post.
How do I unmount the datastore once testing is done?
If the primary cluster has only a vSAN datastore and a remote cluster has its vCLS VMs placed on the remotely mounted datastore, the unmount is blocked even though no workload VMs are present 🙂 Using vCLS retreat mode makes the unmount possible, but that is risky to do in production, so what else can be done?
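For reference, the vCLS retreat mode mentioned above is toggled through a vCenter advanced setting (vCenter – Configure – Advanced Settings); the cluster domain ID is found in the vSphere Client URL when the cluster is selected. A sketch of the setting, with `<number>` standing in for your cluster's domain ID:

```
config.vcls.clusters.domain-c<number>.enabled = false
```

Setting it back to `true` redeploys the vCLS VMs, which is why retreat mode is meant as a temporary measure rather than a production practice.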