vSAN iSCSI Target a.k.a. VIT Deep Dive (Part 1)

Use the iSCSI target service to enable hosts and physical workloads that reside outside the vSAN cluster to access the vSAN datastore.

This feature enables an iSCSI initiator on a remote host to transport block-level data to an iSCSI target on a storage device in the vSAN cluster.

This blog post is divided into several sections, and I will discuss every aspect of VIT. I have already posted a blog on basic VIT configuration, but I still want to start the flow from scratch.

Enable VIT:

To use the vSAN iSCSI target feature, we first have to enable the VIT service in the Web Client. Select vSAN Cluster ⇒ Configure ⇒ General, where you can see the vSAN iSCSI Target Service.

Click Edit; a new wizard pops up. Select the checkbox to enable VIT. There are options to configure, such as the iSCSI network, TCP port, and authentication. I have selected vmk0 (the management port) for iSCSI traffic, kept the default iSCSI port 3260, and set authentication to None.

I have selected a customized storage policy for the home object.

Click OK. vSAN will update the configuration and enable the VIT service.

In the above screenshot, you can see that the VIT service status is Enabled and the home object has been created. We will discuss the home object later in this post.
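
If you prefer the command line, the same state can be checked with esxcli from any host in the cluster. A quick sketch; the exact sub-commands under the esxcli vsan iscsi namespace can vary between releases, so confirm them with esxcli vsan iscsi --help on your build:

# Check whether the VIT service is enabled on this host
esxcli vsan iscsi status get

# Inspect the home object that backs the VIT configuration
esxcli vsan iscsi homeobject get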

The next step is to create an iSCSI target.

Create iSCSI Target:

Once the VIT service is enabled, you can go ahead and create an iSCSI target. To create an iSCSI target, select vSAN Cluster ⇒ Configure ⇒ iSCSI Target and click +.

I have entered all the details and clicked OK. Note that I have not added a LUN yet; you can add a LUN while creating the target, or uncheck the box and add it later.

You can see that the I/O owner host for this target is blr2.vhabit.com and that the storage policy (Thick Policy-VIT) has been applied successfully.
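
The same target can also be created and verified from the ESXi shell. A minimal sketch, assuming the target sub-commands of the esxcli vsan iscsi namespace; treat the flag spellings as assumptions and verify them with --help:

# Create a target; vSAN auto-generates the IQN when one is not supplied
esxcli vsan iscsi target add --alias iscsitarget

# List targets; the output should include the IQN and the I/O owner
esxcli vsan iscsi target list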

The next step is to create a LUN.

Create a LUN:

To create a LUN, select vSAN Cluster ⇒ Configure ⇒ iSCSI Target ⇒ LUN and click +.

The LUN ID is generated automatically based on the existing LUNs. I have entered the alias, policy, and size.

Now vSAN has created iscsilun for iscsitarget. Both the target and the LUN are compliant with the storage policy.
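
For completeness, the LUN step has an esxcli equivalent too. Again a sketch; the flag names are assumptions, so check them with --help before relying on this:

# Add a 2 GB LUN to the target (the LUN ID is assigned automatically)
esxcli vsan iscsi target lun add --target-alias iscsitarget --lun-alias iscsilun --size 2G

# List the LUNs behind the target
esxcli vsan iscsi target lun list --target-alias iscsitarget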

The next task is to create an initiator group.

Create Initiator Group:

To create an initiator group, select vSAN Cluster ⇒ Configure ⇒ iSCSI Initiator Group and click +.

By default, all initiators are allowed to connect to the target. We create an initiator group to restrict access to the VIT target.

To create an initiator group, we need the iSCSI initiator's IQN. I am using a Windows machine as the iSCSI initiator, so copy the IQN from the initiator and use it to create the initiator group.

In the above screenshot, I have entered the copied details and clicked OK. VIT validates the initiator and adds it to the group.

The next step is to add the target IQN to the initiator group.

Now we can see in the Web Client that the initiator and target details have been added. It's time for target portal discovery from the client machine.
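
The initiator group workflow is scriptable in the same way. A sketch using the initiatorgroup sub-commands; the flag names here are assumptions, so confirm them with esxcli vsan iscsi initiatorgroup --help:

# Create the group and add the Windows initiator's IQN to it
esxcli vsan iscsi initiatorgroup add --group iscsiinitiator
esxcli vsan iscsi initiatorgroup initiator add --group iscsiinitiator \
    --initiator-iqn iqn.1991-05.com.microsoft:jumpserver.vhabit.com

# List the groups and their member initiators
esxcli vsan iscsi initiatorgroup list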

Target portal discovery:

Go to the Windows machine and open the iSCSI Initiator wizard. Click the Targets tab and add any ESXi host's IP address. I have added the details and clicked Quick Connect. It discovers the IQN of the iSCSI target and connects to it.

Now the iSCSI target is connected, and in Disk Management the 2 GB LUN is available to create a volume on this machine.
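
If you would rather script the Windows side than click through the wizard, the built-in iSCSI and Storage PowerShell modules cover the same discovery, login, and volume creation. A sketch to run from an elevated PowerShell session; the portal IP is from my lab, so substitute your own:

# Point the initiator at any ESXi host in the cluster (the target portal)
New-IscsiTargetPortal -TargetPortalAddress 192.168.2.101

# Log in to the discovered vSAN iSCSI target and persist it across reboots
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

# Bring the new raw disk online as a formatted volume
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'iscsilun'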

That covers the configuration of the iSCSI target in the vSphere Web Client and on the Windows machine. Now let's discuss the internals of VIT.

Let’s discuss object placement:

Image credits: https://storagehub.vmware.com

This picture explains the object placement inside vSAN. As we can see, iSCSI-Config (the home object) is created under /vmfs/volumes/vsanDatastore because all hosts need access to this configuration. All the vSAN hosts act as target portals, but for a given target only one host is active and the others are passive. Remember the I/O owner we saw earlier; that is the active one.

Under iSCSI-Config two folders are created:

1.) etc

2.) targets

etc contains the vit.conf file, which is the most important configuration file of VIT. This file is created when VIT is enabled and is updated by the Web Client or esxcli commands during any reconfiguration.

targets holds the other namespace objects, which do not depend on the home namespace object (iSCSI-Config). It contains one directory per target UUID, holding the LUN-backing .vmdk files.

Here is the output from my lab:

[root@blr1:~] cd /vmfs/volumes/vsanDatastore/
[root@blr1:/vmfs/volumes/vsan:52feeb4a80706ab5-8e4fe8da7230373d] cd .iSCSI-CONFIG/
[root@blr1:/vmfs/volumes/vsan:52feeb4a80706ab5-8e4fe8da7230373d/2fc37e5b-9d0e-04a6-b8c7-0050560151cf] ls -ltr
total 16
drwx------ 1 root root 420 Aug 26 13:12 targets
drwx------ 1 root root 420 Aug 26 13:16 etc

[root@blr1:/vmfs/volumes/vsan:52feeb4a80706ab5-8e4fe8da7230373d/2fc37e5b-9d0e-04a6-b8c7-0050560151cf/etc] ls -ltr
total 0
-rw-rw---- 1 root root 857 Aug 26 13:16 vit.conf
[root@blr1:/vmfs/volumes/vsan:52feeb4a80706ab5-8e4fe8da7230373d/2fc37e5b-9d0e-04a6-b8c7-0050560151cf/etc] cat vit.conf
generation 21
initiator-group iscsiinitiator {
    initiator iqn.1991-05.com.microsoft:jumpserver.vhabit.com
}
auth-group default {
    auth-type none
}
auth-group 2ea7825b-8ff5-bef7-f158-005056015198 {
    auth-type none
    initiator-group iscsiinitiator
}
portal-group default {
    discovery-auth-group no-authentication
    listen vmk0:3260
}
portal-group pg-vmk0-3260 {
    discovery-auth-group no-authentication
    listen vmk0:3260
}
target iqn.1998-01.com.vmware:5a258a8b-fc34-a175-6d85-765c44a1e2f2 {
    alias "iscsitarget"
    portal-group pg-vmk0-3260
    auth-group 2ea7825b-8ff5-bef7-f158-005056015198
    option uuid 2ea7825b-8ff5-bef7-f158-005056015198
    option owner-id 2ea7825b-8ff5-bef7-f158-005056015198
    lun 0 {
        backend vmdk
        path 2ea7825b-8ff5-bef7-f158-005056015198/a3a7825b-2ce4-9c2d-bf5f-0050560151aa.vmdk
        size 4194304
        option lun-alias "iscsilun"
    }
}
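
One field worth decoding in the lun block above is size 4194304. My reading (an assumption, but the numbers line up exactly with the 2 GB LUN we created) is that it is a count of 512-byte sectors:

4194304 sectors × 512 bytes/sector = 2,147,483,648 bytes = 2 GiB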

[root@blr1:/vmfs/volumes/vsan:52feeb4a80706ab5-8e4fe8da7230373d/2fc37e5b-9d0e-04a6-b8c7-0050560151cf] cd targets/
[root@blr1:/vmfs/volumes/vsan:52feeb4a80706ab5-8e4fe8da7230373d/2fc37e5b-9d0e-04a6-b8c7-0050560151cf/targets] ls
[root@blr1:/vmfs/volumes/vsan:52feeb4a80706ab5-8e4fe8da7230373d/2fc37e5b-9d0e-04a6-b8c7-0050560151cf/targets] cd 2ea7825b-8ff5-bef7-f158-005056015198/
[root@blr1:/vmfs/volumes/vsan:52feeb4a80706ab5-8e4fe8da7230373d/2ea7825b-8ff5-bef7-f158-005056015198] ls -ltr
total 0
-rw-------    1 root     root           497 Aug 26 13:14 a3a7825b-2ce4-9c2d-bf5f-0050560151aa.vmdk

VIT high availability:

Image credits: https://storagehub.vmware.com

High availability in VIT matters most when the I/O owner of a target fails. In that case, the initiator retries the connection against one of the remaining target portals, which redirects the request to the newly elected target owner.

Let’s simulate a failure:

In my lab screenshots, you can see that the target portal added to the initiator is blr1.vhabit.com and the current I/O owner is blr2.vhabit.com.

As we discussed, the target portals (ESXi hosts) act as one active owner and several standbys. In my scenario, the active owner is blr2 and blr1 is passive. I reset blr2 and then watched the initiator's behavior. When its active session broke, the initiator contacted target portal blr1 (192.168.2.101) again for a connection, and blr1 redirected the request to the new target owner, blr3.vhabit.com (192.168.2.103), using an iSCSI redirect, with minimal disruption. You can refer to the logs and screenshot below.

2018-08-26T13:30:37Z vitd[3057382]: VITD: Thread-0xac677f6700 192.168.2.101 (iqn.1991-05.com.microsoft:jumpserver.vhabit.com): VitdGetTargetAddr: target owner for target iqn.1998-01.com.vmware:5a258a8b-fc34-a175-6d85-765c44a1e2f2 is 5b6d9702-0ffc-a594-302d-00505601519e
2018-08-26T13:30:37Z vitd[3057382]: VITD: Thread-0xac677f6700 192.168.2.101 (iqn.1991-05.com.microsoft:jumpserver.vhabit.com): Got redirect IP address and port number: 192.168.2.103:3260,
2018-08-26T13:30:37Z vitd[3057382]: VITD: Thread-0xac677f6700 192.168.2.101 (iqn.1991-05.com.microsoft:jumpserver.vhabit.com): The connection is redirected. Drop the connection!
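
From the Windows side, you can confirm that the initiator quietly re-established its session after the redirect. A quick check with the built-in iSCSI PowerShell module:

# Is the session still connected, and to which target?
Get-IscsiSession | Select-Object TargetNodeAddress, IsConnected

# The underlying TCP connection; after the failover this shows the portal
# that actually serviced the redirected login
Get-IscsiConnection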

 

Next, I will cover MPIO with HA scenarios in my upcoming post.

Thank you for reading!! Please share with your friends if you like the post.


Comments

  1. Excellent article! But what about a scenario with two Oracle RAC physical nodes whose database sits on vSAN, accessed via the iSCSI target? Can I assume the same behavior, or is some additional configuration necessary?

    • Sure, I will write an article on that as well. The behavior is the same; only the backend VMDK is thick provisioned.

      Thanks for viewing the articles.
