Securing COMSTAR and VMware iSCSI connections

Connecting VMware iSCSI sessions to COMSTAR, or to any iSCSI target provider, securely is required to maintain a reliable system. Without some level of initiator-to-target gatekeeping we will eventually encounter a security event. This can happen in a variety of ways; for example, a non-cluster-aware OS can connect to an unsecured VMware shared storage LUN and severely damage it, since that OS has no knowledge of shared LUN access. All too often we assume that security is about confidentiality, when it is actually more commonly about data availability and integrity, both of which will be compromised if an unintended initiator writes to a shared LUN.

At the very minimum security level we should apply non-authenticated, named-initiator access grants to our targets. This low-security method defines initiator-to-target connection states and suits environments with a higher security tolerance, where confidentiality is less important and security is maintained within the physical access control realm. It should also coincide with SAN fabric isolation and be strictly managed by the virtual system or storage administrators. Beyond that, we can increase access control by enabling CHAP authentication, which is a serious improvement over named initiators alone. I will demonstrate both of these security methods using COMSTAR iSCSI providers and VMware within this blog entry.

Before we dive into the configuration details, let's examine how LUs are exposed. COMSTAR controls iSCSI target access using several combined elements. One of these elements is the COMSTAR STMF facility, where we can assign membership in host and target groups. By default, if we do not define a host or target group, any created target belongs to an implied ALL group. This group, as we would expect, grants any connecting initiator access to the LUNs assigned to the ALL group. These assignments are called views in the STMF state machine and are a mapping function of the storage block driver service (sbd) to the STMF IT_nexus state tables.
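As a minimal sketch of this behavior (the zvol path and GUID below are hypothetical placeholders, not values from a real system), creating an LU and adding a view with no host or target group option places it in the implied ALL group:

```shell
# Create a backing LU from a ZFS volume (path is illustrative)
stmfadm create-lu /dev/zvol/rdsk/rp1/vmfs-vol0
# Suppose the command reports GUID 600144F0...; adding a view with no
# -h (host group) or -t (target group) option exposes it to the ALL group
stmfadm add-view 600144F01EB3862C00004944B55D0001
# Confirm the view entry and its group bindings
stmfadm list-view -l 600144F01EB3862C00004944B55D0001
```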

This means that if we were to create an initiator without assigning it to a host group, or a host/target group combination, that initiator would be allowed unrestricted connectivity to any LUN views in the ALL group, possibly without any authentication at all. Allowing this to occur would of course be very undesirable from a security perspective in almost all cases. Conversely, if we use a target group definition, then only the initiators that connect to the respective target will see the LUN views mapped on that target definition instance.

While target groups do not significantly improve access security, they do provide a means of controlling accessibility based on defined classes of interface connectivity. These classes can in turn be mapped onto VLAN priority groups, bandwidth availability, and path fault tolerance capabilities, all of which are important aspects of availability and, unfortunately, are seldom treated as security concepts in many architectures.

Generally, on most simple storage configurations the use of target groups is not a requirement. However, they do provide a level of access control over LUN views. For example, we can assign LUN views to a target group, which frees us from having to add the LUN view to each host group in shared LUN configurations such as VMware stores. With combinations of host and target groups we can create more flexible methods of controlling shared LUN visibility. With the addition of simple CHAP authentication we can insulate target groups even more effectively, primarily because we can assign a separate CHAP user and secret to each target.

Let's look at this visual depiction to help see the effect of using target and host groups.

COMSTAR host and target view depiction

In this depiction, any initiator that connects to the target group prod-tg1 will by default see the views that are mapped to that target group's interfaces. Additionally, if the initiator is also a member of the host group prod-esx1, those view mappings will also be visible.

One major difference between target groups and the ALL group is that you can define LU views en masse for an entire class of initiator connections, e.g. a production class. This becomes an important control element in a unified media environment where the use of VLANs separates visibility. Virtual interfaces can be created on the storage server and attached to the respective VLANs. Target groups thus become a very desirable control within a unified computing context.
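For example, on an OpenSolaris storage host a VLAN-tagged virtual interface could back a production-class target portal group. This is only a sketch; the link name, VLAN ID, and address are assumptions for illustration:

```shell
# Create a VLAN 100 virtual NIC on physical link e1000g0 (names illustrative)
dladm create-vnic -l e1000g0 -v 100 prodnet0
# Plumb an address on the new interface
ifconfig prodnet0 plumb 10.0.100.10/24 up
# Bind a target portal group to that address and attach a target to it
itadm create-tpg prod-tpg 10.0.100.10
itadm create-target -t prod-tpg
```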

Named Initiator Access

Enabling named initiator-to-target access without authentication on COMSTAR and the VMware iSCSI services is a relatively simple operation. Let's examine how this method controls initiator access.

We will define two host groups, one for production esx hosts and one for test esx hosts.

# stmfadm create-hg prod-esx1

# stmfadm create-hg test-esx1

With these host groups defined, we individually assign LU views to the host groups, and then define each initiator as a member of one of the host groups. The initiator will then see only the views that belong to its host group, plus any views assigned to the default ALL group.
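As a sketch, assigning a view to each host group might look like the following (the GUIDs are hypothetical placeholders for LUs created earlier):

```shell
# Expose a production store to the prod-esx1 host group as LUN 0
stmfadm add-view -h prod-esx1 -n 0 600144F01EB3862C00004944AAAA0001
# Expose a test store to the test-esx1 host group as LUN 0
stmfadm add-view -h test-esx1 -n 0 600144F01EB3862C00004944BBBB0001
# List the view entries for an LU to verify the assignment
stmfadm list-view -l 600144F01EB3862C00004944AAAA0001
```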

To add a host initiator to a host group, we must first create it in the port provider of choice, which in this case is the iSCSI port provider.

# itadm create-initiator iqn.1998-01.com.vmware:vh1.1

Once created, the defined initiator can be added to a host group.

# stmfadm add-hg-member -g prod-esx1 iqn.1998-01.com.vmware:vh1.1

An ESX host initiator with this IQN can now attach to our COMSTAR targets and will see any LU views that are added to the prod-esx1 host group. But there are still some issues here; for example, any ESX host that presents this initiator name will be able to connect to our targets and see the LUs. This is where CHAP can help to improve access control.
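As a quick sanity check on the storage head, we can list the group memberships and defined initiators to confirm what was just configured:

```shell
# Show all host groups and their member initiator IQNs
stmfadm list-hg -v
# Show the initiators defined in the iSCSI port provider
itadm list-initiator
```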

Adding CHAP Authentication on the iSCSI Target

Adding CHAP authentication is very easy to accomplish; we simply need to set a CHAP user name and secret on the respective iSCSI target. Here is an example of its application.

# itadm modify-target -s -u tcuid1 iqn.2009-06.target.ss1.1

Enter CHAP secret:
Re-enter secret:

The CHAP secret must be between 12 and 255 characters long. The addition of CHAP allows us to further reduce the risk of a potential storage security event. We can define additional targets, each with a different CHAP user name and/or secret.
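The 12-to-255 character bounds are easy to get wrong when scripting secret rotation, so a small portable check can validate a candidate secret before calling itadm. The function name here is my own, not part of COMSTAR:

```shell
# Return success only if the secret length is within the CHAP limits (12-255)
valid_chap_secret() {
  len=${#1}
  [ "$len" -ge 12 ] && [ "$len" -le 255 ]
}

valid_chap_secret "tooshort" && echo accepted || echo rejected                 # rejected
valid_chap_secret "a-much-longer-chap-secret" && echo accepted || echo rejected # accepted
```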

CHAP is more secure when used with mutual authentication back to the source initiator, which is my preferred way to implement it on ESX 4 (ESX 3 does not support mutual CHAP). This mode does not replace the one-way authentication from initiator to target; rather, it allows the initiator to require that the target host's iSCSI service authenticate back to the initiator, which validates that the target is indeed the correct one. Here is an example of the target-side initiator definition that provides this capability.

# itadm modify-initiator -s -u icuid1 iqn.1998-01.com.vmware:vh1.1

Enter CHAP secret:
Re-enter secret:

Configuring the ESX 4 Software iSCSI Initiator

On the ESX 4 host side we need to enter our initiator side CHAP values.

ESX 4 iSCSI Mutual CHAP

Be careful here: there are three places where we can configure CHAP elements. The General tab is a global point of administration, and any target will inherit the values entered there by default where applicable, e.g. target CHAP settings. The Dynamic Discovery tab can override the global settings, and the Static Discovery tab in turn overrides both the global and dynamic ones. In this example we are configuring a dynamically discovered target to use mutual (aka bidirectional) authentication.

In closing, CHAP is a reasonable method to ensure that we correctly grant initiator-to-target connectivity assignments, in an effort to promote better integrity and availability. It does not, however, provide much in the way of confidentiality; for that we need more complex solutions like IPsec.

Hope you found this blog interesting.

Regards,

Mike


Site Contents: © 2009  Mike La Spina

10 Comments

  • Roman says:

    Hello Mike,

    Comstar plus iscsi authentication is an interesting topic.
    There is also tpg option for itadm. Do you know how it works?

    itadm create-tpg tpg-name IP-address[:port] [IP-address[:port]] [...]


    Roman

  • Hi Roman,

    Portal groups are not really designed to provide any form of initiator authentication. They act as a control function to designate which interfaces on the target side will be active participants. The iscsit state engine is aware of sessions across the group and handles SCSI command ordering and other attributes.

    For example, if we have 2 x 10GbE and 2 x 1GbE interfaces on the storage host, we could define a portal group named Prod-tpg for the 10GbE interfaces and Test-tpg for the 1GbE interfaces, and then bind iSCSI targets to each by way of a tpg membership.
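    That arrangement could be sketched as follows (the interface addresses are hypothetical):

    ```shell
    # Portal group spanning the two 10GbE interfaces
    itadm create-tpg Prod-tpg 10.0.10.1 10.0.10.2
    # Portal group spanning the two 1GbE interfaces
    itadm create-tpg Test-tpg 10.0.20.1 10.0.20.2
    # Bind a new target to each portal group
    itadm create-target -t Prod-tpg
    itadm create-target -t Test-tpg
    ```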

    Regards,

    Mike

  • Roman says:

    So, itadm can restrict access by target portal groups – thus by target ip interfaces?
    And stmfadm can restrict access to targets only by initiator names, grouping them into hg and tg?

    What would be the steps in stmfadm and itadm for the following:
    a. 2 interfaces on target server: 172.1.1.10 and 172.1.2.10
    b. 2 luns created as raw volumes
    c. 2 win should access the volumes using different interfaces on the target
    d. win servers should see only 1 “own” volume

    Thank you,
    Roman

  • Hi Roman,

    So, itadm can restrict access by target portal groups – thus by target ip interfaces?

    Not exactly. Target portal groups define which IP addresses will participate in an i_t nexus session; they control data flow. For example, if a series of write command descriptors for one LU occurs over the TCP transports on two IP addresses within a tpg, the iSCSI protocol will make sure they are reassembled correctly for a single i_t nexus. Access is fundamentally controlled by authentication, for example using CHAP or IPsec.

    You can of course direct traffic using tpg's, which seems to be what you're defining.

    Here is an example of the steps.

    1 Define two tpg’s
    itadm create-tpg tpg1 172.1.1.10
    itadm create-tpg tpg2 172.1.2.10

    2 Define targets and assign a tpg to each
    itadm create-target -n iqn.1986-03.com.sun:02:ss1.1 -t tpg1
    itadm create-target -n iqn.1986-03.com.sun:02:ss1.2 -t tpg2

    3 Define the remote initiators to the storage head
    itadm create-initiator iqn.1991-05.com.microsoft.win1
    itadm create-initiator iqn.1991-05.com.microsoft.win2

    4 Define target groups so you may assign common LUs if required. (Any LU views you map to a tg will be commonly visible to initiators connecting to that tg’s targets.)
    stmfadm create-tg target-group1
    stmfadm create-tg target-group2

    5 Add the targets to the target groups
    stmfadm add-tg-member -g target-group1 iqn.1986-03.com.sun:02:ss1.1
    stmfadm add-tg-member -g target-group2 iqn.1986-03.com.sun:02:ss1.2

    6 Create two host groups to control lu mappings
    stmfadm create-hg win1
    stmfadm create-hg win2

    7 Add the required initiators to the respective host group
    stmfadm add-hg-member -g win1 iqn.1991-05.com.microsoft.win1
    stmfadm add-hg-member -g win2 iqn.1991-05.com.microsoft.win2

    8 Create your LUs and LU views; map the exclusive ones to the host groups and any common ones to the target groups
    stmfadm create-lu /dev/zvol/rdsk/rp1/boot-win1
    stmfadm add-view -h win1 -n 0 600144F01EB3862C0000494B55CD0001
    stmfadm create-lu /dev/zvol/rdsk/rp1/boot-win2
    stmfadm add-view -h win2 -n 0 600144F01EB3862C0000494786CD0001
    stmfadm create-lu /dev/zvol/rdsk/rp1/shared-vol0
    stmfadm add-view -t target-group1 -n 1 600144F01EB3862C0000494444D0001
    stmfadm add-view -t target-group2 -n 1 600144F01EB3862C0000494444D0001

    Hope that helps.

    Regards,

    Mike

  • [...] Virtual Fibre Channel (vFC) Interfaces vPC (Virtual Port Channel) and the Nexus Platform Securing COMSTAR and VMware iSCSI Connections HA in Cisco UCS Menlo [...]

  • noon says:

    Hi everyone, this is a good article but I have a problem. I have an OpenSolaris host that shares NFS and iSCSI to two ESX servers, and the iSCSI performance is very low (nearly 30 MB/s compared to 70 MB/s for NFS). Do you have the same problem, or am I missing something? When I try iSCSI with a Windows initiator I get 100 MB/s. My OpenSolaris is snv122 with zfs upgraded to v18, and I enabled the write cache.

  • gavinramm says:

    Great post, very helpful. Now my NexentaStor box is complete. Cheers

  • Rickard Nobel says:

    Thanks for a good article about iSCSI. However this part: “The CHAP secret must be between 12 and 255 characters long. The secret is sent from the initiator to the target in clear text” – is not really correct. The CHAP secret is not sent in clear text; that is the main feature of CHAP. It might not be the strongest authentication around, but CHAP uses a challenge technique which makes every login different. Since it is not resilient to offline dictionary attacks if the traffic is sniffed, it is still very much recommended to use a separate iSCSI network.

  • Thanks for the note. Not sure what I was thinking that day, but it’s definitely not sent in the clear.

  • [...] Here I detail the procedure for installing, very quickly and therefore leaving security aside, a storage server running OmniOS (another OpenSolaris derivative). For security, have a look here [...]
