Multi Protocol Storage Provisioning with COMSTAR
In this example we are re-provisioning an existing storage system with an OpenSolaris COMSTAR configuration running on a commodity white box. The box functions as a storage server head that can compress, scrub, thin provision, replicate, snapshot and clone the existing block storage attachments. The example FC-based storage could just as well be a JBOD FC array directly attached to the OpenSolaris storage head, or any of many other commonly available SCSI attachment methods. The objective here is to extend and enhance an existing block storage system with high-performance transports and virtualization features. Of course we could also promote the white box to industrial-strength hardware once we are satisfied that the proof of concept is mature and optimal.
The reality is that many older FC storage systems were installed without these features, primarily because of their excessive licensing costs. And even where the features are available, their use is typically restricted to like proprietary systems, which walls the entire lot off from any useful future functionality. But what if you could re-purpose an older storage system to act as a DR store, a backup cache, or perhaps a test and development environment? In today's economy this is very attractive from a cost perspective, and it can be accomplished with very little risk on the investment side.
One possible application of this flexible storage service is re-provisioning the LUNs of an existing system onto newer, more flexible SCSI transport protocols. This is particularly useful when we need to re-target an existing storage system from FC to iSCSI or the like. Let us begin by exploring this functionality and explaining how COMSTAR can provide us with this service.
First we need to understand the high-level functionality of the COMSTAR service layers. Virtual LUNs in COMSTAR are provisioned by a service layer named the LU provider. This layer maps backing stores of various types to a storage GUID assignment and additionally defines other properties such as the LUN ID and size dimensions. It allows us to carve out the block storage devices that are accessible on our OpenSolaris storage host. For example, if we attach an FC initiator to an external storage system, we can map the accessible SCSI block devices into the LU provider layer and then present these virtualized LUNs to the other COMSTAR service layers for further processing.
Once we have defined the LUs we can present this storage resource to the SCSI Target Mode Framework (STMF) layer, which acts as the storage gatekeeper. At this layer we define which clients (initiators) can connect to the LUs, based on membership in target groups and host groups that are assigned logical views of the LUs. The STMF layer routes the defined LUs as SCSI targets over a multiprotocol interface connection pool to a port provider. Port providers are the protocol connection service instances, which can be the likes of FC, iSCSI, SAS, iSER, FCoE and so on.
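The layering can be read as a short command flow. The fragment below is a hedged sketch only: the pool/volume path, host group name and GUID are placeholders rather than values from this configuration, and CSTAR=echo dry-runs the commands so the ordering can be followed (and tested) on any machine; on a real OpenSolaris host you would run the underlying sbdadm/stmfadm commands directly.

```shell
# Dry-run sketch of the COMSTAR layering; all names below are placeholders.
# CSTAR=echo prints each command instead of executing it.
CSTAR="${CSTAR:-echo}"
guid=600144f0000000000000000000000001            # placeholder GUID from create-lu

$CSTAR sbdadm create-lu /dev/zvol/rdsk/pool/vol0  # LU provider: backing store -> GUID
$CSTAR stmfadm create-hg hosts1                   # STMF: gatekeeper host group
$CSTAR stmfadm add-view -h hosts1 -n 0 "$guid"    # STMF: present the LU as LUN 0
$CSTAR stmfadm list-state                         # framework and port provider state
```

The same three steps (LU provider, STMF view, port provider) are carried out for real against ss1 and ss2 in the walkthrough below.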
With these COMSTAR basics in mind let us begin by diving into some of the details of how this can be applied.
Sun has detailed how to set up COMSTAR at dlc.sun.com, so there is no need to re-invent the wheel here.
Just as a note, SXCE snv_103 and up integrate the COMSTAR FC and iSCSI port provider code. With the COMSTAR software components and the FC target set up, we can demonstrate the re-provisioning of an existing FC-based storage server. Since I don't have the luxury of a proprietary storage server at home, I will emulate this storage using an additional COMSTAR white box acting as the FC storage target to be re-provisioned.
On the existing FC target system we need to create Raid0 arrays of three disks each, six trios in total. We will use these six non-fault-tolerant disk groups as vdevs for a ZFS raidz2 group, which lets us build fault-tolerant arrays from the existing storage server. The reason for sets of three Raid0 groupings is to avoid reaching the LUN maximums of the proprietary storage system, and also because we do not want to erode performance by layering over Raid5 groups. As well, we can tolerate a disk failure in two of the trios, since we have raidz2 across the Raid0 trio groups. Using these Raid0 disk groups actually lowers the array failure probability: if a second disk were to fail in a single Raid0 set, there would be no additional impact on the other trios, reducing the overall failure rate.
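As a quick sanity check on the geometry: raidz2 spends two vdevs' worth of space on parity, so six 16G trio vdevs should yield roughly 64G of usable capacity before ZFS overhead (the pool built later in fact reports about 62.6G available). The arithmetic as a one-liner:

```shell
# raidz2 usable space ~= (vdevs - 2 parity) * vdev size, before ZFS overhead
vdevs=6; parity=2; size_g=16
usable=$(( (vdevs - parity) * size_g ))
echo "${usable}G"   # prints 64G
```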
To create the emulated FC storage system I have defined the following 16G ZFS sparse volumes, named trio1 through trio6, each representing a 3-disk Raid0 spanned LUN, on a source storage host named ss1.
root@ss1:~# zfs create sp1/gw
root@ss1:~# zfs create -s -V 16G sp1/gw/trio1
root@ss1:~# zfs create -s -V 16G sp1/gw/trio2
root@ss1:~# zfs create -s -V 16G sp1/gw/trio3
root@ss1:~# zfs create -s -V 16G sp1/gw/trio4
root@ss1:~# zfs create -s -V 16G sp1/gw/trio5
root@ss1:~# zfs create -s -V 16G sp1/gw/trio6
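The six identical creations lend themselves to a loop; this is just shorthand for the commands above. ZFS defaults to "echo zfs" here so the fragment dry-runs anywhere; on ss1 you would set ZFS=zfs to execute it for real.

```shell
# Loop form of the six trio volume creations (dry run by default)
ZFS="${ZFS:-echo zfs}"
for i in 1 2 3 4 5 6; do
  $ZFS create -s -V 16G "sp1/gw/trio$i"
done
```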
Once these mockup volumes are created, they are defined as backing stores using the sbdadm utility as follows.
root@ss1:~# sbdadm create-lu /dev/zvol/rdsk/sp1/gw/trio1
Created the following LU:
GUID DATA SIZE SOURCE
--------------------------------  -------------------  ----------------
600144f01eb3862c0000494b55cd0001 17179803648 /dev/zvol/rdsk/sp1/gw/trio1
All the backing stores were added to the LU provider service layer, which in turn registered them with the STMF service layer. Here we can see the automatically generated GUIDs that are assigned to the ZFS backing stores.
root@ss1:~# sbdadm list-lu
Found 6 LU(s)
GUID DATA SIZE SOURCE
--------------------------------  -------------------  ----------------
600144f01eb3862c0000494b56000006 17179803648 /dev/zvol/rdsk/sp1/gw/trio6
600144f01eb3862c0000494b55fd0005 17179803648 /dev/zvol/rdsk/sp1/gw/trio5
600144f01eb3862c0000494b55fa0004 17179803648 /dev/zvol/rdsk/sp1/gw/trio4
600144f01eb3862c0000494b55f80003 17179803648 /dev/zvol/rdsk/sp1/gw/trio3
600144f01eb3862c0000494b55f50002 17179803648 /dev/zvol/rdsk/sp1/gw/trio2
600144f01eb3862c0000494b55cd0001 17179803648 /dev/zvol/rdsk/sp1/gw/trio1
A host group named GW1 was defined, and these LU GUIDs were added to it as LU views, assigning LUNs 0 through 5.
Just as a note, group names are case sensitive.
root@ss1:~# stmfadm create-hg GW1
Here we assign each GUID a LUN value in the GW1 host group view with the -n parameter.
root@ss1:~# stmfadm add-view -h GW1 -n 0 600144F01EB3862C0000494B55CD0001
root@ss1:~# stmfadm add-view -h GW1 -n 1 600144F01EB3862C0000494B55F50002
root@ss1:~# stmfadm add-view -h GW1 -n 2 600144F01EB3862C0000494B55F80003
root@ss1:~# stmfadm add-view -h GW1 -n 3 600144F01EB3862C0000494B55FA0004
root@ss1:~# stmfadm add-view -h GW1 -n 4 600144F01EB3862C0000494B55FD0005
root@ss1:~# stmfadm add-view -h GW1 -n 5 600144F01EB3862C0000494B56000006
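The six add-view commands can likewise be expressed as a loop over the GUIDs sbdadm generated, with a counter supplying the sequential LUN numbers. STMF defaults to "echo stmfadm" so this dry-runs anywhere; on ss1 set STMF=stmfadm.

```shell
# Loop form of the six add-view commands (dry run by default)
STMF="${STMF:-echo stmfadm}"
lun=0
for guid in 600144F01EB3862C0000494B55CD0001 \
            600144F01EB3862C0000494B55F50002 \
            600144F01EB3862C0000494B55F80003 \
            600144F01EB3862C0000494B55FA0004 \
            600144F01EB3862C0000494B55FD0005 \
            600144F01EB3862C0000494B56000006; do
  $STMF add-view -h GW1 -n "$lun" "$guid"   # LUN numbers 0..5 in order
  lun=$((lun + 1))
done
```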
With the LUs now available in a host group view, we can add the FC WWNs of the COMSTAR re-provisioning gateway server to this host group, and the storage becomes available as a resource on the gateway server, named ss2. We obtain the WWN from the gateway server with the fcinfo hba-port command.
root@ss2:~# fcinfo hba-port
HBA Port WWN: 210000e08b100163
Port Mode: Initiator
Port ID: 10300
OS Device Name: /dev/cfg/c8
Manufacturer: QLogic Corp.
Model: QLA2300
Firmware Version: 03.03.27
FCode/BIOS Version: BIOS: 1.47;
Serial Number: not available
Driver Name: qlc
Driver Version: 20080617-2.30
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb
Current Speed: 2Gb
Node WWN: 200000e08b100163
NPIV Not Supported
Using the stmfadm utility we add the gateway server's WWN to the GW1 host group.
root@ss1:~# stmfadm add-hg-member -g GW1 wwn.210000e08b100163
Once added to ss1 we can see that it is indeed available and online.
root@ss1:~# stmfadm list-target -v
Target: wwn.2100001B320EFD58
Operational Status: Online
Provider Name : qlt
Alias : qlt2,0
Sessions : 1
Initiator: wwn.210000E08B100163
Alias: :qlc1
Logged in since: Fri Dec 19 01:47:07 2008
The cfgadm command will scan for the newly available LUNs, and we can now access the emulated (aka boat anchor) storage system from our gateway server ss2. Of course we could also set up more initiators and access it over a multipath connection.
root@ss2:~# cfgadm -a
root@ss2:~# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t600144F01EB3862C0000494B55CD0001d0 <DEFAULT cyl 2086 alt 2 hd 255 sec 63>
/scsi_vhci/disk@g600144f01eb3862c0000494b55cd0001
1. c0t600144F01EB3862C0000494B55F50002d0 <DEFAULT cyl 2086 alt 2 hd 255 sec 63>
/scsi_vhci/disk@g600144f01eb3862c0000494b55f50002
2. c0t600144F01EB3862C0000494B55F80003d0 <DEFAULT cyl 2086 alt 2 hd 255 sec 63>
/scsi_vhci/disk@g600144f01eb3862c0000494b55f80003
3. c0t600144F01EB3862C0000494B55FA0004d0 <DEFAULT cyl 2086 alt 2 hd 255 sec 63>
/scsi_vhci/disk@g600144f01eb3862c0000494b55fa0004
4. c0t600144F01EB3862C0000494B55FD0005d0 <DEFAULT cyl 2086 alt 2 hd 255 sec 63>
/scsi_vhci/disk@g600144f01eb3862c0000494b55fd0005
5. c0t600144F01EB3862C0000494B56000006d0 <DEFAULT cyl 2086 alt 2 hd 255 sec 63>
/scsi_vhci/disk@g600144f01eb3862c0000494b56000006
Now that we have FC LUN connections configured to the storage system being re-provisioned, we can create a ZFS pool, which grants us the ability to carve out the block storage virtually. As discussed previously, we will use raidz2 (comparable to RAID-DP) to provide a higher level of availability, via the raidz2 option of the zpool create command.
root@ss2:~# zpool create gwrp1 raidz2 c0t600144F01EB3862C0000494B55CD0001d0 c0t600144F01EB3862C0000494B55F50002d0 c0t600144F01EB3862C0000494B55F80003d0 c0t600144F01EB3862C0000494B55FA0004d0 c0t600144F01EB3862C0000494B55FD0005d0 c0t600144F01EB3862C0000494B56000006d0
A quick status check reveals all is well with the ZFS pool.
root@ss2:~# zpool status gwrp1
pool: gwrp1
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
gwrp1 ONLINE 0 0 0
raidz2 ONLINE 0 0 0
c0t600144F01EB3862C0000494B55CD0001d0 ONLINE 0 0 0
c0t600144F01EB3862C0000494B55F50002d0 ONLINE 0 0 0
c0t600144F01EB3862C0000494B55F80003d0 ONLINE 0 0 0
c0t600144F01EB3862C0000494B55FA0004d0 ONLINE 0 0 0
c0t600144F01EB3862C0000494B55FD0005d0 ONLINE 0 0 0
c0t600144F01EB3862C0000494B56000006d0 ONLINE 0 0 0
Let's carve out some of this newly created pool as a 32GB sparse volume. The -p option creates the full path if it does not already exist.
root@ss2:~# zfs create -p -s -V 32G gwrp1/stores/lun0
root@ss2:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
gwrp1 220K 62.6G 38.0K /gwrp1
gwrp1/stores 67.9K 62.6G 36.0K /gwrp1/stores
gwrp1/stores/lun0 32.0K 62.6G 32.0K -
With a slice of the pool created we can now assign a GUID within the LU Provider layer using the sbdadm utility.
root@ss2:~# sbdadm create-lu /dev/zvol/rdsk/gwrp1/stores/lun0
Created the following LU:
GUID DATA SIZE SOURCE
--------------------------------  -------------------  ----------------
600144f07ed404000000496813070001 34359672832 /dev/zvol/rdsk/gwrp1/stores/lun0
The LU provider layer can also provision sparse-based storage; however, in this case the ZFS backing store is already thin provisioned. If this were a physical disk backing store, it would be prudent to use the LU provider's sparse/thin provisioning feature. At this point we are ready to create the STMF host group and view that will be used to demonstrate a real-world example of the multiprotocol capability of the COMSTAR OpenStorage host ss2. In this case I will use VMware ESX as a storage consumer. To reflect the host group type we will name it ESX1, and then we need to add a view for the LU GUID of the virtualized storage.
root@ss2:~# stmfadm create-hg ESX1
root@ss2:~# stmfadm add-view -h ESX1 -n 1 600144f07ed404000000496813070001
root@ss2:~# stmfadm list-view -l 600144F07ED404000000496813070001
View Entry: 0
Host group : ESX1
Target group : All
LUN : 1
With a view defined for the VMware hosts, let's add an ESX host FC HBA WWN membership to the defined ESX1 host group. We need to retrieve the WWN from the VMware server using either the console or a Virtual Infrastructure Client GUI. Personally I like the console esxcfg-info tool; however, if it's an ESXi host then the GUI will serve up the info just as well.
[root@vh1 root]# esxcfg-info -s | grep 'Adapter WWN'
   |----Adapter WWNN............................20:00:00:e0:8b:01:f7:e2
root@ss2:~# stmfadm add-hg-member -g ESX1 wwn.210000e08b01f7e2
After we issue a rescan on vmhba1, we can create a VMFS volume named ss2-cstar-zs0.0 on the re-provisioned storage, and the FC path is in service. To add a second protocol to the same LU, we next enable the iSCSI target service on ss2.
root@ss2:~# svcadm enable iscsi/target
Now we need to create an iSCSI target and iSCSI initiator definition so that we can add the iSCSI initiator to the ESX1 host group. As well we should define a target portal group so we can control what host IP(s) will service this target.
root@ss2:~# itadm create-tpg 2 10.0.0.1
root@ss2:~# itadm create-target -n iqn.1986-03.com.sun:02:ss2.0 -t 2
Target iqn.1986-03.com.sun:02:ss2.0 successfully created
By default the iqn will be created as a member of the All targets group.
If we left out the parameters, the itadm utility would create an iqn GUID and use the default target portal group of 1. And yes, for those familiar with the predecessor iscsitadm utility, we can now specify an iqn name at the command line.
At this point we need to define the initiator iqn to the iSCSI port provider service and, if required, additionally secure it using CHAP. We retrieve the VMware initiator iqn name from either the Virtual Infrastructure Client GUI or the console command line. Just as a note: if we had not specified a host group when we defined our view, the default would allow any initiator, FC, iSCSI or otherwise, to connect to the LU. That may have a purpose, but in most configurations it is bad practice. Once created, the initiator is added to the ESX1 host group, which enables our second access protocol to the same LU.
[root@vh1 root]# esxcfg-info -s | grep 'iqn'
   |----ISCSI Name..............................iqn.1998-01.com.vmware:vh1.1
   |----ISCSI Alias.............................iqn.1998-01.com.vmware:vh1.1
root@ss2:~# itadm create-initiator iqn.1998-01.com.vmware:vh1.1
root@ss2:~# stmfadm add-hg-member -g ESX1 iqn.1998-01.com.vmware:vh1.1
After adding the ss2 iSCSI interface IP to VMware’s Software iSCSI initiator we now have a multipath multiprotocol connection to our COMSTAR storage host.
This is simply the most functional and advanced open source storage product in the world today. Here we have commodity white boxes serving advanced storage protocols in my home lab; imagine what could be done with data-center-class server hardware and Fishworks. You can begin to see the advantages of this future-proof platform. With protocols like FCoE, InfiniBand and iSER (iSCSI without the TCP session overhead) already working in COMSTAR, the Sun software engineers and the OpenSolaris community are crafting outstanding storage products.
Site Contents: © 2009 Mike La Spina