The Illumos Project Launches

If you use or are interested in OpenSolaris then you should check out the Illumos Project, which was announced today by Garrett D'Amore of Nexenta. It's an excellent development project which is initially working toward delivering a compatible, fully open sourced version of the closed OpenSolaris binaries. At first I thought this was going to be a pure fork of OpenSolaris, however it's not really a fork. The Illumos project maintains close compatibility and functionality with its parent OpenSolaris code stream while granting more innovative development freedom and full community control. All good things in my books.

http://www.illumos.org/projects/site/wiki/Announcement

Regards,

Mike

Site Contents: © 2010  Mike La Spina

Protecting Active Directory with Snapshot Strategies

Using snapshots to protect Active Directory (AD) without careful planning will most definitely end in a complete disaster. AD is a loosely consistent, distributed, multi-master database and it must not be treated as a static system. Without carefully addressing how AD works with Time Stamps, Version Stamps, Update Sequence Numbers (USNs), Globally Unique Identifiers (GUIDs), Relative Identifiers (RIDs), Security Identifiers (SIDs) and restoration requirements, the system could quickly become unusable or severely damaged in the event of an incorrectly invoked snapshot reversion.

There are many negative scenarios that can occur if we were to re-introduce an AD replica to service from a snapshot instance without special handling. In the event of a snapshot based re-introduction the RID functional component is seriously impacted. In any AD system RIDs are created in range blocks and assigned for use to a participating Domain Controller (DC) by the DC holding the RID master role. RIDs are used to create SIDs for all AD objects like Group or User objects, and they must all be unique. Let's take a closer look at the SID to understand why RIDs are such a critical function.

A SID is composed of the following symbolic components, in the format S-R-IA-SA-RID:

  • S: Indicates the type of value is a SID.
  • R: Indicates the revision of the SID.
  • IA: Indicates the issuing authority. Most are the NT Authority identity number 5.
  • SA: Indicates the sub-authority aka domain identifier.
  • RID: Indicates the Relative ID.

Now looking at some real SID example values, we can see that on a DC instance only the RID component of the SID is unique, as shown in the final component of the examples here.

DS0\User1      = S-1-5-21-3725033245-1308764377-180088833-3212
DS0\UserGroup1 = S-1-5-21-3725033245-1308764377-180088833-7611

When an older snapshot image of a DC is reintroduced, its assigned RID range will likely contain RID entries that were already used to generate SIDs. Those SIDs would have replicated to the other DCs in the AD forest. When the reintroduced DC starts up it will try to participate in replication and in servicing account authentications. Depending on the age and configuration of its secure channel the DC could be successfully connected. This snapshot reintroduction event should be avoided since any RID usage from the aged DC will very likely result in duplicate SID creation, which is obviously very undesirable.

Under normal AD recovery methods we would either restore AD or build a new server, perform a dcpromo on it and possibly seize DC roles if required. The most important element of a normal AD restore process is the DC GUID reinitialization function. Reinitializing the DC GUID value allows the restoration of an AD DC to occur correctly. A newly generated GUID becomes part of the Domain Identifier and thus the DC can create SIDs that are unique despite the fact that the RID assignment range it holds may be a previously used one.

When we use a snapshot image of a guest DC VM none of the required Active Directory restore operations occur on system startup, and thus we must manually bring the host online in DSRM mode without a network connection and then set up the NTDS restore mode. I see this as a serious security risk, as there is a significant probability that the host could be brought online without these steps occurring and potentially create integrity issues.

One mitigation of this identified risk is to perform the required changes before a snapshot is captured and, once the capture is complete, revert the change back to the non-restore state. This action completely prevents a snapshot image of a DC from coming online from a past time reference.

In order to achieve this level of server state and snapshot automation we would need to provision a service channel from our storage head to the involved VMs, or for that matter any storage consumer. A service channel can provide other functionality beyond the NTDS state change as well; one example is the ability to flush I/O using VSS, sync and so on.

We can now look at a practical example of how to implement this strategy on OpenSolaris based storage heads and W2K3 or W2K8 servers.

The first part of the process is to create the service channel on a VM or any other Windows host which can support VB or PowerShell scripting. In this specific case we need to provision an SSH server daemon that will allow us to issue commands directed at the storage consuming guest VM from the providing storage head. There are many products available that can provide this service. I personally like MobaSSH, which I will use in this example. Since this is a Domain Controller we need to use the Pro version, which supports domain based user authentication from our service channel VM.

We need to create a dedicated user that is a member of the domain's BUILTIN\Administrators group. This poses a security risk and thus you should mitigate it by restricting this account to only the machines it needs to service.

e.g. in AD restrict it to the DCs, or possibly any involved VMs to be managed, and the Service Channel system itself.

Restricting user machine logins

A dedicated user allows us to define authentication from the storage head to the service channel VM using a trusted ssh RSA key that is mapped to the user instance on both the VM and the OpenSolaris storage host. This user will launch any execution process that is issued from the OpenSolaris storage head.

In this example I will use the name scu, which is short for Service Channel User.

First we need to create the scu user on our OpenSolaris storage head.

root@ss1:~# useradd -s /bin/bash -d /export/home/scu -P 'ZFS File System Management' scu
root@ss1:~# mkdir /export/home/scu
root@ss1:~# cp /etc/skel/* /export/home/scu
root@ss1:~# echo PATH=/bin:/sbin:/usr/ucb:/etc:. > /export/home/scu/.profile
root@ss1:~# echo export PATH >> /export/home/scu/.profile
root@ss1:~# echo "PS1='\${LOGNAME}@\$(/usr/bin/hostname)~# '" >> /export/home/scu/.profile

root@ss1:~# chown -R scu /export/home/scu
root@ss1:~# passwd scu
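
The -P option above grants scu the 'ZFS File System Management' RBAC profile, which is what later allows the scheduled snapshot script to run zfs snapshot through pfexec. A quick way to confirm the profile assignment took:

root@ss1:~# profiles scu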

In order to use an RSA key for authentication we must first generate an RSA private/public key pair on the storage head. This is performed using ssh-keygen while logged in as the scu user. Leave the passphrase blank, otherwise the session will prompt for it.

root@ss1:~# su - scu

scu@ss1~#ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/scu/.ssh/id_rsa):
Created directory '/export/home/scu/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /export/home/scu/.ssh/id_rsa.
Your public key has been saved in /export/home/scu/.ssh/id_rsa.pub.
The key fingerprint is:
0c:82:88:fa:46:c7:a2:6c:e2:28:5e:13:0f:a2:38:7f scu@ss1
scu@ss1~#

We now have the public key available in the file named id_rsa.pub. The content of this file must be copied to the target ssh instance's file named .ssh/authorized_keys. The private key file named id_rsa MUST NOT be exposed to any other location and should be secured. You do not need to store the private key anywhere else as you can regenerate the pair anytime if required.

Before we can continue we must install and configure the target Service Channel VM with MobaSSH.

It's a simple setup: just download MobaSSH Pro to the target local file system.

Execute it.

Click install.

Configure only the scu domain based user and clear all others from accessing the host.

e.g.

Moba Domain Users

Once MobaSSH is installed and restarted we can connect to it and finalize the secured ssh session. Don't forget to add the scu user to your AD domain's BUILTIN\Administrators group before proceeding. Also, you need to perform an initial NT login to the Service Channel Windows VM using the scu user account prior to using the SSH daemon; this is required to create its home directories.

In this step we are using PuTTY to establish an ssh session to the Service Channel VM and then secure shelling to the storage server named ss1. Then we transfer the public key back to ourselves using scp and exit host ss1. Finally we use cat to append the public key file content to the .ssh/authorized_keys file in the scu user's profile. Once these steps are complete we can establish an automated, prompt-free, secured encrypted session from ss1 to the Service Channel Windows VM.

[Fri Dec 18 – 19:47:24] ~
[scu.ws0] $ ssh ss1
The authenticity of host 'ss1 (10.10.0.1)' can't be established.
RSA key fingerprint is 5a:64:ea:d4:fd:e5:b6:bf:43:0f:15:eb:66:99:63:6b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ss1,10.10.0.1' (RSA) to the list of known hosts.
Password:
Last login: Fri Dec 18 19:47:28 2009 from ws0.laspina.ca
Sun Microsystems Inc.   SunOS 5.11      snv_128 November 2008

scu@ss1~#scp .ssh/id_rsa.pub ws0:/home/scu/.ssh/ss1-rsa-scu.pub
scu@ws0's password:
id_rsa.pub           100% |*****************************|   217       00:00
scu@ss1~#exit

[Fri Dec 18 – 19:48:09]
[scu.ws0] $ cat .ssh/ss1-rsa-scu.pub >> .ssh/authorized_keys

With our automated RSA key authentication completed we can proceed to customize the MobaSSH service instance to run as the scu user. We need to perform this modification in order to give the VB script WMI DCOM impersonate-caller rights when instantiating objects. In this case we are calling a remote regedit object over WMI and modifying the NTDS service registry startup values, and this can only be performed by an administrator account. This modification essentially extends the storage host's capabilities to reach any Windows host that needs integral system management function calls.

On our OpenSolaris storage head we need to invoke a script which will remotely change the NTDS service state, then locally snapshot the provisioned storage, and lastly return the NTDS service to its normal state. To accomplish this we will define a cron job. The cron job needs some basic configuration steps as follows.

The solaris.jobs.user authorization is required to submit a cron job; this allows us to create the job but not administer the cron service.
If an /etc/cron.d/cron.allow file exists then this RBAC setting is overridden by the file's existence, and you will need to add the user to that file (see the one-liner below) or convert to the best practice methods of RBAC.
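
If you do have a cron.allow file, appending the dedicated user to it is all that is needed:

root@ss1~# echo scu >> /etc/cron.d/cron.allow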

root@ss1~# usermod -A solaris.jobs.user scu
root@ss1~# crontab -e scu
59 23 * * * ./vol1-snapshot.sh

Hint: crontab uses vi – http://www.kcomputing.com/kcvi.pdf  “vi cheat sheet”

The key sequence would be: hit "i" and key in the line, then hit "Esc :wq" to save; to abort, hit "Esc :q!".

Be aware of the time zone the cron service runs under; you should check it and adjust it if required. Here is an example of what's required to set it.

root@ss1~# pargs -e `pgrep -f /usr/sbin/cron`

8550:   /usr/sbin/cron
envp[0]: LOGNAME=root
envp[1]: _=/usr/sbin/cron
envp[2]: LANG=en_US.UTF-8
envp[3]: PATH=/usr/sbin:/usr/bin
envp[4]: PWD=/root
envp[5]: SMF_FMRI=svc:/system/cron:default
envp[6]: SMF_METHOD=start
envp[7]: SMF_RESTARTER=svc:/system/svc/restarter:default
envp[8]: SMF_ZONENAME=global
envp[9]: TZ=PST8PDT

Let’s change it to CST6CDT

root@ss1~# svccfg -s system/cron:default setenv TZ CST6CDT
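
For the new TZ environment setting to take effect, the cron service needs a refresh and restart, the same svcadm steps used for the path change further down:

root@ss1~# svcadm refresh system/cron:default
root@ss1~# svcadm restart system/cron:default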

Also, the default environment path for cron may cause some script "command not found" issues; check for a path and adjust it if required.

root@ss1~# cat /etc/default/cron
#
# Copyright 1991 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
#pragma ident   "%Z%%M% %I%     %E% SMI"
CRONLOG=YES

This one has no default path; append the path using echo.

root@ss1~# echo PATH=/usr/bin:/usr/sbin:/usr/ucb:/etc:. >> /etc/default/cron
# svcadm refresh cron
# svcadm restart cron

With a cron job defined to run the script named vol1-snapshot.sh in the default home directory of the scu user, we are now ready to create the script content. Our OpenSolaris storage host needs to call a batch file on the remote Service Channel VM, which in turn executes a VBScript there to set the NTDS startup mode. To do this from a Unix bash script we will use the following statements in the vol1-snapshot.sh file.

ssh -t ws0 NTDS-PreSnapshot.bat
snap_date="$(date +%d-%m-%y-%H:%M)"
pfexec zfs snapshot rp1/san/vol1@$snap_date
ssh -t ws0 NTDS-PostSnapshot.bat
exit
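
One small but easy-to-miss step: the script must be executable by the scu user for the cron entry to run it.

root@ss1~# chmod +x /export/home/scu/vol1-snapshot.sh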

Here we are running a secure shell call to the MobaSSH daemon with the -t option, which runs the tty screen locally and allows us to issue an "exit" from the remote calling script, closing the secure shell. On the Service Channel VM the VBScript calls are executed using the pre and post snapshot batch files illustrated as follows.

scu Batch Files

NTDS-PreSnapshot.bat
cscript NTDS-SnapshotRestoreModeOn.vbs DS0
exit

NTDS-PostSnapshot.bat
cscript NTDS-SnapshotRestoreModeOff.vbs DS0
exit

NTDS-SnapshotRestoreModeOn.vbs

strComputer = Wscript.Arguments(0)  
const HKLM=&H80000002
Set oregService=GetObject("WinMgmts:{impersonationLevel=impersonate}!\\" & strComputer & "\root\default:StdRegProv")
oregService.SetDWordValue HKLM, "SYSTEM\CurrentControlSet\Services\NTDS\Parameters", "Database restored from backup", 1
Set oregService=Nothing

NTDS-SnapshotRestoreModeOff.vbs

strComputer = Wscript.Arguments(0)  
const HKLM=&H80000002
Set oregService=GetObject("WinMgmts:{impersonationLevel=impersonate}!\\" & strComputer & "\root\default:StdRegProv")
oregService.SetDWordValue HKLM, "SYSTEM\CurrentControlSet\Services\NTDS\Parameters", "Database restored from backup", 0
Set oregService=Nothing

We now have Windows integrated storage volume snapshot functionality that allows an Active Directory domain controller to be securely protected using a snapshot strategy. In the event we need to fail back to a previous point in time there will be no danger that the snapshot will cause AD corruption. The integration process has other highly desirable capabilities, such as the ability to call VSS snapshots and any other application backup preparatory function calls. We could also branch out using more sophisticated PowerShell calls to VMware hosts in a fully automated recovery strategy using ZFS replication and remote sites.
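
As a rough sketch of the ZFS replication piece only (the DR host name drhost and pool drp1 are hypothetical), each snapshot the cron script takes could also be streamed to a remote storage head:

snap_date="$(date +%d-%m-%y-%H:%M)"
pfexec zfs snapshot rp1/san/vol1@$snap_date
# full send shown for simplicity; after the first pass an incremental send (zfs send -i) is the better fit
pfexec zfs send rp1/san/vol1@$snap_date | ssh drhost pfexec zfs receive -F drp1/san/vol1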

Hope you enjoyed this entry.

Seasons Greetings to All.

Regards,

Mike



Site Contents: © 2009  Mike La Spina

SUN Delivers De-duplication on ZFS

Today marks yet another great milestone for OpenSolaris and OpenStorage. SUN has, as promised, delivered a much anticipated de-duplication feature for us to explore and use. I must say that I am very excited about it and there is no doubt this is a very cool feature indeed. The ideas for how to use it are running around in my head like neurons do, and you're sure to see some of those ideas surface in a blog or two.

Now before we get too excited we need to keep in mind that this is the first release of this feature to the public space, and we are sure to find the odd bump or two along the road while we see this great new file system feature mature. I'm sure that we will be more than pleased with the new feature and the many other capabilities that are sure to come.

If you're interested in experimenting with the development releases you should be able to get your hands on the feature in about 3-4 weeks through IPS or SXCE. Or if you're an advanced kernel type IT pro you could build it from the source code now… right… so then, for the rest of us.

To try it out the easy way when it becomes available, just download and install OpenSolaris with the LiveCD (I recommend an x64 CPU with 4 GB of RAM).

http://dlc.sun.com/osol/opensolaris/2009/06/osol-0906-x86.iso

Then set your repository publisher to the dev IPS image server and issue the pkg image-update command

e.g.

$ pfexec pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org

$ pfexec pkg image-update

And explore!
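
For instance, once you are running a dedup-capable build, enabling de-duplication is just a dataset property; a minimal sketch, assuming a pool named tank:

$ pfexec zfs set dedup=on tank
$ pfexec zpool get dedupratio tank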


Jeff Bonwick, Bill Moore and company are definitely thinking up some brilliant technical and practical applications of their knowledge bringing us a powerful new storage direction that has changed the game.

Thanks go to the ZFS team.

You rock!

Regards,

Mike

Site Contents: © 2009  Mike La Spina

OpenSolaris Storage Summit 2009

The OpenSolaris Storage Summit was really cool to attend this year. Mike Shapiro presented an interesting view of what is transpiring in storage hardware and where storage vendors need to focus in order to be successful in the next few years. As always his presentation was a pleasure to follow. He talked about the s7000 series development and where it fits in terms of the current commodity hardware advancements. It was exciting to hear that we will see COMSTAR integration in the next firmware release coming in the second quarter of 2009. With the inclusion of COMSTAR we will have a very comprehensive storage provisioning solution that is fully supported by SUN.

        I also had the pleasure of hearing Don MacAskill speak on his experiences with OpenSolaris and the voyage that brought him to success on the s7000 product. His content was brilliant as usual and hopefully he will share more on SmugMug's blog site.

        I presented in the afternoon and talked about using COMSTAR to re-provision existing storage systems in an effort to enhance the performance and capacity of these aging products and retain their value, as well as to bring some desired features to them, like compression, snapshots and replication, without the high cost licenses on the native systems. I also created a couple of stop-frame video demos. The first one demonstrated the ability to attach existing storage systems with Fiber Channel and re-provision LU's which can then be transitioned from one storage head to another without impacting a storage consumer connection. In this case the consumer was VMware, attached over both Fiber Channel and iSCSI in a multipath, multiprotocol configuration. The second demo revealed the cool world of encapsulation by virtualizing an OpenSolaris storage server within a VMware VM and then replicating ZFS from an X4500 to the virtualized OpenSolaris VM. Once in a virtual state we exposed replicated iSCSI targets to the underlying VMware ESXi server and attached to the VMFS volumes presented on the LU's.

        Ben Rockwood also presented in the afternoon and it was a great pleasure to see. He shared his knowledge of ZFS, specifically some of the things he has discovered as best practices and the use of tools. It was very informative and I wish he had had much more time because the content was exceptional. All of the presenters, both mentioned and not, were really great and I would like to thank all of them for giving us their valuable time in the OpenSolaris community efforts.

If you're interested in the content please visit the OpenSolaris Storage Summit site.

Regards,

Mike

Site Contents: © 2009  Mike La Spina

Multi Protocol Storage Provisioning with COMSTAR

 COMSTAR is a new breed of open source storage product available to the world. What was traditionally a closed and proprietary storage capability is now available to our open source communities. With OpenSolaris and COMSTAR the ability to freely provision virtual storage services over very mature high end protocols on standard commodity server hardware is now a reality. High performance transports are integral within the feature sets of COMSTAR and Sun’s open source portfolio of projects. The COMSTAR product is revolutionary in its method of provisioning storage virtualization and transport services to storage resource consumers.
COMSTAR provisions virtualized SCSI block storage over multiple SCSI transport protocols. While this function class is not new to us the ease of implementation using COMSTAR certainly is. All the complexities of using a multi protocol target services platform are cleaned up. It is simple to use and facilitates advanced high performance storage provisioning at the block level.
The services within this product have multiple common storage provisioning applications. One very interesting application is a storage gateway server, and this blog demonstrates how to build a Fiber Channel (FC) storage gateway using the COMSTAR service layers, as well as provisioning some additional features using the target services.

COMSTAR FC Gateway Architecture by Mike La Spina

In this example instance we are re-provisioning an existing storage system with an OpenSolaris COMSTAR configuration running on a commodity white box, which functions as a storage server head that can compress, scrub, thin provision, replicate, snapshot and clone the existing block storage attachments. The example FC based storage could also consist of a JBOD FC array directly attached to the OpenSolaris storage head if we so desired, or many other commonly available SCSI attachment methods. The objective here is to extend and enhance any block storage system with high performance transports and virtualization features. Of course we could also formalize the white box to an industrial strength host once we are satisfied that the proof of concept is mature and optimal.

The reality is that many older existing FC storage systems are installed without these features, primarily due to their excessive licensing costs. And even when these features are available, their use is probably restricted to like proprietary systems, thus obsolescing the entire lot of any useful future functionality. But what if you could re-purpose an older storage system to act as a DR store, a backup cache system, or maybe a test and development environment? With today's economy this is, from a cost perspective, very attractive and can be accomplished with very little risk on the investment side.

One of the possible applications for this flexible storage service is the re-provisioning of existing LUN's from an existing system onto newer, more flexible SCSI transport protocols. This is particularly useful when we need to re-target the existing storage system from FC to iSCSI or the like. We can begin by exploring this functionality and explaining how COMSTAR can provide us with this service.

First we need to understand the high level functionality of the COMSTAR service layers. Virtual LUN’s on COMSTAR are provisioned with a service layer named the LU provider. This layer maps backing stores of various types to a storage GUID assignment and additionally defines other properties like the LUN ID and size dimensions. This layer allows us to carve out the available block storage devices that are accessible on our OpenSolaris storage host. For example if we attached an FC Initiator to an external storage system we can then map the accessible SCSI block devices to the LU provider layer and then present this virtualized LUN to the other COMSTAR service layers for further processing.

Once we have defined the LU's we can present this storage resource to the SCSI Target Mode Framework (STMF) service layer, which acts as the storage gate keeper. At this layer we define which clients (initiators) can connect to the LU's, based on membership in Target Groups and Host Groups that are assigned logical views of the LU(s). The STMF layer routes the defined LU(s) as SCSI targets over a multiprotocol interface connection pool to a Port Provider. Port Providers are the protocol connection service instances, which can be the likes of FC, iSCSI, SAS, iSER, FCoE and so on.

With these COMSTAR basics in mind let us begin by diving into some of the details of how this can be applied.  

Sun has detailed how to set up COMSTAR at dlc.sun.com, so no need to re-invent the wheel here.

Just as a note, SXCE snv_103 and up integrate the COMSTAR FC and iSCSI port provider code. With the COMSTAR software components and FC target set up we can demonstrate the re-provisioning of an existing FC based storage server. Since I don't have the luxury of a proprietary storage server at home, I will emulate this storage using an additional COMSTAR white box to act as the FC storage target to be re-provisioned.

On the existing FC target system we need to create Raid0 arrays of three disks each, which totals a set of six trios. We will use these six non-fault tolerant disk groups as vdevs for a ZFS raidz2 group. This will allow us to create fault tolerant arrays from the existing storage server. The reasons for sets of three Raid0 groupings are to reduce the possibility of reaching the LUN maximums of the proprietary storage system, and to avoid eroding performance by layering Raid 5 groups. As well, we can tolerate a disk failure in two of the trios since we have raidz2 across the Raid0 trio groups. Additionally, using these Raid0 disk groups actually lowers the array failure probability rate; for example, if a second disk were to fail in a single Raid0 set there would be no additional impact to the other trios, thus reducing the overall failure rate.

To create the emulated FC storage system I have defined the following 16G ZFS sparse volumes, respectively named trio1 through trio6, each representing a 3 disk Raid0 spanned LUN on a source storage host named ss1.

root@ss1:~# zfs create sp1/gw
root@ss1:~# zfs create -s -V 16G sp1/gw/trio1
root@ss1:~# zfs create -s -V 16G sp1/gw/trio2
root@ss1:~# zfs create -s -V 16G sp1/gw/trio3
root@ss1:~# zfs create -s -V 16G sp1/gw/trio4
root@ss1:~# zfs create -s -V 16G sp1/gw/trio5
root@ss1:~# zfs create -s -V 16G sp1/gw/trio6

Once these mockup volumes are created they are then defined as backing stores using the sbdadm utility as follows.

root@ss1:~# sbdadm create-lu /dev/zvol/rdsk/sp1/gw/trio1

Created the following LU:

              GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ----------------
600144f01eb3862c0000494b55cd0001      17179803648      /dev/zvol/rdsk/sp1/gw/trio1
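
The remaining five volumes are registered the same way; a quick loop (just a convenience sketch of the same command repeated) saves some typing:

root@ss1:~# for n in 2 3 4 5 6; do sbdadm create-lu /dev/zvol/rdsk/sp1/gw/trio$n; done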

All the backing stores are now registered with the LU provider service layer and in turn assigned to the STMF service layer. Here we can see the automatically generated GUID's that are assigned to the ZFS backing stores.

root@ss1:~# sbdadm list-lu

Found 6 LU(s)

              GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ----------------
600144f01eb3862c0000494b56000006      17179803648      /dev/zvol/rdsk/sp1/gw/trio6
600144f01eb3862c0000494b55fd0005      17179803648      /dev/zvol/rdsk/sp1/gw/trio5
600144f01eb3862c0000494b55fa0004      17179803648      /dev/zvol/rdsk/sp1/gw/trio4
600144f01eb3862c0000494b55f80003      17179803648      /dev/zvol/rdsk/sp1/gw/trio3
600144f01eb3862c0000494b55f50002      17179803648      /dev/zvol/rdsk/sp1/gw/trio2
600144f01eb3862c0000494b55cd0001      17179803648      /dev/zvol/rdsk/sp1/gw/trio1

A host group named GW1 was defined, and these LU GUID's were added to it as views, assigning LUNs 0 through 5.

Just as a note the group names are case sensitive. 
root@ss1:~#stmfadm create-hg GW1

Here we assigned the GUID's a LUN value on the GW1 host group with the -n parameter.

root@ss1:~# stmfadm add-view -h GW1 -n 0 600144F01EB3862C0000494B55CD0001
root@ss1:~# stmfadm add-view -h GW1 -n 1 600144F01EB3862C0000494B55F50002
root@ss1:~# stmfadm add-view -h GW1 -n 2 600144F01EB3862C0000494B55F80003
root@ss1:~# stmfadm add-view -h GW1 -n 3 600144F01EB3862C0000494B55FA0004
root@ss1:~# stmfadm add-view -h GW1 -n 4 600144F01EB3862C0000494B55FD0005
root@ss1:~# stmfadm add-view -h GW1 -n 5 600144F01EB3862C0000494B56000006

With the LU's now available in a host group view, we can add the COMSTAR re-provisioning gateway server's FC wwn's to this host group, and the storage will become available as a resource on the re-provisioning gateway server, named ss2. We need to obtain the wwn from the gateway server using the fcinfo hba-port command.
root@ss2:~# fcinfo hba-port
HBA Port WWN: 210000e08b100163
        Port Mode: Initiator
        Port ID: 10300
        OS Device Name: /dev/cfg/c8
        Manufacturer: QLogic Corp.
        Model: QLA2300
        Firmware Version: 03.03.27
        FCode/BIOS Version:  BIOS: 1.47;
        Serial Number: not available
        Driver Name: qlc
        Driver Version: 20080617-2.30
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb
        Current Speed: 2Gb
        Node WWN: 200000e08b100163
        NPIV Not Supported

Using the stmfadm utility we add the gateway server’s wwn address to the GW1 host group. 
root@ss1:~# stmfadm add-hg-member -g GW1 wwn.210000e08b100163

Once added to ss1 we can see that it is indeed available and online. 
root@ss1:~# stmfadm list-target -v

Target: wwn.2100001B320EFD58
    Operational Status: Online
    Provider Name     : qlt
    Alias             : qlt2,0
    Sessions          : 1
        Initiator: wwn.210000E08B100163
            Alias: :qlc1
            Logged in since: Fri Dec 19 01:47:07 2008

The cfgadm command will scan for the newly available LUN's, and now we can access the emulated (aka boat anchor) storage system from our gateway server ss2. Of course we could also set up more initiators and access it over a multipath connection.

cfgadm -a

root@ss2:~# format
Searching for disks…done

AVAILABLE DISK SELECTIONS:
       0. c0t600144F01EB3862C0000494B55CD0001d0 <DEFAULT cyl 2086 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f01eb3862c0000494b55cd0001

       1. c0t600144F01EB3862C0000494B55F50002d0 <DEFAULT cyl 2086 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f01eb3862c0000494b55f50002

       2. c0t600144F01EB3862C0000494B55F80003d0 <DEFAULT cyl 2086 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f01eb3862c0000494b55f80003

       3. c0t600144F01EB3862C0000494B55FA0004d0 <DEFAULT cyl 2086 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f01eb3862c0000494b55fa0004

       4. c0t600144F01EB3862C0000494B55FD0005d0 <DEFAULT cyl 2086 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f01eb3862c0000494b55fd0005

       5. c0t600144F01EB3862C0000494B56000006d0 <DEFAULT cyl 2086 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f01eb3862c0000494b56000006

Now that we have some FC LUN connections configured to the storage system being re-provisioned, we can create a ZFS based pool which grants us the ability to carve out the block storage in a virtual manner. As discussed previously we will use raid dp (double parity), a.k.a. raidz2, to provide a higher level of availability, using the zpool create raidz2 option.

root@ss2:~# zpool create gwrp1 raidz2 c0t600144F01EB3862C0000494B55CD0001d0 c0t600144F01EB3862C0000494B55F50002d0 c0t600144F01EB3862C0000494B55F80003d0 c0t600144F01EB3862C0000494B55FA0004d0 c0t600144F01EB3862C0000494B55FD0005d0 c0t600144F01EB3862C0000494B56000006d0

A quick status check reveals all is well with the ZFS pool.

root@ss2:~# zpool status gwrp1
  pool: gwrp1
 state: ONLINE
 scrub: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        gwrp1                                      ONLINE       0     0     0
          raidz2                                   ONLINE       0     0     0
            c0t600144F01EB3862C0000494B55CD0001d0  ONLINE       0     0     0
            c0t600144F01EB3862C0000494B55F50002d0  ONLINE       0     0     0
            c0t600144F01EB3862C0000494B55F80003d0  ONLINE       0     0     0
            c0t600144F01EB3862C0000494B55FA0004d0  ONLINE       0     0     0
            c0t600144F01EB3862C0000494B55FD0005d0  ONLINE       0     0     0
            c0t600144F01EB3862C0000494B56000006d0  ONLINE       0     0     0

Let’s carve out some of this newly created pool as a 32GB sparse volume. The -p option creates the full path if it does not currently exist.


root@ss2:~# zfs create -p -s -V 32G gwrp1/stores/lun0
 

root@ss2:~# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
gwrp1                        220K  62.6G  38.0K  /gwrp1
gwrp1/stores                67.9K  62.6G  36.0K  /gwrp1/stores
gwrp1/stores/lun0           32.0K  62.6G  32.0K  -

With a slice of the pool created we can now assign a GUID within the LU Provider layer using the sbdadm utility.

root@ss2:~# sbdadm create-lu /dev/zvol/rdsk/gwrp1/stores/lun0

Created the following LU:

              GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ----------------
600144f07ed404000000496813070001      34359672832      /dev/zvol/rdsk/gwrp1/stores/lun0

The LU Provider layer can also provision sparse based storage; however, in this case the ZFS backing store is already thin provisioned. If this were a physical disk backing store it would be prudent to use the LU Provider's sparse/thin provisioning feature. At this point we are ready to create the STMF Host Group and View that will be used to demonstrate a real world example of the multiprotocol capability with the COMSTAR OpenStorage ss2 host. In this case I will use VMware ESX as a storage consumer. To reflect the host group type we will name it ESX1, and then we need to add a view for the LU GUID of the virtualized storage.

root@ss2:~# stmfadm create-hg ESX1

root@ss2:~# stmfadm add-view -h ESX1 -n 1 600144f07ed404000000496813070001

root@ss2:~# stmfadm list-view -l 600144F07ED404000000496813070001
View Entry: 0
    Host group   : ESX1
    Target group : All
    LUN          : 1

With a view defined for the VMware hosts, let's add an ESX host FC HBA wwn membership to the defined ESX1 host group. We need to retrieve the wwn from the VMware server using either the console or a Virtual Infrastructure Client GUI. Personally I like the console esxcfg-info tool; however, if it's an ESXi host then the GUI will serve the info just as well.


VMware Screen shot WWN by Mike La Spina

[root@vh1 root]# esxcfg-info -s | grep 'Adapter WWN'
                     |—-Adapter WWNN…………………………20:00:00:e0:8b:01:f7:e2

root@ss2:~# stmfadm add-hg-member -g ESX1 wwn.210000e08b01f7e2

And the result of this change after we issue a rescan on vmhba1 and create a VMFS volume named ss2-cstar-zs0.0 with the re-provisioned storage is reflected here.

VMware Screen shot VMFS volume by Mike La Spina

This crafted storage is now a thinly provisioned VMFS store that can deliver replication, snapshots, cloning and advanced error detection, and it can also be re-platformed to a new storage system at a later date using ZFS's hardware autonomy. The storage server is very attractive as it creates a level of future proofing and insulates the storage consumers from proprietary vendor lock-in. But that's not the best part of this example. Let's say you wish to provide different tiers of connectivity services for your storage consumers. For example, we could attach a development or test environment using the iSCSI protocol while the more critical environments use an FC or FCoE based protocol.
So let's look at how we can add a second SCSI transport protocol to this interesting configuration.
Just as a note, the new iSCSI port provider is a kernel based implementation and has superior performance to its predecessor, the user-land iscsitgt implementation.


To add the iSCSI protocol we need to enable the iscsi/target port provider service.

root@ss2:~# svcadm enable iscsi/target

Now we need to create an iSCSI target and an iSCSI initiator definition so that we can add the iSCSI initiator to the ESX1 host group. As well, we should define a target portal group so we can control which host IP(s) will service this target.

root@ss2:~# itadm create-tpg 2 10.0.0.1

root@ss2:~# itadm create-target

root@ss2:~# itadm create-target -n iqn.1986-03.com.sun:02:ss2.0 -t 2
Target iqn.1986-03.com.sun:02:ss2.0 successfully created

By default the iqn will be created as a member of the All targets group.

If we left out the parameters, the itadm utility would create an iqn GUID and use the default target portal group of 1. And yes, for those familiar with the predecessor iscsitadm utility, we can now create an iqn name at the command line.
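
To confirm what was created and which target portal group the target is bound to, the targets can be listed verbosely (output omitted here):

root@ss2:~# itadm list-target -v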

At this point we need to define the initiator iqn to the iSCSI port provider service and, if required, additionally secure it using CHAP. We need to retrieve the VMware initiator iqn name from either the Virtual Infrastructure Client GUI or the console command line. Just as a note, if we did not specify a host group when we defined our view, the default would allow any initiator, FC, iSCSI or otherwise, to connect to the LU; this may have a purpose, but generally it is a bad practice to allow in most configurations. Once created, the initiator is added to the ESX1 host group, which enables our second access protocol to the same LU.

[root@vh1 root]# esxcfg-info -s | grep 'iqn'
         |—-ISCSI Name……………………………………..iqn.1998-01.com.vmware:vh1.1
         |—-ISCSI Alias…………………………………….iqn.1998-01.com.vmware:vh1.1

root@ss2:~# itadm create-initiator iqn.1998-01.com.vmware:vh1.1

root@ss2:~# stmfadm add-hg-member -g ESX1 iqn.1998-01.com.vmware:vh1.1
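
If CHAP is desired here as well, the initiator definition can carry the CHAP credentials too; a hedged sketch, assuming itadm's documented CHAP options (-u for the CHAP user name, -s to prompt for the secret) and a made-up user name vh1chap:

root@ss2:~# itadm modify-initiator -u vh1chap -s iqn.1998-01.com.vmware:vh1.1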

After adding the ss2 iSCSI interface IP to VMware’s Software iSCSI initiator we now have a multipath multiprotocol connection to our COMSTAR storage host.

 VMware iqn example By Mike La Spina

VMware mpath example by Mike La Spina 

This is simply the most functional and advanced Open Source storage product in the world today. Here we have commodity white boxes serving advanced storage protocols in my home lab; can you imagine what could be done with Data Center class server hardware and Fishworks? You can begin to see the advantages of this future proof platform. With protocols like FCoE, Infiniband and iSER (iSCSI without the TCP session overhead) already working in COMSTAR, the Sun Software Engineers and the OpenSolaris community are crafting outstanding storage products.

Hope you found this blog to be interesting.
Regards,


Mike

Site Contents: © 2009  Mike La Spina
