SUN Delivers De-duplication on ZFS

Today marks yet another great milestone for OpenSolaris and OpenStorage. SUN has, as promised, delivered a much anticipated de-duplication feature for us to explore and use. I must say that I am very excited about it, and without doubt this is a very cool feature indeed. The ideas for how to use it are running around in my head like neurons do, and you're sure to see some of those ideas surface in a blog or two.

Now before we get too excited, we need to keep in mind that this is the first release of this feature to the public space, and we are sure to find the odd bump or two along the road while this great new file system feature matures. I'm sure that we will be more than pleased with the new feature and the many other capabilities that are sure to come.

If you're interested in experimenting with the development releases, you should be able to get your hands on the feature in about 3-4 weeks through IPS or SXCE. Or if you're an advanced kernel type IT pro, you could build it from the source code now….right…so then, for the rest of us.

To try it out the easy way when it becomes available, just download and install OpenSolaris with the LiveCD (I recommend an x64 CPU with 4 GB of RAM):

http://dlc.sun.com/osol/opensolaris/2009/06/osol-0906-x86.iso

Then set your repository publisher to the dev IPS image server and issue the pkg image-update command

e.g.

$ pfexec pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org

$ pfexec pkg image-update

And explore!


Jeff Bonwick, Bill Moore and company are definitely thinking up some brilliant technical and practical applications of their knowledge, bringing us a powerful new storage direction that has changed the game.

Thanks go to the ZFS team.

You rock!

Regards,

Mike


Controlling Snapshot Noise

The ability to perform file system, database and volume snapshots grants us many data protection benefits. However, there are some serious problems that can occur if we do not carefully architect snapshot based storage infrastructures. This blog entry will discuss some of the issues with data noise induction and data integrity when using point in time data snapshot activities, and how we can reduce the negative aspects of these data protection methods.

With the emergence of snapshot technology in the data center, data noise induction is an unwanted by-product and needs to be addressed. Active data within a file store or raw volume will have significant amounts of temporary data like memory swaps and application temp files. This type of data is required for system operations, but unfortunately it is completely useless within a point in time snapshot and simply consumes valuable storage space with no permanent value within the scope of system data protection. There are many sources of this undesirable data noise that we need to consider and define strategies to isolate and eliminate where possible.

In some cases, using raw stores (e.g. iSCSI, FC, etc.) we will have duplicate snapshot functionality points in the storage stream, and this further complicates how we approach a solution to noise induction issues. One common example of snapshot functionality duplication is the Microsoft Windows 2003 Volume Shadow Copy Service, a.k.a. VSS. If we enable VSS and an external snapshot service is employed, then we are provisioning snapshots of snapshots, which of course is less than optimal because much of the delta between points in time is just redundant encapsulated data. There are some higher level advantages to allowing this to occur, like provisioning self-service end user restores and VSS-aware file system quiescence, but for the most part it is not optimal from a space consumption, performance or efficiency perspective.

If we perform snapshots at multiple points in the data storage stream using VSS, we will have three points of data delta: the changed data elements on the source store of the primary files, a copy-on-write set of changed blocks of the same primary store including its metadata, and finally the external snapshot delta and its metadata. As well, if the two snapshot events were to occur at the same time, the result is a non-integral copy of a snapshot and metadata, which is just pure wasted space since it is inconsistent.

With the co-existing use of VSS we need to define what functionality is important to us. For example, VSS is limited to 512 snapshots and 64 versions, so if we need to exceed these limits we have to employ an external snapshot facility. Or perhaps we need to allow users self-service file restore functionality from a shared folder. In many cases this functionality is provided by the external snapshot provisioning point. OpenSolaris, EMC and NetApp are some examples of storage products that can provide such functionality. Of course my preference is custom OpenSolaris storage servers or the S7000 series of storage products, which are based on OpenSolaris and are well suited for the formally supported side of things.

Solely provisioning the snapshots externally versus native MS VSS can significantly reduce induced data noise, provided the external provider supports VSS features or provides tools to control VSS. VSS copy-on-write snapshot capability should not be active when using this strategy, so as to eliminate the undesirable snapshot echo noise. Most environments will find that they require the use of snapshot services that exceed the native MS VSS capabilities. Provisioning the snapshot function directly on a shared storage system is a significantly better strategy versus allowing a distributed deployment of storage management points across your infrastructure.

OpenSolaris and ZFS provide greater depth in snapshot provisioning than Microsoft shared folder snapshot copy services. Implementing ZFS dramatically reduces space consumption and allows snapshots up to the maximum capacity of the storage system, and OpenSolaris provides MS SMB client access to the snapshots, from which users can manage recovery as a self service. By employing ZFS as a backing store, snapshot management is simplified, and snapshots are available for export to alternate share points by cloning and provisioning the point in time to data consumers performing a multitude of desirable tasks such as audits, validation, analysis and testing.
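To illustrate the self-service side, here is a minimal sketch in which the pool and dataset names are only assumptions: a ZFS file system is shared to Windows clients over SMB with its snapshot directory exposed.

zfs create -o sharesmb=on sp1/userdocs
zfs set snapdir=visible sp1/userdocs
zfs snapshot sp1/userdocs@hourly-01

Snapshots taken this way appear under the dataset's .zfs/snapshot directory and, with the OpenSolaris CIFS service enabled, can be reached by SMB clients for self-service restores.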

If we need to employ MS VSS snapshot services provisioned on a storage server that uses snapshot based data protection strategies, then we will need to prevent continuous snapshots on the storage server. We can use features like SnapMirror and ZFS replication to provision a replica of the data; however, this would need to be strictly limited in count, e.g. 2 or 3 versions, and timed to avoid a multiple system snapshot time collision. Older snapshots should be purged, and we only allow the MS VSS snapshot provisioning to keep the data deltas.
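For example, a minimal rotation along those lines might look like the following sketch, where the replica host dr-ss2, the pool dp1 and the repl- snapshot labels are all hypothetical placeholders:

zfs snapshot sp1/ss1-vol0@repl-2
zfs send -i @repl-1 sp1/ss1-vol0@repl-2 | ssh dr-ss2 zfs receive -F dp1/ss1-vol0
zfs destroy sp1/ss1-vol0@repl-0

Only the two newest replication points are retained on the source here, which keeps the snapshot count, and the noise captured within it, strictly bounded.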

Another common source of snapshot noise is temporary file data or memory swaps. Fortunately this type of noise is relatively easy to address, as we simply isolate this type of storage onto storage volumes or shares that are explicitly excluded from a snapshot service function. For example, if we are using VMFS stores we can place vswp files on a designated VMFS volume, and conversely within an operating system we can create a separate vmdk disk that maps to a VMFS volume which we also exclude from the snapshot function. This strategy requires that we ensure that any replication scheme incorporates the creation or one-time replication of these volumes. Unfortunately this methodology does not play well with Storage VMotion, so one must ensure that a relocation does not move the noisy vmdks back into the snapshot service provisioned stores.
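As a sketch of that isolation on an OpenSolaris backing store (the volume name and size are assumptions), a dedicated zvol can be created and registered as a COMSTAR LU for a vswp-only VMFS datastore, and then simply left out of every snapshot and replication schedule:

zfs create -V 100G sp1/ss1-swapvol0
sbdadm create-lu /dev/zvol/rdsk/sp1/ss1-swapvol0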

VMware VMFS volume VM snapshots are a significant source of data noise. When a snapshot is initiated from within VMware, all data writes are placed in delta file instances. These delta files will be captured in the external storage system's snapshot points and will remain there after the VM snapshot is removed. Significant amounts of data delta are produced by VM based snapshots, and sometimes multiple deltas can exceed the original vmdk size. An easy way to prevent this undesirable impact is to clone the VM to a store outside the snapshot provisioned stores rather than invoking snapshots.

Databases are probably the most challenging source of snapshot noise data and require a different strategy than isolation, because the data within a specific snapshot is all required to provide system integrity. For example, we cannot isolate SQL log data because it is required to do a crash recovery or to roll forward, etc. We can isolate temp database stores, since any data in those data stores would not be valid in a crash recovery state.

One strategy that I use, both as a blanket method when we are not able to use other methods and in concert with the previously discussed isolation methods, is a snapshot roll-up function. This strategy simply reduces the number of long term snapshot copies that are kept. The format is based on a Grandfather-Father-Son (GFS) retention chain of the snapshot copies and is well suited for a variety of data types. The effect is to provide a reasonable amount of data protection points to satisfy most computing environments and keep the captured noise to a manageable value. For example, if we were to snapshot every 15 minutes without any management cycle, we would accumulate ~35,000 delta points of data over the period of 1 year. Conversely, if we employ the GFS method we will accumulate 294 delta points of data over the period of 1 year. Obviously the consumption of storage resources is so greatly reduced that we could keep many additional key points in time if we wished and still maintain a balance of recovery points versus consumption rate.
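To make the roll-up concrete, here is a minimal sketch against the example volume used below; the daily- snapshot naming convention and the retention count of seven are assumptions for illustration, not a product feature:

zfs list -H -t snapshot -o name -S creation | grep '^sp1/ss1-vol0@daily-' | sed '1,7d' | while read snap; do zfs destroy "$snap"; done

Equivalent passes over weekly- and monthly- labels, each with their own retention counts, complete the GFS chain.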

Let's take a look at a simple real example of how snapshot noise can impact our storage system using VMware, OpenSolaris and ZFS block based iSCSI volume snapshots. In this example we have a simple Windows Vista VM that is sitting idle; in other words, only the OS is loaded and it is powered on and running.

First we take a ZFS snapshot of the VMFS ZFS iSCSI volume.

zfs snapshot sp1/ss1-vol0@beforevmsnap

Now we invoke a VMware based snapshot and have a look at the result.

root@ss1:~# zfs list -t snapshot
NAME                               USED  AVAIL  REFER  MOUNTPOINT
sp1/ss1-vol0@beforevmsnap          228M      -  79.4G  -

Keep in mind that we are not modifying any data files in this VM. If we were to change data files, the deltas would be much larger, and in many cases with multiple VMware snapshots they could exceed the VM's original size if allowed to remain for periods of weeks or longer. The backing store snapshot initially consumes 228MB, which will continue to grow as changes occur on the volume. A significant part of the 228MB is the VM's memory image in this case, and of course it has no permanent storage value.

sp1/ss1-vol0@after1stvmsnap          1.44M      -  79.5G  -

After the initial VMware snapshot occurs, we create a new point in time ZFS snapshot, and here we observe some noise in the next snapshot even though we have not changed any data files in the last minute or so.

sp1/ss1-vol0@after2ndvmsnap       1.78M      -  79.5G  -

And yet another ZFS snapshot a couple of minutes later shows more snapshot noise accumulation. This is one of the many issues that are present when we allow non-discretionary placement of files and temporary storage on snapshot based systems.

Now let's see the impact of destroying the ZFS snapshot we created earlier, after we delete the VMware based snapshot.

root@ss1:~# zfs destroy sp1/ss1-vol0@beforevmsnap
root@ss1:~# zfs list -t snapshot
NAME                               USED  AVAIL  REFER  MOUNTPOINT
sp1/ss1-vol0@after1stvmsnap       19.6M      -  79.2G  -
sp1/ss1-vol0@after2ndvmsnap       1.78M      -  79.2G  -

Here we observe the reclamation of more than 200MB of storage. And this is why GFS based snapshot rollups can provide some level of noise control.

Well I hope you found this entry to be useful.

Til next time..

Regards,

Mike

 




Securing COMSTAR and VMware iSCSI connections

Connecting VMware iSCSI sessions to COMSTAR or any iSCSI target provider securely is required to maintain a reliable system. Without some level of initiator to target connection gate keeping, we will eventually encounter a security event. This can happen from a variety of sources; for example, a non-cluster aware OS can connect to an unsecured VMware shared storage LUN and cause severe damage to it, since the OS has no shared LUN access knowledge. All too often we make assumptions that security is about confidentiality, when it is actually more commonly about data availability and integrity, both of which will be compromised if an unintentional connection were to write on a shared LUN.

At the very minimum security level we should apply non-authenticated named initiator access grants to our targets. This low security method defines initiator to target connection states for lower security tolerant environments. It is applicable when confidentiality is not as important and security is maintained within the physical access control realm. As well, it should coincide with SAN fabric isolation and be strictly managed by the Virtual System or Storage Administrators. Additionally, we can increase access security control by enabling CHAP authentication, which is a serious improvement over named initiators. I will demonstrate both of these security methods using COMSTAR iSCSI providers and VMware within this blog entry.

Before we dive into the configuration details, let's examine how LUs are exposed. COMSTAR controls iSCSI target access using several combined elements. One of these elements is within the COMSTAR STMF facility, where we can assign membership of host and target groups. By default, if we do not define a host or target group, any created target will belong to an implied ALL group. This group, as we would expect, grants any connecting initiator access to the LUN views assigned to the ALL group. These assignments are called views in the STMF state machine and are a mapping function of the Storage Block Driver service (SBD) to the STMF IT_nexus state tables.

This means that if we were to create an initiator without assigning a host group or host/target group combination, the initiator would be allowed unrestricted connectivity to any ALL group LUN views, possibly without any authentication at all. Allowing this to occur would of course be very undesirable from a security perspective in almost all cases. Conversely, if we use a target group definition, then only the initiators that connect to the respective target will see the LUN views which are mapped on that target definition instance.
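For example, a view added with no host or target group lands in the implied ALL groups, and list-view shows how a given LU is exposed; the GUID below is a placeholder rather than a real LU name:

# stmfadm add-view 600144f00000000000004a0000010001

# stmfadm list-view -l 600144f00000000000004a0000010001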

While target groups do not significantly improve access security, they do provide a means of controlling accessibility based on the definition of interface connectivity classes, which in turn can be mapped out on respective VLAN priority groups, bandwidth availability and applicable path fault tolerance capabilities. These are all important aspects of availability and, unfortunately, are seldom considered security concepts in many architectures.

Generally, on most simple storage configurations the use of target groups is not a requirement. However, they do provide a level of access control with LUN views. For example, we can assign LUN views to a target group, which in turn frees us from having to add the LUN view to each host group within shared LUN configurations like VMware stores. With combinations of host and target groups we can create more flexible methods in respect to shared LUN visibility. With the addition of simple CHAP authentication we can more effectively insulate target groups. This is primarily due to the ability to assign separate CHAP user and password values for each target.

Let's look at this visual depiction to help see the effect of using target and host groups.

COMSTAR host and target view depiction

In this depiction, any initiator that connects to the target group prod-tg1 will by default see the views that are mapped to that target group's interfaces. Additionally, if the initiator is also a member of the host group prod-esx1, those view mappings will also be visible.

One major difference with target groups versus the ALL group is that you can define LU views en masse for an entire class of initiator connections, e.g. a production class. This becomes an important control element in a unified media environment where the use of VLANs separates visibility. Virtual interfaces can be created at the storage server and attached to VLANs respectively. Target groups become very desirable as a control within a unified computing context.
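As a sketch of that control, reusing the target group name from the depiction above and the example target that appears later in this entry, a target group can be created and populated like this; note that a target generally has to be offline while it is being added to a group:

# stmfadm create-tg prod-tg1

# stmfadm offline-target iqn.2009-06.target.ss1.1

# stmfadm add-tg-member -g prod-tg1 iqn.2009-06.target.ss1.1

# stmfadm online-target iqn.2009-06.target.ss1.1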

Named Initiator Access

Enabling named initiator to target unauthenticated access with COMSTAR and VMware iSCSI services is a relatively simple operation. Let's examine how this method controls initiator access.

We will define two host groups, one for production ESX hosts and one for test ESX hosts.

# stmfadm create-hg prod-esx1

# stmfadm create-hg test-esx1

With these host groups defined, we individually assign LU views to the host groups, and then we define any initiator to be a member of one of the host groups, through which it will only see the views that belong to that host group and, additionally, any views assigned to the default ALL group.
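For instance, a LU view could be bound to the production host group as follows, where the GUID is again a placeholder for a real LU name as reported by stmfadm list-lu:

# stmfadm add-view -h prod-esx1 600144f00000000000004a0000010001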

To add a host initiator to a host group, we must first create it in the port provider of choice which in this case is the iSCSI port provider.

# itadm create-initiator iqn.1998-01.com.vmware:vh1.1

Once created the defined initiator can be added to a host group.

# stmfadm add-hg-member -g prod-esx1 iqn.1998-01.com.vmware:vh1.1

An ESX host initiator with this iqn name can now attach to our COMSTAR targets and will see any LU views that are added to the prod-esx1 host group. But there are still some issues here; for example, any ESX host with this initiator name will be able to connect to our targets and see the LUs. This is where CHAP can help to improve access control.

Adding CHAP Authentication on the iSCSI Target

Adding CHAP authentication is very easy to accomplish; we simply need to set a CHAP user name and secret on the respective iSCSI target. Here is an example of its application.

# itadm modify-target -s -u tcuid1 iqn.2009-06.target.ss1.1

Enter CHAP secret:
Re-enter secret:

The CHAP secret must be between 12 and 255 characters long. The addition of CHAP allows us to further reduce any risks of a potential storage security event. We can define additional targets, and each can have a different CHAP user name and/or secret.
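For instance (the second target name here is hypothetical), an additional target with its own CHAP identity could be created along these lines:

# itadm create-target -n iqn.2009-06.target.ss1.2 -s -u tcuid2

Enter CHAP secret:
Re-enter secret: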

CHAP is more secure when used with mutual authentication back to the source initiator, which is my preferred way to implement it on ESX 4 (ESX 3 does not support mutual CHAP). This mode does not stop a successful one-way authentication from an initiator to the target; rather, it allows the initiator to require that the target host system's iSCSI services authenticate back to the initiator, which provides validation that the target is indeed the correct one. Here is an example of the target side initiator definition that would provide this capability.

# itadm modify-initiator -s -u icuid1 iqn.1998-01.com.vmware:vh1.1

Enter CHAP secret:
Re-enter secret:

Configuring the ESX 4 Software iSCSI Initiator

On the ESX 4 host side we need to enter our initiator side CHAP values.

ESX 4 iSCSI Mutual CHAP

 

Be careful here, there are three places we can configure CHAP elements. The General tab allows a global point of administration where any target will inherit those entered values by default where applicable, e.g. target CHAP settings. The Dynamic tab can override the global settings, and likewise the Static tab overrides both the global and dynamic ones. In this example we are configuring a dynamically discovered target to use mutual (aka bidirectional) authentication.

In closing, CHAP is a reasonable method to ensure that we correctly grant initiator to target connectivity assignments in an effort to promote better integrity and availability. It does not, however, provide much on the side of confidentiality; for that we need more complex solutions like IPsec.

Hope you found this blog interesting.

Regards,

Mike


Power House Blog on NFS and VMware led by Chad Sakac

Another great read from Chad Sakac and company.

Quote:

We were quite a bit surprised to see how popular our “Multivendor iSCSI” post was. The feedback was overwhelming and very supportive of industry leaders partnering to ensure customer’s success with VMware. While writing that post, we (Vaughn Stewart from NetApp and Chad Sakac from EMC) discussed following up the iSCSI post with one focused on deploying VMware over NFS. The most difficult part around creating this post is that we couldn’t do it with our iSCSI-focused colleagues.

A “Multivendor Post” to help our mutual NFS customers using VMware

Many thanks to Chad and all of the contributors!

Regards,

Mike


OpenSolaris Storage Summit 2009

The OpenSolaris Storage Summit was really cool to attend this year. Mike Shapiro presented an interesting view of what is transpiring in storage hardware and where storage vendors need to focus in order to be successful in the next few years. As always, his presentation was a pleasure to follow. He talked about the S7000 series development and where it fits in terms of current commodity hardware advancements. It was exciting to hear that we will see COMSTAR integration in the next firmware release, coming in the second quarter of 2009. With the inclusion of COMSTAR we will have a very comprehensive storage provisioning solution that is fully supported by SUN.

I also had the pleasure of hearing Don MacAskill speak on his experiences with OpenSolaris and the voyage that brought him to success on the S7000 product. His content was brilliant as usual, and hopefully he will share more on SmugMug's blog site.

I presented in the afternoon and talked about using COMSTAR to re-provision existing storage systems in an effort to enhance the performance and capacity of these aging products and retain their value, as well as to bring desired features to them, like compression, snapshots and replication, without the high cost licenses on the native systems. I also created a couple of frame-stop video demos. The first one demonstrated the ability to attach existing storage systems with Fibre Channel and re-provision LUs, which can then be transitioned from one storage head to another without impacting a storage consumer connection. In this case the consumer was VMware, attached over both Fibre Channel and iSCSI in a multipath, multiprotocol configuration. The second demo revealed the cool world of encapsulation by virtualizing an OpenSolaris storage server within VMware and then replicating ZFS from an X4500 to the virtualized OpenSolaris VM. Once in a virtual state, we exposed replicated iSCSI targets to the underlying VMware ESXi server and attached to the VMFS volumes presented on the LUs.

Ben Rockwood also presented in the afternoon, and it was a great pleasure to see. He discussed his knowledge of ZFS, specifically some of the things he has discovered as best practices and the use of tools. It was very informative, and I wish he had had more time because the content was exceptional. All of the presenters, both mentioned and not, were really great, and I would like to thank all of them for giving us their valuable time in the OpenSolaris community efforts.

 

If you're interested in the content, please visit OpenSolaris Storage Summit

 

Regards,

 

Mike

