Securing COMSTAR and VMware iSCSI connections

Connecting VMware iSCSI sessions to COMSTAR, or to any iSCSI target provider, securely is required to maintain a reliable system. Without some level of initiator-to-target gatekeeping we will eventually encounter a security event. This can happen from a variety of sources; for example, a non-cluster-aware OS can connect to an unsecured VMware shared storage LUN and severely damage it, since that OS has no knowledge of shared LUN access. All too often we assume that security is about confidentiality, when it is actually more commonly about data availability and integrity, both of which are compromised if an unintended connection writes on a shared LUN.

At the very minimum security level we should apply non-authenticated named-initiator access grants to our targets. This low-security method defines initiator-to-target connection states for environments that can tolerate lower security. It is applicable when confidentiality is less important and security is maintained within the physical access control realm. It should also coincide with SAN fabric isolation and be strictly managed by the virtual system or storage administrators. Beyond that, we can increase access control by enabling CHAP authentication, which is a serious improvement over named initiators alone. I will demonstrate both of these security methods using COMSTAR iSCSI providers and VMware within this blog entry.

Before we dive into the configuration details let's examine how LUs are exposed. COMSTAR controls iSCSI target access using several combined elements. One of these is the COMSTAR STMF facility, where we can assign membership in host and target groups. By default, if we do not define a host or target group, any created target belongs to an implied ALL group. This group, as we would expect, grants any connecting initiator access to the LUNs assigned to the ALL group. These assignments are called views in the STMF state machine and are a mapping function of the Storage Block Driver service (SBD) to the STMF IT_nexus state tables.

This means that if we were to create an initiator without assigning it a host group or a host/target group combination, that initiator would be allowed unrestricted connectivity to any ALL group LUN views, possibly without any authentication at all. Allowing this to occur would of course be very undesirable from a security perspective in almost all cases. Conversely, if we use a target group definition, then only the initiators that connect to the respective target will see the LUN views mapped on that target definition instance.

While target groups do not significantly improve access security, they do provide a means of controlling accessibility based on defined classes of interface connectivity, which in turn can be mapped out onto respective VLAN priority groups, bandwidth availability and applicable path fault tolerance capabilities. These are all important aspects of availability and, unfortunately, are seldom considered security concepts in many architectures.

Generally, on most simple storage configurations, the use of target groups is not a requirement. However, they do provide a level of access control over LUN views. For example, we can assign LUN views to a target group, which frees us from having to add the LUN view to each host group in shared LUN configurations like VMware stores. With combinations of host and target groups we can create more flexible methods of controlling shared LUN visibility. With the addition of simple CHAP authentication we can insulate target groups more effectively, primarily because we can assign separate CHAP user and password values for each target.

Let's look at this visual depiction to help see the effect of using target and host groups.

COMSTAR host and target view depiction

In this depiction, any initiator that connects to the target group prod-tg1 will by default see the views that are mapped to that target group's interfaces. Additionally, if the initiator is also a member of the host group prod-esx1, those view mappings will also be visible.

One major difference between target groups and the ALL group is that you can define LU views en masse for an entire class of initiator connections, e.g. a production class. This becomes an important control element in a unified media environment where the use of VLANs separates visibility. Virtual interfaces can be created at the storage server and attached to the respective VLANs. Target groups become a very desirable control within a unified computing context, as the following example illustrates.
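
For example, a production target group could be created and populated along these lines (a sketch only, reusing the target name that appears later in this entry; note that COMSTAR requires a target to be offline while it is added to a target group):

# stmfadm create-tg prod-tg1

# stmfadm offline-target iqn.2009-06.target.ss1.1

# stmfadm add-tg-member -g prod-tg1 iqn.2009-06.target.ss1.1

# stmfadm online-target iqn.2009-06.target.ss1.1

LU views can then be mapped to the whole group with stmfadm add-view -t prod-tg1 rather than to individual host groups.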

Named Initiator Access

Enabling named initiator-to-target grants with unauthenticated access on COMSTAR and VMware iSCSI services is a relatively simple operation. Let's examine how this method controls initiator access.

We will define two host groups, one for production esx hosts and one for test esx hosts.

# stmfadm create-hg prod-esx1

# stmfadm create-hg test-esx1

With these host groups defined, we individually assign LU views to the host groups, and then we define any initiator as a member of one of the host groups, at which point it will only see the views that belong to that host group plus any views assigned to the default ALL group.
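
For example, a view for an LU could be assigned to the production host group as follows (a sketch, assuming the LU already exists; the GUID here is a hypothetical value for illustration):

# stmfadm add-view -h prod-esx1 -n 0 600144F07ED404000000496FF8CD0003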

To add a host initiator to a host group, we must first create it in the port provider of choice which in this case is the iSCSI port provider.

# itadm create-initiator iqn.1998-01.com.vmware:vh1.1

Once created, the defined initiator can be added to a host group.

# stmfadm add-hg-member -g prod-esx1 iqn.1998-01.com.vmware:vh1.1
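
We can confirm the membership at any time with a listing (an optional sanity check), which should show iqn.1998-01.com.vmware:vh1.1 as a member:

# stmfadm list-hg -v prod-esx1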

An ESX host initiator with this iqn name can now attach to our COMSTAR targets and will see any LU views that are added to the prod-esx1 host group. But there are still some issues here; for example, any ESX host with this initiator name will be able to connect to our targets and see the LUs. This is where CHAP can help to improve access control.

Adding CHAP Authentication on the iSCSI Target

Adding CHAP authentication is very easy to accomplish; we simply need to set a CHAP user name and secret on the respective iSCSI target. Here is an example of its application.

# itadm modify-target -s -u tcuid1 iqn.2009-06.target.ss1.1

Enter CHAP secret:
Re-enter secret:

The CHAP secret must be between 12 and 255 characters long. The addition of CHAP allows us to further reduce the risk of a potential storage security event. We can define additional targets, and each can have a different CHAP user name and/or secret.
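
For example, a second target could be created with its own CHAP identity (the target name here is hypothetical):

# itadm create-target -n iqn.2009-06.target.ss1.2

# itadm modify-target -s -u tcuid2 iqn.2009-06.target.ss1.2

Enter CHAP secret:
Re-enter secret: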

CHAP is more secure when used with mutual authentication back to the source initiator, which is my preferred way to implement it on ESX 4 (ESX 3 does not support mutual CHAP). This mode does not replace the one-way authentication from initiator to target; rather, it allows the initiator to require that the target host's iSCSI service authenticate back to the initiator, which validates that the target is indeed the correct one. Here is an example of the target-side initiator definition that provides this capability.

# itadm modify-initiator -s -u icuid1 iqn.1998-01.com.vmware:vh1.1

Enter CHAP secret:
Re-enter secret:

Configuring the ESX 4 Software iSCSI Initiator

On the ESX 4 host side we need to enter our initiator side CHAP values.

ESX 4 iSCSI Mutual CHAP

Be careful here: there are three places where we can configure CHAP elements. The General tab provides a global point of administration, where any target will inherit the entered values by default where applicable, e.g. target CHAP settings. The Dynamic Discovery tab can override the global settings, and the Static Discovery tab in turn overrides both the global and dynamic ones. In this example we are configuring a dynamically discovered target to use mutual (aka bidirectional) authentication.

In closing, CHAP is a reasonable method to ensure that we correctly grant initiator-to-target connectivity assignments, in an effort to promote better integrity and availability. It does not, however, provide much on the side of confidentiality; for that we need more complex solutions like IPsec.

Hope you found this blog interesting.

Regards,

Mike

Site Contents: © 2009  Mike La Spina

Creating USB based boot media for ESX 4 installs

As a follow-on to my Automating vSphere ESX4 Host Installations blog, I have detailed how to create USB based boot media using syslinux 3.82 and the ESX 4 installation source files. The process is actually quite simple, as we can create the bootable USB from a Windows system. You can also do the same with extlinux, but most people will have a Windows based management system, so let's focus only on the Windows based method within this blog.

The first step is of course to obtain a copy of the syslinux 3.82 or higher zip package from http://syslinux.zytor.com/ and extract it to a file store of your choice.

Prepare the media:

To prepare a USB memory stick we need to format it with a FAT32 file system. Windows Explorer provides that functionality with a simple right click on your USB device.

Format USB device
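
If you prefer the command line over Explorer, the same result can be had from a cmd prompt (assuming the stick is mounted as G:; the volume label is just an example):

C:\>format G: /FS:FAT32 /Q /V:ESX4BOOT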

Generate a bootable media device:

Once formatted, we will need to open a cmd prompt, go to our syslinux file store, and execute the following example.

Syslinux cmd prompt

In this example the syslinux win32 tool creates a SYSLINUX boot loader and boot sector on the USB memory device mapped to drive G:. The tool also defines the syslinux directory as the root path using the -d option, and this is where we will copy the ESX 4 initial ramdisk image file and some additional syslinux text menu files. If you're planning to use the USB device as a source for the ESX 4 packages, then those files, e.g. the VMware directory etc., would need to be placed in the root directory of the USB device and not the syslinux directory. In this blog the USB device is only used to launch a remote-source file install.
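
For reference, the command captured in the screenshot above would look something like this (a sketch; depending on your syslinux version and stick you may also need flags to write a master boot record and mark the partition active):

C:\syslinux-3.82\win32>syslinux.exe -d syslinux G: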

Copy menu and ESX 4 install files:

From the ESX 4 ISO or CD, copy the isolinux directory to G:\ and rename it to syslinux. Also copy the build_number file to G:\. Additionally, explore the downloaded syslinux file store, locate ..\syslinux\com32\menu\menu.c32, and copy this file to the G:\syslinux location. You may also want to copy vesamenu.c32 if you wish to check out a GUI based menu. That's really just eye candy on the requirements side, but it can provide some cool background display capabilities.

Create your selectable boot time menu:

Now we are ready to create the syslinux.cfg configuration file in the syslinux directory.  Here is an example I created for this blog.

default menu.c32
prompt 0
timeout 9000
menu title ESX 4 Automated Install VC1 HTTP Repo

label Default
kernel vmlinuz
append initrd=initrd.img vmkopts=debugLogToSerial:1 mem=512M quiet ks=http://vc1.laspina.ca:8088/esx/4.0/default.cfg

label vh0
kernel vmlinuz
append initrd=initrd.img vmkopts=debugLogToSerial:1 mem=512M quiet ks=http://vc1.laspina.ca:8088/esx/4.0/vh0.cfg

label vh1
kernel vmlinuz
append initrd=initrd.img vmkopts=debugLogToSerial:1 mem=512M quiet ks=http://vc1.laspina.ca:8088/esx/4.0/vh1.cfg

Once your cfg file is created you're ready to boot the USB device, either at the server console or over DRAC/ILOM interfaces. Select a server target from the menu and walk away.

Yes it’s that simple and easy to create USB bootable media for your ESX 4 installs.

Regards,

Mike

Site Contents: © 2009  Mike La Spina

Understanding VMFS volumes

Understanding VMFS volumes is an important element of working within VMware ESX environments. When storage issues surface we need to correctly evaluate the VMFS volume states and apply the appropriate corrective actions to remediate undesirable storage events. The VMFS architecture is not publicly available, and this certainly adds to the challenge when we need to correct a volume configuration or change issue. So let's begin to look at the components of a VMFS from what I have been able to decrypt using direct analysis.

All VMFS volume partitions will have a partition ID value of fb. Running fdisk can identify any partitions that are flagged as VMFS as shown here.

[root@vh1 ]# fdisk -lu /dev/sdc

Disk /dev/sdc: 274.8 GB, 274877889536 bytes
255 heads, 63 sectors/track, 33418 cylinders, total 536870878 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdc1           128 536860169 268430021   fb  Unknown

What’s important to note here is the sector size = 512 and the starting/ending blocks.


Many VMFS volume configuration elements are visible in the /vmfs mount folder. Within this directory there are two subdirectories: the volumes directory, which provisions the mount point location, and the devices directory, which holds configuration elements. Within the devices directory there are several subdirectories, of which I can explain the disks and lvm folders; the others are not known to me outside of theory only.

A key part of a VMFS volume is its UUID (aka Universally Unique Identifier), and as the name suggests it is used to ensure uniqueness when more than one volume is in use. The UUID is generated, based on the UUID creation standards, on the ESX host that initially created the VMFS volume. You can determine which ESX host the VMFS volume was created on by referring to the last 6 bytes of the UUID. This value is the same as the last six bytes of the ESX host's system UUID found in the /etc/vmware/esx.conf file.
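
A quick way to confirm this relationship from the service console is to pull the system UUID out of esx.conf (a simple check; the key path is shown as I recall it):

[root@vh1 ]# grep /system/uuid /etc/vmware/esx.conf

The last six bytes of the returned value should match the 000d60d46e2e tail seen in the volume UUID examples below.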

By far one of the most critical elements on a VMFS volume is the GUID. The GUID is integral within the volume because it is used to form the vml path (aka virtual multipath link). The GUID is stored within the VMFS volume header and begins at address 0x10002E.

The format of the GUID can vary based on different implementations of SCSI transport protocols, but generally you will see some obvious length variances in the vml path identifiers, which stem from the use of T11 and T10 standard SCSI address formats like EUI-64 and NAA 64. Regardless of those variables, there are components outside of the GUID within the vml that we should take notice of. The vml construct contains references to the LUN and partition values, and these are useful to know about. The following illustrates where these elements appear in some real examples.

When we issue an ls -l from the /vmfs/devices/disks directory the following info is observed.

vmhba#:Target:LUN:Partition -> vml:??_LUN_??_GUID:Partition

                       LUN     GUID                         PARTITION
                       ^       ^                            ^
vmhba0:1:0:0  -> vml.02000000005005076719d163d844544e313436
vmhba0:1:0:1  -> vml.02000000005005076719d163d844544e313436:1
vmhba32:1:3:0 -> vml.0200030000600144f07ed404000000496ff8cd0003434f4d535441
vmhba32:1:3:1 -> vml.0200030000600144f07ed404000000496ff8cd0003434f4d535441:1

As well, issuing ls -l on /vmfs/volumes lists the VMFS UUIDs and the link names, which are what we see displayed in the GUI client. In this example we will follow the UUID shown in blue and the named ss2-cstar-zs0.2 volume.

ss2-cstar-zs0.2 -> 49716cd8-ebcbbf9a-6792-000d60d46e2e

Additionally we can use esxcfg-vmhbadevs -m to list the vmhba, dev and UUID associations.

[root@vh1 ]#  esxcfg-vmhbadevs -m
vmhba0:1:0:1    /dev/sdd1                        48a3b0f3-736b896e-af8f-00025567144e
vmhba32:1:3:1   /dev/sdf1                        49716cd8-ebcbbf9a-6792-000d60d46e2e

As you can see, we do indeed have different GUID lengths in this example. We can also see that the vmhba device is linked to a vml construct, and this is how the kernel defines paths to a visible SCSI LUN. The vml path hosts the LUN ID, GUID and partition number information, and this is also stored in the volume's VMFS header. As well, the header contains a UUID signature, but this is not the VMFS UUID.

If we use hexdump as illustrated below we can see these elements in the VMFS header directly.


[root@vh1 root]# hexdump -C -s 0x100000 -n 800 /dev/sdf1
00100000  0d d0 01 c0 03 00 00 00  10 00 00 00 02 16 03 00  |                | <- LUN ID
00100010  00 06 53 55 4e 20 20 20  20 20 43 4f 4d 53 54 41  |  SUN     COMSTA| <- Target Label
00100020  52 20 20 20 20 20 20 20  20 20 31 2e 30 20 60 01  |R         1.0 ` | <- LUN GUID
00100030  44 f0 7e d4 04 00 00 00  49 6f f8 cd 00 03 43 4f  |D ~     Io    CO|
00100040  4d 53 54 41 00 00 00 00  00 00 00 00 00 00 00 00  |MSTA            |
00100050  00 00 00 00 00 00 00 00  00 00 00 02 00 00 00 fc  |                | <- Volume Size
00100060  e9 ff 18 00 00 00 01 00  00 00 8f 01 00 00 8e 01  |                |
00100070  00 00 91 01 00 00 00 00  00 00 00 00 10 01 00 00  |                |
00100080  00 00 d8 6c 71 49 b0 aa  97 9b 6c 2f 00 0d 60 d4  |   lqI    l/  ` |
00100090  6e 2e 6e 89 19 fb a6 60  04 00 a7 ce 20 fb a6 60  |n n    `       `|
001000a0  04 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |                |
001000b0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |                |
*
00100200  00 00 00 f0 18 00 00 00  90 01 00 00 00 00 00 00  |                |
00100210  01 00 00 00 34 39 37 31  36 63 64 38 2d 36 30 37  |    49716cd8-607| <- SEG UUID in ASCII
00100220  35 38 39 39 61 2d 61 64  31 63 2d 30 30 30 64 36  |5899a-ad1c-000d6|
00100230  30 64 34 36 65 32 65 00  00 00 00 00 00 00 00 00  |0d46e2e         |
00100240  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |                |
00100250  00 00 00 00 d8 6c 71 49  9a 89 75 60 1c ad 00 0d  |     lqI  u`    | <- SEG UUID
00100260  60 d4 6e 2e 01 00 00 00  e1 9c 19 fb a6 60 04 00  |` n          `  |
00100270  00 00 00 00 8f 01 00 00  00 00 00 00 00 00 00 00  |                |
00100280  8e 01 00 00 00 00 00 00  64 cc 20 fb a6 60 04 00  |        d    `  |
00100290  01 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |                |
001002a0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |                |

In addition to the VMFS header block we have the hidden metadata files of the volume, which you can list using ls -al. The .vh.sf file contains the UUID of the VMFS store and any member segment info. (I would presume the name vh stands for Volume Header … ;D)

[root@vh1 ]# hexdump -C -s 0x200000 -n 256 /vmfs/volumes/49716cd8-ebcbbf9a-6792-000d60d46e2e/.vh.sf
00200000  5e f1 ab 2f 04 00 00 00  1f d8 6c 71 49 9a bf cb  |^   /  lqI      | <- VMFS UUID
00200010  eb 92 67 00 0d 60 d4 6e  2e 02 00 00 00 73 73 32  |  g  ` n     ss2| <- Volume Name
00200020  2d 63 73 74 61 72 2d 7a  73 30 2e 32 00 00 00 00  |-cstar-zs0.2    |
00200030  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |                |
*
00200090  00 00 00 00 00 00 00 00  00 00 00 00 00 00 02 00  |                |
002000a0  00 00 00 10 00 00 00 00  00 d8 6c 71 49 01 00 00  |          lqI   |
002000b0  00 d8 6c 71 49 9a 89 75  60 1c ad 00 0d 60 d4 6e  |  lqI  u`    ` n| <- SEG UUID
002000c0  2e 01 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |                |
002000d0  00 00 00 01 00 20 00 00  00 00 00 01 00 00 00 00  |                |
002000e0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |                |

And of course we cannot leave out the partition entry block data for the device.

hexdump -C -n 256 /dev/sdf

00000000  fa b8 00 10 8e d0 bc 00  b0 b8 00 00 8e d8 8e c0  |                |
00000010  fb be 00 7c bf 00 06 b9  00 02 f3 a4 ea 21 06 00  |   |         !  |
00000020  00 be be 07 38 04 75 0b  83 c6 10 81 fe fe 07 75  |    8 u        u|
00000030  f3 eb 16 b4 02 b0 01 bb  00 7c b2 80 8a 74 01 8b  |         |   t  |
00000040  4c 02 cd 13 ea 00 7c 00  00 eb fe 00 00 00 00 00  |L     |         |
00000050  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |                |
*
000001b0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 02  |                |
000001c0  03 00 fb fe ff ff 80 00  00 00 72 ef bf 5d 00 00  |          r  ]  | Type Start End
000001d0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |                |

With this detailed information it is possible to solve some common security issues with VMware stores like volume deletion and unintentional LUN ID changes.

Recently VMware added a somewhat useful command line tool named vmfs-undelete, which exports metadata to a recovery log file that can restore vmdk block addresses in the event of deletion. It's a simple tool; at present it's experimental and unsupported, and it is not available on ESXi. The tool of course demands that you were proactive and ran its backup function in order to use it. Well, I think this falls well short of what we need here. What if you have no previous backups of the VMFS configuration? We really need to know what to look for and how to correct it, and that's exactly why I created this blog.

The volume deletion event is quite easy to fix, and that's simply because the VMFS volume header is not actually deleted. The partition block data is what gets trashed, and you can just about get away with murder when it comes to recreating that part. Within the first 128 sectors is the piece we need to fix. One method is to create a store on the same storage volume and then block copy the partition data to a file, which can in turn be block copied onto the deleted device's partition data blocks, and this will fix the issue.

For example, we create a new VMFS store on the same storage backing with the same LUN size as the original, and it shows up as a LUN with a device name of /dev/sdd. We can use esxcfg-vmhbadevs -m to find it if required.

The deleted device name was /dev/sdc

We use the dd command to do a block copy from the new device, either to a file or even directly in this case.

Remember to back it up first!

dd if=/dev/sdc of=/var/log/part-backup-sdc-1st.hex  bs=512 count=1

then issue

dd if=/dev/sdd of=/dev/sdc bs=512 count=1

or 

dd if=/dev/sdd of=/var/log/part-backup-sdd.hex bs=512 count=1

dd if=/var/log/part-backup-sdd.hex of=/dev/sdc bs=512 count=1

I personally like using a file to perform this function, as it becomes a future backup element which you can move to a safe location. The file can also be edited with other utilities, e.g. hexedit, to provide more flexibility. Additionally you could use fdisk to directly edit the partition table and provide the correct start and end addresses. This is something you should only do if you are well versed in its usage.
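
For reference, re-creating the partition from the earlier fdisk example would run along these lines (a sketch only, with the prompts condensed; the start and end sectors must match the original layout exactly):

[root@vh1 ]# fdisk -u /dev/sdc
n            (new primary partition 1)
128          (first sector, matching the original start)
536860169    (last sector, matching the original end)
t            (change the partition type)
fb           (the VMFS partition ID)
w            (write the table and exit)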

As an additional level of protection we could even include making backups of the .vh.sf metadata file and the VMFS header.

cp /vmfs/volumes/49716cd8-ebcbbf9a-6792-000d60d46e2e/.vh.sf /var/log/vh.sf.bu

dd if=/dev/sdc of=/var/log/vmfsheader-bu-sdc.hex bs=512 count=4096

This would grant the ability for support to examine the exact details of the VMFS configuration and potentially allow recovery from more complex issues.

One of the most annoying security events is when a VMFS LUN gets changed inadvertently. If a VMFS volume's LUN ID changes and it is presented to an ESX host, the presented volume will be treated as a potential snapshot LUN. If this event occurs and the ESX server's advanced LVM parameter settings are at their defaults, the ESX host will not mount the volume. This behaviour prevents possible corruption and downing of the host, since the host cannot determine which VM metadata inventory is correct.

If you are aware that the LUN ID has changed, then the best course of action is to re-establish the correct LUN ID at the storage server first and rescan the affected vmhbas. This is important because if you instead resignature the VMFS volume, the VMs will also have to be imported back into inventory, and Virtual Center logging and various other settings will be lost when that action is performed. This is a result of then having an inconsistent UUID between the metadata, the mount location and the vmx file UUID value.

If the storage change cannot be reverted, then a VMFS resignature is the only option for reprovisioning the VMFS volume mount.

This is invoked by setting LVM.DisallowSnapshotLun = 0 and LVM.EnableResignature = 1; these should be reverted back once the VMFS resignature operation is complete.
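
From the service console this can be done with esxcfg-advcfg, for example (a sketch; rescan the vmhbas after setting them):

[root@vh1 ]# esxcfg-advcfg -s 0 /LVM/DisallowSnapshotLun
[root@vh1 ]# esxcfg-advcfg -s 1 /LVM/EnableResignature

Once the resignatured volume is mounted, set the two values back to 1 and 0 respectively.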

Regards,

Mike

Site Contents: © 2009  Mike La Spina

A centrally based method for patching ESX3 VMWare Servers

I have updated my ESX servers manually many times and I find the process, to say the least, "annoying", so I decided to change it to an HTTP based method with a modified patch configuration. I found that it really works well.

I did some searching before settling on a method and found this blog entry, which is quite good.

http://virtrix.blogspot.com/2007/03/vmware-autopatching-your-esx-host.html

I felt it does have some issues, and I wanted to avoid running custom scripts on the server side. So here is what I built for my patch management solution.

Using the standard ESX3 server tools, cron and esxupdate, I created the following on my servers.

Define a new cron entry for running esxupdate on the first Sunday of each month.
I chose to create a separate entry, avoiding the ESX installed ones, for safety reasons.

Edit the file /etc/crontab and add the line shown in bold.

[root@yourserver /]nano /etc/crontab

SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/

# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly
30 3 1 * 0 root run-parts /etc/cron.esxupdate
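
One caveat: cron treats the day-of-month and day-of-week fields as an OR when both are restricted, so the entry above will actually fire on the 1st of every month and on every Sunday. If you want it to run strictly on the first Sunday, one approach is a date test inside the entry (a sketch; note the escaped % characters, which crontab otherwise treats specially):

30 3 1-7 * * root [ "$(date +\%a)" = "Sun" ] && run-parts /etc/cron.esxupdate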

Create a new directory for the esxupdate cron run part as follows.

[root@yourserver /]mkdir /etc/cron.esxupdate

Create a new command file for the esxupdate cron run part and add the text lines as follows.

[root@yourserver /]nano /etc/cron.esxupdate/esxpatch

#!/bin/sh
esxupdate -r http://yourhttpserver.fq.dns:8088/esx3.all update

Set the file as executable using chmod

chmod 755 /etc/cron.esxupdate/esxpatch

Allow outgoing traffic from the server using the esxcfg-firewall script as follows.

[root@yourserver /]esxcfg-firewall --allowOutgoing

Create an http server to host the patch repository.
I built an IIS server on my Virtual Center 3 server on port 8088 as follows

Install IIS (Be sure to secure it).

Create a patch folder named C:\VMWarePatches and set your IIS home directory as follows.

 IIS example

You will need to add the following content types, via a right mouse click -> Properties on the IIS server definition within the Computer Management mmc, as shown.

Content types .hdr and .info need to be present.

 IIS Mime Example

Download the patches you require and extract them to c:\vmwarepatches in the usual manner for repo based deployment. For example, I downloaded the available esx-upgrade-from-esx3-3.0.2-61818.tar.gz upgrade file and extracted it to c:\vmwarepatches, which leaves a directory named 61818 in the http distro folder.

Rename this directory to esx3.all

Now this is were things get interesting as we are going to customize this patch package to include the latest additional patches that are available.

VMware is using the installation and update functionality of Red Hat rpm, yum and the esxupdate scripts, which means we can modify the repo source by conforming to the existing configuration rules. The configuration of the repo is based on xml tags, of which I modified three tag element types.

1) info

The descriptor xml file found in the root of the update package which contains the descriptions of various patches in the package and looks like the following.

<descriptor version="1.0">
  <vendor>VMware, Inc.</vendor>
  <product>VMware ESX Server</product>
  <release>3.0.2-62488</release>
  <releasedate>Fri Nov 2 05:01:31 PDT 2007</releasedate>
  <summary>3.0.2 Update 1 of VMware ESX Server</summary>
  <description>This is 3.0.2 Update 1 rollup to Nov 2nd,2007 of VMware ESX Server.
It contains:
3.0.2-52542  Full 3.0.2 release of VMware ESX Server
ESX-1001724  Security bugs fixed in vmx rpm.
ESX-1001735  To update tzdata rpm.

ESX-1002424  VMotion RARP broadcast to multiple vmnic.
ESX-1002425  VMware-hostd-esx 3.0.2-62488
ESX-1002429  Path failback issue with EMC iSCSI array.
</description>

This xml file is used by the esxupdate script and can be modified to support additional patches. The parts we need to add are some descriptor text element data so that we know what the package covers. I added ESX-1002424, 25 and 29 to the descriptor tag and edited the heading's date to reflect its current value.

2) rpms

The second element type I changed was the rpmlist tag. It looks like the following, which is only a partial listing.

  <rpmlist>
    <rpm arch="i386" rel="8.37.15" ver="2.4">kernel-utils</rpm>
    <rpm arch="i686" rel="47.0.1.EL.62488" ver="2.4.21">kernel-vmnix</rpm>
    <rpm arch="i386" rel="66" ver="1.2.7">krb5-libs</rpm>
    <rpm arch="i386" rel="11" ver="1.1.1">krbafs</rpm>
    <rpm arch="i386" rel="3vmw" ver="1.1.22.15">kudzu</rpm>
    <rpm arch="i386" rel="70RHEL3" ver="0.1">laus-libs</rpm>
    <rpm arch="i386" rel="12" ver="378">less</rpm>
    <rpm arch="i386" rel="52542" ver="3.0.2">VMware-esx-perftools</rpm>
    <rpm arch="i386" rel="52542" ver="3.0.2">VMware-esx-uwlibs</rpm>
    <rpm arch="i386" rel="62488" ver="3.0.2">VMware-esx-vmx</rpm>
    <rpm arch="i386" rel="61818" ver="3.0.2">VMware-esx-vmkernel</rpm>
    <rpm arch="i386" rel="52542" ver="3.0.2">VMware-esx-vmkctl</rpm>
    <rpm arch="i386" rel="55869" ver="3.0.2">VMware-esx-tools</rpm>
    <rpm arch="i386" rel="52542" ver="3.0.2">VMware-esx-srvrmgmt</rpm>
    <rpm arch="i386" rel="52542" ver="3.0.2">VMware-esx-scripts</rpm>
  </rpmlist>

When you extract a newer esx3 patch it will contain its own descriptor, rpmlist tags etc., and you can update this major upgrade list to include those new elements. You need to replace the old tag info if it exists, which it usually will. For example, the bold entry above for VMware-esx-vmkernel needs to be completely updated by removing the old tag values like rel="61818" and inserting the new value of rel="62488". I simply searched for the tag name VMware-esx-vmkernel, deleted the line and inserted the updated one. I will eventually write a script to do the config edits, but for now these hacks will do the trick.

3) nodeps

The third element I changed will not normally be necessary, but in this case there is a bug in this upgrade package that did not account for the rpm option value of -U, which means upgrade an rpm if present. The descriptor included a tag named nodeps, which I don't think works well when an rpm of the same version already exists, so we need to remove the tag altogether for previous base installs above version esx 3.0.0.

I deleted these tags for my config.

<nodeps>
<rpm>kbd</rpm>
<rpm>nfs-utils</rpm>
</nodeps>


In addition to the descriptor xml file edits, we will need to copy the new rpm files into the esx3.all folder and remove the old ones, then edit the header.info file found in the headers subfolder, copy the new hdr files into the headers folder and remove the old ones there.

For example in adding ESX-1002424 I copied

From c:\vmwarepatches\esx-1002424

Files

VMware-esx-apps-3.0.2-62488.i386.rpm
VMware-esx-vmkernel-3.0.2-62488.i386.rpm
VMware-esx-vmx-3.0.2-62488.i386.rpm

To c:\vmwarepatches\esx3.all

From c:\vmwarepatches\esx-1002424\headers

Files

VMware-esx-apps-0-3.0.2-62488.i386.hdr
VMware-esx-vmkernel-0-3.0.2-62488.i386.hdr
VMware-esx-vmx-0-3.0.2-62488.i386.hdr

To c:\vmwarepatches\esx3.all\headers

You should delete any older file versions matching the prefix portions of the newer files to avoid confusion. Everything will still function if you leave the old ones, but it's not a best practice.

The last step is to edit the header.info file: search for the text lines with the rpm file prefix and replace those info lines with the newer ones found in the updated patch's header.info file.

For example, search for the text VMware-esx-vmx in the master header.info file, which looks like the following partial example.

0:openssh-3.6.1p2-33.30.13vmw.i386=openssh-3.6.1p2-33.30.13vmw.i386.rpm
0:net-snmp-libs-5.0.9-2.30E.20.i386=net-snmp-libs-5.0.9-2.30E.20.i386.rpm
0:libtermcap-2.0.8-35.i386=libtermcap-2.0.8-35.i386.rpm
0:VMware-esx-vmx-3.0.2-62488.i386=VMware-esx-vmx-3.0.2-62488.i386.rpm
0:acl-2.2.3-1.i386=acl-2.2.3-1.i386.rpm
0:dev-3.3.12.3-1.i386=dev-3.3.12.3-1.i386.rpm
0:ethtool-1.8-3.3.i386=ethtool-1.8-3.3.i386.rpm

Replace it completely with the newer header.info text entry.

I tested my configuration on several base install versions, and it was nice to see that on an esx 3.0.2-61818 host it only downloaded and installed the additional patches that were added to the master repo esx3.all.

You should keep in mind that, based on the cron schedule, your VMs may need to go down on that host; you must plan those outage times and gracefully shut down where applicable.

Till next time …  happy esx patching.

Regards,

Mike

Site Contents: © 2008  Mike La Spina