Running VMware’s vCSA on MSSQL

I really like the new vCenter Appliance, but I am not a fan of either the embedded DB2 or the Oracle database options. It seems unusual that VMware did not support MSSQL on the first release of the appliance. I suspect they have their reasons, but I prefer not to wait for it when I know it runs without issue in my lab. If you're interested in the details, read on and discover how I hacked it into submission.

The vCSA Linux host contains almost all the necessary components to drive an MSSQL DB. The missing elements are the MSSQL ODBC and JDBC drivers. Both are available from Microsoft and can be installed on the appliance. As you should know, VMware would not support such a hack and I'm not suggesting you run it in your world. For me it's more about the challenge and adventure of it. Besides, I don't expect VMware to support it, nor do I need the support.

Outside of these two Microsoft products it is necessary to modify some of the VMware property files and bash scripts to allow for the MSSQL drivers.

The appliance hosts three major application components: a web front end using lighttpd, a vpxd engine which appears to be coded in C, and a Tomcat instance. Surrounding these elements we have configuration scripts and files that provide end users an easy way to set up the appliance. The first area to address for MSSQL connectivity is Microsoft's SQL Server ODBC Driver 1.0 for Linux. It can be downloaded and installed directly on the vCSA using curl.

 

vcsa1:/ # cd /tmp
vcsa1:/tmp # curl http://download.microsoft.com/download/6/A/B/6AB27E13-46AE-4CE9-AFFD-406367CADC1D/Linux6/sqlncli-11.0.1790.0.tar.gz -o sqlncli-11.0.1790.0.tar.gz
vcsa1:/tmp # tar -xvf sqlncli-11.0.1790.0.tar.gz
vcsa1:/tmp/sqlncli-11.0.1790.0 # ./install.sh install --force

 

Enter YES to accept the license or anything else to terminate the installation: YES
Checking for 64 bit Linux compatible OS ..................................... OK
Checking required libs are installed ................................. NOT FOUND
unixODBC utilities (odbc_config and odbcinst) installed ............ NOT CHECKED
unixODBC Driver Manager version 2.3.0 installed .................... NOT CHECKED
unixODBC Driver Manager configuration correct ...................... NOT CHECKED
Microsoft SQL Server ODBC Driver V1.0 for Linux already installed .. NOT CHECKED
Microsoft SQL Server ODBC Driver V1.0 for Linux files copied ................ OK
Symbolic links for bcp and sqlcmd created ................................... OK
Microsoft SQL Server ODBC Driver V1.0 for Linux registered ........... INSTALLED

You will find the install places the ODBC driver in /opt/microsoft.

We need to edit the appliance odbcinst template file to include the newly added driver.

vcsa1:/ # vi /etc/vmware-vpx/odbcinst.ini.tpl

We need to append the following ODBC driver entry:

[MSSQL]
Description = Microsoft ODBC driver for SQL v11
Driver = /opt/microsoft/sqlncli/lib64/libsqlncli-11.0.so.1790.0
UsageCount = 1
Threading = 1

The Microsoft driver expects OpenSSL 1.0 to be available. It's not installed on the appliance and I don't feel it's necessary either. We can simply point to the installed 0.9.8 libraries and the driver will have no issues. A couple of symbolic links are all we need to get things rolling, as shown here.

vcsa1:/tmp # ln -s /usr/lib64/libcrypto.so.0.9.8 /usr/lib64/libcrypto.so.10
vcsa1:/tmp # ln -s /usr/lib64/libssl.so.0.9.8 /usr/lib64/libssl.so.10
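If you want to confirm the driver now resolves its SSL dependencies through those links, a quick ldd check should show libcrypto.so.10 and libssl.so.10 mapping to the 0.9.8 libraries (an optional sanity check; the grep filter is just for readability):

vcsa1:/tmp # ldd /opt/microsoft/sqlncli/lib64/libsqlncli-11.0.so.1790.0 | grep -E 'libssl|libcrypto'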

Tomcat also needs to access the MSSQL server, which requires the Microsoft JDBC driver; it too can be downloaded with curl.

vcsa1:/ # cd /tmp

vcsa1:/tmp # curl http://download.microsoft.com/download/0/2/A/02AAE597-3865-456C-AE7F-613F99F850A8/sqljdbc_4.0.2206.100_enu.tar.gz -o sqljdbc_4.0.2206.100_enu.tar.gz

vcsa1:/tmp # tar -xvf sqljdbc_4.0.2206.100_enu.tar.gz

vcsa1:/tmp # cp sqljdbc_4.0/enu/sqljdbc4.jar /usr/lib/vmware-vpx/common-jars/

 

I suspect that the JDBC driver is used within the Tomcat application to collect status info from ESX agents, but don’t hold me to that guess.

Once we have our MSSQL drivers in place we need to focus on hacking the config files and shell scripts. Let's start with the web front end.

Within /opt/vmware/share/htdocs/service/virtualcenter we find the appliance service configuration scripts and other various files. We need to edit the following files.

layout.xml – Database action fields
fields.properties – Database type field list values

We need to add the mssql DBType values to make the option available in the database configuration menu and to enable the associated actions.

layout.xml needs the following segment replaced.

<changeHandlers>
 <!-- actions can be enable,disable,clear -->
 <onChange id="database.vc.type">
  <field id="database.vc.server">
   <if value="embedded" actions="disable,clear"/>
   <if value="UNCONFIGURED" actions="disable,clear"/>
   <if value="db2" actions="enable"/>
   <if value="oracle" actions="enable"/>
   <if value="mssql" actions="enable"/>
  </field>
  <field id="database.vc.port">
   <if value="embedded" actions="disable,clear"/>
   <if value="UNCONFIGURED" actions="disable,clear"/>
   <if value="db2" actions="enable"/>
   <if value="oracle" actions="enable"/>
   <if value="mssql" actions="enable"/>
  </field>
  <field id="database.vc.instance">
   <if value="embedded" actions="disable,clear"/>
   <if value="UNCONFIGURED" actions="disable,clear"/>
   <if value="db2" actions="enable"/>
   <if value="oracle" actions="enable"/>
   <if value="mssql" actions="enable"/>
  </field>
  <field id="database.vc.login">
   <if value="embedded" actions="disable,clear"/>
   <if value="UNCONFIGURED" actions="disable,clear"/>
   <if value="db2" actions="enable"/>
   <if value="oracle" actions="enable"/>
   <if value="mssql" actions="enable"/>
  </field>
  <field id="database.vc.password">
   <if value="embedded" actions="disable,clear"/>
   <if value="UNCONFIGURED" actions="disable,clear"/>
   <if value="db2" actions="enable"/>
   <if value="oracle" actions="enable"/>
   <if value="mssql" actions="enable"/>
  </field>
 </onChange>
</changeHandlers>

fields.properties needs the following edit, where we are adding mssql to the assignment statement.

database.type.vc.values = UNCONFIGURED;embedded;oracle;mssql

Once we have the web front end elements populated with the new values we can focus on the bash shell scripts. The scripts are located in /usr/sbin. We need to modify the following script.

vpxd_servicecfg – This script needs the following subroutines replaced with versions that format the database connection string for mssql. There are two areas which need modification: do_db_test and do_db_write. The test routine needs to accept mssql as a valid DBType and, based on that type, make a connection using a series of input parameters such as the server address, user and instance. The cfg write routine also needs to detect the mssql DBType and apply a custom modification to the database connection URL. These calls depend on a proper MSSQL ODBC driver configuration.

###############################
#
# Test DB configuration
#
do_db_test()
{
   DB_TYPE=$1
   DB_SERVER=$2
   DB_PORT=$3
   DB_INSTANCE=$4
   DB_USER=$5
   DB_PASSWORD=$6

   log "Testing DB. Type ($DB_TYPE) Server ($DB_SERVER) Port ($DB_PORT) Instance ($DB_INSTANCE) User ($DB_USER)"

   case "$DB_TYPE" in
      "mssql" )
         log "DB Type is MSSQL"
         ;;
      "oracle" )
         ;;
      "embedded" )
         set_embedded_db
         ;;
      *)
         log "ERROR: Invalid DB TYPE ($DB_TYPE)"
         RESULT=$ERROR_DB_INVALID_TYPE
         return 1
         ;;
   esac

   if [[ -z "$DB_SERVER" ]]; then
      log "ERROR: DB Server was not specified"
      RESULT=$ERROR_DB_SERVER_NOT_FOUND
      return 1
   fi

   ping_host "$DB_SERVER"
   if [[ $? -ne 0 ]]; then
      log "ERROR: Failed to ping DB server: " "$DB_SERVER"
      RESULT=$ERROR_DB_SERVER_NOT_FOUND
      return 1
   fi

   # Check for spaces
   DB_PORT=`$SED 's/^ *$/0/' <<< $DB_PORT`

   # check for non-digits
   if [[ ! "$DB_PORT" =~ ^[0-9]+$ ]]; then
      log "Error: Invalid database port: " $DB_PORT
      RESULT=$ERROR_DB_SERVER_PORT_INVALID
      return 1
   fi

   if [[ -z "$DB_PORT" || "$DB_PORT" == "0" ]]; then
      # Set port to default
      case "$DB_TYPE" in
         "db2")
            DB_PORT="50000"
            ;;
         "oracle")
            DB_PORT="1521"
            ;;
         *)
            DB_PORT="-1"
            ;;
      esac
   fi

   #Check whether numeric
   typeset -i xport
   xport=$(($DB_PORT+0))
   if [ $xport -eq 0 ]; then
      log "Error: Invalid database port: " $DB_PORT
      RESULT=$ERROR_DB_SERVER_PORT_INVALID
      return 1
   fi

   #Check whether within valid range
   if [[ $xport -lt 1 || $xport -gt 65535 ]]; then
      log "Error: Invalid database port: " $DB_PORT
      RESULT=$ERROR_DB_SERVER_PORT_INVALID
      return 1
   fi

   if [[ -z "$DB_INSTANCE" ]]; then
      log "ERROR: DB instance was not specified"
      RESULT=$ERROR_DB_INSTANCE_NOT_FOUND
      return 1
   fi

   if [[ -z "$DB_USER" ]]; then
      log "ERROR: DB user was not specified"
      RESULT=$ERROR_DB_CREDENTIALS_INVALID
      return 1
   fi

   if [[ -z "$DB_PASSWORD" ]]; then
      log "ERROR: DB password was not specified"
      RESULT=$ERROR_DB_CREDENTIALS_INVALID
      return 1
   fi

   if [ `date +%s` -lt `cat /etc/vmware-vpx/install.time` ]; then
      log "ERROR: Wrong system time"
      RESULT=$ERROR_DB_WRONG_TIME
      return 1
   fi

   return 0
}

###############################
#
# Write DB configuration
#
do_db_write()
{
   DB_TYPE=$1
   DB_SERVER=$2
   DB_PORT=$3
   DB_INSTANCE=$4
   DB_USER=$5
   DB_PASSWORD=$6

   case "$DB_TYPE" in
      "embedded" )
         set_embedded_db_autostart on &>/dev/null
         start_embedded_db &>/dev/null
         if [[ $? -ne 0 ]]; then
            log "ERROR: Failed to start embedded DB"
         fi
         ;;
      * )
         set_embedded_db_autostart off &>/dev/null
         stop_embedded_db &>/dev/null
         ;;
   esac

   set_embedded_db

   ESCAPED_DB_INSTANCE=$(escape_for_sed $DB_INSTANCE)
   ESCAPED_DB_TYPE=$(escape_for_sed $DB_TYPE)
   ESCAPED_DB_USER=$(escape_for_sed $DB_USER)

   # these may be changed below
   ESCAPED_DB_SERVER=$(escape_for_sed $DB_SERVER)
   ESCAPED_DB_PORT=$(escape_for_sed $DB_PORT)

   case "$DB_TYPE" in
      "db2")
         # Set port to default if its set to 0
         if [[ "$DB_PORT" -eq 0 ]]; then
            DB_PORT=50000
            ESCAPED_DB_PORT=$(escape_for_sed $DB_PORT)
         fi

         DRIVER_NAME=""
         URL=""

         FILE=`$MKTEMP`
         $CP $DB2CLI_INI_OUT $FILE 1>/dev/null 2>&1
         DB_FILES[${#DB_FILES[*]}]="$DB2CLI_INI_OUT $FILE" # Store file tuple

         $SED -e "s!$TNS_SERVICE_SED_STRING!$ESCAPED_DB_INSTANCE!" \
              -e "s!$SERVER_NAME_SED_STRING!$ESCAPED_DB_SERVER!" \
              -e "s!$SERVER_PORT_SED_STRING!$ESCAPED_DB_PORT!" \
              -e "s!$USER_ID_SED_STRING!$ESCAPED_DB_USER!" \
              $DB2CLI_INI_IN > $DB2CLI_INI_OUT
         ;;
      "oracle")
         # Add [ ] around IPv6 addresses
         echo "$DB_SERVER" | grep -q '^[^[].*:' && DB_SERVER='['"$DB_SERVER"']'
         ;;
      "mssql" )
         TNS_SERVICE=$DB_INSTANCE
         # Set port to default if its set to 0
         if [[ "$DB_PORT" -eq 0 ]]; then
            DB_PORT=1433
            ESCAPED_DB_PORT=$(escape_for_sed $DB_PORT)
         fi
         ;;
   esac

   if [[ "$DB_PORT" -eq 0 ]]; then
      DB_PORT=`get_default_db_port $DB_TYPE`
   fi

   # Save the original ODBC and DB configuration files
   FILE=`$MKTEMP`
   $CP $ODBC_INI_OUT $FILE 1>/dev/null 2>&1
   DB_FILES[${#DB_FILES[*]}]="$ODBC_INI_OUT $FILE" # Store filename

   FILE=`$MKTEMP`
   $CP $ODBCINST_INI_OUT $FILE 1>/dev/null 2>&1
   DB_FILES[${#DB_FILES[*]}]="$ODBCINST_INI_OUT $FILE" # Store filename

   # update the values
   ESCAPED_DB_SERVER=$(escape_for_sed $DB_SERVER)
   ESCAPED_DB_PORT=$(escape_for_sed $DB_PORT)

   # Create new configuration files
   $SED -e "s!$DB_TYPE_SED_STRING!$ESCAPED_DB_TYPE!" \
        -e "s!$TNS_SERVICE_SED_STRING!$ESCAPED_DB_INSTANCE!" \
        -e "s!$SERVER_NAME_SED_STRING!$ESCAPED_DB_SERVER!" \
        -e "s!$SERVER_PORT_SED_STRING!$ESCAPED_DB_PORT!" \
        -e "s!$USER_ID_SED_STRING!$ESCAPED_DB_USER!" \
        $ODBC_INI_IN > $ODBC_INI_OUT

   $CP $ODBCINST_INI_IN $ODBCINST_INI_OUT 1>/dev/null 2>&1

   do_jdbc_write "$DB_TYPE" "$DB_SERVER" "$DB_PORT" "$DB_INSTANCE" "$DB_USER" "$DB_PASSWORD"

   return 0
}

At this point the appliance MUST be restarted to work correctly.

With the hacks applied, our appliance is now capable of driving an MSSQL database. On the MSSQL server side you need to have the database created and named VCDB. You will also require a SQL login named vc, which needs to be granted the sysadmin role initially; once the database is initialized you can downgrade it to dbo of only the VCDB.
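For reference, one way to stage that server-side setup is with the sqlcmd binary the ODBC driver install just linked onto the appliance. This is only a sketch; the server name and passwords are placeholders and your DBA may prefer to do this from Management Studio:

vcsa1:/ # sqlcmd -S mssql1.mydomain.local -U sa -P 'SaPassword' -Q "CREATE DATABASE VCDB"
vcsa1:/ # sqlcmd -S mssql1.mydomain.local -U sa -P 'SaPassword' -Q "CREATE LOGIN vc WITH PASSWORD = 'VcPassword', DEFAULT_DATABASE = VCDB"
vcsa1:/ # sqlcmd -S mssql1.mydomain.local -U sa -P 'SaPassword' -Q "EXEC sp_addsrvrolemember 'vc', 'sysadmin'"

After the first vCenter start initializes the schema, the login can be dropped from sysadmin and left as dbo of VCDB only (sp_dropsrvrolemember followed by sp_changedbowner inside VCDB).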

The steps to add your database to the appliance are very easy; here are some screen shots of the web console database configuration panel to demonstrate the ease of implementation.

vCSA Database Configuration Page

 

vCSA vpxd Status Page

If you're interested in trying it out I have included the files for release 5.0.0-455964 here.

 

/etc/vmware-vpx/odbcinst.ini.tpl

/opt/vmware/share/htdocs/service/virtualcenter/layout.xml

/opt/vmware/share/htdocs/service/virtualcenter/fields.properties

/usr/sbin/vpxd_servicecfg

 

 

I have found no issues to date in my lab after 15 days, but this does not mean it's issue free and I would advise anyone to use caution. This was not tested with heavy loads.

Well I hope you found this blog entry to be interesting and possibly useful.

Regards,

Mike

Site Contents: © 2012  Mike La Spina

Encapsulating VT-d Accelerated ZFS Storage within ESXi

Some time ago I found myself conceptually provisioning ESXi hosts that could transition local storage in a distributed manner within an array of hypervisors. The architectural model likens itself to an amorphous cluster of servers which share a common VM client service that self provisions shared storage to its parent hypervisor or even other external hypervisors. This concept originally became a reality in one of my earlier blog entries named Provisioning Disaster Recovery with ZFS, iSCSI and VMware. With this previous success of a DR scope we can now explore more adventurous applications of storage encapsulation and further coin the phrase of "rampant layering violations of storage provisioning" thanks to Jeff Bonwick, Jim Moore and many other brilliant creative minds behind the ZFS storage technology advancements. One of the main barriers to success for this concept was the serious issue of circular latency from within the self provisioning storage VM. What this commonly means is we have a long wait cycle for the storage VM to ready the requested storage, since it must wait for the hypervisor to schedule access to the raw storage blocks for the virtualized shared target, which then re-provisions it to other VMs. This issue is acceptable for a DR application but it's a major show stopper for applications that require normal performance levels.

This major issue now has a solution with the introduction of Intel's VT-d technology. VT-d allows us to accelerate storage I/O functionality directly inside a VM served by VMware ESX and ESXi hypervisors. VMware has leveraged Intel's VT-d technology on ESXi 4.x (AMD I/O Virtualization Technology, aka IOMMU, is also supported) as part of the feature named VMDirectPath. This feature allows us to insert high speed devices inside a VM, which can now host a device that operates at the hardware speed of the PCI bus, and that my friend allows virtualized ZFS storage provisioning VMs to dramatically reduce or eliminate the hypervisor's circular latency issue.

Very exciting indeed, so let's leverage a visual diagram of this amorphous server cluster concept to better capture what this envisioning actually entails.

Encapsulated Accelerated ZFS Architecture


The concept depicted here sets a multipoint NFS share strategy. Each ESXi host provisions its own NFS share from its local storage which can be accessed by any of the other hosts including itself. Additionally each encapsulated storage VM incorporates ZFS replication to a neighboring storage VM in a ring pattern, thus allowing for crash based recovery in the event of a host failure. Each ESXi instance hosts a DDRdrive X1 PCIe card which is presented to its storage VM over VT-d and VMDirectPath, aka PCI pass-through. When managed via vCenter this solution allows us to svMotion VMs across the cluster allowing rolling upgrades or hardware servicing.

The ZFS replication cycle works as a background ZFS send/receive script process that incrementally updates the target storage VM. One very useful feature of the ZFS send/receive capability is the include-properties flag -p. When this flag is used any NFS share properties that are defined using "sharenfs=" will be sent to the target host. Thus the only required action to enable access to the replicated NFS share is to add it as an NFS storage target on our ESXi host. Of course we would also need to stop replication if we wish to use the backup or clone it to a new share for testing. Testing the backup without cloning will result in a modified ZFS target file system and this could force a complete ZFS resend of the file system in some cases.

Within this architecture our storage VM is built with OpenSolaris snv_134, thus we have the ability to engage in ZFS deduplication. This not only improves the storage capacity, it also grants improved performance when we allocate sufficient memory to the storage VM. The ZFS ARC needs only to cache these dedup block hits once, which accelerates all dedup access requests. For example if this cluster served a Virtual Desktop Infrastructure (VDI) we would see all the OS file allocation blocks enter the ZFS ARC cache and thus all VMs that reference the same OS file blocks would be cache accelerated. Dedup also grants a benefit with ZFS replication with the use of the ZFS send -D flag. This flag instructs ZFS to send the stream in dedup format and this dramatically reduces replication bandwidth and time consumption in a VMware environment.
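To illustrate how the -p and -D flags combine in practice, an incremental replication pass from uss1 to its neighbor uss2 might look like the following sketch (the snapshot names and the sp2/nas/vol0 target are hypothetical and simply mirror the naming used later in this post):

root@uss1:~# zfs snapshot sp1/nas/vol0@02-05-10-23:59
root@uss1:~# zfs send -p -D -i sp1/nas/vol0@01-05-10-23:59 sp1/nas/vol0@02-05-10-23:59 | ssh uss2 zfs recv -F sp2/nas/vol0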

With VT-d we now have the ability to add a non-volatile disk device as a dedicated ZIL accelerator commonly called a SLOG or Separate Intent Log. In this proof of concept architecture I have defined the DDRdrive X1 as a SLOG disk over VMware VMDirectPath to our storage VM. This was a challenge to accomplish as VT-d is just emerging and has many unknown behaviors with system PCI BUS timing and IRQ handling. Coaxing VT-d to work correctly proved to be the most technically difficult component of this proof of concept, however success is at hand using a reasonably cost effective ASUS motherboard in my home lab environment.

Let’s begin with the configuration of VT-d and VMware VMDirectPath.

VT-d requires system BIOS support and this function is available on the ASUS P6X58D series of motherboards. The feature is not enabled by default; you must change it in the BIOS. I have found that enabling VT-d does impact how ESXi behaves; for example some local storage devices that were available prior to enabling VT-d may not be accessible after enabling it, which could result in messages like "cannot retrieve extended partition information".

The following screen shots demonstrate where you would find the VT-d BIOS setting on the P6X58D mobo.


VT-d-BIOS-Enable1

VT-d-BIOS-Enable2

If you're using an AMD 890FX based ASUS Crosshair IV mobo then look for the IOMMU setting as depicted here:

Thanks go to Stu Radnidge over at http://vinternals.com/ for the screen shot!

IOMMU on AMD 890FX Based Mobos

Once VT-d or IOMMU is enabled, ESXi VMDirectPath can be enabled from the VMware vSphere client host Configuration -> Advanced menu and will require a reboot to complete any further PCI sharing configurations.

One challenge I encountered was PCIe bus timing issues. Fortunately the ASUS P6X58D overclocking capability grants us the ability to align our clock timing on the PCIe bus by tuning the frequency and voltage, and thus I was able to stabilize the PCIe interface running the DDRdrive X1. Here are the original values I used that worked. Since that time I have pushed the i7 CPU to 4.0GHz, but that can be risky since you need to up the CPU and DRAM voltages, so I will leave the safe values for public consumption.


P6X58D-Overclock-Tuning1


P6X58D-Overclock-Tuning2


ESXi-Console_Shot


Once VT-d is active you will be able to edit the enumerated PCI device list check boxes and allow pass-through for the device of your choice. There are three important PCI values to note: the Device ID, Vendor ID and Class ID. You can Google them or take this shortcut http://www.pcidatabase.com/ and discover who owns the device and what class it belongs to. In this case I needed to identify the DDRdrive X1, and I know by the class ID 0100 that it is a SCSI device.


VMDirectPath Enabled


Once our DDRdrive X1 device is added to the encapsulated OpenSolaris VM its shared IRQ mode will need to be adjusted such that no other IRQs are chained to it. This is adjusted by adding a custom VM config parameter named pciPassthru0.msiEnabled and setting its value to false.
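If you prefer to set this directly in the VM's .vmx file rather than through the vSphere client advanced options, the entry looks like the following (assuming the X1 is the first pass-through device, hence the pciPassthru0 prefix):

pciPassthru0.msiEnabled = "FALSE"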


VMPassThru-msiEnabled=false


In this proof of concept the storage VM is assigned 4GB of memory, which is reasonable for non-deduped storage. If you plan to dedup the storage I would suggest significantly more memory to allow the block hash table to be held in memory; this is important for performance and is also needed if you have to delete a ZFS file system. The amount will vary depending on the total storage provisioned. I would roughly estimate about 8GB of memory for each 1TB of used storage. As well we have two network interfaces, of which one will provision the storage traffic only. Keep in mind that dedup is still developing and should be heavily tested, you should expect some issues.
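Once the pool is in use you can get a rough sense of how large the dedup table has grown with zdb, which is one way to gauge whether the memory allocation is keeping pace (just a sanity check I would run; the pool name sp1 matches the pool created later in this build):

root@uss1:~# zdb -DD sp1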


VM Settings

If you have read my previous blog entry Running ZFS Over NFS as a VMware Store you will find the next section to be very similar. It covers essentially many of the same steps but excludes the aggregation and IPMP capability.

Using a basic completed OpenSolaris Indiana install we can proceed to configure a shared NFS store, so let's begin with the IP interface. We don't need a complex network configuration for this storage VM and therefore we will just set up simple static IP interfaces, one to manage the OpenSolaris storage VM and one to provision the NFS store. Remember that you should normally separate storage networks from other network types from both a management and security perspective.

OpenSolaris will default to a dynamic network service configuration named nwam; this needs to be disabled and the physical:default service enabled.

root@uss1:~# svcadm disable svc:/network/physical:nwam
root@uss1:~# svcadm enable svc:/network/physical:default

To persistently configure the interfaces we can store the IP address in the local hosts file. The file will be referenced by the physical:default service to define the network IP address of the interfaces when the service starts up.

Edit /etc/hosts to have the following host entries.

::1 localhost
127.0.0.1 uss1.local localhost loghost
10.0.0.1 uss1 uss1.domain.name
10.1.0.1 uss1.esan.data1

As an option if you don’t normally use vi you can install nano.

root@uss1:~# pkg install SUNWgnu-nano

When an OpenSolaris host starts up, the physical:default service will reference the /etc directory and match any plumbed network device to a file named with the prefix "hostname" and an extension using the interface name. For example in this VM we have defined two Intel e1000 interfaces which will be plumbed using the following commands.

root@uss1:~# ifconfig e1000g0 plumb
root@uss1:~# ifconfig e1000g1 plumb

Once plumbed, these network devices will be enumerated by the physical:default service, and if a file named hostname.e1000g0 exists in the /etc directory the service will use the content of this file to configure the interface in the format that ifconfig uses. Here we have created the file using echo; the "uss1.esan.data1" name will be looked up in the hosts file and maps to IP 10.1.0.1, and the network mask and broadcast will be assigned as specified.

root@uss1:~# echo uss1.esan.data1 netmask 255.255.0.0  broadcast 10.1.255.255 > /etc/hostname.e1000g0

One important note: if your /etc/hostname.e1000g0 file has blank lines you may find that persistence fails for any interface defined after the blank line, so a quick check that the file contains no blank lines is advised.
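A simple way to check for stray blank lines is a grep count; a result of 0 means the file is clean (just a convenience, any method will do):

root@uss1:~# grep -c '^$' /etc/hostname.e1000g0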

One important requirement is the default gateway or route. Here we will assign a default route via 10.0.0.254 on the management network. We also need to add a route for network 10.1.0.0 using the following commands. Normally the routing function will dynamically assign the route for 10.1.0.0, so assigning a static one will ensure that no undesired discovered gateways are found and used, which may cause poor performance.

root@uss1:~# route -p add default 10.0.0.254
root@uss1:~# route -p add 10.1.0.0 10.1.0.1


When using NFS I prefer provisioning name resolution as an additional layer of access control. If we use names to define NFS shares and clients we can externally validate the incoming IP with a static file or DNS based name lookup. An OpenSolaris NFS implementation inherently grants this methodology. When a client IP requests access to an NFS share we can define a forward lookup to ensure the IP maps to a name which is granted access to the targeted share. We can simply define the desired FQDNs against the NFS shares.

In small configurations static files are acceptable, as is the case here. For large host farms the use of a DNS service instance would ease the admin cycle. You would just have to be careful that your cached TimeToLive (TTL) value is greater than 2 hours, thus preventing excessive name resolution traffic. The TTL value controls how long the name is cached and this prevents constant external DNS lookups.

To configure name resolution for both file and DNS we simply copy the predefined config file named nsswitch.dns to the active config file nsswitch.conf as follows:

root@uss1:~# cp /etc/nsswitch.dns /etc/nsswitch.conf

Enabling DNS will require the configuration of our /etc/resolv.conf file which defines our name servers and namespace.

e.g.

root@ss1:~# cat /etc/resolv.conf
domain laspina.ca
nameserver 10.1.0.200
nameserver 10.1.0.201

You can also use the static /etc/hosts file to define any resolvable name to IP mapping, which is my preferred method, but since we are using ESXi I will use DNS to ease the administration cycle and avoid the unsupported console hack of ESXi.

It is now necessary to define a zpool using our VT-d enabled PCI DDRdrive X1 and a VMDK. The VMDK can be located on any suitable VT-d compatible adapter. There is a good chance that some HBA devices will not work correctly with VT-d and your system BIOS. As a tip I suggest you use a USB disk to provision the ESXi installation as it almost always works and is easy to back up and transfer to other hardware. In this POC I used a 500GB SATA disk attached over an ICH10 AHCI interface. Obviously there are other better performing disk subsystems available, however this is a POC and not for production consumption.

To establish the zpool we need to ID the PCI to CxTxDx device mappings. There are two ways that I am aware of to find these names: you can read through the output of the prtconf -v command and look for disk instances and dev_links, or do it the easy way and use the format command like the following.

root@uss1:~# format
Searching for disks…done

AVAILABLE DISK SELECTIONS:
0. c8t0d0 <DEFAULT cyl 4093 alt 2 hd 128 sec 32>
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c8t1d0 <VMware-Virtual disk-1.0-256.00GB>
/pci@0,0/pci15ad,1976@10/sd@1,0
2. c11t0d0 <DDRDRIVE-X1-0030-3.87GB>
/pci@0,0/pci15ad,7a0@15/pci19e3,8@0/sd@0,0
Specify disk (enter its number): ^C
root@uss1:~#

With the device link info handy we can define the zpool with the DDRdrive X1 as a ZIL using the following command:

root@uss1:~# zpool create sp1 c8t1d0 log c11t0d0

root@uss1:~# zpool status
pool: rpool
state: ONLINE
scrub: none requested

config:
NAME        STATE     READ WRITE CKSUM
rpool       ONLINE       0     0     0
c8t0d0s0    ONLINE       0     0     0

errors: No known data errors

pool: sp1
state: ONLINE
scrub: none requested

config:
NAME        STATE     READ WRITE CKSUM
sp1         ONLINE       0     0     0
c8t1d0      ONLINE       0     0     0
logs
c11t0d0     ONLINE       0     0     0
errors: No known data errors

With a functional IP interface and ZFS pool complete you can define the NFS share and ZFS file system. Always define NFS properties using zfs set sharenfs=; the share parameters will be stored as part of the ZFS file system, which is ideal for system failure recovery or ZFS relocation.

zfs create -p sp1/nas/vol0
zfs set mountpoint=/export/uss1-nas-vol0 sp1/nas/vol0
zfs set sharenfs=rw,nosuid,root=vh3-nas:vh2-nas:vh1-nas:vh0-nas sp1/nas/vol0
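To confirm the share took effect you can read the property back or list the active shares with the share command (an optional check):

root@uss1:~# zfs get sharenfs sp1/nas/vol0
root@uss1:~# share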

To connect a VMware ESXi host to this NFS store(s) we need to define a vmkernel network interface, which I like to name eSAN-Interface1. This interface should only connect to the storage network vSwitch. The management network and VM network should be on another separate vSwitch.

vmkernel eSAN-Interface1

Since we are encapsulating the storage VM on the same server we also need to connect the VM to the storage interface over a VM network port group as shown above. At this point we have all the base NFS services ready, and we can now connect our ESXi host to the newly defined NAS storage target.

Add NFS Store
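If you prefer the command line over the vSphere client, the equivalent mount can be added with esxcfg-nas; here I use the 10.1.0.1 storage address defined earlier, and the datastore name is just an example:

esxcfg-nas -a -o 10.1.0.1 -s /export/uss1-nas-vol0 uss1-nas-vol0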

Thus we now have an encapsulated NFS storage VM provisioning an NFS share to its parent hypervisor.

Encapsulated NFS Share





You may have noticed that the capacity of this share is ~390GB, however we only granted a 256GB vmdk to this storage VM. The capacity anomaly is the result of ZFS deduplication on the shared file system. There are 10 16GB Windows XP hosts and 2 32GB Linux hosts located on this file system, which would normally require 224GB of storage. Obviously dedup is a serious benefit in this case, however you need to be aware of the costs: in order to sustain performance levels similar to non-deduped storage you MUST grant the ZFS code sufficient memory to hold the block hash table in memory. If this memory is not provisioned in sufficient amounts, your storage VM will be relegated to what appears to be a permanent storage bottleneck, in other words you will enter a "processing time vortex". (Thus as I have cautioned in the past, ZFS dedup is maturing and needs some code changes before trusting it to mission critical loads; always test, test, test and repeat until your head spins.)

Here’ s the result of using dedup within the encapsulated storage VM.

root@uss1:~# zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
rpool  7.94G  3.64G  4.30G    45%  1.00x  ONLINE  –
sp1     254G  24.9G   229G     9%  6.97x  ONLINE  –

And here's a look at what it's serving.

Encapsulated VM


Incredibly, the IO performance is simply jaw dropping fast. Here we are observing a grueling 100% random read load at 512 bytes per request. Yes that's correct, we are reaching 40,420 IOs per second.

Sample IOMeter IOPS

Even more incredible is the IO performance with a 100% random write load at 512 bytes per request. It's simply unbelievable seeing 38,491 IOs per second inside a VM which is served from a peer VM, all on the same hypervisor.

Sample IOMeter IOPS 100% Random 512 Byte Writes

With a successfully configured and operational NFS share provisioned, the next logical task is to define and automate the replication of this share, and any other shares we may wish to add, to a neighboring encapsulated storage VM or for that matter any OpenSolaris host.

The basic elements of this functionality are as follows:

  • Define a dedicated secured user to execute the replication functions.
  • Grant the appropriate permissions for this user to access cron and ZFS.
  • Assign an RSA Key pair for automated ssh authentication.
  • Define a snapshot replication script using ZFS send/receive calls.
  • Define a cron job to regularly invoke the script.

Let's define the dedicated replication user. In this example I will use the name zfsadm.

First we need to create the zfsadm user on all of our storage VMs.

root@uss1:~# useradd -s /bin/bash -d /export/home/zfsadm -P 'ZFS File System Management' zfsadm
root@uss1:~# mkdir /export/home/zfsadm
root@uss1:~# cp /etc/skel/* /export/home/zfsadm
root@uss1:~# echo PATH=/bin:/sbin:/usr/ucb:/etc:. > /export/home/zfsadm/.profile
root@uss1:~# echo export PATH >> /export/home/zfsadm/.profile
root@uss1:~# echo PS1=$'${LOGNAME}@$(/usr/bin/hostname)'~#' ' >> /export/home/zfsadm/.profile

root@uss1:~# chown -R zfsadm /export/home/zfsadm
root@uss1:~# passwd zfsadm

In order to use an RSA key for authentication we must first generate an RSA private/public key pair on the storage head. This is performed using ssh-keygen while logged in as the zfsadm user. You must set the passphrase as blank otherwise the session will prompt for it.

root@uss1:~# su - zfsadm

zfsadm@uss1~#ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/zfsadm/.ssh/id_rsa):
Created directory ‘/export/home/zfsadm/.ssh’.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /export/home/zfsadm/.ssh/id_rsa.
Your public key has been saved in /export/home/zfsadm/.ssh/id_rsa.pub.
The key fingerprint is:
0c:82:88:fa:46:c7:a2:6c:e2:28:5e:13:0f:a2:38:7f zfsadm@uss1
zfsadm@uss1~#

The id_rsa file should not be exposed outside of this directory as it contains the private key of the pair; only the public key file id_rsa.pub needs to be exported. Now that our key pair is generated we need to append the public portion of the key pair to a file named authorized_keys2.

# cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys2

Repeat all the crypto key steps on the target VM as well.

We will use the Secure Copy command to place the public key file in the target host's zfsadm user home directory. It's very important that the private key is secured properly, and it is not necessary to back it up as you can regenerate the pair if required.

From the local server, here named uss1 (the remote server is uss2):

zfsadm@uss1~# scp $HOME/.ssh/id_rsa.pub uss2:$HOME/.ssh/uss1.pub
Password:
id_rsa.pub      100% |**********************************************|   603       00:00
zfsadm@uss1~# scp uss2:$HOME/.ssh/id_rsa.pub $HOME/.ssh/uss2.pub
Password:
id_rsa.pub      100% |**********************************************|   603       00:00
zfsadm@uss1~# cat $HOME/.ssh/uss2.pub >> $HOME/.ssh/authorized_keys2

And on the remote server uss2

# ssh uss2
password:
zfsadm@uss2~# cat $HOME/.ssh/uss1.pub >> $HOME/.ssh/authorized_keys2
# exit

Now that we are able to authenticate without a password prompt we need to define the automated replication launch using cron. Rather than using the /etc/cron.allow file to grant permissions to the zfsadm user, we are going to use a finer instrument and grant the user access at the user properties level shown here. Keep in mind you cannot use both ways simultaneously.

root@uss1~# usermod -A solaris.jobs.user zfsadm
root@uss1~# crontab -e zfsadm
59 23 * * * ./zfs-daily-rpl.sh zfs-daily.rpl

Hint: crontab uses vi – http://www.kcomputing.com/kcvi.pdf  “vi cheat sheet”

The key sequence would be hit “i” and key in the line then hit “esc :wq” and to abort “esc :q!”

Be aware of the timezone the cron service runs under; you should check it and adjust it if required. Here is an example of what's required to set it.

root@uss1~# pargs -e `pgrep -f /usr/sbin/cron`

8550:   /usr/sbin/cron
envp[0]: LOGNAME=root
envp[1]: _=/usr/sbin/cron
envp[2]: LANG=en_US.UTF-8
envp[3]: PATH=/usr/sbin:/usr/bin
envp[4]: PWD=/root
envp[5]: SMF_FMRI=svc:/system/cron:default
envp[6]: SMF_METHOD=start
envp[7]: SMF_RESTARTER=svc:/system/svc/restarter:default
envp[8]: SMF_ZONENAME=global
envp[9]: TZ=PST8PDT

Let’s change it to CST6CDT

root@uss1~# svccfg -s system/cron:default setenv TZ CST6CDT

Also, the default environment path for cron may cause some script "command not found" issues; check for a path and adjust it if required.

root@uss1~# cat /etc/default/cron
#
# Copyright 1991 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
#pragma ident   “%Z%%M% %I%     %E% SMI”
CRONLOG=YES

This one has no default path; add the path using echo.

root@uss1~# echo PATH=/usr/bin:/usr/sbin:/usr/ucb:/etc:. > /etc/default/cron
# svcadm refresh cron
# svcadm restart cron

The final part of the replication process is a script that will handle the ZFS send/recv invocations. I have written a script in the past that can serve this task with some very minor changes.   

Here is the link for the modified zfs-daily-rpl.sh replication script. You will need to grant exec rights to this file, e.g.

# chmod 755 zfs-daily-rpl.sh

This script will require that a zpool named sp2 exists on the target system; this is shamefully hard coded in the script.

A file containing the file systems to replicate and their target hosts is required as well.

e.g.

zfs-daily-rpl.sh filesystems.lst

Where filesystems.lst contains:

sp1/nas/vol0 uss2
sp1/nas/vol1 uss2
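For reference, the core of such a replication pass boils down to something like the following sketch. This is not the linked script; the daily snapshot naming, the previous-snapshot lookup and the sp2/nas target layout are assumptions based on the examples in this post:

#!/usr/bin/bash
# Minimal daily replication pass - reads "filesystem targethost" pairs from the list file in $1
NOW=`date +%d-%m-%y-23:59`
while read FS TARGET; do
   [ -z "$FS" ] && continue
   # most recent existing snapshot of this file system (assumed to exist from the last run)
   PREV=`zfs list -H -t snapshot -o name -r $FS | tail -1`
   zfs snapshot $FS@$NOW
   # -p carries the sharenfs property, -D sends a dedup stream, -i sends only the changes since PREV
   zfs send -p -D -i $PREV $FS@$NOW | ssh $TARGET zfs recv -F sp2/nas/`basename $FS`
done < $1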


With any ZFS replicated file system that you wish to use on a remote host it is important to remember not to make changes to the active replication stream. You must take a clone of this replication target, which will avoid forcing a complete resend or other replication issues when you wish to test or validate that it's operating as you expect.

For example:

We take a clone of one of the snapshots and then share it via NFS:

root@uss2~# zfs clone sp2/nas/vol0@31-04-10-23:59 sp2/clones/uss1/nas/vol0
root@uss2~# zfs set mountpoint=/export/uss1-nas-vol0 sp2/clones/uss1/nas/vol0
root@uss2~# zfs set sharenfs=rw,nosuid,root=vh3-nas:vh2-nas:vh1-nas:vh0-nas sp2/clones/uss1/nas/vol0

Well I hope you found this entry interesting.

Regards,

Mike

Site Contents: © 2010  Mike La Spina

Running ZFS over NFS as a VMware Store


NFS is definitely a very well rounded, high performance file storage system and it certainly serves VMware stores successfully over many storage products. Recently one of my subscribers asked me if there was a reason why my blogs were more centric to iSCSI. The question was probing for an answer to something many of us ask ourselves: is NFS superior to block based iSCSI, and which one should I choose for VMware? The answer is not which protocol is superior but which protocol provisions the features and functions you require most effectively. I use both protocols and find they both have desirable capability and functionality and conversely have some negative points as well.

NFS is generally more accessible because it's a file level protocol and sits higher up on the network stack. This makes it very appealing when working with VMware virtual disks, aka VMDKs, simply because they also exist at the same layer. NFS is ubiquitous across NAS vendors and can be provisioned by multiple agnostic implementation endpoints. The NFS protocol hosts the capability to be virtualized and encapsulated within any hypervisor instance, either clustered or standalone. The network file locking and share semantics of NFS grant it a multitude of configurable elements which can serve a wide range of applications.

In this blog entry we will explore how to implement an NFS share for VMware ESX using OpenSolaris and ZFS. We will also explore a new way of accelerating the server's I/O performance with a new product called the DDRdrive X1.

OpenSolaris is an excellent choice for provisioning NFS storage volumes on VMware. It hosts many advanced and desirable storage features that set it far ahead of other Unix flavors. We can use the advanced networking features and ZFS, including the newly integrated dedup functionality, to craft the best NFS functionality available today.

Let's start by examining the overall NAS storage architecture.


NFS OpenSolaris/VMware Architecture by Mike La Spina



In this architecture we are defining a fault tolerant configuration using two physical 1Gbe switches with a quad or dual Ethernet adapter(s). On the OpenSolaris storage head we are using IPMP, aka IP Multipathing, to establish a single IP address to serve our NFS store endpoint. A single IP is more appropriate for VMware environments as they do not support multiple NFS IP targets per NFS mount point. IPMP provisions layer 3 load balancing and interface fault tolerance. IPMP commonly uses ICMP and default routes to determine interface failure states, thus it is well suited for a NAS protocol service layer. In an effort to reduce excessive ICMP rates we will aggregate the two dual interfaces into a single channel connection to each switch. This will allow us to define two test IP addresses for the IPMP service and keep our logical interface count down to a minimum. We are also defining a 2 port trunk/aggregate between the two physical switches which provides more path availability and reduces switch failure detection times.

On the ESX host side we are defining one interface per switch. This type of configuration requires that only one of the VMware interfaces is an active team member vmnic within a single vSwitch definition. If this is not configured this way the ESX host will fail to detect and activate the second nic under some failure modes. This is not a bandwidth constraint issue since the vmkernel IP interface will only actively use one nic.

With an architecture set in place let's now explore some of the pros and cons of running VMware on OpenSolaris NFS.

Some of the obvious pros are:

  • VMware uses NFS in a thin provisioned format.
  • VMDKs are stored as files and are mountable over a variety of hosts.
  • Simple backup and recovery.
  • Simple cloning and migration.
  • Scalable storage volumes.

And some of the less obvious pros:

  • IP based transports can be virtualized and encapsulated for disaster recovery.
  • No vendor lock-in
  • ZFS retains NFS share properties within the ZFS filesystem.
  • ZFS will dedup VMDKs files at the block level.

And there are the cons:

  • Every write I/O from VMware is an O_SYNC write.
  • Firewall setups are complex.
  • Limited in its application. Only NFS clients can consume NFS file systems.
  • General protocol security challenges (RPC).
  • VMware kernel constraints
  • High CPU overhead.
  • Bursty data flow.

Before we break out into the configuration detail level let's examine some of the VMware and NFS behaviors so as to gain some insight into the reason I primarily use iSCSI for most VMware implementations.

I would like to demonstrate some characteristics that are primarily a VMware client side behavior, and it's important that you are aware of them when you're considering NFS as a Datastore.

This VMware performance chart of an IOMeter generated load reveals the burst nature of the NFS protocol. The VMware NFS client exclusively uses an O_SYNC flag on write operations, which requires a committed response from the NFS server. At some point the storage system will not be able to complete every request and thus a pause in transmission will occur. The same occurs on reads when the network component buffers reach saturation. In this example chart we are observing a single 1Gbe interface at saturation from a read stream.


NFS VMware Network I/O Behavior by Mike La Spina


In this output we are observing a read stream across vh0, which is one of two active ESX4 hosts loading our OpenSolaris NFS store, and we can see the maximum network throughput is achieved at ~81MB/s. If you examine the average value of 78MB/s you can see the burst events do not have significant impact and are not a bandwidth concern, with ~3MB/s of loss.


NFS VMware Network Read I/O Limit Behavior by Mike La Spina


At the same time we are recording this write stream chart on vh3, a second ESX4 host loading the same OpenSolaris NFS store. As I would expect, it's very similar to the read stream except that we can see the write performance is lower, and that's to be expected with any write operations. We can also identify that we are using a full duplex path transmission across to our OpenSolaris NFS host since vh0 is reading (receiving) and vh3 is writing (transmitting).


NFS VMware Network Write I/O Limit Behavior by Mike La Spina


In this chart we are observing a limiting characteristic of the VMware vmkernel NFS client process. We have introduced a read stream in combination with a preexisting active write stream on a single ESX host. As you can see the transmit and receive packet rates are both reduced and now sum to a maximum of ~75MB/s.



NFS VMware Network Mixed Read Write I/O Limit Behavior by Mike La Spina


Transitioning from read to write active streams confirms the transmission is limited to ~75MB/s regardless of the full duplex interface capability. This information demonstrates that a host using 1Gbe ethernet connections will be constrained based on its available resources. This is an important element to consider when using NFS as a VMware datastore.


NFS VMware Network Mixed Read Write I/O Flip Limit Behavior by Mike La Spina


Another important element to consider is the CPU load impact of running the vmkernel NFS client. There is a significant CPU cycle cost on VMware hosts and this is very apparent under heavier loads. The following screen shot depicts a running IOMeter load test against our OpenSolaris NFS store. The important elements are as follows: IOMeter is performing 32KB reads in a 100% sequential access mode, which drives a CPU load on the VM of ~35%, however this is not the only CPU activity that occurs for this VM.


NFS IOMeter ZFS Throughput 32KB-Seq


When we examine the ESX host resource summary for the running VM we can now observe the resulting overhead load, which is realized by viewing the Consumed Host CPU value. The VM in this case is granted 2 CPUs, each a 3.2GHz Intel hypervisor resource. We can see that the ESX host is running at 6.6GHz to drive the vmkernel NFS I/O load.


NFS VMware ESX 4 CPU Load


Let's see the performance chart results when we svMotion the actively loaded running VM on the same ESX host to an iSCSI VMFS based store on the same OpenSolaris storage host. The only elements changing in this test are the underlying storage protocols. Here we can clearly see CPU object 0 is the ESX host CPU load. During the svMotion activity we begin to see some I/O drop off due to the additional background disk load. Finally we observe the VM transition at the idle point and the resultant CPU load of the iSCSI I/O impact. We clearly see the ESX host CPU load drop from 6.6GHz to 3.5GHz, which makes it very apparent that NFS requires substantially more CPU than iSCSI.


VM Trasitioned with vMotion from NFS to iSCSI on same ZFS Storage host


With the svMotion completed we now observe a retake of the same IOMeter screen shot and it's very obvious that our throughput and IOPS have increased significantly while the VM granted CPU load has not changed significantly. A decrease of ESX host CPU load in the order of ~55%, an increase of ~32% in IOPS and 45% more throughput shows us there are some negative behaviors to be cognizant of. Keep in mind that this is not the case when the I/O type is small and random, like that of a database; in those cases NFS is normally the winner. However VMware normally hosts mixed loads and thus we need to consider this negative effect at design time and when targeting VM I/O characteristics.


iSCSI IOMeter ZFS X1DDR Cache Throughput 32KB-Seq Mike La Spina

iSCSI ESX 4 CPU Load by Mike La Spina


With a clear understanding of some important negative aspects to implementing NFS for VMware ESX hosts we can proceed to the storage system build detail. The first order of business is the hardware configuration detail. This build is simply one of my generic white boxes and it hosts the following hardware:


GA-EP45-DS3L Mobo with an Intel 3.2Ghz E8500 Core Duo

1 x 70GB OS Disk

2 x 500GB SATA II ST3500320AS disks

2GB of Ram

1 x Intel Pro 1000 PT Quad Network Adapter


As a very special treat on this configuration I am also privileged to run a DDRdrive X1 cache accelerator, for which I am currently testing some newly developed beta drivers for OpenSolaris. Normally I would use 4GB of RAM as a minimum but I needed to constrain this build in an effort to load down the dedicated X1 LOG drive and the physical SATA disks, thus this instance is running only 2GB of RAM. In this blog entry I will not be detailing the OpenSolaris install process; we will begin from a Live CD installed OS.

OpenSolaris will default to a dynamic network service configuration named nwam; this needs to be disabled and the physical:default service enabled.

root@uss1:~# svcadm disable svc:/network/physical:nwam
root@uss1:~# svcadm enable svc:/network/physical:default

To establish an aggregation we need to un-configure any interfaces that we previously configured before proceeding.

root@uss1:~# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.1.0.1 netmask ffff0000 broadcast 10.255.255.255
ether 0:50:56:bf:11:c3
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
inet6 ::1/128

root@uss1:~# ifconfig e1000g0 unplumb

Once cleared, the assignment of the physical devices is possible using the following commands.

dladm create-aggr -d e1000g0 -d e1000g1 -P L2,L3 1
dladm create-aggr -d e1000g2 -d e1000g3 -P L2,L3 2

Here we have set the policy allowing layer 2 and 3 and defined two aggregates, aggr1 and aggr2. We can now define the VLAN based interfaces, shown here as VLAN 500 instances 1 and 2, respective of the aggr instances. You just need to apply the following formula for defining the VLAN interface.

(Adaptor Name) + vlan * 1000 + (Adaptor Instance)

ifconfig aggr500001 plumb up 10.1.0.1 netmask 255.0.0.0
ifconfig aggr500002 plumb up 10.1.0.2 netmask 255.0.0.0

Each pair of interfaces needs to be attached to a trunk definition on its switch path. Typically this will be a Cisco or HP switch in most environments. Here is a sample of how to configure each brand.

Cisco:

configure terminal
interface port-channel 1
interface ethernet 1/1
channel-group 1
interface ethernet 1/2
channel-group 1
interface ethernet po1
switchport mode trunk allowed vlan 500
exit

HP Procurve:

trunk 1-2 trk1 trunk
vlan 500
name "eSAN1"
tagged trk1

 

Once we have our two physical aggregates set up we can define the IP multipathing interface components. As a best practice we should define the IP addresses in our hosts file and then refer to those names in the remaining configuration tasks.

Edit /etc/hosts to have the following host entries.

::1 localhost
127.0.0.1 uss1.local localhost loghost
10.0.0.1 uss1 uss1.domain.name
10.1.0.1 uss1.esan.data1
10.1.0.2 uss1.esan.ipmpt1
10.1.0.3 uss1.esan.ipmpt2

Here we have named the IPMP data interface, aka the public IP, as uss1.esan.data1; this IP will be the active connection for our VMware storage consumers. The other two, named uss1.esan.ipmpt1 and uss1.esan.ipmpt2, are beacon probe IP test addresses and will not be available to external connections.

IPMP functionality is included with OpenSolaris and is configured with the ifconfig utility. The following sets up the first aggregate with a real public IP and a test address. The deprecated keyword defines the IP as a test address and the failover keyword defines if the IP can be moved in the event of interface failure.

ifconfig aggr500001 plumb uss1.esan.ipmpt1 netmask + broadcast + group ipmpg1 deprecated -failover up addif uss1.esan.data1 netmask + broadcast + failover up
ifconfig aggr500002 plumb uss1.esan.ipmpt2 netmask + broadcast + group ipmpg1 deprecated -failover up

To persist the IPMP network configuration on boot you will need to create hostname files matching the interface names with the IPMP configuration statement store in them. The following will address it.

echo uss1.esan.ipmpt1 netmask + broadcast + group ipmpg1 deprecated -failover up addif uss1.esan.data1 netmask + broadcast + failover up > /etc/hostname.aggr500001

echo uss1.esan.ipmpt2 netmask + broadcast + group ipmpg1 deprecated -failover up > /etc/hostname.aggr500002

The resulting interfaces will look like the following:

root@uss1:~# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
aggr1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 2
inet 10.1.0.2 netmask ff000000 broadcast 10.255.255.255
groupname ipmpg1
ether 0:50:56:bf:11:c3
aggr2: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 3
inet 10.1.0.3 netmask ff000000 broadcast 10.255.255.255
groupname ipmpg1
ether 0:50:56:bf:6e:2f
ipmp0: flags=8001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,IPMP> mtu 1500 index 5
inet 10.1.0.1 netmask ff000000 broadcast 10.255.255.255
groupname ipmpg1
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
inet6 ::1/128

In order for IPMP to detect failures in this configuration you will need to define target probe addresses for IPMP to use. For example I use multiple ESX hosts as probe targets on the storage network.

e.g.

root@uss1:~# route add -host 10.1.2.1 10.1.2.1 -static
root@uss1:~# route add -host 10.1.2.2 10.1.2.2 -static

This network configuration yields two 2Gbe aggregate paths bound to a single logical active IP address on 10.1.0.1 with interfaces aggr1 and aggr2. The keyword deprecated directs the IPMP mpathd service daemon to prevent application session connections from being established on the test addresses, and the nofailover keyword instructs mpathd not to allow the bound IP to fail over to any other interface in the IPMP group.

There are many other possible configurations but I prefer this method because it remains logically easy to diagnose and does not introduce unnecessary complexity.

Now that we have layer 3 network connectivity we should establish the other essential OpenSolaris static TCP/IP configuration elements. We need to ensure we have a persistent default gateway and our DNS client resolution enabled.

The persistent default gateway is very simple to define and is done with the route utility as follows.

root@uss1:~# route -p add default 10.1.0.254
add persistent net default: gateway

When using NFS I prefer provisioning name resolution as an additional layer of access control. If we use names to define NFS shares and clients we can externally validate the incoming IP with a static file or DNS based name lookup. An OpenSolaris NFS implementation inherently grants this methodology. When a client IP requests access to an NFS share we can define a forward lookup to ensure the IP maps to a name which is granted access to the targeted share. We can simply define the desired FQDNs against the NFS shares.

In small configurations static files are acceptable, as is the case here. For large host farms the use of a DNS service instance would ease the admin cycle. You would just have to be careful that your cached TimeToLive (TTL) value is greater than 2 hours, thus preventing excessive name resolution traffic. The TTL value controls how long the name is cached and this prevents constant external DNS lookups.

To configure name resolution for both file and DNS we simply copy the predefined config file named nsswitch.dns to the active config file nsswitch.conf as follows:

root@uss1:~# cp /etc/nsswitch.dns /etc/nsswitch.conf

Enabling DNS will require the configuration of our /etc/resolv.conf file which defines our name servers and namespace.

e.g.

root@ss1:~# cat /etc/resolv.conf
domain laspina.ca
nameserver 10.1.0.200
nameserver 10.1.0.201

You can also use the static /etc/hosts file to define any resolvable name to IP mapping.

With OpenSolaris you should always define your NFS share properties using the ZFS administrative tools. When this method is used we can take advantage of keeping the NFS share properties inside of ZFS. This is really useful when you replicate or clone the ZFS file system to an alternate host as all the share properties will be retained. Here are the basic elements of an NFS share configuration for use on VMware storage consumers.

zfs create -p sp1/nas/vol1
zfs set mountpoint=/export/uss1-nas-vol1 sp1/nas/vol1
zfs set sharenfs=rw,nosuid,root=vh3-nas:vh2-nas:vh1-nas:vh0-nas sp1/nas/vol1

The NFS share ACL property of rw sets the entire share as read write; you could alternately use rw=hostname for each host but it seems redundant to me. The nosuid prevents any incoming connection from switching user ids, for example from a non-root value to 0. Finally the root=hostname property grants the incoming host name access to the share with root access permissions. Any files created by the host will be owned by the root id. While these steps provide some level of access control they fall well short of secure, thus I also keep the NAS subnets fully isolated or firewalled to prevent external network access to the NFS share hosts.

Once our NFS share is up and running we can proceed to configure the VMware network components and share connection properties. VMware requires a vmkernel network interface definition to provision NFS connectivity. You should dedicate a vmnic team and a vswitch for your storage network.

Here is a visual example of a vmkernel configuration with a teamed pair of vmnics.

vmkernel eNAS-Interface by Mike La Spina

As you can see we have dedicated the vSwitch and vmnics to VLAN 500; no other traffic should be permitted on this network. You should also set the default vmkernel gateway to its own address. This will promote better performance as there is no need for traffic to leave this network.

For eNAS-Interface1 you should define one active and one standby vmnic. This will ensure proper interface fail-over in all failure modes. The VMware NFS kernel instance will only use a single vmnic so you're not losing any bandwidth. The vmnic team only serves as a fault tolerant connection and is not a load balanced configuration.
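If you prefer to build this from the ESX service console instead of the vSphere client, the classic esxcfg utilities can create the equivalent configuration. This is only a hedged sketch; the vSwitch name, vmnic numbers and IP addressing are assumed example values, and the active/standby ordering itself is still set in the vSwitch failover policy:

esxcfg-vswitch -a vSwitch1                            # dedicated storage vSwitch
esxcfg-vswitch -L vmnic2 vSwitch1                     # uplink the teamed vmnics
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A eNAS-Interface1 vSwitch1            # VMkernel port group
esxcfg-vswitch -v 500 -p eNAS-Interface1 vSwitch1     # tag it on VLAN 500
esxcfg-vmknic -a -i 10.1.0.10 -n 255.255.255.0 eNAS-Interface1
esxcfg-route 10.1.0.10                                # point the VMkernel default gateway at itself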

VMkernel Team Standby by Mike La Spina


At this point you should validate your network connectivity by pinging the vmkernel IP address from the OpenSolaris host. If you choose to ping from ESX, use vmkping instead of ping, otherwise you will not get a response.
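For example, assuming the VMkernel address used in the sketch above and a storage host NAS address of 10.1.0.1:

root@uss1:~# ping 10.1.0.10

and from the ESX service console:

vmkping 10.1.0.1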

Provided your network connectivity is good you can define your vmkernel NFS share properties. Here is a visual example.

Add an NFS share by Mike La Spina

And if you prefer an ESX command line method:

esxcfg-nas -a -o uss1-nas -s /export/uss1-nas-vol1 uss1-nas-vol1

In this example we are using a DNS based name of uss1-nas. This would allow you to change the host IP without having to reconfigure the VMware hosts. You will want to make sure the DNS name cache TTL is not a small value for two reasons: a DNS outage would impact IP resolution, and you do not want excessive resolution traffic on the eSAN subnet(s).

The NFS share configuration info is maintained in the /etc/vmware/esx.conf file and looks like the following example.

/nas/uss1-nas-vol1/enabled = "true"
/nas/uss1-nas-vol1/host = "uss1-nas"
/nas/uss1-nas-vol1/readOnly = "false"
/nas/uss1-nas-vol1/share = "/export/uss1-nas-vol1"

If you're trying to change NFS share parameters and the NFS share is not available after a successful configuration, you could run into a messed up vmkernel NFS state and you'll receive the following message:

Unable to get Console path for Mount

You will need to reboot the ESX server to clean it up, so don't mess with anything else until that is performed. (I've wasted a few hours on that buggy VMware kernel NFS client behavior.)

Once the preceding steps are successful the result will be a NAS based NFS share which is now available, as in this example.

Running NFS shares by Mike La Spina

With a working NFS storage system we can now look at optimizing the I/O capability of ZFS and NFS.

VMware performs write operations over NFS using an O_SYNC control flag. This forces the storage system to commit all write operations to disk to ensure VM file integrity. This can be very expensive when it comes to high performance IOPS, especially on SATA architecture. We could disable our ZIL (ZFS Intent Log) but this could result in severe corruption in the event of a system fault or environmental issue. A much better alternative is to use a non-volatile ZIL device. In this case we have a DDRdrive X1, which is a 4GB high speed externally powered DRAM bank with a high speed SCSI interface that also hosts 4GB of flash for long term shutdowns. The DDRdrive X1 IO capability reaches the 200,000/sec range and up. By using an external UPS power source we can economically prevent ZFS corruption and reap the high speed benefits of DRAM even when unexpected system interruptions occur.
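Attaching such a log device to an existing pool is a one line operation. A minimal sketch, assuming the pool name sp1 and the DDRdrive X1 enumerating as c9t0d0, as shown in the status output further below:

root@uss1:~# zpool add sp1 log c9t0d0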

In this blog our storage host is using Seagate ST3500320AS disks, which are challenged to achieve ~180 IOPS, and that IO rate is under ideal sequential read/write loads. With a cache we can expect that these disks will deliver no greater than 360 IOPS under ideal conditions.

Now let's see if this is true based on some load tests using Microsoft's SQLIO tool. First we will disable our ZFS ZIL caching DDRdrive X1, shown here as device c9t0d0.
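Taking the log device offline for this baseline run can be done with zpool, after which the pool status reports the log as offline:

root@uss1:~# zpool offline sp1 c9t0d0
root@uss1:~# zpool status sp1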

NAME        STATE     READ WRITE CKSUM
sp1         DEGRADED     0     0     0
  mirror-0  ONLINE       0     0     0
    c6t1d0  ONLINE       0     0     0
    c6t2d0  ONLINE       0     0     0
logs
  c9t0d0    OFFLINE      0     0     0

Now let's run the SQLIO test for 5 minutes with random 8K I/O write requests, which are simply brutal for any SATA disk to keep up with. We have defined a file size of 32GB to ensure we hit the disks by exceeding our 2GB cache memory footprint. As you can see from the output we achieve 227 IOs/sec which is below the mirrored drive pair capability.

C:\Program Files\SQLIO>sqlio -kW -s300 -frandom -o4 -b8 -LS -Fparam.txt
sqlio v1.5.SG
using system counter for latency timings, 3579545 counts per second
parameter file used: param.txt
file c:\testfile.dat with 2 threads (0-1) using mask 0x0 (0)
2 threads writing for 300 secs to file c:\testfile.dat
using 8KB random IOs
enabling multiple I/Os per thread with 4 outstanding
using specified size: 32768MB for file: c:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec:   227.76
MBs/sec:     1.77

latency metrics:
Min_Latency(ms): 8
Avg_Latency(ms): 34
Max_Latency(ms): 1753
histogram:
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%:  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  1  1 29  7  3  2  1  1  1 54

new  name   name  attr  attr lookup rddir  read read  write write
file remov  chng   get   set    ops   ops   ops bytes   ops bytes
0     0     0   300     0      0     0     3   16K   146 1.12M /export/uss1-nas-vol1
0     0     0   617     0      0     0     0     0   309 2.39M /export/uss1-nas-vol1
0     0     0   660     0      0     0     0     0   329 2.52M /export/uss1-nas-vol1
0     0     0   677     0      0     0     0     0   338 2.63M /export/uss1-nas-vol1
0     0     0   638     0      0     0     0     0   321 2.46M /export/uss1-nas-vol1
0     0     0   496     0      0     0     0     0   246 1.88M /export/uss1-nas-vol1
0     0     0    44     0      0     0     0     0    21  168K /export/uss1-nas-vol1
0     0     0   344     0      0     0     0     0   172 1.32M /export/uss1-nas-vol1
0     0     0   646     0      0     0     0     0   323 2.51M /export/uss1-nas-vol1
0     0     0   570     0      0     0     0     0   285 2.20M /export/uss1-nas-vol1
0     0     0   695     0      0     0     0     0   350 2.72M /export/uss1-nas-vol1
0     0     0   624     0      0     0     0     0   309 2.38M /export/uss1-nas-vol1
0     0     0   562     0      0     0     0     0   282 2.15M /export/uss1-nas-vol1


Now let's enable the DDRdrive X1 ZIL cache and see where that takes us.
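Bringing the log device back online is the reverse operation:

root@uss1:~# zpool online sp1 c9t0d0
root@uss1:~# zpool status sp1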

NAME        STATE     READ WRITE CKSUM
sp1         ONLINE       0     0     0
  mirror-0  ONLINE       0     0     0
    c6t1d0  ONLINE       0     0     0
    c6t2d0  ONLINE       0     0     0
logs
  c9t0d0    ONLINE       0     0     0

Again we run the identical SQLIO test and the results are dramatically different; we immediately see a 4X improvement in IOPS, but what's much more important is the reduction in latency, which will make any database workload fly.

C:\Program Files\SQLIO>sqlio -kW -s300 -frandom -o4 -b8 -LS -Fparam.txt
sqlio v1.5.SG
using system counter for latency timings, 3579545 counts per second
parameter file used: param.txt
file c:\testfile.dat with 2 threads (0-1) using mask 0x0 (0)
2 threads writing for 300 secs to file c:\testfile.dat
using 8KB random IOs
enabling multiple I/Os per thread with 4 outstanding
using specified size: 32768 MB for file: c:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec:   865.75
MBs/sec:     6.76

latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 8
Max_Latency(ms): 535
histogram:
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 56 13  9  3  1  0  1  1  1  1  1  1  1  1  1  0  0  0  0  0  0  0  0  0  7

new  name   name  attr  attr lookup rddir  read read  write write
file remov  chng   get   set    ops   ops   ops bytes   ops bytes
0     0     0   131     0      0     0     0     0    66  516K /export/uss1-nas-vol1
0     0     0 3.23K     0      0     0     0     0 1.62K 12.8M /export/uss1-nas-vol1
0     0     0    95     0      0     0     2    8K    43  324K /export/uss1-nas-vol1
0     0     0 2.62K     0      0     0     0     0 1.31K 10.3M /export/uss1-nas-vol1
0     0     0   741     0      0     0     0     0   369 2.78M /export/uss1-nas-vol1
0     0     0 1.99K     0      0     0     0     0  1019 7.90M /export/uss1-nas-vol1
0     0     0 1.34K     0      0     0     0     0   687 5.32M /export/uss1-nas-vol1
0     0     0   937     0      0     0     0     0   468 3.62M /export/uss1-nas-vol1
0     0     0 2.60K     0      0     0     0     0 1.30K 10.3M /export/uss1-nas-vol1
0     0     0 2.02K     0      0     0     0     0 1.01K 7.84M /export/uss1-nas-vol1
0     0     0 1.91K     0      0     0     0     0   978 7.58M /export/uss1-nas-vol1
0     0     0 1.94K     0      0     0     0     0   992 7.67M /export/uss1-nas-vol1

DDRdrive X1 Performance Chart by Mike La Spina


NFSStat Chart I/O DB Cache Compare by Mike La Spina


When we look at ZFS ZIL caching devices there are some important elements to consider. For most provisioned VMware storage systems you do not require large volumes of ZIL cache to generate good I/O performance. What you need to do is carefully determine the active data write footprint size. Remember that the ZIL is a write-only world and that those writes will be relocated to slower disk at some point. These relocation functions are processed in batches, or as Ben Rockwood likes to say, in a regular breathing cycle. This means that random I/O operations can be queued up and converted to a more sequential behavior. Random synchronous write operations can be safely acknowledged immediately and then the ZFS DMU can process them more efficiently in the background. In turn, if we provision cache devices that are closer to the system bus and have lower latency, the back end compute hardware can move the data ahead of the bursting I/O ramp-ups, and we deliver higher IOPS with significantly smaller cache requirements. Devices like the DDRdrive X1 are a good example of implementing this strategy.
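As a hedged back-of-the-envelope sizing example, multiply the synchronous write rate by the transaction group commit interval. Using the 865 IOs/sec of 8KB writes observed above and assuming a roughly 30 second txg interval (an assumed value; the actual interval varies by build and load):

root@uss1:~# echo $(( 865 * 8 * 30 / 1024 ))
202

That is only about 200MB of active write footprint, which is why a small but very low latency log device goes a long way.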

I hope you found this blog entry to be interesting and useful.

Regards,

Mike

Site Contents: © 2010  Mike La Spina

Securing COMSTAR and VMware iSCSI connections

Connecting VMware iSCSI sessions to COMSTAR or any iSCSI target provider securely is required to maintain a reliable system. Without some level of initiator to target connection gatekeeping we will eventually encounter a security event. This can happen from a variety of sources; for example a non-cluster-aware OS can connect to an unsecured VMware shared storage LUN and cause severe damage to it since the OS has no shared LUN access knowledge. All too often we assume that security is about confidentiality when it is actually more commonly about data availability and integrity, both of which will be compromised if an unintentional connection were to write on a shared LUN.

At the very minimum security level we should apply non-authenticated named initiator access grants to our targets. This low security method defines initiator to target connection states for environments that can tolerate lower security. It is applicable when confidentiality is not as important and security is maintained within the physical access control realm. It should also coincide with SAN fabric isolation and be strictly managed by the Virtual System or Storage Administrators. Additionally we can increase access security control by enabling CHAP authentication, which is a serious improvement over named initiators. I will demonstrate both of these security methods using COMSTAR iSCSI Providers and VMware within this blog entry.

Before we dive into the configuration details let's examine how LUs are exposed. COMSTAR controls iSCSI target access using several combined elements. One of these elements is within the COMSTAR STMF facility where we can assign membership of host and target groups. By default, if we do not define a host or target group, any created target will belong to an implied ALL group. This group, as we would expect, grants any connecting initiator access to the ALL group's assigned LUNs. These assignments are called views in the STMF state machine and are a mapping function of the Storage Block Driver service (SBD) to the STMF IT_nexus state tables.

This means that if we were to create an initiator without assigning a host group or host/target group combination, that initiator would be allowed unrestricted connectivity to any ALL group LUN views, possibly without any authentication at all. Allowing this to occur would of course be very undesirable from a security perspective in almost all cases. Conversely, if we use a target group definition then only the initiators that connect to the respective target will see the LUN views which are mapped on that target definition instance.

While target groups do not significantly improve access security, they do provide a means of controlling accessibility based on the definition of interface connectivity classes. These can in turn be mapped to respective VLAN priority groups, bandwidth availability and applicable path fault tolerance capabilities, which are all important aspects of availability and unfortunately are seldom considered security concepts in many architectures.

Generally on most simple storage configurations the use of target groups is not a requirement. However they do provide a level of access control over LUN views. For example we can assign LUN views to a target group, which in turn frees us from having to add the LUN view to each host group within shared LUN configurations like VMware stores. With combinations of host and target groups we can create more flexible methods with respect to shared LUN visibility. With the addition of simple CHAP authentication we can more effectively insulate target groups. This is primarily due to the ability to assign separate CHAP user and password values for each target.
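For reference, defining a target group and mapping a LU view against it only takes a few commands. This is a hedged sketch, reusing the prod-tg1 name from the depiction below and the target iqn.2009-06.target.ss1.1 used later in this entry; the LU GUID is a placeholder for a value you would take from stmfadm list-lu, and the target must be offline while it is added to the group:

# stmfadm create-tg prod-tg1
# stmfadm offline-target iqn.2009-06.target.ss1.1
# stmfadm add-tg-member -g prod-tg1 iqn.2009-06.target.ss1.1
# stmfadm online-target iqn.2009-06.target.ss1.1
# stmfadm add-view -t prod-tg1 600144F0000000000000000000000001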

Let's look at this visual depiction to help see the effect of using target and host groups.

COMSTAR host and target view depiction

In this depiction any initiator that connects to the target group prod-tg1 will by default see the views that are mapped to that target group's interfaces. Additionally, if the initiator is also a member of the host group prod-esx1 those view mappings will also be visible.

One major difference between target groups and the ALL group is that you can define LU views en masse for an entire class of initiator connections, e.g. a production class. This becomes an important control element in a unified media environment where the use of VLANs separates visibility. Virtual interfaces can be created at the storage server and attached to VLANs respectively. Target groups become very desirable as a control within a unified computing context.

Named Initiator Access

Enabling unauthenticated named initiator to target access with COMSTAR and VMware iSCSI services is a relatively simple operation. Let's examine how this method controls initiator access.

We will define two host groups, one for production esx hosts and one for test esx hosts.

# stmfadm create-hg prod-esx1

# stmfadm create-hg test-esx1

With these host groups defined we individually assign LU views to the host groups and then define each initiator as a member of one of the host groups, so that it only sees the views which belong to that host group, plus any views assigned to the default ALL group.
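Assigning a LU view to one of these host groups would look something like this; the GUID is again a placeholder for a value taken from stmfadm list-lu:

# stmfadm add-view -h prod-esx1 600144F0000000000000000000000001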

To add a host initiator to a host group, we must first create it in the port provider of choice, which in this case is the iSCSI port provider.

# itadm create-initiator iqn.1998-01.com.vmware:vh1.1

Once created the defined initiator can be added to a host group.

# stmfadm add-hg-member -g prod-esx1 iqn.1998-01.com.vmware:vh1.1

An ESX host initiator with this iqn name can now attach to our COMSTAR targets and will see any LU views that are added to the prod-esx1 host group. But there are still some issues here; for example, any ESX host with this initiator name will be able to connect to our targets and see the LUs. This is where CHAP can help to improve access control.

Adding CHAP Authentication on the iSCSI Target

Adding CHAP authentication is very easy to accomplish; we simply need to set a CHAP user name and secret on the respective iSCSI target. Here is an example of its application.

# itadm modify-target -s -u tcuid1 iqn.2009-06.target.ss1.1

Enter CHAP secret:
Re-enter secret:

The CHAP secret must be between 12 and 255 characters long. The addition of CHAP allows us to further reduce the risk of a potential storage security event. We can define an additional target, and it can have a different CHAP user name and/or secret.
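For example, a second target with its own CHAP identity could be created as follows; the target node name and CHAP user are assumed example values:

# itadm create-target -n iqn.2009-06.target.ss1.2 -s -u tcuid2

Enter CHAP secret:
Re-enter secret: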

CHAP is more secure when used with mutual authentication back to the source initiator, which is my preferred way to implement it on ESX 4 (ESX 3 does not support mutual CHAP). This mode does not stop a successful one-way authentication from an initiator to the target; it allows the initiator to require that the target host system's iSCSI services authenticate back to the initiator, which validates that the target is indeed the correct one. Here is an example of the target side initiator definition that would provide this capability.

# itadm modify-initiator -s -u icuid1 iqn.1998-01.com.vmware:vh1.1

Enter CHAP secret:
Re-enter secret:

Configuring the ESX 4 Software iSCSI Initiator

On the ESX 4 host side we need to enter our initiator side CHAP values.

ESX 4 iSCSI Mutual CHAP

 

Be careful here; there are three places we can configure CHAP elements. The General tab allows a global point of administration where any target will inherit those entered values by default where applicable, e.g. target CHAP settings. The Dynamic Discovery tab can override the global settings, and the Static Discovery tab overrides both the global and dynamic ones. In this example we are configuring a dynamically discovered target to use mutual (aka bidirectional) authentication.

In closing, CHAP is a reasonable method to ensure that we correctly grant initiator to target connectivity assignments in an effort to promote better integrity and availability. It does not however provide much in the way of confidentiality; for that we need more complex solutions like IPSec.

Hope you found this blog interesting.

Regards,

Mike

Site Contents: © 2009  Mike La Spina

VMware Kernel Return Codes

From time to time I find myself combing through VMware log files in an effort to diagnose various failure events. This is certainly not my favorite task, so to make the process a little less painful I decided to extract the vmkernel return codes from the VMware open source libraries and place an easily accessible tabled version of them on my blog. While it's not a very interesting blog entry it does have a useful purpose. I promise to get back to some more interesting entries soon. You can also use the console command vmkerrcode -l if it's handy.

This posted list is only for ESX 3.5.x; vSphere has different error codes and they do not map to ESX 3.5.x.

Regards,

Mike

 

0         Success
0xbad0001 Failure
0xbad0002 Would block
0xbad0003 Not found
0xbad0004 Busy
0xbad0005 Already exists
0xbad0006 Limit exceeded
0xbad0007 Bad parameter
0xbad0008 Metadata read error
0xbad0009 Metadata write error
0xbad000a I/O error
0xbad000b Read error
0xbad000c Write error
0xbad000d Invalid name
0xbad000e Invalid handle
0xbad000f No such SCSI adapter
0xbad0010 No such target on adapter
0xbad0011 No such partition on target
0xbad0012 No filesystem on the device
0xbad0013 Memory map mismatch
0xbad0014 Out of memory
0xbad0015 Out of memory (ok to retry)
0xbad0016 Out of resources
0xbad0017 No free handles
0xbad0018 Exceeded maximum number of allowed handles
0xbad0019 No free pointer blocks (deprecated)
0xbad001a No free data blocks (deprecated)
0xbad001b Corrupt RedoLog
0xbad001c Status pending
0xbad001d Status free
0xbad001e Unsupported CPU
0xbad001f Not supported
0xbad0020 Timeout
0xbad0021 Read only
0xbad0022 SCSI reservation conflict
0xbad0023 File system locked
0xbad0024 Out of slots
0xbad0025 Invalid address
0xbad0026 Not shared
0xbad0027 Page is shared
0xbad0028 Kseg pair flushed
0xbad0029 Max async I/O requests pending
0xbad002a Minor version mismatch
0xbad002b Major version mismatch
0xbad002c Already connected
0xbad002d Already disconnected
0xbad002e Already enabled
0xbad002f Already disabled
0xbad0030 Not initialized
0xbad0031 Wait interrupted
0xbad0032 Name too long
0xbad0033 VMFS volume missing physical extents
0xbad0034 NIC teaming master valid
0xbad0035 NIC teaming slave
0xbad0036 NIC teaming regular VMNIC
0xbad0037 Abort not running
0xbad0038 Not ready
0xbad0039 Checksum mismatch
0xbad003a VLan HW Acceleration not supported
0xbad003b VLan is not supported in vmkernel
0xbad003c Not a VLan handle
0xbad003d Couldn’t retrieve VLan id
0xbad003e Connection closed by remote host, possibly due to timeout
0xbad003f No connection
0xbad0040 Segment overlap
0xbad0041 Error parsing MPS Table
0xbad0042 Error parsing ACPI Table
0xbad0043 Failed to resume VM
0xbad0044 Insufficient address space for operation
0xbad0045 Bad address range
0xbad0046 Network is down
0xbad0047 Network unreachable
0xbad0048 Network dropped connection on reset
0xbad0049 Software caused connection abort
0xbad004a Connection reset by peer
0xbad004b Socket is not connected
0xbad004c Can’t send after socket shutdown
0xbad004d Too many references: can’t splice
0xbad004e Connection refused
0xbad004f Host is down
0xbad0050 No route to host
0xbad0051 Address already in use
0xbad0052 Broken pipe
0xbad0053 Not a directory
0xbad0054 Is a directory
0xbad0055 Directory not empty
0xbad0056 Not implemented
0xbad0057 No signal handler
0xbad0058 Fatal signal blocked
0xbad0059 Permission denied
0xbad005a Operation not permitted
0xbad005b Undefined syscall
0xbad005c Result too large
0xbad005d Pkts dropped because of VLAN (support) mismatch
0xbad005e Unsafe exception frame
0xbad005f Necessary module isn’t loaded
0xbad0060 No dead world by that name
0xbad0061 No cartel by that name
0xbad0062 Is a symbolic link
0xbad0063 Cross-device link
0xbad0064 Not a socket
0xbad0065 Illegal seek
0xbad0066 Unsupported address family
0xbad0067 Already connected
0xbad0068 World is marked for death
0xbad0069 No valid scheduler cell assignment
0xbad006a Invalid cpu min
0xbad006b Invalid cpu minLimit
0xbad006c Invalid cpu max
0xbad006d Invalid cpu shares
0xbad006e Cpu min outside valid range
0xbad006f Cpu minLimit outside valid range
0xbad0070 Cpu max outside valid range
0xbad0071 Cpu min exceeds minLimit
0xbad0072 Cpu min exceeds max
0xbad0073 Cpu minLimit less than cpu already reserved by children
0xbad0074 Cpu max less than cpu already reserved by children
0xbad0075 Admission check failed for cpu resource
0xbad0076 Invalid memory min
0xbad0077 Invalid memory minLimit
0xbad0078 Invalid memory max
0xbad0079 Memory min outside valid range
0xbad007a Memory minLimit outside valid range
0xbad007b Memory max outside valid range
0xbad007c Memory min exceeds minLimit
0xbad007d Memory min exceeds max
0xbad007e Memory minLimit less than memory already reserved by children
0xbad007f Memory max less than memory already reserved by children
0xbad0080 Admission check failed for memory resource
0xbad0081 No swap file
0xbad0082 Bad parameter count
0xbad0083 Bad parameter type
0xbad0084 Dueling unmaps (ok to retry)
0xbad0085 Inappropriate ioctl for device
0xbad0086 Mmap changed under page fault (ok to retry)
0xbad0087 Operation now in progress
0xbad0088 Address temporarily unmapped
0xbad0089 Invalid buddy type
0xbad008a Large page info not found
0xbad008b Invalid large page info
0xbad008c SCSI LUN is in snapshot state
0xbad008d SCSI LUN is in transition
0xbad008e Transaction ran out of lock space or log space
0xbad008f Lock was not free
0xbad0090 Exceed maximum number of files on the filesystem
0xbad0091 Migration determined a failure by the VMX
0xbad0092 VSI GetList handler overflow
0xbad0093 Invalid world
0xbad0094 Invalid vmm
0xbad0095 Invalid transaction
0xbad0096 Transient file system condition, suggest retry
0xbad0097 Number of running VCPUs limit exceeded
0xbad0098 Invalid metadata
0xbad0099 Invalid page number
0xbad009a Not in executable format
0xbad009b Unable to connect to NFS server
0xbad009c The NFS server does not support MOUNT version 3 over TCP
0xbad009d The NFS server does not support NFS version 3 over TCP
0xbad009e The mount request was denied by the NFS server. Check that the export exists and that the client is permitted to mount it
0xbad009f The specified mount path was not a directory
0xbad00a0 Unable to query remote mount point’s attributes
0xbad00a1 NFS has reached the maximum number of supported volumes
0xbad00a2 Out of nice memory
0xbad00a3 VMotion failed to start due to lack of cpu or memory resources
0xbad00a4 Cache miss
0xbad00a5 Error induced when stress options are enabled
0xbad00a6 Maximum number of concurrent hosts are already accessing this resource
0xbad00a7 Host doesn’t have a journal
0xbad00a8 Lock rank violation detected
0xbad00a9 Module failed
0xbad00aa Unable to open slave if no master pty
0xbad00ab Not IOAble
0xbad00ac No free inodes
0xbad00ad No free memory for file data
0xbad00ae No free space to expand file or meta data
0xbad00af Unable to open writer if no fifo reader
0xbad00b0 No underlying device for major,minor
0xbad00b1 Memory min exceeds memSize
0xbad00b2 No virtual terminal for number
0xbad00b3 Too many elements for list
0xbad00b4 VMM<->VMK shared area mismatch
0xbad00b5 Failure during exec while original state already lost
0xbad00b6 vmnixmod kernel module not loaded
0xbad00b7 Invalid module
0xbad00b8 Address is not aligned on page boundary
0xbad00b9 Address is not mapped in address space
0xbad00ba No space to record a message
0xbad00bb No space left on PDI stack
0xbad00bc Invalid exception handler
0xbad00bd Exception not handled by exception handler
0xbad00be Can’t open sparse/TBZ files in multiwriter mode
0xbad00bf Transient storage condition, suggest retry
0xbad00c0 Storage initiator error
0xbad00c1 Timer initialization failed
0xbad00c2 Module not found
0xbad00c3 Socket not owned by cartel
0xbad00c4 No VSI handler found for the requested node
0xbad00c5 Invalid mmap protection flags
0xbad00c6 Invalid chunk size for contiguous mmap
0xbad00c7 Invalid MPN max for contiguous mmap
0xbad00c8 Invalid mmap flag on contiguous mmap
0xbad00c9 Unexpected fault on pre-faulted memory region
0xbad00ca Memory region cannot be split (remap/unmap)
0xbad00cb Cache Information not available
0xbad00cc Cannot remap pinned memory
0xbad00cd No cartel group by that name
0xbad00ce SPLock stats collection disabled
0xbad00cf Boot image is corrupted
0xbad00d0 Branched file cannot be modified
0xbad00d1 Name is reserved for branched file
0xbad00d2 Unlinked file cannot be branched
0xbad00d3 Maximum kernel-level retries exceeded
0xbad00d4 Optimistic lock acquired by another host
0xbad00d5 Object cannot be mmapped
0xbad00d6 Invalid cpu affinity
0xbad00d7 Device does not contain a logical volume
0xbad00d8 No space left on device
0xbad00d9 Invalid vsi node ID
0xbad00da Too many users accessing this resource
0xbad00db Operation already in progress
0xbad00dc Buffer too small to complete the operation
0xbad00dd Snapshot device disallowed
0xbad00de LVM device unreachable
0xbad00df Invalid cpu resource units
0xbad00e0 Invalid memory resource units
0xbad00e1 IO was aborted
0xbad00e2 Memory min less than memory already reserved by children
0xbad00e3 Memory min less than memory required to support current consumption
0xbad00e4 Memory max less than memory required to support current consumption
0xbad00e5 Timeout (ok to retry)
0xbad00e6 Reservation Lost
0xbad00e7 Cached metadata is stale
0xbad00e8 No fcntl lock slot left
0xbad00e9 No fcntl lock holder slot left
0xbad00ea Not licensed to access VMFS volumes
0xbad00eb Transient LVM device condition, suggest retry
0xbad00ec Snapshot LV incomplete
0xbad00ed Medium not found
0xbad00ee Maximum allowed SCSI paths have already been claimed
0xbad00ef Filesystem is not mountable
0xbad00f0 Memory size exceeds memSizeLimit
0xbad00f1 Disk lock acquired earlier, lost
0x2bad0000 Generic service console error

Site Contents: © 2009  Mike La Spina
