VMware Kernel Return Codes

From time to time I find myself combing through VMware log files in an effort to diagnose various failure events. This is certainly not my favorite task, so to make the process a little less painful I decided to extract the vmkernel return codes from the VMware open source libraries and place an easily accessible tabled version of them on my blog. While it’s not a very interesting blog entry it does have a useful purpose, and I promise to get back to some more interesting entries soon. You can also use the console command vmkerrcode -l if it’s handy.
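For example, if a failure only shows up as a raw status value in the VMkernel log you can pull the codes out and match them against the table below. Treat this as a rough sketch; the log location and the vmkerrcode output format can vary between builds.

# Rough sketch - pull any 0xbad... status codes out of the VMkernel log and count them.
grep -o '0xbad[0-9a-f]\{4\}' /var/log/vmkernel | sort | uniq -c | sort -rn

# Decode against the host's own table, or just search the list posted below,
# e.g. 0xbad0022 -> SCSI reservation conflict.
vmkerrcode -l | less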

This posted list applies only to ESX 3.5.x; vSphere uses different error codes and they do not map to ESX 3.5.x.

Regards,

Mike

 

0         Success
0xbad0001 Failure
0xbad0002 Would block
0xbad0003 Not found
0xbad0004 Busy
0xbad0005 Already exists
0xbad0006 Limit exceeded
0xbad0007 Bad parameter
0xbad0008 Metadata read error
0xbad0009 Metadata write error
0xbad000a I/O error
0xbad000b Read error
0xbad000c Write error
0xbad000d Invalid name
0xbad000e Invalid handle
0xbad000f No such SCSI adapter
0xbad0010 No such target on adapter
0xbad0011 No such partition on target
0xbad0012 No filesystem on the device
0xbad0013 Memory map mismatch
0xbad0014 Out of memory
0xbad0015 Out of memory (ok to retry)
0xbad0016 Out of resources
0xbad0017 No free handles
0xbad0018 Exceeded maximum number of allowed handles
0xbad0019 No free pointer blocks (deprecated)
0xbad001a No free data blocks (deprecated)
0xbad001b Corrupt RedoLog
0xbad001c Status pending
0xbad001d Status free
0xbad001e Unsupported CPU
0xbad001f Not supported
0xbad0020 Timeout
0xbad0021 Read only
0xbad0022 SCSI reservation conflict
0xbad0023 File system locked
0xbad0024 Out of slots
0xbad0025 Invalid address
0xbad0026 Not shared
0xbad0027 Page is shared
0xbad0028 Kseg pair flushed
0xbad0029 Max async I/O requests pending
0xbad002a Minor version mismatch
0xbad002b Major version mismatch
0xbad002c Already connected
0xbad002d Already disconnected
0xbad002e Already enabled
0xbad002f Already disabled
0xbad0030 Not initialized
0xbad0031 Wait interrupted
0xbad0032 Name too long
0xbad0033 VMFS volume missing physical extents
0xbad0034 NIC teaming master valid
0xbad0035 NIC teaming slave
0xbad0036 NIC teaming regular VMNIC
0xbad0037 Abort not running
0xbad0038 Not ready
0xbad0039 Checksum mismatch
0xbad003a VLan HW Acceleration not supported
0xbad003b VLan is not supported in vmkernel
0xbad003c Not a VLan handle
0xbad003d Couldn’t retrieve VLan id
0xbad003e Connection closed by remote host, possibly due to timeout
0xbad003f No connection
0xbad0040 Segment overlap
0xbad0041 Error parsing MPS Table
0xbad0042 Error parsing ACPI Table
0xbad0043 Failed to resume VM
0xbad0044 Insufficient address space for operation
0xbad0045 Bad address range
0xbad0046 Network is down
0xbad0047 Network unreachable
0xbad0048 Network dropped connection on reset
0xbad0049 Software caused connection abort
0xbad004a Connection reset by peer
0xbad004b Socket is not connected
0xbad004c Can’t send after socket shutdown
0xbad004d Too many references: can’t splice
0xbad004e Connection refused
0xbad004f Host is down
0xbad0050 No route to host
0xbad0051 Address already in use
0xbad0052 Broken pipe
0xbad0053 Not a directory
0xbad0054 Is a directory
0xbad0055 Directory not empty
0xbad0056 Not implemented
0xbad0057 No signal handler
0xbad0058 Fatal signal blocked
0xbad0059 Permission denied
0xbad005a Operation not permitted
0xbad005b Undefined syscall
0xbad005c Result too large
0xbad005d Pkts dropped because of VLAN (support) mismatch
0xbad005e Unsafe exception frame
0xbad005f Necessary module isn’t loaded
0xbad0060 No dead world by that name
0xbad0061 No cartel by that name
0xbad0062 Is a symbolic link
0xbad0063 Cross-device link
0xbad0064 Not a socket
0xbad0065 Illegal seek
0xbad0066 Unsupported address family
0xbad0067 Already connected
0xbad0068 World is marked for death
0xbad0069 No valid scheduler cell assignment
0xbad006a Invalid cpu min
0xbad006b Invalid cpu minLimit
0xbad006c Invalid cpu max
0xbad006d Invalid cpu shares
0xbad006e Cpu min outside valid range
0xbad006f Cpu minLimit outside valid range
0xbad0070 Cpu max outside valid range
0xbad0071 Cpu min exceeds minLimit
0xbad0072 Cpu min exceeds max
0xbad0073 Cpu minLimit less than cpu already reserved by children
0xbad0074 Cpu max less than cpu already reserved by children
0xbad0075 Admission check failed for cpu resource
0xbad0076 Invalid memory min
0xbad0077 Invalid memory minLimit
0xbad0078 Invalid memory max
0xbad0079 Memory min outside valid range
0xbad007a Memory minLimit outside valid range
0xbad007b Memory max outside valid range
0xbad007c Memory min exceeds minLimit
0xbad007d Memory min exceeds max
0xbad007e Memory minLimit less than memory already reserved by children
0xbad007f Memory max less than memory already reserved by children
0xbad0080 Admission check failed for memory resource
0xbad0081 No swap file
0xbad0082 Bad parameter count
0xbad0083 Bad parameter type
0xbad0084 Dueling unmaps (ok to retry)
0xbad0085 Inappropriate ioctl for device
0xbad0086 Mmap changed under page fault (ok to retry)
0xbad0087 Operation now in progress
0xbad0088 Address temporarily unmapped
0xbad0089 Invalid buddy type
0xbad008a Large page info not found
0xbad008b Invalid large page info
0xbad008c SCSI LUN is in snapshot state
0xbad008d SCSI LUN is in transition
0xbad008e Transaction ran out of lock space or log space
0xbad008f Lock was not free
0xbad0090 Exceed maximum number of files on the filesystem
0xbad0091 Migration determined a failure by the VMX
0xbad0092 VSI GetList handler overflow
0xbad0093 Invalid world
0xbad0094 Invalid vmm
0xbad0095 Invalid transaction
0xbad0096 Transient file system condition, suggest retry
0xbad0097 Number of running VCPUs limit exceeded
0xbad0098 Invalid metadata
0xbad0099 Invalid page number
0xbad009a Not in executable format
0xbad009b Unable to connect to NFS server
0xbad009c The NFS server does not support MOUNT version 3 over TCP
0xbad009d The NFS server does not support NFS version 3 over TCP
0xbad009e The mount request was denied by the NFS server. Check that the export exists and that the client is permitted to mount it
0xbad009f The specified mount path was not a directory
0xbad00a0 Unable to query remote mount point’s attributes
0xbad00a1 NFS has reached the maximum number of supported volumes
0xbad00a2 Out of nice memory
0xbad00a3 VMotion failed to start due to lack of cpu or memory resources
0xbad00a4 Cache miss
0xbad00a5 Error induced when stress options are enabled
0xbad00a6 Maximum number of concurrent hosts are already accessing this resource
0xbad00a7 Host doesn’t have a journal
0xbad00a8 Lock rank violation detected
0xbad00a9 Module failed
0xbad00aa Unable to open slave if no master pty
0xbad00ab Not IOAble
0xbad00ac No free inodes
0xbad00ad No free memory for file data
0xbad00ae No free space to expand file or meta data
0xbad00af Unable to open writer if no fifo reader
0xbad00b0 No underlying device for major,minor
0xbad00b1 Memory min exceeds memSize
0xbad00b2 No virtual terminal for number
0xbad00b3 Too many elements for list
0xbad00b4 VMM<->VMK shared area mismatch
0xbad00b5 Failure during exec while original state already lost
0xbad00b6 vmnixmod kernel module not loaded
0xbad00b7 Invalid module
0xbad00b8 Address is not aligned on page boundary
0xbad00b9 Address is not mapped in address space
0xbad00ba No space to record a message
0xbad00bb No space left on PDI stack
0xbad00bc Invalid exception handler
0xbad00bd Exception not handled by exception handler
0xbad00be Can’t open sparse/TBZ files in multiwriter mode
0xbad00bf Transient storage condition, suggest retry
0xbad00c0 Storage initiator error
0xbad00c1 Timer initialization failed
0xbad00c2 Module not found
0xbad00c3 Socket not owned by cartel
0xbad00c4 No VSI handler found for the requested node
0xbad00c5 Invalid mmap protection flags
0xbad00c6 Invalid chunk size for contiguous mmap
0xbad00c7 Invalid MPN max for contiguous mmap
0xbad00c8 Invalid mmap flag on contiguous mmap
0xbad00c9 Unexpected fault on pre-faulted memory region
0xbad00ca Memory region cannot be split (remap/unmap)
0xbad00cb Cache Information not available
0xbad00cc Cannot remap pinned memory
0xbad00cd No cartel group by that name
0xbad00ce SPLock stats collection disabled
0xbad00cf Boot image is corrupted
0xbad00d0 Branched file cannot be modified
0xbad00d1 Name is reserved for branched file
0xbad00d2 Unlinked file cannot be branched
0xbad00d3 Maximum kernel-level retries exceeded
0xbad00d4 Optimistic lock acquired by another host
0xbad00d5 Object cannot be mmapped
0xbad00d6 Invalid cpu affinity
0xbad00d7 Device does not contain a logical volume
0xbad00d8 No space left on device
0xbad00d9 Invalid vsi node ID
0xbad00da Too many users accessing this resource
0xbad00db Operation already in progress
0xbad00dc Buffer too small to complete the operation
0xbad00dd Snapshot device disallowed
0xbad00de LVM device unreachable
0xbad00df Invalid cpu resource units
0xbad00e0 Invalid memory resource units
0xbad00e1 IO was aborted
0xbad00e2 Memory min less than memory already reserved by children
0xbad00e3 Memory min less than memory required to support current consumption
0xbad00e4 Memory max less than memory required to support current consumption
0xbad00e5 Timeout (ok to retry)
0xbad00e6 Reservation Lost
0xbad00e7 Cached metadata is stale
0xbad00e8 No fcntl lock slot left
0xbad00e9 No fcntl lock holder slot left
0xbad00ea Not licensed to access VMFS volumes
0xbad00eb Transient LVM device condition, suggest retry
0xbad00ec Snapshot LV incomplete
0xbad00ed Medium not found
0xbad00ee Maximum allowed SCSI paths have already been claimed
0xbad00ef Filesystem is not mountable
0xbad00f0 Memory size exceeds memSizeLimit
0xbad00f1 Disk lock acquired earlier, lost
0x2bad0000 Generic service console error


Additional VMFS Backup Automation script features

I was conversing with William Lam about my blog entry Protecting ESX VMFS Stores with Automation and we exchanged ideas on the simple automation script that I originally posted. William is well versed in bash and has brought more functionality to the original automation script. We now have a rolling backup set 10 versions deep, with folders organized by host name, store alias, date label and rolling instance number. The updated script is named vmfs-bu2, linked here.
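For anyone curious about the rotation idea, here is a rough sketch of how a rolling 10-deep set organized by host, store and date could be laid out. This is only my approximation to illustrate the concept, not William’s actual vmfs-bu2 code; the paths and variable names are hypothetical.

#!/bin/bash
# Rough sketch of a rolling 10-deep backup layout (hypothetical, not vmfs-bu2 itself).
HOST=$(hostname -s)
STORE=$1                                    # store alias passed in by the caller
BASE=/var/log/vmfs-backups/$HOST/$STORE
DATE=$(date +%Y%m%d)
INST=$(( 10#$(date +%j) % 10 ))             # 0-9 rolling instance number (10# avoids octal parsing)
DEST=$BASE/$DATE-$INST
mkdir -p $DEST
# ... dd header and metadata dumps into $DEST as in the original vmfs-bu ...
# Age out everything but the 10 most recent backup folders.
ls -1dt $BASE/* 2>/dev/null | tail -n +11 | xargs -r rm -rf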

Thanks for your contribution William!

Regards,

Mike


Protecting ESX VMFS Stores with Automation

Some time ago I shared some interesting information about VMFS volumes that I found using direct analysis in my blog entry named Understanding VMFS volumes. This spawned some discussions on the VMware Community forums, and it became apparent that an automated backup of the critical VMFS info could be useful in the event of an undesirable security event that impacts our system availability. By creating a simple backup script process we can provide the ability to recover much more quickly from such events. In this how-to guide we will enable this process with a cron job using the existing /etc/cron.daily/ job location directory. We simply need to copy an automation script to this location and it will run daily. Or, if your change rate is less frequent, the /etc/cron.weekly location may be more suitable.

I have provided a simple script named vmfs-bu, linked here, which creates VMFS header file backups using dd for all the VMFS volumes visible on an ESX host. The script will output its backups to the /var/log directory. Here is a sample of what the backups look like from one of my lab hosts.

-rw-r--r--    1 root     root      2097152 Mar 28 21:01 vmfs-header-backup-sda.hex
-rw-r--r--    1 root     root      2097152 Mar 28 21:01 vmfs-header-backup-sdb.hex
-rw-r--r--    1 root     root      2097152 Mar 28 21:01 vmfs-header-backup-sdc.hex
-rw-r--r--    1 root     root      2097152 Mar 28 21:01 vmfs-header-backup-sdd.hex
-r--------    1 root     root      8388608 Mar 28 21:01 vmfs-metadata-sda-vh.sf.bu
-r--------    1 root     root      8388608 Mar 28 21:01 vmfs-metadata-sdb-vh.sf.bu
-r--------    1 root     root      8388608 Mar 28 21:01 vmfs-metadata-sdc-vh.sf.bu
-r--------    1 root     root      8388608 Mar 28 21:01 vmfs-metadata-sdd-vh.sf.bu
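To illustrate what such a job boils down to, here is a minimal sketch of the idea. It is my approximation rather than the actual vmfs-bu script linked above, and it assumes the standard /dev/sdX service console device names shown in the listing.

#!/bin/bash
# Minimal sketch of the vmfs-bu idea (my approximation, not the linked script).
OUTDIR=/var/log

# Dump the first 2 MB of each candidate device (no check here that it actually holds VMFS).
for DEV in /dev/sd?; do
    NAME=$(basename $DEV)
    dd if=$DEV of=$OUTDIR/vmfs-header-backup-$NAME.hex bs=1M count=2 2>/dev/null
done

# Copy the hidden .vh.sf volume metadata file from each mounted VMFS volume.
for VOL in /vmfs/volumes/*; do
    [ -L "$VOL" ] && continue               # skip the label symlinks, keep the UUID directories
    [ -f "$VOL/.vh.sf" ] || continue
    LABEL=$(basename "$VOL")
    dd if="$VOL/.vh.sf" of=$OUTDIR/vmfs-metadata-$LABEL-vh.sf.bu bs=1M 2>/dev/null
done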

Also be sure that the script permissions are configured as executable using the following.

chmod 755 /etc/cron.daily/vmfs-bu

With these simple steps we can create a proactive failsafe for VMFS recovery in the event of those undesirable moments we hope will never occur.

In addition to this basic method we can extend its functionality to cover recovery from undesirable VM deletions by automating the experimental vmfs-undelete tool. Before I demonstrate this functionality I must advise you that the tool may have some negative impacts on certain configurations. The vmfs-undelete tool is a set of python scripts that read the vmdks’ block allocations and output them to an archive file. In order to read the vmdks the scripts invoke VM snapshots, and this will not work if a snapshot already exists. That is not a big issue since, as a best practice, you should not be leaving snapshots around anyway. However, this action can negatively impact VCB or other backup agents that require snapshotting to grant vmdk access, so if you do not carefully time the script invocation around other snapshot-dependent activities it could cause failures of those functions.

With this important information considered we can proceed to look at how to invoke the additional protection feature. The scripts originally developed by VMware are designed to be user interactive and cannot be used as originally coded, therefore I have modified them in order to provision some basic automation. You can access the modified scripts, named vmfs-undelete-auto-script and menuauto.py, here.

The modified menuauto.py script needs to be placed within the /usr/lib/vmware/python2.2/site-packages/vmware/undeletemods directory. While I could have just modified the existing menu.py script, it is subject to change, so this method prevents potential conflict issues. The vmfs-undelete-auto-script location is optional and it can be placed wherever you find appropriate; I chose to place it in the /root directory. The script requires a single argument that specifies its output location as a path and file name. Since there is a potential for conflict with other snapshot-based services, the script should be invoked using a cron job outside of the daily or other predefined jobs. This cron job can be implemented using the crontab facility. Here is an example of how to create it while logged in as root.

crontab -e

This command will create a cron entry in /var/spool/cron/root and will load vi as an editor. You will need to enter the following cron details to invoke the vmfs-undelete script. You can find more info on cron editing at http://en.wikipedia.org/wiki/Cron.

05 22 * * * python /root/vmfs-undelete-auto-script /var/log/vmfs-undelete-archive

This entry will execute at 10:05 PM every day. Again, be sure to set the script file permissions as executable, as shown here.

chmod 755 /root/vmfs-undelete-auto-script

When the script executes it can take several minutes to complete.
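If timing the cron job alone is not enough to stay clear of other snapshot-dependent activities, one option is to wrap the call with a simple pre-check and skip the run whenever a registered VM already holds a snapshot. This wrapper is hypothetical and not part of the posted scripts; the exact vmware-cmd output spacing can vary between releases, so the match is deliberately loose.

#!/bin/bash
# Hypothetical wrapper: only run the undelete archive job when no registered VM
# already holds a snapshot, since vmfs-undelete needs to take its own snapshots.
vmware-cmd -l | while read CFG; do
    # vmware-cmd prints something like "hassnapshot() = 1" when a snapshot exists.
    if vmware-cmd "$CFG" hassnapshot 2>/dev/null | grep -q '= *1'; then
        echo "Snapshot present on $CFG - skipping vmfs-undelete run" >&2
        exit 1
    fi
done && python /root/vmfs-undelete-auto-script /var/log/vmfs-undelete-archive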

Hope you found this information to be useful.

Regards,

Mike


Awarded VMware vExpert Status

While I was away at the OpenSolaris Storage Summit in San Francisco a really nice email appeared in my inbox which put a grateful smile on my face. John Troyer and his team felt that I was worthy of the vExpert award. I’m honoured to receive the recognition, and thank you to the judging panel members.

Congrats to all the other recipients as well.


OpenSolaris Storage Summit 2009

The OpenSolaris Storage Summit was really cool to attend this year. Mike Shapiro presented an interesting view of what is transpiring in storage hardware and where storage vendors need to focus in order to be successful in the next few years. As always, his presentation was a pleasure to follow. He talked about the s7000 series development and where it fits in terms of the current commodity hardware advancements. It was exciting to hear that we will see COMSTAR integration in the next firmware release coming in the second quarter of 2009. With the inclusion of COMSTAR we will have a very comprehensive storage provisioning solution that is fully supported by Sun.

I also had the pleasure of hearing Don MacAskill speak on his experiences with OpenSolaris and the voyage that brought him to success on the s7000 product. His content was brilliant as usual, and hopefully he will share more on SmugMug’s blog.

I presented in the afternoon and talked about using COMSTAR to re-provision existing storage systems in an effort to enhance the performance and capacity of these aging products and retain their value, as well as to bring desired features to them like compression, snapshots and replication without the high cost of licenses on the native systems. I also created a couple of stop-frame video demos. The first one demonstrated the ability to attach existing storage systems with Fibre Channel and reprovision LUs which can then be transitioned from one storage head to another without impacting a storage consumer connection. In this case the consumer was VMware, attached over both Fibre Channel and iSCSI in a multipath, multiprotocol configuration. The second demo revealed the cool world of encapsulation by virtualizing an OpenSolaris storage server within a VMware VM and then replicating ZFS from an X4500 to the virtualized OpenSolaris VM. Once in a virtual state we exposed replicated iSCSI targets to the underlying VMware ESXi server and attached to the VMFS volumes presented on the LUs.

Ben Rockwood also presented in the afternoon and it was a great pleasure to see him. He discussed his knowledge of ZFS, specifically some of the things he has discovered as best practices and the use of tools. It was very informative, and I wish he had had more time because the content was exceptional. All of the presenters, both mentioned and not, were really great, and I would like to thank all of them for giving us their valuable time in the OpenSolaris community efforts.

 

If you’re interested in the content please visit OpenSolaris Storage Summit

 

Regards,

 

Mike

