Encapsulating VT-d Accelerated ZFS Storage within ESXi
Some time ago I found myself conceptually provisioning ESXi hosts that could transition local storage in a distributed manner within an array of hypervisors. The architectural model resembles an amorphous cluster of servers which share a common VM client service that self-provisions shared storage to its parent hypervisor or even other external hypervisors. This concept originally became a reality in one of my earlier blog entries named Provisioning Disaster Recovery with ZFS, iSCSI and VMware. With the previous success of a DR scope we can now explore more adventurous applications of storage encapsulation and further coin the phrase "rampant layering violations of storage provisioning", with thanks to Jeff Bonwick, Bill Moore and the many other brilliant creative minds behind the ZFS storage technology advancements. One of the main barriers to success for this concept was the serious issue of circular latency within the self-provisioning storage VM. What this commonly means is a long wait cycle for the storage VM to ready the requested storage, since it must wait for the hypervisor to schedule access to the raw storage blocks for the virtualized shared target, which then re-provisions them to other VMs. This issue is acceptable for a DR application but it's a major show stopper for applications that require normal performance levels.
This major issue now has a solution with the introduction of Intel's VT-d technology. VT-d allows us to accelerate storage I/O functionality directly inside a VM served by VMware's ESX and ESXi hypervisors. VMware has leveraged Intel's VT-d technology on ESXi 4.x (AMD I/O Virtualization Technology, IOMMU, is also supported) as part of the feature named VMDirectPath. This feature allows us to insert high speed devices inside a VM, which can now host a device that operates at the hardware speed of the PCI bus, and that, my friend, allows virtualized ZFS storage provisioning VMs to dramatically reduce or eliminate the hypervisor's circular latency issue.
Very exciting indeed, so let's leverage a visual diagram of this amorphous server cluster concept to better capture what this vision actually entails.
The concept depicted here sets out a multipoint NFS share strategy. Each ESXi host provisions its own NFS share from its local storage, which can be accessed by any of the other hosts including itself. Additionally, each encapsulated storage VM incorporates ZFS replication to a neighboring storage VM in a ring pattern, allowing crash-consistent recovery in the event of a host failure. Each ESXi instance hosts a DDRdrive X1 PCIe card which is presented to its storage VM over VT-d and VMDirectPath, a.k.a. PCI pass-through. When managed via vCenter this solution allows us to svMotion VMs across the cluster, enabling rolling upgrades or hardware servicing.
The ZFS replication cycle works as a background ZFS send/receive script process that incrementally updates the target storage VM. One very useful feature of the ZFS send/receive capability is the include-ZFS-properties flag, -p. When this flag is used, any NFS share properties that are defined using "sharenfs=" will be sent to the target host. Thus the only required action to enable access to the replicated NFS share is to add it as an NFS storage target on our ESXi host. Of course we would also need to stop replication if we wish to use the backup, or clone it to a new share for testing. Testing the backup without cloning will result in a modified ZFS target file system and this could force a complete ZFS resend of the file system in some cases.
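For illustration, one replication cycle could be run by hand as follows; the snapshot names, the zfsadm user and the sp2/nas parent dataset on the target host uss2 are assumptions for this sketch (the script described later in this entry automates the process):
root@uss1:~# zfs snapshot sp1/nas/vol0@01-06-10-23:59
root@uss1:~# zfs send -p sp1/nas/vol0@01-06-10-23:59 | ssh zfsadm@uss2 zfs receive sp2/nas/vol0
root@uss1:~# zfs snapshot sp1/nas/vol0@02-06-10-23:59
root@uss1:~# zfs send -p -i sp1/nas/vol0@01-06-10-23:59 sp1/nas/vol0@02-06-10-23:59 | ssh zfsadm@uss2 zfs receive -F sp2/nas/vol0
The first full send carries the sharenfs property along with the data; each subsequent incremental send (-i) only transfers the blocks changed between the two snapshots.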
Within this architecture our storage VM is built with OpenSolaris snv_134, so we have the ability to engage ZFS deduplication. This not only improves the storage capacity, it also grants improved performance when we allocate sufficient memory to the storage VM. The ZFS ARC needs to cache each deduped block only once, which accelerates all dedup access requests. For example, if this cluster served a Virtual Desktop Infrastructure (VDI) environment we would see all the OS file allocation blocks enter the ZFS ARC cache, and thus all VMs that reference the same OS file blocks would be cache accelerated. Dedup also grants a benefit with ZFS replication through the ZFS send -D flag. This flag instructs ZFS send to produce the stream in dedup format, which dramatically reduces replication bandwidth and time consumption in a VMware environment.
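Dedup is enabled per file system; a minimal sketch using the dataset defined later in this entry would be:
root@uss1:~# zfs set dedup=on sp1/nas/vol0
root@uss1:~# zpool get dedupratio sp1
Adding -D to the zfs send commands shown earlier then transmits the stream in its deduplicated form.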
With VT-d we now have the ability to add a non-volatile disk device as a dedicated ZIL accelerator, commonly called a SLOG or Separate Intent Log. In this proof of concept architecture I have presented the DDRdrive X1 as a SLOG disk over VMware VMDirectPath to our storage VM. This was a challenge to accomplish as VT-d is just emerging and has many unknown behaviors with system PCI bus timing and IRQ handling. Coaxing VT-d to work correctly proved to be the most technically difficult component of this proof of concept; however, success is at hand using a reasonably cost effective ASUS motherboard in my home lab environment.
Let’s begin with the configuration of VT-d and VMware VMDirectPath.
VT-d requires system BIOS support and this function is available on the ASUS P6X58D series of motherboards. The feature is not enabled by default; you must change it in the BIOS. I have found that enabling VT-d does impact how ESXi behaves. For example, some local storage devices that were available prior to enabling VT-d may not be accessible after enabling it, resulting in messages like "cannot retrieve extended partition information".
The following screen shots demonstrate where you would find the VT-d BIOS setting on the P6X58D mobo.
If you're using an AMD 890FX based ASUS Crosshair IV mobo, then look for the IOMMU setting as depicted here:
Thanks go to Stu Radnidge over at http://vinternals.com/ for the screen shot!
Once VT-d or IOMMU is enabled, ESXi VMDirectPath can be enabled from the VMware vSphere client under host Configuration -> Advanced Settings, and a reboot will be required to complete any further PCI sharing configurations.
One challenge I encountered was PCIe bus timing issues. Fortunately the ASUS P6X58D's overclocking capability grants us the ability to align the clock timing on the PCIe bus by tuning the frequency and voltage, and thus I was able to stabilize the PCIe interface running the DDRdrive X1. Here are the original values I used that worked. Since that time I have pushed the i7 CPU to 4.0 GHz, but that can be risky since you need to raise the CPU and DRAM voltages, so I will leave the safe values for public consumption.
Once VT-d is active you will be able to edit the enumerated PCI device list check boxes and allow pass-through for the device of your choice. There are three important PCI values to note: the device ID, the vendor ID and the class ID. You can Google them or take this shortcut, http://www.pcidatabase.com/, to discover who owns the device and what class it belongs to. In this case I needed to identify the DDRdrive X1, and I know by the class ID 0100 that it is a SCSI device.
Once our DDRdrive X1 device is added to the encapsulated OpenSolaris VM, its shared IRQ mode will need to be adjusted so that no other IRQs are chained to it. This is done by adding a custom VM configuration parameter named pciPassthru0.msiEnabled and setting its value to false.
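For reference, the corresponding entry in the VM's configuration parameters (or directly in its .vmx file) looks like the following; pciPassthru0 assumes the X1 is the first pass-through device added to this VM:
pciPassthru0.msiEnabled = "FALSE"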
In this proof of concept the storage VM is assigned 4 GB of memory, which is reasonable for non-deduped storage. If you plan to dedup the storage I would suggest significantly more memory to allow the block hash table to be held in memory; this is important for performance and is also needed if you have to delete a ZFS file system. The amount will vary depending on the total storage provisioned; I would rough-estimate about 8 GB of memory for each 1 TB of used storage. As well, we have two network interfaces, one of which will provision the storage traffic only. Keep in mind that dedup is still developing and should be heavily tested; you should expect some issues.
If you have read my previous blog entry Running ZFS Over NFS as a VMware Store you will find the next section to be very similar. This is essentially many of the same steps but excludes aggregation and IPMP capability.
Starting from a basic completed OpenSolaris (Indiana) install, we can proceed to configure a shared NFS store, so let's begin with the IP interface. We don't need a complex network configuration for this storage VM, therefore we will just set up simple static IP interfaces: one to manage the OpenSolaris storage VM and one to provision the NFS store. Remember that you should normally separate storage networks from other network types, from both a management and security perspective.
OpenSolaris will default to a dynamic network service configuration named nwam; this needs to be disabled and the physical:default service enabled.
root@uss1:~# svcadm disable svc:/network/physical:nwam
root@uss1:~# svcadm enable svc:/network/physical:default
To persistently configure the interfaces we can store the IP address in the local hosts file. The file will be referenced by the physical:default service to define the network IP address of the interfaces when the service starts up.
Edit /etc/hosts to have the following host entries.
::1 localhost
127.0.0.1 uss1.local localhost loghost
10.0.0.1 uss1 uss1.domain.name
10.1.0.1 uss1.esan.data1
As an option if you don’t normally use vi you can install nano.
root@uss1:~# pkg install SUNWgnu-nano
When an OpenSolaris host starts up, the physical:default service will reference the /etc directory and match any plumbed network device to a file named with the prefix "hostname." and the interface name as its extension. For example, in this VM we have defined two Intel e1000 interfaces which will be plumbed using the following commands.
root@uss1:~# ifconfig e1000g0 plumb
root@uss1:~# ifconfig e1000g1 plumb
Once plumbed, these network devices will be enumerated by the physical:default service, and if a file named hostname.e1000g0 exists in the /etc directory the service will use the content of this file to configure the interface in the same format that ifconfig uses. Here we create the file using echo; the "uss1.esan.data1" name will be looked up in the hosts file and maps to IP 10.1.0.1, and the network mask and broadcast will be assigned as specified.
root@uss1:~# echo uss1.esan.data1 netmask 255.255.0.0 broadcast 10.1.255.255 > /etc/hostname.e1000g0
One important note: if your /etc/hostname.e1000g0 file has blank lines you may find that persistence fails for any interface defined after the blank line, so a sanity check for blank lines in the file is advised.
One important requirement is the default gateway or route. Here we will assign a default route via 10.0.0.254 on the management network, and we also need to add a route for network 10.1.0.0 using the following commands. Normally the routing function will dynamically assign the route for 10.1.0.0, so assigning a static one ensures that no undesired discovered gateways are found and used, which could cause poor performance.
root@uss1:~# route -p add default 10.0.0.254
root@uss1:~# route -p add 10.1.0.0 10.1.0.1
When using NFS I prefer provisioning name resolution as an additional layer of access control. If we use names to define NFS shares and clients, we can externally validate the incoming IP with a static file or DNS based name lookup. An OpenSolaris NFS implementation inherently grants this methodology: when a client IP requests access to an NFS share, a forward lookup can ensure the IP maps to a name which is granted access to the targeted share. We can simply define the desired FQDNs against the NFS shares.
In small configurations static files are acceptable, as is the case here. For large host farms the use of a DNS service instance would ease the admin cycle. You would just have to be careful that your cached Time To Live (TTL) value is greater than 2 hours, thus preventing excessive name resolution traffic. The TTL value controls how long the name is cached and this prevents constant external DNS lookups.
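If you do go the DNS route, you can check the remaining cached TTL of a client host record with a quick lookup; this assumes the BIND utilities are installed, and the host name below is just one of this article's examples. The second column of the answer is the TTL in seconds:
root@uss1:~# dig +noall +answer vh1-nas.laspina.ca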
To configure name resolution for both file and DNS we simply copy the predefined config file named nsswitch.dns to the active config file nsswitch.conf as follows:
root@uss1:~# cp /etc/nsswitch.dns /etc/nsswitch.conf
Enabling DNS will require the configuration of our /etc/resolv.conf file which defines our name servers and namespace.
e.g.
root@ss1:~# cat /etc/resolv.conf
domain laspina.ca
nameserver 10.1.0.200
nameserver 10.1.0.201
You can also use the static /etc/hosts file to define any resolvable name to IP mapping, which is my preferred method, but since we are using ESXi I will use DNS to ease the administration cycle and avoid the unsupported console hack of ESXi.
It is now necessary to define a zpool using our VT-d enabled PCI DDRdrive X1 and a VMDK. The VMDK can be located on any suitable VT-d compatible adapter. There is a good chance that some HBA devices will not work correctly with VT-d and your system BIOS. As a tip, I suggest you use a USB disk to provision the ESXi installation as it almost always works and is easy to back up and transfer to other hardware. In this POC I used a 500GB SATA disk attached over an ICH10 AHCI interface. Obviously there are better performing disk subsystems available; however, this is a POC and not for production consumption.
To establish the zpool we need to ID the PCI to CxTxDx device mappings. There are two ways that I am aware of to find these names: you can read through the output of the prtconf -v command and look for disk instances and dev_links, or do it the easy way and use the format command as follows.
root@uss1:~# format
Searching for disks…done
AVAILABLE DISK SELECTIONS:
0. c8t0d0 <DEFAULT cyl 4093 alt 2 hd 128 sec 32>
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c8t1d0 <VMware-Virtual disk-1.0-256.00GB>
/pci@0,0/pci15ad,1976@10/sd@1,0
2. c11t0d0 <DDRDRIVE-X1-0030-3.87GB>
/pci@0,0/pci15ad,7a0@15/pci19e3,8@0/sd@0,0
Specify disk (enter its number): ^C
root@uss1:~#
With the device link info handy we can define the zpool with the DDRdrive X1 as a ZIL using the following command:
root@uss1:~# zpool create sp1 c8t1d0 log c11t0d0
root@uss1:~# zpool status
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c8t0d0s0 ONLINE 0 0 0
errors: No known data errors
pool: sp1
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
sp1 ONLINE 0 0 0
c8t1d0 ONLINE 0 0 0
logs
c11t0d0 ONLINE 0 0 0
errors: No known data errors
With a functional IP interface and ZFS pool complete, you can define the NFS share and ZFS file system. Always define NFS properties using zfs set sharenfs=; the share parameters are stored as part of the ZFS file system, which is ideal for system failure recovery or ZFS relocation.
zfs create -p sp1/nas/vol0
zfs set mountpoint=/export/uss1-nas-vol0 sp1/nas/vol0
zfs set sharenfs=rw,nosuid,root=vh3-nas:vh2-nas:vh1-nas:vh0-nas sp1/nas/vol0
To connect a VMware ESXi host to this NFS store we need to define a VMkernel network interface, which I like to name eSAN-Interface1. This interface should only connect to the storage network vSwitch; the management network and VM network should be on another, separate vSwitch.
Since we are encapsulating the storage VM on the same server, we also need to connect the VM to the storage interface over a VM network port group as shown above. At this point we have all the base NFS services ready, and we can now connect our ESXi host to the newly defined NAS storage target.
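As a hedged alternative to clicking through the vSphere client, the same NAS datastore could also be added from the remote vSphere CLI; the datastore label uss1-nas-vol0 and the host placeholder below are my own assumptions:
vicfg-nas --server <esxi-host> -a -o uss1.esan.data1 -s /export/uss1-nas-vol0 uss1-nas-vol0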
Thus we now have an encapsulated NFS storage VM provisioning an NFS share to its parent hypervisor.
You may have noticed that the capacity of this share is ~390GB even though we only granted a 256GB vmdk to this storage VM. The capacity anomaly is the result of ZFS deduplication on the shared file system. There are 10 16GB Windows XP hosts and 2 32GB Linux hosts located on this file system, which would normally require 224GB of storage. Obviously dedup is a serious benefit in this case; however, you need to be aware of the costs. In order to sustain performance levels similar to non-deduped storage you MUST grant the ZFS code sufficient memory to hold the block hash table in memory. If this memory is not provisioned in sufficient amounts, your storage VM will be relegated to what appears to be a permanent storage bottleneck; in other words you will enter a "processing time vortex". (Thus, as I have cautioned in the past, ZFS dedup is maturing and needs some code changes before trusting it to mission critical loads. Always test, test, test and repeat until your head spins.)
Here's the result of using dedup within the encapsulated storage VM.
root@uss1:~# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
rpool 7.94G 3.64G 4.30G 45% 1.00x ONLINE –
sp1 254G 24.9G 229G 9% 6.97x ONLINE –
And here's a look at what it's serving.
Incredibly, the I/O performance is simply jaw-dropping fast. Here we are observing a grueling 100% random read load at 512 bytes per request. Yes, that's correct, we are reaching 40,420 I/Os per second.
Even more incredible is the I/O performance with a 100% random write load at 512 bytes per request. It's simply unbelievable seeing 38,491 I/Os per second inside a VM which is served from a peer VM, all on the same hypervisor.
With a successfully configured and operational NFS share provisioned, the next logical task is to define and automate the replication of this share, and any other shares we may wish to add, to a neighboring encapsulated storage VM or, for that matter, any OpenSolaris host.
The basic elements of this functionality are as follows:
- Define a dedicated secured user to execute the replication functions.
- Grant the appropriate permissions to this user to access a cron and ZFS.
- Assign an RSA Key pair for automated ssh authentication.
- Define a snapshot replication script using ZFS send/receive calls.
- Define a cron job to regularly invoke the script.
Let's define the dedicated replication user. In this example I will use the name zfsadm.
First we need to create the zfsadm user on all of our storage VMs.
root@uss1:~# useradd -s /bin/bash -d /export/home/zfsadm -P 'ZFS File System Management' zfsadm
root@uss1:~# mkdir /export/home/zfsadm
root@uss1:~# cp /etc/skel/* /export/home/zfsadm
root@uss1:~# echo PATH=/bin:/sbin:/usr/ucb:/etc:. > /export/home/zfsadm/.profile
root@uss1:~# echo export PATH >> /export/home/zfsadm/.profile
root@uss1:~# echo PS1=$'${LOGNAME}@$(/usr/bin/hostname)'~#' ' >> /export/home/zfsadm/.profile
root@uss1:~# chown -R zfsadm /export/home/zfsadm
root@uss1:~# passwd zfsadm
In order to use an RSA key for authentication we must first generate an RSA private/public key pair on the storage head. This is performed using ssh-keygen while logged in as the zfsadm user. You must set the passphrase as blank otherwise the session will prompt for it.
root@uss1:~# su - zfsadm
zfsadm@uss1~#ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/zfsadm/.ssh/id_rsa):
Created directory '/export/home/zfsadm/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /export/home/zfsadm/.ssh/id_rsa.
Your public key has been saved in /export/home/zfsadm/.ssh/id_rsa.pub.
The key fingerprint is:
0c:82:88:fa:46:c7:a2:6c:e2:28:5e:13:0f:a2:38:7f zfsadm@uss1
zfsadm@uss1~#
The id_rsa file should not be exposed outside of this directory as it contains the private key of the pair; only the public key file id_rsa.pub needs to be exported. Now that our key pair is generated we need to append the public portion of the key pair to a file named authorized_keys2.
# cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys2
Repeat all the crypto key steps on the target VM as well.
We will use the secure copy command (scp) to place the public key file in the target host's zfsadm user home directory. It's very important that the private key is secured properly; it is not necessary to back it up, as you can regenerate the pair if required.
From the local server here named uss1 (The remote server is uss2)
zfsadm@uss1~# scp $HOME/.ssh/id_rsa.pub uss2:$HOME/.ssh/uss1.pub
Password:
id_rsa.pub 100% |**********************************************| 603 00:00
zfsadm@uss1~# scp uss2:$HOME/.ssh/id_rsa.pub $HOME/.ssh/uss2.pub
Password:
id_rsa.pub 100% |**********************************************| 603 00:00
zfsadm@uss1~# cat $HOME/.ssh/uss2.pub >> $HOME/.ssh/authorized_keys2
And on the remote server uss2
# ssh uss2
password:
zfsadm@uss2~# cat $HOME/.ssh/uss1.pub >> $HOME/.ssh/authorized_keys2
# exit
Now that we are able to authenticate without a password prompt, we need to define the automated replication launch using cron. Rather than using the /etc/cron.allow file to grant permissions to the zfsadm user, we are going to use a finer instrument and grant the user access at the user properties level as shown here. Keep in mind you cannot use both methods simultaneously.
root@uss1~# usermod -A solaris.jobs.user zfsadm
root@uss1~# crontab -e zfsadm
59 23 * * * ./zfs-daily-rpl.sh zfs-daily.rpl
Hint: crontab uses vi – http://www.kcomputing.com/kcvi.pdf “vi cheat sheet”
The key sequence is: hit "i", key in the line, then hit "esc :wq" to save, or "esc :q!" to abort.
Be aware of the timezone the cron service runs under; you should check it and adjust it if required. Here is an example of what's required to set it.
root@uss1~# pargs -e `pgrep -f /usr/sbin/cron`
8550: /usr/sbin/cron
envp[0]: LOGNAME=root
envp[1]: _=/usr/sbin/cron
envp[2]: LANG=en_US.UTF-8
envp[3]: PATH=/usr/sbin:/usr/bin
envp[4]: PWD=/root
envp[5]: SMF_FMRI=svc:/system/cron:default
envp[6]: SMF_METHOD=start
envp[7]: SMF_RESTARTER=svc:/system/svc/restarter:default
envp[8]: SMF_ZONENAME=global
envp[9]: TZ=PST8PDT
Let’s change it to CST6CDT
root@uss1~# svccfg -s system/cron:default setenv TZ CST6CDT
Also, the default environment path for cron may cause some "command not found" issues in scripts; check for a path and adjust it if required.
root@uss1~# cat /etc/default/cron
#
# Copyright 1991 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#pragma ident “%Z%%M% %I% %E% SMI”
CRONLOG=YES
This one has no default path; append the path using echo.
root@uss1~# echo PATH=/usr/bin:/usr/sbin:/usr/ucb:/etc:. >> /etc/default/cron
# svcadm refresh cron
# svcadm restart cron
The final part of the replication process is a script that will handle the ZFS send/recv invocations. I have written a script in the past that can serve this task with some very minor changes.
Here is the link for the modified zfs-daily-rpl.sh replication script. You will need to grant exec rights to this file, e.g.
# chmod 755 zfs-daily-rpl.sh
This script requires that a zpool named sp2 exists on the target system; this is shamefully hard coded in the script.
A file containing the file systems to replicate and their targets is required as well.
e.g.
zfs-daily-rpl.sh filesystems.lst
Where filesystems.lst contains:
sp1/nas/vol0 uss2
sp1/nas/vol1 uss2
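For readers who cannot retrieve the script, here is a minimal sketch of the kind of logic it performs, written against the two-field list format above; the snapshot naming, the hard coded sp2 target pool and the lack of error handling are simplifications of my own and not the original script:
#!/usr/bin/bash
# Sketch: replicate each ZFS file system listed in $1 to its target host.
# List format: <filesystem> <desthost>   e.g.  sp1/nas/vol0 uss2
LIST=$1
STAMP=$(date +%d-%m-%y-%H:%M)
while read FS DHOST; do
    [ -z "$FS" ] && continue                      # skip blank lines
    TFS="sp2/${FS#*/}"                            # sp1/nas/vol0 -> sp2/nas/vol0
    zfs snapshot "$FS@$STAMP"
    # Previous snapshot (second newest) for an incremental send
    PREV=$(zfs list -H -t snapshot -o name -s creation -r "$FS" | tail -2 | head -1)
    if ssh "$DHOST" zfs list "$TFS" > /dev/null 2>&1; then
        zfs send -p -i "$PREV" "$FS@$STAMP" | ssh "$DHOST" zfs receive -F "$TFS"
    else
        zfs send -p "$FS@$STAMP" | ssh "$DHOST" zfs receive "$TFS"
    fi
done < "$LIST"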
With any ZFS replicated file system that you wish to invoke on a remote host, it is important to remember not to make changes to the active replication stream. You must take a clone of this replication stream; this avoids forcing a complete resend or other replication issues when you wish to test or validate that it's operating as you expect.
For example:
We take a clone of one of the snapshots and then share it via NFS:
root@uss2~# zfs clone sp2/nas/vol0@31-04-10-23:59 sp2/clones/uss1/nas/vol0
root@uss2~# zfs set mountpoint=/export/uss1-nas-vol0 sp2/clones/uss1/nas/vol0
root@uss2~# zfs set sharenfs=rw,nosuid,root=vh3-nas:vh2-nas:vh1-nas:vh0-nas sp2/clones/uss1/nas/vol0
Well I hope you found this entry interesting.
Regards,
Mike
Tags: Acceleration, Dedup, Encapsulation, IOMMU, NFS, Storage, VMware, VT-d, zfs
Site Contents: © 2010 Mike La Spina
Hello,
I also suppose that adding a ZFS storage server to a virtual server is at the moment the best solution for an all-in-one virtual server.
I have a similar concept with ESXi + Nexenta + the napp-it web GUI to manage the server.
See my all-in-one server concept at http://www.napp-it.org
Gea
Nice solution Gea!
I’m looking forward to watching it mature!
Regards,
Mike
Great post Mike, I have a new AMD 890FX based motherboard and have a screeny of the BIOS setup for IOMMU if you want it (just for the sake of completeness 🙂
Thanks Stu,
That would be great, I’ll ping your email.
Mike,
My colleague and I were headed down the same path with hopes to use the DDRDrive X1 as a ZIL in an embedded ZFS VM for some unique VM storage possibilities. To get the ball rolling I simply attached the DDRDrive to a Windows 2008 VM to run some simple IOMeter tests. I'm seeing extremely high interrupt rates when running tests against the passthru device; about 20x higher than running off an NFS volume on a remote server. Because of this, I believe my IOPS are falling quite short of their potential; roughly 20,000 IOPS using 512 byte sequential transfers. Any thoughts on tweaks to drop the interrupts and hopefully achieve results similar to yours? As an FYI – I am running a Dell R710 with E5520 Nehalems and my VM is configured with 2 CPUs and 2GB RAM. No contention on the host, but my CPU and memory shares are set to high with full reservations as a precaution.
Thanks for sharing all this great material.
All the best,
Rob
Hi Rob,
As I think about the way NFS operates versus raw SCSI CDBs, I would expect significantly higher IRQ service calls from the hardware layer when driving 512 byte I/Os. NFS will place the I/Os on the network adapter ring buffers and these will be passed over the bus with a DMA service call in larger blocks, which is very different from a SCSI CDB call handler. NFS will gather and write based on its record size versus the incoming size. I think the real issue here may be the VM CPU scheduler service timing and behavior.
I would explore some adjustments of the hypervisor's advanced settings in the area of irq.routingpolicy, irq.*weight and possibly irq.irqrebalanceperiod.
I would also configure only one vCPU to simplify and then re-examine its behavior as this can help to indicate where the root cause could be.
Another area of concern would be any shared hardware PCI paths, be careful of the slot placements, I suspect that PCI bridge overloading would cause excessive irq handler calls.
Regards,
Mike
Which 890FX motherboard do you have with IOMMU?
Did you try it with XEN?
Hi Leonardo,
You can check with Stu Radnidge and see if he has tried XEN with it. The board is the ASUS Crosshair IV Formula and it supports IOMMU 1.2 which includes the required ACPI Tables but it’s BIOS support is simply on or off and does not have tunable settings.
Regards,
Mike
You wrote an earlier piece on iSCSI vs NFS CPU utilisation; I would be interested to see the numbers with the latest 4.0u2 upgrade, to see how far things have improved over your original test.
Hi Brad,
I have a build with it right now; even without a benchmark I noticed improvements in both CPU load and general performance. I will see if I have time to throw a bench on it.
Regards,
Mike
G’day Mike,
Great article..
With your VT-d and adjustments of voltage etc; how did you work these out? I am doing some other projects (tv tuner) with vt-d and feel the pci-express tuning needs attention.
thanks
david
Hi David,
Thanks,
It was worked out by trial and error. There are no bus timing specifications available, and the equipment would be far too costly to attain even if there were. I simply incremented the bus and CPU speed voltages until the system was stable, then incremented to an unstable point and backed down to the center frequency.
Regards,
Mike
Hi, you mention this board in your post: the AMD 890FX based ASUS Crosshair IV. Can you confirm that you were able to successfully pass a device through to a VM in ESX using this board?
I have had issues with a similar board based on the 890FX chipset and am looking to replace it with the above mentioned one…but want to confirm I will be all good with VMDirectPath.
Sorry, I cannot confirm that it works with vmdirectpath. Try posting it to Stu Radnidge.
Hi Mike,
just tested it following a convo with Stu, and it does appear to have worked; at least the IOMMU driver is being properly picked up by ESXi… I am going to get a tv-tuner and try passing it to my VM … updates to follow.
That’s great, looking forward to what you discover using IOMMU.
Regards,
Mike
I could not wait, and instead set one of my NICs in the passthrough mode and assigned it to my windows xp 32 bit VM – just to see if it worked. See http://img.skitch.com/20101028-c4hp39qtkt8hiy1ppsxrupapmd.jpg for results. So, yes it is working.
On the other hand my disk read/write is way too slow.
What sort of disk (SATA) speed are you usually getting on consumer grade systems in ESXi – disks set as a data store?
thanks
Hi Mike,
You will see a variety of performance differences from system to system, but generally using AHCI you will see about 15-20 MB/s from a VM using a single-threaded I/O process. Running IOMeter with multiple workers can yield up to 120MB/s.
Regards,
Mike
hi mike,
I know that a ZIL can improve write IOPS, but why would random read IOPS be improved?
Read operations should happen against the VMDK, right?
The VMDK file is basically just blocks on a physical disk. SSD acceleration works for both read and write; in this case the ZIL is so fast that there is very little difference in the more intensive write operation. The L2ARC is on the same SSD, and since this is an NVRAM SCSI device the IOPS are remarkable.
Regards,
Mike
hi mike,
You mean the ZIL and L2ARC are on the same SSD (DDRdrive X1)? My mind is in a complete haze.
Hi Phio,
I did not word that very well. What I was trying to convey is that the L2ARC cache is stored in DRAM, and after a read warm-up time it will accelerate speed and behave like the SSD's NVRAM ZIL, since they are both RAM based cache storage. The read IOPS would probably hit a peak of approximately 125,000 if the data were all in DRAM.
Regards,
Mike
Thanks Mike, I am clear now.
After setting up 2 VMs just to test the ZFS replication, I got as far as the automation step and tried launching your script manually with 'zfs-daily-rpl.sh filesystems.lst'.
I get the following error:
ssh: nas_pool/vol0: node name or service name not known
ssh: nas_pool/vol0: node name or service name not known
ssh: nas_pool/vol0: node name or service name not known
ssh: nas_pool/vol0: node name or service name not known
I have edited your script to reflect my pool name of ‘nas_pool’
in the filesystem.lst is the following:
nas_pool/vol0 test-san-02
test-san-01 is my first SAN box; test-san-02 is the test SAN box that I want to replicate to…
The replication.log shows:
Wednesday, September 14, 2011 10:37:53 AM PDT -> nas_pool/vol0@14-09-11-10:37 Snapshot creation start.
Wednesday, September 14, 2011 10:37:53 AM PDT -> nas_pool/vol0@14-09-11-10:37 Snapshot creation end.
Wednesday, September 14, 2011 10:37:53 AM PDT -> nas_pool/vol0 file system does not exist on target host nas_pool/vol0.
Wednesday, September 14, 2011 10:37:53 AM PDT -> nas_pool/vol0@14-09-11-10:37 Initial replication start.
Wednesday, September 14, 2011 10:37:53 AM PDT -> nas_pool/vol0@14-09-11-10:37 Initial replication end.
Any help is much appreciated.
Ryan
Hi Ryan,
You should have both hosts in the filesystem list.
e.g.
zpool1/vol0 sourcehost desthost
Regards,
Mike
Thanks mike! I will try that out!
I debugged your script and this is what it looks like is happening:
+ parse_fs_list nas_pool/vol0 test-san-01 test-san-02
+ fs_list_line=nas_pool/vol0
++ echo nas_pool/vol0
++ cut -d, -f1
+ zfspath=nas_pool/vol0
++ echo nas_pool/vol0
++ cut -d, -f2
+ shost=nas_pool/vol0
++ echo nas_pool/vol0
++ cut -d, -f3
+ dhost=nas_pool/vol0
Ryan,
I see two possible causes: the text file was created on a Windows host and contains hidden characters, or bash does not like the hyphens in the host names.
Try creating the txt file on the OI server using nano.
I will explore the host name with hyphens today.
Ryan
It appears that your script is no longer available 🙁
Hey Mike!
I wanted to let you know that every thing is functioning properly and to thank you for posting this information.
I have created a single script based on the script you mention in this article and the script mentioned in the ZFS Snapshot Rollup Bash Script post, to manage not only the snapshot creation and sending to a remote server but also maintaining X days worth of snaps. I also removed the need for hard coding the pool names.
If you would like, I can send you the script for publishing, as the original concept and most of the scripting ideas came from your 2 scripts; however, I am not 100% up to speed on what the rules are about GNU, not to mention all I did was clean things up and arrange things so that they made more sense to me.
My thoughts on the script, and the next thing I think I will implement into it, is to have more flexibility on pool names and data set names, as currently the pool and data set names remain consistent (which seemed logical as I am creating a network mirrored file system).
Let me know your thoughts on this.
Again thank you so much for your creativity and input on this subject.
Ryan
Excellent work Ryan! I would love to post your contribution to OpenIndiana/Solaris/Illumos. I will send you an email to facilitate it.
Thanks,
Mike
Sent hope it looks ok.
Ryan
Got to use /usr/bin/pfsh instead of /bin/bash else profile permissions don’t apply.