IOPS Testing and Shit

Attaching an SSD as a ZIL to a ZFS pool (a raidz of 4x500GB hard drives) resulted in a ~4.89x performance increase under a real-world VMware access pattern (60% random, 65% write); an SSD on its own netted a ~10x improvement. When more than one worker was assigned, IOPS on the hybrid storage pool (HSP) held at a constant level, whereas the standard pool quickly dropped to unusable. When the main OS was moved to a USB device and the freed-up SSD was split into two partitions, one used as a read cache (L2ARC) for the ZFS system and the other as the ZIL for the raidz pool, with the remaining SSD in its own pool, a relative ~16x IOPS improvement was seen when running multiple workloads from different NFS datastores. The zpool commands for that layout are sketched below.
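For reference, here's a rough sketch of the zpool commands for this kind of hybrid setup. The pool name (tank), the device names, and the slice names are hypothetical placeholders, not the actual ones from this box:

    # First test: whole SSD as a dedicated ZIL (log device) on the raidz pool
    zpool add tank log c1t4d0

    # Final layout: split the other SSD into two slices,
    # one as an L2ARC read cache and one as the ZIL
    zpool add tank cache c1t5d0s0
    zpool add tank log c1t5d0s1

    # Pull the first SSD back out of the raidz pool
    # and give it a pool of its own
    zpool remove tank c1t4d0
    zpool create ssdpool c1t4d0

Nice thing about log and cache devices is they're add/remove operations on a live pool, so you can reshuffle between test runs without rebuilding anything.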

So anywho, I'm now flooding the fucking pipes and need to set up link
aggregation (LAG). A VERY cost-effective upgrade ($260 for a hell of
an ROI and performance increase).
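The LAG itself would look something like this; a rough sketch assuming an OpenSolaris/illumos-style box with dladm (on FreeBSD it'd be a lagg interface instead), and the NIC names, aggregation name, and address here are made up for illustration:

    # Bond two NICs into an LACP aggregation (hypothetical link names)
    dladm create-aggr -L active -l e1000g0 -l e1000g1 aggr0

    # Plumb the aggregation and give it an address
    ipadm create-ip aggr0
    ipadm create-addr -T static -a 192.168.1.10/24 aggr0/v4

The switch ports on the other end need LACP enabled too, or the aggregation won't come up.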