I'm in the process of trying to optimize the iomemory-vsl driver parameters (on the newest driver release at the time of this posting) for maximum sustained read/write speeds on a FusionIO Octal 5.12TB.
In doing so I noticed a small bug that kept the driver from loading the specified options. Within /etc/sysconfig/iomemory-vsl there is a line like so:
# Any special module parameters for iomemory-vsl: "modinfo iomemory-vsl"
# for a listing of driver parameters.
FIO_DRIVER_MOD_OPTS=""
This is where one would normally put the options to be loaded at boot when the module is inserted. However, with the newest version of the init script and driver, the variable name actually needs to be "iomemory_vsl_MOD_OPTS".
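In other words, on the newer release a working /etc/sysconfig/iomemory-vsl entry looks something like the following (use_workqueue=0 is only a placeholder to show the syntax, not a tuning recommendation):

# options are only picked up if the variable is named after the module
iomemory_vsl_MOD_OPTS="use_workqueue=0"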
That small bug aside, I've been messing with various module parameters to achieve optimum performance. However, I can't find a description of all the parameters that can be passed to the module. Is there a listing of these parameters anywhere? I've gleaned a few nuggets of information from various sources around the 'net, but it'd be nice to have a real description of each parameter and what it actually affects in the current driver.
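For what it's worth, the parameter names and their one-line descriptions can be enumerated, and the current values for a loaded module show up under /sys/module (note the underscore in the directory name); it's the longer explanation of what each one actually does that I'm after:

# list parameter names and their short descriptions
modinfo -p iomemory-vsl
# show the values currently in effect for the loaded module
grep -r . /sys/module/iomemory_vsl/parameters/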
Any help is appreciated!
I found several references indicating that smp_affinity can only be used with IO-APIC enabled devices, so I fully expected to see the same thing you did. I didn't. cat /proc/interrupts | grep 'CPU\|<irq#>:' showed that I was using MSI, and I was able to change which CPUs were handling the load after writing a mask to /proc/irq/<irq#>/smp_affinity.
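For reference, this is roughly the sequence I used, with <irq#> standing in for the card's interrupt number from /proc/interrupts:

# confirm the interrupt type and see which CPUs are servicing it
cat /proc/interrupts | grep 'CPU\|<irq#>:'
# pin the IRQ to CPU 3, for example (the value is a hex CPU bitmask)
echo 8 > /proc/irq/<irq#>/smp_affinity
cat /proc/irq/<irq#>/smp_affinity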
Thinking about it some more, though, the reason may have been that I was running in a VM (RHEL 6.1, kernel 2.6.32-131.6.1.el6.x86_64) on an ESXi box with PCI passthrough. I'll do some more tests on non-virtualized hardware and get back to you.
I tried on bare metal with the same kernel version, and MSI will not follow smp_affinity settings (which agrees with what I have been reading). As suspected, it must have been an ESXi thing for my first test.
Reading further, https://bugzilla.redhat.com/show_bug.cgi?id=432451 states: "MSI IRQ affinity cannot be changed reliably for PCI devices without MSI mask bits (an optional MSI feature)". This would lead me to conclude that the ioDrive is one of these devices.
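If you want to check that theory on your card, recent versions of lspci show whether the MSI capability advertises per-vector masking; look for "Maskable+" versus "Maskable-" on the MSI capability line (the bus address below is just a placeholder):

# dump the card's capabilities and look at the MSI line
lspci -vv -s <bus:dev.fn> | grep -i 'MSI:'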
I have an additional question that should be easier to answer:
Does the driver respond properly to setting IRQ smp_affinity when using MSI interrupts? On our machine all of the interrupts land on CPU 0 even with the mask set to 8888,88888888 (all CPUs in socket 3). I didn't test whether the IRQ mask worked with IO-APIC interrupts but I can if needed.
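In case it helps reproduce this, here is how I've been checking whether the mask actually takes effect (again with <irq#> as a placeholder for the card's interrupt number):

# write the desired mask and read it back
echo 8888,88888888 > /proc/irq/<irq#>/smp_affinity
cat /proc/irq/<irq#>/smp_affinity
# then watch whether the per-CPU counters for that IRQ move off CPU 0 under load
watch -d "grep '<irq#>:' /proc/interrupts"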
EDIT: After restarting and inserting the driver without "disable_msi=0", the IRQs can indeed be set to other cores. Is this a driver bug or is it intended behavior?
I'm surprised there is such a low level of traffic on this board. I would have thought the driver parameters would be of interest to more people than just myself.
I'll take a look at this tomorrow, see if I can reproduce the same results, and get back to you. As for a list explaining the load parameters, I haven't seen one, but I'll see what I can dig up.
Awesome! I wouldn't be surprised if the virtualized nature of your setup produces oddball results.
Thanks for taking a look at it - you're the only person so far that's taken any interest in my issue. :)
My pleasure - it's been a fun exercise :)
I wasn't able to get to my other server (I'm out of town this week). I'll be back in Monday or Tuesday and will let you know what I find.
Awesome! I wish I could do more testing but I only have one type of machine that I can fit these cards into.
Good to hear it's not just my system. I didn't see a huge performance difference between the two configurations after further testing, but it'd be nice if the bug were fixed (with regard to setting the options in the first place).
Thanks again for your testing!!!
Glad to be of help, and good luck with the Octal!