[Gluster-users] XFS and MD RAID

Brian Candler B.Candler@pobox.com
Wed Aug 29 14:26:26 UTC 2012


On Wed, Aug 29, 2012 at 08:47:22AM -0400, Brian Foster wrote:
> We have a few servers with 12 drive LSI RAID controllers we use for
> gluster (running XFS on RHEL6.2). I don't recall seeing major issues,
> but to be fair these particular systems see more hacking/dev/unit test
> work than longevity or stress testing. We also are not using MD in any
> way (hardware RAID).
> 
> I'd be happy to throw a similar workload at one of them if you can
> describe your configuration in a bit more detail: specific MD
> configuration (RAID type, chunk size, etc.), XFS format options and
> mount options, anything else that might be in the I/O stack (LVM?),
> specific bonnie++ test you're running (a single instance? or some kind
> of looping test?).

Running a couple of concurrent instances of

  while [ 1 ]; do bonnie++ -d /mnt/point -s 16384k -n 98:800k:500k:1000; done

was enough to make it fall over for me when the underlying filesystem was
XFS, but not with ext4 or btrfs.  This was on a system with 24 disks: 16 on
an LSI 2116 controller and 8 on an LSI 2008.  It was MD RAID0:

  mdadm --create /dev/md/scratch -n 24 -c 1024 -l raid0 /dev/sd{b..y}
  mkfs.xfs -n size=16384 /dev/md/scratch
  mount -o inode64 /dev/md/scratch /mnt/point
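
In case it helps reproduce, running the concurrent instances can be as simple
as backgrounding two of those loops, e.g. against separate subdirectories of
the mount point (the subdirectory names here are only illustrative):

  mkdir -p /mnt/point/run1 /mnt/point/run2
  # two backgrounded bonnie++ loops hammering the same XFS filesystem
  while [ 1 ]; do bonnie++ -d /mnt/point/run1 -s 16384k -n 98:800k:500k:1000; done &
  while [ 1 ]; do bonnie++ -d /mnt/point/run2 -s 16384k -n 98:800k:500k:1000; done &
  wait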

I'm in the process of testing with a cut-down configuration of 8 or 4 disks,
using only one controller card, to see whether I can get it to fail there. So
far, no failure after 3 hours.
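
The cut-down runs use the same recipe with fewer member devices; for the 8
disks on a single controller it is something like the following (the exact
device letters depend on which disks sit on that card):

  # 8-disk RAID0 on one controller, same chunk size and XFS options as above
  mdadm --create /dev/md/scratch -n 8 -c 1024 -l raid0 /dev/sd{b..i}
  mkfs.xfs -n size=16384 /dev/md/scratch
  mount -o inode64 /dev/md/scratch /mnt/point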

> Could you collect the generic data and post it to linux-xfs? Somebody
> might be able to read further into the problem via the stack traces. It
> also might be worth testing an upstream kernel on your server, if possible.

I posted the tracebacks to the xfs@sgi.com mailing list (I wasn't aware of
linux-xfs); the threads start at:
http://oss.sgi.com/pipermail/xfs/2012-May/019239.html
http://oss.sgi.com/pipermail/xfs/2012-May/019417.html

(Note: the box I referred to as "storage2" turned out to have a separate
hardware problem: it resets after a few days.)

Actually, thank you for reminding me of this. Looking back through these
prior postings, I noted at one point that transfers had locked up to the
point that even 'dd' couldn't read some blocks.  This points the finger away
from XFS and MD RAID and more towards the LSI driver/firmware or the drives
themselves.  I'm now using 12.04; when I next get a similar lockup I'll
check for that again.
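
The check itself is just a direct read of a raw member device, bypassing both
XFS and MD, along these lines (device name and sizes are arbitrary):

  # read straight from one member disk with O_DIRECT; if this also hangs,
  # the problem is below the filesystem and MD layers
  dd if=/dev/sdb of=/dev/null bs=1M count=256 iflag=direct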

Regards,

Brian.


