[Gluster-users] XFS and MD RAID

Brian Foster bfoster at redhat.com
Fri Sep 7 17:27:27 UTC 2012


On 09/04/2012 01:41 PM, Brian Foster wrote:
> On 08/29/2012 12:06 PM, Brian Foster wrote:
>> On 08/29/2012 10:26 AM, Brian Candler wrote:
>>> On Wed, Aug 29, 2012 at 08:47:22AM -0400, Brian Foster wrote:
>> ...
>>>
>>> Running a couple of concurrent instances of
>>>
>>>   while [ 1 ]; do bonnie++ -d /mnt/point -s 16384k -n 98:800k:500k:1000; done
>>>
>>> was enough to make it fall over for me, when the underlying filesystem was
>>> XFS, but not with ext4 or btrfs.  This was on a system with 24 disks: 16 on
>>> an LSI 2116 controller and 8 on an LSI 2008.  It was MD RAID0:
>>>
>>>   mdadm --create /dev/md/scratch -n 24 -c 1024 -l raid0 /dev/sd{b..y}
>>>   mkfs.xfs -n size=16384 /dev/md/scratch
>>>   mount -o inode64 /dev/md/scratch /mnt/point
>>>
>>
>> Thanks. I didn't see an obvious way to pass physical disks through in
>> the interface I have, so I set up a hardware raid0 and ran a couple of
>> instances of bonnie++. This may not be close enough to your workload,
>> but it can't hurt to try.
>>
> 
> To follow up on this, I ran this workload for a couple of days without
> a problem. I was able to configure a bunch of single-disk raid0 volumes
> to put into an md raid0, so I'm testing that next.
> 

To follow up again, the MD test has been running for a few days now
without incident as well. I think this test is as close to your
configuration as I'm going to get with our hardware; a rough sketch of
the setup is below.
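
For reference, the MD side of the test looks roughly like the
following. The device names and device count are placeholders for
whatever the controller actually exposes here, and the mkfs/mount
options simply mirror the ones quoted above; the bonnie++ loop is the
same one from the original reproducer:

  # each /dev/sdX below is a single-disk hardware raid0 volume exported
  # by the controller (names and count are illustrative)
  mdadm --create /dev/md/test -n 12 -c 1024 -l raid0 /dev/sd{b..m}
  mkfs.xfs -n size=16384 /dev/md/test
  mount -o inode64 /dev/md/test /mnt/test

  # two concurrent bonnie++ instances, as in the original workload
  for i in 1 2; do
      while true; do
          bonnie++ -d /mnt/test -s 16384k -n 98:800k:500k:1000
      done &
  done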

Brian

> If you do happen to reproduce the problem again, I would reiterate the
> suggestion to append that blocked task data to the thread over on the
> xfs list (re: my last post, it looks like some data was missing...?),
> as we might get a more conclusive analysis of the state of the
> filesystem at the point of the hang.
> 
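
(For anyone else hitting this: the blocked task data in question is
what the kernel logs when tasks get stuck in uninterruptible sleep. If
nothing has shown up in the logs by the time the hang is noticed, a
sysrq 'w' dump will print the blocked tasks, roughly:

  echo 1 > /proc/sys/kernel/sysrq     # enable sysrq if it isn't already
  echo w > /proc/sysrq-trigger        # dump tasks in uninterruptible sleep
  dmesg | tail -n 200                 # collect the resulting stack traces

This assumes a kernel built with sysrq support, which is the common
case.)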
> Brian



