[Gluster-users] XFS and MD RAID
Brian Candler
B.Candler at pobox.com
Sun Sep 9 09:22:02 UTC 2012
On Tue, Sep 04, 2012 at 01:41:42PM -0400, Brian Foster wrote:
> To follow up on this, I ran this workload for a couple days without a
> problem. I was able to configure a bunch of single disk raid0 volumes to
> put into an md raid0, so I'm testing that next.
And I've also run this workload on another test box here for over a week and
not been able to reproduce the problem :-(
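For reference, the md side of the arrangement described above is just a plain
stripe over the per-disk RAID0 volumes the controller exports. A rough sketch
follows; the device names are purely illustrative, not the exact commands used:

    # device names illustrative; each sdX is a single-disk RAID0 volume from the controller
    mdadm --create /dev/md0 --level=0 --raid-devices=24 /dev/sd[b-y]
    mkfs.xfs /dev/md0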
> If you do happen to reproduce the problem again, I would reiterate the
> suggestion to append that blocked task data to the thread over on the
> xfs list (re: my last post, it looks like some data was missing..?)
Yes, sorry, I haven't had time to dig around for the data until now.
The message in question was
http://oss.sgi.com/pipermail/xfs/2012-May/019472.html
and it turns out I had attached a truncated version of the dmesg output.
The full version is attached here.
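For anyone who wants to gather the same sort of blocked-task data, the usual
recipe is roughly the following (assuming a kernel with magic sysrq enabled;
the output filename is just an example):

    echo 1 > /proc/sys/kernel/sysrq    # enable magic sysrq if it isn't already
    echo w > /proc/sysrq-trigger       # dump blocked (D-state) tasks to the kernel log
    dmesg > blocked-tasks.txt          # capture the log for attaching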
Now: I should say it's not too late for us to ditch the LSI controllers, so
perhaps it's time for me to ask a different question.
Does anyone know of a good controller card (or cards) which will let us
connect 24 SATA drives to a Linux server in a rock-solid manner? I am
open to solutions which provide RAID on the controller card, and also to
plain HBAs which require software RAID.
It does need to be SATA, because we are looking to store the maximum volume
of archive data at the minimum cost, using 3 or 4TB drives. Read/write
performance is not critical in general; however, there will be times when we
hammer the array hard with bulk imports or reads of data, so the controller
needs to keep working happily under those conditions.
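To be concrete about the software RAID option: what I have in mind is a plain
HBA presenting all 24 drives to Linux, with md providing the redundancy and
XFS on top. The exact layout isn't settled, but the shape of it would be
something like this (device names and RAID level purely illustrative):

    # illustrative only: 24 drives in one RAID6 set, device names are placeholders
    mdadm --create /dev/md0 --level=6 --raid-devices=24 /dev/sd[b-y]
    mkfs.xfs /dev/md0

So the question is really which card will drive that arrangement reliably
under sustained load.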
Thanks,
Brian.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: storage3-sysreq.txt.gz
Type: application/x-gunzip
Size: 28530 bytes
Desc: not available
URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20120909/c2e45566/attachment.bin>