[Gluster-users] XFS and MD RAID
Stephan von Krawczynski
skraw at ithnet.com
Mon Sep 10 09:03:41 UTC 2012
On Mon, 10 Sep 2012 09:39:18 +0100
Brian Candler <B.Candler at pobox.com> wrote:
> On Mon, Sep 10, 2012 at 09:29:25AM +0800, Jack Wang wrote:
> > below patch should fix your bug.
> Thank you Jack - that was a very quick response! I'm building a new kernel
> with this patch now and will report back.
> However, I think the existence of this bug suggests that Linux with software
> RAID is unsuitable for production use. There has obviously been no testing
> of basic critical functionality like hot-plugging drives, and serious
> regressions are introduced into supposedly "stable" kernels.
Brian, please rethink this. What you call a stable kernel (Ubuntu 3.2.0-30)
is in fact quite old.
If you want to test MD RAID you should really use a stock kernel from
kernel.org (probably 3.4.10).
_That_ is the latest stable kernel.
> So I'm now on the lookout for a 24-port SATA RAID controller with good Linux
> support. What are my options?
> Googling I have found:
> * 3ware 9650SE-24
> * Areca ARC-1280ML
> * LSI MegaRAID 9280-24i (newer SAS/SATA)
> * Areca ARC-1882ix-24 (newer SAS/SATA)
I can tell you that I just had to throw away Areca because it had exactly the
problem you don't like: drives going offline for no good reason.
I went back to MD with the very same drives in the very same box, online,
using the onboard SATA (6 ports), which works flawlessly.
My impression is that Areca has trouble with the newer, bigger drives of 2 TB
and above; the 1 TB drives worked fine.
I have some 3ware controllers too, but have not tested them with 2 TB drives
so far.
I must say I would probably drop them anyway, simply because current
processors are faster with MD. I just built a box with a Xeon E3-1280v2 and a
4x2 TB MD RAID, and I am impressed by the performance.
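For reference, a minimal sketch of how such a 4x2 TB MD array could be set up
and checked with mdadm. The device names (/dev/sd[b-e]), the array name
(/dev/md0), the RAID level, and the mdadm.conf path are assumptions for
illustration, not details from my actual box:

```shell
# Create a 4-disk array (RAID 5 chosen here as an example; pick the
# level you actually want, and substitute your real device names).
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Watch the initial resync progress and inspect the array state.
cat /proc/mdstat
mdadm --detail /dev/md0

# Persist the array definition so it assembles on boot
# (path is the Debian/Ubuntu convention; adjust for your distro).
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

All of the above requires root and real block devices, and --create is
destructive, so double-check the device names before running it.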