On 2017-02-28 04:01 PM, Lindsay Mathieson wrote:

> On 1 March 2017 at 09:20, Ernie Dunbar <maillist@lightspeed.ca> wrote:
>
>> Every node in the Gluster array has its RAID array configured as
>> RAID5, so I'd like to improve the performance on each node by
>> changing that to RAID0 instead.
>
> Hi Ernie - sorry, I saw your question before and meant to reply, but
> "stuff" kept happening ... :)
<div class="gmail_extra">Presuming you're running Replica 3 I
don't see any issues with converting from RAID5 to RAID0,
there should be quite a local performance boost and I would
think its actually safer - the rebuild times for RAID5 are
horrendous and a performance killer to boot. With RAID0 you'll
loose the whole brick if you lose a disk but depending on your
network, healing from the other nodes would probably be
quicker.<br>
<br>
</div>
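>
> If a RAID0 brick does die, recovery is just a brick replacement plus a
> heal from the surviving replicas. Roughly (an untested sketch - the
> volume name "gv0" and the brick paths are placeholders, not from your
> setup):
>
>     # After rebuilding the array and mounting a fresh brick filesystem,
>     # swap the dead brick for the new one and let Gluster resync it:
>     gluster volume replace-brick gv0 \
>         node1:/bricks/brick1 node1:/bricks/brick1-new commit force
>
>     # Kick off a full self-heal and watch its progress:
>     gluster volume heal gv0 full
>     gluster volume heal gv0 info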
<div class="gmail_extra">nb. What is your raid controller?
network setup?<br>
<br>
</div>
<div class="gmail_extra">Alternatively I believe the general
recommendation is to actually run all your disks in JBOD mode
and create a brick per disk, that way individual disk failures
won't effect the other bricks on the node. However that would
require the same number of disks per node.<br>
<br>
</div>
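>
> For illustration only (the hostnames, disk layout, and volume name are
> made up), a brick per JBOD disk across three nodes might look like:
>
>     # 2 disks per node, replicated 3 ways across nodes, giving a
>     # 2 x 3 distributed-replicated volume:
>     gluster volume create gv0 replica 3 \
>         node1:/bricks/disk1/brick node2:/bricks/disk1/brick \
>         node3:/bricks/disk1/brick \
>         node1:/bricks/disk2/brick node2:/bricks/disk2/brick \
>         node3:/bricks/disk2/brick
>     gluster volume start gv0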
<div class="gmail_extra">For myself, I actually run 4 disks per
node, setup as RAID10 with ZFS. One ZFS Pool and Brick per
node. I use it for VM Hosting though which is quite a
different usecase, a few very large files.<br>
</div>
<br>
</div>
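>
> That pool layout is just two mirrored pairs striped together.
> Something like (a sketch only - the pool and device names are
> placeholders):
>
>     # Two mirrored pairs striped together = RAID10:
>     zpool create tank mirror sda sdb mirror sdc sdd
>
>     # One dataset on the pool to hold the node's brick:
>     zfs create tank/brick1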
We're running Gluster on 3 Dell 2950s, using the PERC6i controller.
There's only one brick so far, and I think I'm going to have to keep it
that way, although some of the data on that brick isn't mail - VM
hosting is something we'll be doing with this very soon.

Considering that this is our mail store, I don't think that setting up
JBOD and a brick per disk is really reasonable, or we'd have to go
around creating new e-mail accounts on random Gluster shares - never
mind the tiny detail of what happens when any set of mailboxes outgrows
the brick. That makes this more efficient scheme look highly
impractical for us.