[Gluster-users] Improving IOPS

David Gossage dgossage at carouselchecks.com
Sun Nov 6 12:28:00 UTC 2016


On Sun, Nov 6, 2016 at 3:24 AM, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:

> Il 06/11/2016 03:37, David Gossage ha scritto:
>
> The only thing you gain with RAIDZ1, I think, is maybe more usable space.
> Performance in general will not be as good, and whether the vdev is
> mirrored or RAIDZ1, neither can survive 2 drives failing in the same vdev. In
> most cases the mirrored layout will rebuild faster, with less impact during the
> rebuild. If you are already using a Gluster 3-node replicate setup, as VM best
> practices suggest, then you are already pretty well protected even if you lose
> the wrong 2 drives.
>
>
> OK, I'll try again. I'm *not* talking about a single RAIDZ1 for the whole
> server.
>
> Let's assume a 12-disk server, 4TB each. Raw space = 4TB * 12 = 48TB.
>
> You can do one of the following:
> 1) *a single RAIDZ10*, using all disks, made up of 6 RAIDZ1 mirrors.
> usable space = 4TB * 6 = 24TB
> 2) *6 RAIDZ1 mirrors*, each its own pool. usable space = 4TB * 6 = 24TB
>

I see, maybe you don't really mean raidz1 here.   Raidz1 usually refers to
"raid5"-type vdevs with at least 3 disks; otherwise, why pay the penalty of
tracking parity when you can have a mirrored pair?  So in your case you are
changing it from one zpool, as previously laid out, to multiple zpools, each
one being a single mirrored vdev pair of disks?

tank1
   mirror
      disk-a1
      disk-a2

tank2
   mirror
      disk-b1
      disk-b2

etc.....

as opposed to

tank1
   mirror
      disk-a1
      disk-a2
   mirror
      disk-b1
      disk-b2
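In zpool-create terms, the two layouts would look roughly like this (device names are placeholders, not from this thread):

```shell
# Multiple zpools, each a single mirrored pair (the second layout above)
zpool create tank1 mirror /dev/sda /dev/sdb
zpool create tank2 mirror /dev/sdc /dev/sdd
# ...one pool per pair, 6 pools for 12 disks

# versus one zpool striped across all mirror vdevs (RAID10-style)
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/se /dev/sdf
# ...add the remaining mirror vdevs the same way
```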

>
> You'll get the same usable space for both solution.
>
> Now you have gluster, so you have at least 2 more servers in "identical"
> configuration.
>
> With solution 1, you can lose only 1 disk from each pair. If you lose 2
> disks from the same pair, you lose the whole RAIDZ10 and you have
> to heal 24TB over the network.
>
> With solution 2, you can lose the same number of disks, but if you lose
> 1 whole mirror at once, you only have to heal that mirror over the network:
> just 4TB.
>
> * IOPS should be the same, as Gluster will 'aggregate' each pair into a
> single volume, much like a RAID10 does, but you get much more speed during
> a heal.
> * Resilvering time is the same, as ZFS has to resilver only the failed
> disk in both solutions.
>
> What I'm saying is to skip the "RAID0" part and use Gluster as the aggregator.
> It's much more secure and faster to recover from in case of multiple failures.
>

So moving from a replicated to a distributed-replicated model?  Or a
striped-distributed-replicated one?  What is the command or layout you would
use to get to the model you want?
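If I follow the proposal, maybe something like this is what you mean (hostnames, brick paths, and the volume name are placeholders, not from this thread):

```shell
# Sketch only: 3 servers, each with 6 single-mirror zpools (tank1..tank6),
# one brick per pool. Bricks are listed in replica sets of 3, so Gluster
# distributes files across the sets and replicates within each set.
gluster volume create gv0 replica 3 \
  server1:/tank1/brick server2:/tank1/brick server3:/tank1/brick \
  server1:/tank2/brick server2:/tank2/brick server3:/tank2/brick \
  server1:/tank3/brick server2:/tank3/brick server3:/tank3/brick
# ...continue for tank4..tank6, then:
gluster volume start gv0
```

With that layout, losing a whole mirrored pair would mean healing only the one affected 4TB brick rather than the entire pool.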

>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>