<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Mar 6, 2017 at 3:21 PM, Arman Khalatyan <span dir="ltr"><<a href="mailto:arm2arm@gmail.com" target="_blank">arm2arm@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span class="">On Fri, Mar 3, 2017 at 7:00 PM, Darrell Budic <span dir="ltr"><<a href="mailto:budic@onholyground.com" target="_blank">budic@onholyground.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>Why are you using an arbitrator if all your HW configs are identical? I’d use a true replica 3 in this case.<div><br></div></div></blockquote><div><br></div></span><div>This
was just the GUI suggestion: when I was creating the cluster it asked for 3 hosts, and I did not even know that an arbiter does not keep the data.<br></div><div>I am not sure whether I can change the volume type to a full replica 3 on the running system; probably I would need to destroy the whole cluster.<br></div><span class=""><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div>Also
in my experience with gluster and vm hosting, the ZIL/slog degrades
write performance unless it’s a truly dedicated disk. But I have 8
spinners backing my ZFS volumes, so trying to share a SATA disk wasn’t a good ZIL. If yours is a dedicated SAS disk, keep it; if it’s SATA, try testing
without it.</div><div><br></div></div></blockquote><div><br></div></span><div>We
have also been running several large systems with ZFS quite successfully over the years. The idea was to use ZFS + GlusterFS for an HA solution.<br></div><span class=""><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div>You
don’t have compression enabled on your zfs volume, and I’d recommend
enabling relatime on it. Depending on the amount of RAM in these boxes,
you probably want to limit your zfs arc size to 8G or so (1/4 total ram
or less). Gluster just works volumes hard during a rebuild; what’s the problem you’re seeing? If it’s affecting your VMs, using sharding and
tuning client & server threads can help avoid interruptions to your
VMs while repairs are running. If you really need to limit it, you can
use cgroups to keep it from hogging all the CPU, but it takes longer to
heal, of course. There are a couple older posts and blogs about it, if
you go back a while.</div></div></blockquote><div><br></div></span><div>Yes I
saw that GlusterFS is CPU/RAM hungry!!! 99% of all 16 cores were used just for healing 500GB of VM disks. It took almost forever compared with NFS storage (single disk + ZFS SSD cache; for sure one pays a penalty for the HA :) )<br></div></div></div></div></blockquote><div><br></div><div>Is your gluster volume configured to use the sharding feature? Could you provide the output of gluster vol info?<br> <br></div>
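<div>For reference, a quick way to check whether sharding is on (the volume name "datavol" below is just a placeholder, substitute your own):<br><br># sharding is enabled if this shows features.shard: on<br>gluster volume info datavol | grep shard<br>gluster volume get datavol features.shard<br></div><div><br></div><div>And regarding converting the arbiter volume to a full replica 3 without rebuilding the cluster, something along these lines should work (an untested sketch; hostnames and brick paths are placeholders, make sure all heals are finished and you have backups first):<br><br># drop the arbiter brick, temporarily going back to plain replica 2<br>gluster volume remove-brick datavol replica 2 host3:/gluster/arbiter/brick force<br># add a real data brick on the third host, making it a full replica 3<br>gluster volume add-brick datavol replica 3 host3:/gluster/data/brick<br># trigger a full heal so the new brick gets populated<br>gluster volume heal datavol full<br></div><div><br></div><div>For the ARC limit Darrell mentioned, on ZFS on Linux that would be roughly (8 GiB here, adjust to your RAM; takes effect after a module reload or reboot):<br><br># /etc/modprobe.d/zfs.conf<br>options zfs zfs_arc_max=8589934592<br></div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">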
<br>_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-users</a><br></blockquote></div><br></div></div>