<div dir="ltr"><div>Have you tried with:<br><br>performance.strict-o-direct: off<br>performance.strict-write-ordering: off<br><br></div>They can be changed dynamically.<br><br></div><div class="gmail_extra"><br><div class="gmail_quote">On 20 June 2017 at 17:21, Sahina Bose <span dir="ltr"><<a href="mailto:sabose@redhat.com" target="_blank">sabose@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">[Adding gluster-users]<br><div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot <span dir="ltr"><<a href="mailto:bootc@bootc.net" target="_blank">bootc@bootc.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi folks,<br>
<br>
I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10<br>
configuration. My VMs run off a replica 3 arbiter 1 volume comprised of<br>
6 bricks, which themselves live on two SSDs in each of the servers (one<br>
brick per SSD). The bricks are XFS on LVM thin volumes straight onto the<br>
SSDs. Connectivity is 10G Ethernet.<br>
<br>
Performance within the VMs is pretty terrible. I experience very low<br>
throughput and random IO is really bad: it feels like a latency issue.<br>
On my oVirt nodes the SSDs are not generally very busy. The 10G network<br>
seems to run without errors (iperf3 gives bandwidth measurements of >=<br>
9.20 Gbits/sec between the three servers).<br>
<br>
To put this into perspective: I was getting better behaviour from NFS4<br>
on a gigabit connection than I am with GlusterFS on 10G: that doesn't<br>
feel right at all.<br>
<br>
My volume configuration looks like this:<br>
<br>
Volume Name: vmssd<br>
Type: Distributed-Replicate<br>
Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 2 x (2 + 1) = 6<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: ovirt3:/gluster/ssd0_vmssd/brick<br>
Brick2: ovirt1:/gluster/ssd0_vmssd/brick<br>
Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)<br>
Brick4: ovirt3:/gluster/ssd1_vmssd/brick<br>
Brick5: ovirt1:/gluster/ssd1_vmssd/brick<br>
Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)<br>
Options Reconfigured:<br>
nfs.disable: on<br>
transport.address-family: inet6<br>
performance.quick-read: off<br>
performance.read-ahead: off<br>
performance.io-cache: off<br>
performance.stat-prefetch: off<br>
performance.low-prio-threads: 32<br>
network.remote-dio: off<br>
cluster.eager-lock: enable<br>
cluster.quorum-type: auto<br>
cluster.server-quorum-type: server<br>
cluster.data-self-heal-algorithm: full<br>
cluster.locking-scheme: granular<br>
cluster.shd-max-threads: 8<br>
cluster.shd-wait-qlength: 10000<br>
features.shard: on<br>
user.cifs: off<br>
storage.owner-uid: 36<br>
storage.owner-gid: 36<br>
features.shard-block-size: 128MB<br>
performance.strict-o-direct: on<br>
network.ping-timeout: 30<br>
cluster.granular-entry-heal: enable<br>
<br>
I would really appreciate some guidance on this to try to improve things<br>
because at this rate I will need to reconsider using GlusterFS altogether.<br></blockquote><div><br><br></div><div>Could you provide the gluster volume profile output while you're running your I/O tests?<br><br></div><div># gluster volume profile <volname> start<br></div><div>to start profiling<br><br></div><div># gluster volume profile <volname> info<br><br></div><div>for the profile output.<br></div><div> <br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
Cheers,<br>
Chris<span class="HOEnZb"><font color="#888888"><br>
<span class="m_337741932814777898HOEnZb"><font color="#888888"><br>
--<br>
Chris Boot<br>
<a href="mailto:bootc@bootc.net" target="_blank">bootc@bootc.net</a><br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman<wbr>/listinfo/users</a><br>
</font></span></font></span></blockquote></div><br></div></div></div>
<br>_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-users</a><br></blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature" data-smartmail="gmail_signature">Lindsay</div>
</div>
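For reference, the option changes and profiling workflow suggested in this thread can be combined into one command sequence. This is a sketch run on one of the gluster nodes, using the volume name `vmssd` from the `gluster volume info` output above; the option names come from the replies themselves.

```shell
# Toggle the suggested options at runtime; gluster volume options
# take effect dynamically, with no remount or volume restart needed.
gluster volume set vmssd performance.strict-o-direct off
gluster volume set vmssd performance.strict-write-ordering off

# Start collecting per-brick FOP and latency statistics.
gluster volume profile vmssd start

# ...run the I/O workload inside the VMs, then dump the counters:
gluster volume profile vmssd info

# Stop profiling when finished, since it adds some overhead while on.
gluster volume profile vmssd stop
```

The `info` output lists per-brick call counts and min/avg/max latency per FOP type, which is what the request for profile output above is after.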