Dear Artem,

could you also provide some information regarding your xfs filesystem,
i.e. the output of xfs_info for your block device?
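For example (the mount point below is taken from the volume info quoted
further down; adjust it to whichever volume is in question):

    xfs_info /mnt/SNIP_block1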
Regards,
Felix

On 30/04/2020 17:27, Artem Russakovskii wrote:
> Hi Strahil, in the original email I included the times for both the
> first and subsequent reads on the FUSE-mounted gluster volume, as well
> as on the xfs filesystem the gluster data resides on (this is the
> brick, right?).
<div dir="auto"><br>
<div class="gmail_quote" dir="auto">
<div dir="ltr" class="gmail_attr">On Thu, Apr 30, 2020, 7:44
AM Strahil Nikolov <<a
href="mailto:hunter86_bg@yahoo.com"
moz-do-not-send="true">hunter86_bg@yahoo.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">On April
30, 2020 4:24:23 AM GMT+03:00, Artem Russakovskii <<a
href="mailto:archon810@gmail.com" target="_blank"
rel="noreferrer" moz-do-not-send="true">archon810@gmail.com</a>>
wrote:<br>
>>> Hi all,
>>>
>>> We have 500GB and 10TB 4x1 replicate, xfs-based gluster volumes, and
>>> the 10TB one especially is extremely slow for certain operations (and
>>> has been since gluster 3.x, when we started). We're currently on 5.13.
>>>
>>> The number of files isn't even what I'd consider that large - under
>>> 100k per dir.
>>>
>>> Here are some numbers to look at.
>>>
>>> On the gluster volume, in a dir of 45k files:
>>>
>>> The first time:
>>>
>>> time find | wc -l
>>> 45423
>>> real    8m44.819s
>>> user    0m0.459s
>>> sys     0m0.998s
>>>
>>> And again:
>>>
>>> time find | wc -l
>>> 45423
>>> real    0m34.677s
>>> user    0m0.291s
>>> sys     0m0.754s
>>>
>>> If I run the same operation on the xfs block device itself:
>>>
>>> The first time:
>>>
>>> time find | wc -l
>>> 45423
>>> real    0m13.514s
>>> user    0m0.144s
>>> sys     0m0.501s
>>>
>>> And again:
>>>
>>> time find | wc -l
>>> 45423
>>> real    0m0.197s
>>> user    0m0.088s
>>> sys     0m0.106s
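(For what it's worth: the gap between the first and second run above is
largely caching. One way to get reproducible cold-cache numbers is to
drop the kernel page/dentry/inode caches between runs, as root:

    sync; echo 3 > /proc/sys/vm/drop_caches

Note this only clears the kernel caches, not gluster's own userspace
caches such as md-cache.)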
>>>
>>> I'd expect a performance difference here, but just as it was several
>>> years ago when we started with gluster, it's still huge, and simple
>>> file listings are incredibly slow.
>>>
>>> At the time, the team was looking into some optimizations, but I'm
>>> not sure that has happened.
>>>
>>> What can we do to try to improve performance?
>>>
>>> Thank you.
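(One way to see where the time goes, if this hasn't been profiled
already, is gluster's built-in profiler; the volume name below is taken
from the volume info that follows:

    gluster volume profile SNIP_data1 start
    # ... run the slow find ...
    gluster volume profile SNIP_data1 info

That prints per-brick FOP latencies (LOOKUP, READDIRP, etc.), which
usually shows whether the time is spent on network round-trips or on the
bricks themselves.)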
>>>
>>> Some setup values follow.
>>>
>>> xfs_info /mnt/SNIP_block1
>>> meta-data=/dev/sdc             isize=512    agcount=103, agsize=26214400 blks
>>>          =                     sectsz=512   attr=2, projid32bit=1
>>>          =                     crc=1        finobt=1, sparse=0, rmapbt=0
>>>          =                     reflink=0
>>> data     =                     bsize=4096   blocks=2684354560, imaxpct=25
>>>          =                     sunit=0      swidth=0 blks
>>> naming   =version 2            bsize=4096   ascii-ci=0, ftype=1
>>> log      =internal log         bsize=4096   blocks=51200, version=2
>>>          =                     sectsz=512   sunit=0 blks, lazy-count=1
>>> realtime =none                 extsz=4096   blocks=0, rtextents=0
>>>
>>> Volume Name: SNIP_data1
>>> Type: Replicate
>>> Volume ID: SNIP
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x 4 = 4
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: nexus2:/mnt/SNIP_block1/SNIP_data1
>>> Brick2: forge:/mnt/SNIP_block1/SNIP_data1
>>> Brick3: hive:/mnt/SNIP_block1/SNIP_data1
>>> Brick4: citadel:/mnt/SNIP_block1/SNIP_data1
>>> Options Reconfigured:
>>> cluster.quorum-count: 1
>>> cluster.quorum-type: fixed
>>> network.ping-timeout: 5
>>> network.remote-dio: enable
>>> performance.rda-cache-limit: 256MB
>>> performance.readdir-ahead: on
>>> performance.parallel-readdir: on
>>> network.inode-lru-limit: 500000
>>> performance.md-cache-timeout: 600
>>> performance.cache-invalidation: on
>>> performance.stat-prefetch: on
>>> features.cache-invalidation-timeout: 600
>>> features.cache-invalidation: on
>>> cluster.readdir-optimize: on
>>> performance.io-thread-count: 32
>>> server.event-threads: 4
>>> client.event-threads: 4
>>> performance.read-ahead: off
>>> cluster.lookup-optimize: on
>>> performance.cache-size: 1GB
>>> cluster.self-heal-daemon: enable
>>> transport.address-family: inet
>>> nfs.disable: on
>>> performance.client-io-threads: on
>>> cluster.granular-entry-heal: enable
>>> cluster.data-self-heal-algorithm: full
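(If it helps the comparison, the full effective option set, defaults
included, can be dumped with:

    gluster volume get SNIP_data1 all

which makes it easier to spot readdir-related settings still at their
defaults.)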
>>>
>>> Sincerely,
>>> Artem
>>>
>>> --
>>> Founder, Android Police <http://www.androidpolice.com>, APK Mirror
>>> <http://www.apkmirror.com/>, Illogical Robot LLC
>>> beerpla.net | @ArtemR <http://twitter.com/ArtemR>
>>
>> Hi Artem,
>>
>> Have you checked the same at the brick level? How big is the
>> difference?
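(For reference, checking "the same at the brick level" means running the
test directly against a brick directory on one of the servers, bypassing
the FUSE mount, e.g.:

    cd /mnt/SNIP_block1/SNIP_data1/<same dir> && time find | wc -l

)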
>>
>> Best Regards,
>> Strahil Nikolov