<div dir="ltr">getfattr -d -m. -e hex .<br># file: .<br>trusted.afr.SNIP_data1-client-0=0x000000000000000000000000<br>trusted.afr.dirty=0x000000000000000000000000<br>trusted.gfid=0x44b2db00267a47508b2a8a921f20e0f5<br>trusted.glusterfs.dht=0x000000010000000000000000ffffffff<br>trusted.glusterfs.dht.mds=0x00000000<br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><br>Sincerely,<br>Artem<br><br>--<br>Founder, <a href="http://www.androidpolice.com" target="_blank">Android Police</a>, <a href="http://www.apkmirror.com/" style="font-size:12.8px" target="_blank">APK Mirror</a><span style="font-size:12.8px">, Illogical Robot LLC</span></div><div dir="ltr"><a href="http://beerpla.net/" target="_blank">beerpla.net</a> | <a href="http://twitter.com/ArtemR" target="_blank">@ArtemR</a><br></div></div></div></div></div></div></div></div></div></div></div></div></div><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Apr 30, 2020 at 9:05 AM Felix Kölzow &lt;<a href="mailto:felix.koelzow@gmx.de">felix.koelzow@gmx.de</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
  
    
  
  <div>
    <p>Dear Artem,</p>
    <p>Sorry for the noise, since you already provided the xfs_info. <br>
    </p>
    <p>Could you provide the output of <br>
    </p>
    <p><br>
    </p>
    <pre>getfattr -d -m. -e hex /DirectoryPathOfInterest_onTheBrick/


Felix
</pre>
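<p>For reference, the trusted.afr.* values that getfattr prints (like the ones Artem posted) are 12 bytes each, conventionally read as three big-endian 32-bit pending-heal counters (data, metadata, entry). A minimal decoding sketch; the helper name is illustrative, not part of gluster:</p>

```python
# Decode a gluster trusted.afr.* xattr value: 12 bytes holding three
# big-endian 32-bit pending-operation counters (data, metadata, entry).
# Hypothetical helper for illustration only.
def decode_afr(hex_value: str):
    raw = bytes.fromhex(hex_value.removeprefix("0x"))
    assert len(raw) == 12, "trusted.afr values are 12 bytes"
    names = ("data", "metadata", "entry")
    return {n: int.from_bytes(raw[i * 4:(i + 1) * 4], "big")
            for i, n in enumerate(names)}

# All-zero counters, as in Artem's output, mean no pending self-heal:
print(decode_afr("0x000000000000000000000000"))
```

All zeroes is the healthy case; a non-zero counter would indicate pending heals against the corresponding brick.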
    <div>On 30/04/2020 18:01, Felix Kölzow
      wrote:<br>
    </div>
    <blockquote type="cite">
      
      <p>Dear Artem,</p>
      <p>Can you also provide some information w.r.t. your xfs
        filesystem, i.e. the xfs_info of your block device?</p>
      <p><br>
      </p>
      <p>Regards,</p>
      <p>Felix<br>
      </p>
      <div>On 30/04/2020 17:27, Artem
        Russakovskii wrote:<br>
      </div>
      <blockquote type="cite">
        
        <div dir="auto">
          <div>Hi Strahil, in the original email I included the times
            for the first and subsequent reads on both the FUSE-mounted
            gluster volume and the xfs filesystem the gluster
            data resides on (this is the brick, right?). </div>
          <div dir="auto"><br>
            <div class="gmail_quote" dir="auto">
              <div dir="ltr" class="gmail_attr">On Thu, Apr 30, 2020,
                7:44 AM Strahil Nikolov &lt;<a href="mailto:hunter86_bg@yahoo.com" target="_blank">hunter86_bg@yahoo.com</a>&gt;
                wrote:<br>
              </div>
              <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On
                April 30, 2020 4:24:23 AM GMT+03:00, Artem Russakovskii
                &lt;<a href="mailto:archon810@gmail.com" rel="noreferrer" target="_blank">archon810@gmail.com</a>&gt;
                wrote:<br>
                &gt;Hi all,<br>
                &gt;<br>
                &gt;We have 500GB and 10TB 4x1 replicate xfs-based gluster volumes,<br>
                &gt;and the 10TB one especially is extremely slow to do certain<br>
                &gt;things with (and has been since gluster 3.x when we started).<br>
                &gt;We&#39;re currently on 5.13.<br>
                &gt;<br>
                &gt;The number of files isn&#39;t even what I&#39;d consider that great -<br>
                &gt;under 100k per dir.<br>
                &gt;<br>
                &gt;Here are some numbers to look at:<br>
                &gt;<br>
                &gt;On gluster volume in a dir of 45k files:<br>
                &gt;The first time<br>
                &gt;<br>
                &gt;time find | wc -l<br>
                &gt;45423<br>
                &gt;real    8m44.819s<br>
                &gt;user    0m0.459s<br>
                &gt;sys     0m0.998s<br>
                &gt;<br>
                &gt;And again<br>
                &gt;<br>
                &gt;time find | wc -l<br>
                &gt;45423<br>
                &gt;real    0m34.677s<br>
                &gt;user    0m0.291s<br>
                &gt;sys     0m0.754s<br>
                &gt;<br>
                &gt;<br>
                &gt;If I run the same operation on the xfs block device
                itself:<br>
                &gt;The first time<br>
                &gt;<br>
                &gt;time find | wc -l<br>
                &gt;45423<br>
                &gt;real    0m13.514s<br>
                &gt;user    0m0.144s<br>
                &gt;sys     0m0.501s<br>
                &gt;<br>
                &gt;And again<br>
                &gt;<br>
                &gt;time find | wc -l<br>
                &gt;45423<br>
                &gt;real    0m0.197s<br>
                &gt;user    0m0.088s<br>
                &gt;sys     0m0.106s<br>
                &gt;<br>
                &gt;<br>
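<p>For context, the quoted timings work out to a cold-cache walk roughly 39x slower and a warm-cache walk roughly 176x slower on the FUSE mount than on the brick; a quick sketch of the arithmetic:</p>

```python
# Slowdown of the FUSE-mounted gluster volume vs. the xfs brick,
# using the times quoted in the thread.
gluster_cold, gluster_warm = 8 * 60 + 44.819, 34.677   # seconds
xfs_cold, xfs_warm = 13.514, 0.197                     # seconds

print(f"cold walk: {gluster_cold / xfs_cold:.0f}x slower on gluster")
print(f"warm walk: {gluster_warm / xfs_warm:.0f}x slower on gluster")
```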
                &gt;I&#39;d expect a performance difference here, but just as it was<br>
                &gt;several years ago when we started with gluster, it&#39;s still<br>
                &gt;huge, and simple file listings are incredibly slow.<br>
                &gt;<br>
                &gt;At the time, the team was looking to do some optimizations,<br>
                &gt;but I&#39;m not sure this has happened.<br>
                &gt;<br>
                &gt;What can we do to try to improve performance?<br>
                &gt;<br>
                &gt;Thank you.<br>
                &gt;<br>
                &gt;<br>
                &gt;<br>
                &gt;Some setup values follow.<br>
                &gt;<br>
                &gt;xfs_info /mnt/SNIP_block1<br>
                &gt;meta-data=/dev/sdc               isize=512    agcount=103, agsize=26214400 blks<br>
                &gt;         =                       sectsz=512   attr=2, projid32bit=1<br>
                &gt;         =                       crc=1        finobt=1, sparse=0, rmapbt=0<br>
                &gt;         =                       reflink=0<br>
                &gt;data     =                       bsize=4096   blocks=2684354560, imaxpct=25<br>
                &gt;         =                       sunit=0      swidth=0 blks<br>
                &gt;naming   =version 2              bsize=4096   ascii-ci=0, ftype=1<br>
                &gt;log      =internal log           bsize=4096   blocks=51200, version=2<br>
                &gt;         =                       sectsz=512   sunit=0 blks, lazy-count=1<br>
                &gt;realtime =none                   extsz=4096   blocks=0, rtextents=0<br>
                &gt;<br>
                &gt;Volume Name: SNIP_data1<br>
                &gt;Type: Replicate<br>
                &gt;Volume ID: SNIP<br>
                &gt;Status: Started<br>
                &gt;Snapshot Count: 0<br>
                &gt;Number of Bricks: 1 x 4 = 4<br>
                &gt;Transport-type: tcp<br>
                &gt;Bricks:<br>
                &gt;Brick1: nexus2:/mnt/SNIP_block1/SNIP_data1<br>
                &gt;Brick2: forge:/mnt/SNIP_block1/SNIP_data1<br>
                &gt;Brick3: hive:/mnt/SNIP_block1/SNIP_data1<br>
                &gt;Brick4: citadel:/mnt/SNIP_block1/SNIP_data1<br>
                &gt;Options Reconfigured:<br>
                &gt;cluster.quorum-count: 1<br>
                &gt;cluster.quorum-type: fixed<br>
                &gt;network.ping-timeout: 5<br>
                &gt;network.remote-dio: enable<br>
                &gt;performance.rda-cache-limit: 256MB<br>
                &gt;performance.readdir-ahead: on<br>
                &gt;performance.parallel-readdir: on<br>
                &gt;network.inode-lru-limit: 500000<br>
                &gt;performance.md-cache-timeout: 600<br>
                &gt;performance.cache-invalidation: on<br>
                &gt;performance.stat-prefetch: on<br>
                &gt;features.cache-invalidation-timeout: 600<br>
                &gt;features.cache-invalidation: on<br>
                &gt;cluster.readdir-optimize: on<br>
                &gt;performance.io-thread-count: 32<br>
                &gt;server.event-threads: 4<br>
                &gt;client.event-threads: 4<br>
                &gt;performance.read-ahead: off<br>
                &gt;cluster.lookup-optimize: on<br>
                &gt;performance.cache-size: 1GB<br>
                &gt;cluster.self-heal-daemon: enable<br>
                &gt;transport.address-family: inet<br>
                &gt;nfs.disable: on<br>
                &gt;performance.client-io-threads: on<br>
                &gt;cluster.granular-entry-heal: enable<br>
                &gt;cluster.data-self-heal-algorithm: full<br>
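<p>With a tuning set like this, one low-overhead way to see where the listing time actually goes is gluster's built-in FOP profiler. A sketch of the commands (ops fragment, run on a server node; assumes the volume name above):</p>

```shell
# Enable per-brick FOP latency accounting, reproduce the slow listing,
# then inspect the stats and switch profiling back off.
gluster volume profile SNIP_data1 start
# ... run "time find | wc -l" on the FUSE mount ...
gluster volume profile SNIP_data1 info
gluster volume profile SNIP_data1 stop
```

The info output breaks latency down per FOP (LOOKUP, READDIRP, etc.) per brick, which helps distinguish network round-trip cost from brick-side slowness.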
                &gt;<br>
                &gt;Sincerely,<br>
                &gt;Artem<br>
                &gt;<br>
                &gt;--<br>
                &gt;Founder, Android Police &lt;<a href="http://www.androidpolice.com" rel="noreferrer
                  noreferrer" target="_blank">http://www.androidpolice.com</a>&gt;,
                APK Mirror<br>
                &gt;&lt;<a href="http://www.apkmirror.com/" rel="noreferrer noreferrer" target="_blank">http://www.apkmirror.com/</a>&gt;,
                Illogical Robot LLC<br>
                &gt;<a href="http://beerpla.net" rel="noreferrer
                  noreferrer" target="_blank">beerpla.net</a>
                | @ArtemR &lt;<a href="http://twitter.com/ArtemR" rel="noreferrer noreferrer" target="_blank">http://twitter.com/ArtemR</a>&gt;<br>
                <br>
                Hi Artem,<br>
                <br>
                Have you checked the same on brick level ? How big is
                the difference ?<br>
                <br>
                Best Regards,<br>
                Strahil Nikolov<br>
              </blockquote>
            </div>
          </div>
        </div>
        <br>
        <fieldset></fieldset>
        <pre>________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: <a href="https://bluejeans.com/441850968" target="_blank">https://bluejeans.com/441850968</a>

Gluster-users mailing list
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a>
</pre>
      </blockquote>
    </blockquote>
  </div>

</blockquote></div>