<div dir="ltr">Hi,<div><br></div><div><br></div><div>This value is an ongoing rough estimate based on the amount of data rebalance has migrated since it started. The values will cange as the rebalance progresses.</div><div>A few questions:</div><div><ol><li>How many files/dirs do you have on this volume? <br></li><li>What is the average size of the files?<br></li><li>What is the total size of the data on the volume?<br></li></ol></div><div><br></div><div>Can you send us the rebalance log?</div><div><br></div><div><br></div><div>Thanks,</div><div>Nithya</div></div><div class="gmail_extra"><br><div class="gmail_quote">On 30 April 2018 at 10:33, kiwizhang618 <span dir="ltr">&lt;<a href="mailto:kiwizhang618@gmail.com" target="_blank">kiwizhang618@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
A few questions (a couple of commands that may help answer them are sketched
after the list):

1. How many files/directories do you have on this volume?
2. What is the average size of the files?
3. What is the total size of the data on the volume?
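If it helps, something along these lines run against a FUSE mount of the
volume should answer 1 and 3 (the mount point /mnt/web is just a placeholder
for wherever the volume is mounted, and note the crawl itself can take a
while on a large volume):

    find /mnt/web -type f | wc -l    # number of files
    find /mnt/web -type d | wc -l    # number of directories
    df -h /mnt/web                   # total size and used space of the volume

The average file size is then roughly the used space divided by the file
count.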
Can you send us the rebalance log?
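Going by the args line in the log excerpt you pasted, it is
/var/log/glusterfs/web-rebalance.log on each node taking part in the
rebalance; something like

    gzip -c /var/log/glusterfs/web-rebalance.log > /tmp/web-rebalance-$(hostname).log.gz

on each node will give you compressed copies to attach.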

Thanks,
Nithya

On 30 April 2018 at 10:33, kiwizhang618 <kiwizhang618@gmail.com> wrote:
> I have hit a big problem: the cluster rebalance takes a long time after adding a new node.
>
> gluster volume rebalance web status
>                                     Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
>                                ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
>                                localhost              900        43.5MB          2232             0            69          in progress        0:36:49
>                                 gluster2             1052        39.3MB          4393             0          1052          in progress        0:36:49
> Estimated time left for rebalance to complete :     9919:44:34
> volume rebalance: web: success
>
> The rebalance log:
>
> [glusterfsd.c:2511:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.12.8 (args: /usr/sbin/glusterfs -s localhost --volfile-id rebalance/web --xlator-option *dht.use-readdirp=yes --xlator-option *dht.lookup-unhashed=yes --xlator-option *dht.assert-no-child-down=yes --xlator-option *replicate*.data-self-heal=off --xlator-option *replicate*.metadata-self-heal=off --xlator-option *replicate*.entry-self-heal=off --xlator-option *dht.readdir-optimize=on --xlator-option *dht.rebalance-cmd=1 --xlator-option *dht.node-uuid=d47ad89d-7979-4ede-9aba-e04f020bb4f0 --xlator-option *dht.commit-hash=3610561770 --socket-file /var/run/gluster/gluster-rebalance-bdef10eb-1c83-410c-8ad3-fe286450004b.sock --pid-file /var/lib/glusterd/vols/web/rebalance/d47ad89d-7979-4ede-9aba-e04f020bb4f0.pid -l /var/log/glusterfs/web-rebalance.log)
> [2018-04-30 04:20:45.100902] W [MSGID: 101002] [options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family' is deprecated, preferred is 'transport.address-family', continuing with correction
> [2018-04-30 04:20:45.103927] I [MSGID: 101190] [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
> [2018-04-30 04:20:55.191261] E [MSGID: 109039] [dht-common.c:3113:dht_find_local_subvol_cbk] 0-web-dht: getxattr err for dir [No data available]
> [2018-04-30 04:21:19.783469] E [MSGID: 109023] [dht-rebalance.c:2669:gf_defrag_migrate_single_file] 0-web-dht: Migrate file failed: /2018/02/x187f6596-36ac-45e6-bd7a-019804dfe427.jpg, lookup failed [Stale file handle]
> The message "E [MSGID: 109039] [dht-common.c:3113:dht_find_local_subvol_cbk] 0-web-dht: getxattr err for dir [No data available]" repeated 2 times between [2018-04-30 04:20:55.191261] and [2018-04-30 04:20:55.193615]
>
> The gluster volume info:
>
> Volume Name: web
> Type: Distribute
> Volume ID: bdef10eb-1c83-410c-8ad3-fe286450004b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/home/export/md3/brick
> Brick2: gluster1:/export/md2/brick
> Brick3: gluster2:/home/export/md3/brick
> Options Reconfigured:
> nfs.trusted-sync: on
> nfs.trusted-write: on
> cluster.rebal-throttle: aggressive
> features.inode-quota: off
> features.quota: off
> cluster.shd-wait-qlength: 1024
> transport.address-family: inet
> cluster.lookup-unhashed: auto
> performance.cache-size: 1GB
> performance.client-io-threads: on
> performance.write-behind-window-size: 4MB
> performance.io-thread-count: 8
> performance.force-readdirp: on
> performance.readdir-ahead: on
> cluster.readdir-optimize: on
> performance.high-prio-threads: 8
> performance.flush-behind: on
> performance.write-behind: on
> performance.quick-read: off
> performance.io-cache: on
> performance.read-ahead: off
> server.event-threads: 8
> cluster.lookup-optimize: on
> features.cache-invalidation: on
> features.cache-invalidation-timeout: 600
> performance.stat-prefetch: off
> performance.md-cache-timeout: 60
> network.inode-lru-limit: 90000
> diagnostics.brick-log-level: ERROR
> diagnostics.brick-sys-log-level: ERROR
> diagnostics.client-log-level: ERROR
> diagnostics.client-sys-log-level: ERROR
> cluster.min-free-disk: 20%
> cluster.self-heal-window-size: 16
> cluster.self-heal-readdir-size: 1024
> cluster.background-self-heal-count: 4
> cluster.heal-wait-queue-length: 128
> client.event-threads: 8
> performance.cache-invalidation: on
> nfs.disable: off
> nfs.acl: off
> cluster.brick-multiplex: disable
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users