<div dir="auto">Hi,<div dir="auto">We will look into the &quot; failed to get index&quot; error.</div><div dir="auto">It shouldn&#39;t affect the normal working. Do let us know if you face any other issues.</div><div dir="auto"><br></div><div dir="auto">Regards,</div><div dir="auto">Hari.</div></div><div class="gmail_extra"><br><div class="gmail_quote">On 02-Aug-2017 11:55 PM, &quot;Dmitri Chebotarov&quot; &lt;<a href="mailto:4dimach@gmail.com">4dimach@gmail.com</a>&gt; wrote:<br type="attribution"><blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Hello</div><div><br></div><div>I reattached hot tier to a new empty EC volume and started to copy data to the volume.</div><div>Good news is I can see files now on SSD bricks (hot tier) - &#39;find /path/to/brick -type f&#39; shows files, before &#39;find&#39; would only show dirs. </div><div><br></div><div>But I&#39;ve got a &#39;rebalance&#39; error in glusterd.log file after I attached hot tier. </div><div><br></div><div>[2017-08-02 14:09:01.489891] E [MSGID: 106062] [glusterd-utils.c:9182:<wbr>glusterd_volume_rebalance_use_<wbr>rsp_dict] 0-glusterd: failed to get index</div><div>The message &quot;E [MSGID: 106062] [glusterd-utils.c:9182:<wbr>glusterd_volume_rebalance_use_<wbr>rsp_dict] 0-glusterd: failed to get index&quot; repeated 10 times between [2017-08-02 14:09:01.489891] and [2017-08-02 14:09:01.545027]</div><div><br></div><div>This is output from &#39;rebalance status&#39; command:</div><div><br></div><div># gluster volume rebalance voldata3 status</div><div>                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s</div><div>                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------</div><div>                               localhost                0        0Bytes             0             0             0          in progress        0:0:0</div><div>                                   GFSRV18                0        0Bytes             0             0             0          in progress        0:0:0</div><div>                                   GFSRV20                0        0Bytes             0             0             0          in progress        0:0:0</div><div>                                   GFSRV21                0        0Bytes             0             0             0          in progress        0:0:0</div><div>                                   GFSRV23                0        0Bytes             0             0             0          in progress        0:0:0</div><div>                                   GFSRV17                0        0Bytes             0             0             0          in progress        0:0:0</div><div>                                   GFSRV24                0        0Bytes             0             0             0          in progress        0:0:0</div><div>                                   GFSRV16                0        0Bytes             0             0             0          in progress        0:0:0</div><div>                                   GFSRV15                0        0Bytes             0             0             0          in progress        0:0:0</div><div>                                   GFSRV14                0        0Bytes             0             0             0          in progress        0:0:0</div><div>                         
          GFSRV22                0        0Bytes             0             0             0          in progress        0:0:0</div><div>                                   GFSRV19                0        0Bytes             0             0             0          in progress        0:0:0</div><div>volume rebalance: voldata3: success</div><div><br></div><div>and &#39;tier status&#39; output:</div><div><br></div><div># gluster volume tier voldata3 status</div><div class="quoted-text"><div>Node                 Promoted files       Demoted files        Status</div><div>---------            ---------            ---------            ---------</div><div>localhost            0                    0                    in progress</div></div><div>GFSRV18                0                    0                    in progress</div><div>GFSRV20                0                    0                    in progress</div><div>GFSRV21                0                    0                    in progress</div><div>GFSRV23                0                    0                    in progress</div><div>GFSRV17                0                    0                    in progress</div><div>GFSRV24                0                    0                    in progress</div><div>GFSRV16                0                    0                    in progress</div><div>GFSRV15                0                    0                    in progress</div><div>GFSRV14                0                    0                    in progress</div><div>GFSRV22                0                    0                    in progress</div><div>GFSRV19                0                    0                    in progress</div><div>Tiering Migration Functionality: voldata3: success</div><div><br></div><div>&#39;vol status&#39; shows one active task:</div><div><br></div><div><div>Task Status of Volume voldata3</div><div>------------------------------<wbr>------------------------------<wbr>------------------</div><div>Task                 : Tier migration</div><div>ID                   : c4c33b04-2a1e-4e53-b1f5-<wbr>a96ec6d9d851</div><div>Status               : in progress</div></div><div><br></div><div><br></div><div>No errors reported in &#39;voldata3-tier-&lt;uuid&gt;.log&#39; file.</div><div><br></div><div>I&#39;ll keep monitoring it for few day. I expect to see some &#39;cooled&#39; data moving to &#39;cold tier&#39;.</div><div><br></div><div>Thank you.</div></div><div class="elided-text"><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Aug 1, 2017 at 1:32 AM, Hari Gowtham <span dir="ltr">&lt;<a href="mailto:hgowtham@redhat.com" target="_blank">hgowtham@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>
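
To keep an eye on it, I'll just re-run the tier status periodically,
something like this (the interval is arbitrary):

# watch -n 300 'gluster volume tier voldata3 status'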

Thank you.

On Tue, Aug 1, 2017 at 1:32 AM, Hari Gowtham <hgowtham@redhat.com> wrote:

Hi,

It looks like you missed attaching the log files.
Can you attach them?


On Mon, Jul 31, 2017 at 7:22 PM, Dmitri Chebotarov <4dimach@gmail.com> wrote:
> Hi
>
> At this point I have already detached the Hot Tier volume to run a
> rebalance. Many volume settings only take effect for new data (or after
> a rebalance), so I thought maybe this was the case with the Hot Tier as
> well. Once the rebalance finishes, I'll re-attach the hot tier.
>
> cluster.write-freq-threshold and cluster.read-freq-threshold control
> the number of times data is read/written before it is moved to the hot
> tier. In my case both are set to '2', so I didn't think I needed to
> disable performance.io-cache/quick-read as well. Will give it a try.
>
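> For reference, the thresholds were set with commands along these lines
> (using this volume's name, 'home', as in the info below):
>
> # gluster volume set home cluster.write-freq-threshold 2
> # gluster volume set home cluster.read-freq-threshold 2
>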
> Here is the volume info (no hot tier at this time)
>
> ~]# gluster v info home
>
> Volume Name: home
> Type: Disperse
> Volume ID: 4583a3cf-4deb-4707-bd0d-e7defcb1c39b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (8 + 4) = 12
> Transport-type: tcp
> Bricks:
> Brick1: MMR01:/rhgs/b0/data
> Brick2: MMR02:/rhgs/b0/data
> Brick3: MMR03:/rhgs/b0/data
> Brick4: MMR04:/rhgs/b0/data
> Brick5: MMR05:/rhgs/b0/data
> Brick6: MMR06:/rhgs/b0/data
> Brick7: MMR07:/rhgs/b0/data
> Brick8: MMR08:/rhgs/b0/data
> Brick9: MMR09:/rhgs/b0/data
> Brick10: MMR10:/rhgs/b0/data
> Brick11: MMR11:/rhgs/b0/data
> Brick12: MMR12:/rhgs/b0/data
> Options Reconfigured:
> diagnostics.client-log-level: CRITICAL
> cluster.write-freq-threshold: 2
> cluster.read-freq-threshold: 2
> features.record-counters: on
> nfs.disable: on
> performance.readdir-ahead: enable
> transport.address-family: inet
> client.event-threads: 4
> server.event-threads: 4
> cluster.lookup-optimize: on
> cluster.readdir-optimize: on
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 10000
> cluster.data-self-heal-algorithm: full
> features.cache-invalidation: on
> features.cache-invalidation-timeout: 600
> performance.stat-prefetch: on
> performance.cache-invalidation: on
> performance.md-cache-timeout: 600
> network.inode-lru-limit: 50000
> performance.write-behind-window-size: 1MB
> performance.client-io-threads: on
> performance.read-ahead: disable
> performance.cache-size: 256MB
> performance.io-thread-count: 16
> performance.strict-o-direct: on
> network.ping-timeout: 30
> network.remote-dio: disable
> user.cifs: off
> features.quota: on
> features.inode-quota: on
> features.quota-deem-statfs: on
>
> ~]# gluster v get home  performance.io-cache
> performance.io-cache                    on
>
> ~]# gluster v get home  performance.quick-read
> performance.quick-read                  on
>
> Thank you.
>
> On Mon, Jul 31, 2017 at 5:16 AM, Hari Gowtham <hgowtham@redhat.com> wrote:
>>
>> Hi,
>>
>> Before you try turning off the perf translators, can you send us the
>> following, so we can make sure nothing else has gone wrong?
>>
>> Can you send us the log files for tier (it would be better if you
>> attach the other logs too), the version of gluster you are using, the
>> client, and the output of:
>>
>> gluster v info
>> gluster v get v1 performance.io-cache
>> gluster v get v1 performance.quick-read
>>
>> Do send us this and then we will let you know what should be done, as
>> reads should also cause promotion.
>>
>>
>> On Mon, Jul 31, 2017 at 2:21 PM, Hari Gowtham <hgowtham@redhat.com> wrote:
>> > For the tier daemon to migrate files on reads, a few performance
>> > translators have to be turned off.
>> > By default, the quick-read and io-cache performance translators are
>> > turned on. You can turn them off so that files will be migrated on
>> > reads.
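>> >
>> > For example (using 'v1' as a placeholder volume name):
>> >
>> > # gluster volume set v1 performance.quick-read off
>> > # gluster volume set v1 performance.io-cache off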
>> >
>> > On Mon, Jul 31, 2017 at 11:34 AM, Hari Gowtham <hgowtham@redhat.com> wrote:
>> >> Hi,
>> >>
>> >> If it was just reads, then the tier daemon won't migrate the files
>> >> to the hot tier.
>> >> If you create a file or write to a file, that file will be made
>> >> available on the hot tier.
>> >>
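>> >> For example, a quick sanity check (the mount point and hot-brick
>> >> path below are placeholders):
>> >>
>> >> # echo test > /mnt/home/testfile      # write a file through the mount
>> >> # find /rhgs/hot/data -name testfile  # it should appear on a hot brick
>> >>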
>> >> On Mon, Jul 31, 2017 at 11:06 AM, Nithya Balachandran <nbalacha@redhat.com> wrote:
>> >>> Milind and Hari,
>> >>>
>> >>> Can you please take a look at this?
>> >>>
>> >>> Thanks,
>> >>> Nithya
>> >>>
>> >>> On 31 July 2017 at 05:12, Dmitri Chebotarov <4dimach@gmail.com> wrote:
>> >>>>
>> >>>> Hi
>> >>>>
>> >>>> I'm looking for advice on the hot tier feature - how can I tell if
>> >>>> the hot tier is working?
>> >>>>
>> >>>> I've attached a replicated-distributed hot tier to an EC volume.
>> >>>> Yet, I don't think it's working; at least, I don't see any files
>> >>>> directly on the bricks (only the folder structure). The 'status'
>> >>>> command shows all 0s and 'in progress' for all servers.
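>> >>>>
>> >>>> (For reference, the tier was attached with a command of this shape;
>> >>>> the SSD brick paths here are placeholders, not the real ones:
>> >>>> # gluster volume tier home attach replica 2 MMR01:/rhgs/ssd/data MMR02:/rhgs/ssd/data MMR03:/rhgs/ssd/data MMR04:/rhgs/ssd/data )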
>> >>>>
>> >>>> ~]# gluster volume tier home status
>> >>>> Node                 Promoted files       Demoted files        Status
>> >>>> ---------            ---------            ---------            ---------
>> >>>> localhost            0                    0                    in progress
>> >>>> MMR11                0                    0                    in progress
>> >>>> MMR08                0                    0                    in progress
>> >>>> MMR03                0                    0                    in progress
>> >>>> MMR02                0                    0                    in progress
>> >>>> MMR07                0                    0                    in progress
>> >>>> MMR06                0                    0                    in progress
>> >>>> MMR09                0                    0                    in progress
>> >>>> MMR12                0                    0                    in progress
>> >>>> MMR10                0                    0                    in progress
>> >>>> MMR05                0                    0                    in progress
>> >>>> MMR04                0                    0                    in progress
>> >>>> Tiering Migration Functionality: home: success
>> >>>>
>> >>>>
>> >>>> I have a folder with .yml files (Ansible) on the gluster volume,
>> >>>> which as I understand is 'cache friendly'.
>> >>>> No matter how many times I read the files, nothing is moved to the
>> >>>> hot tier bricks.
>> >>>>
>> >>>> Thank you.
>> >>>>
>> >>>
>> >>>
>> >>
>> >>
>> >>
>> >> --
>> >> Regards,
>> >> Hari Gowtham.
>> >
>> >
>> >
>> > --
>> > Regards,
>> > Hari Gowtham.
>>
>>
>>
>> --
>> Regards,
>> Hari Gowtham.
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users



--
Regards,
Hari Gowtham.