Hello

I reattached the hot tier to a new, empty EC volume and started copying data to the volume.

The good news is that I can now see files on the SSD bricks (hot tier) - 'find /path/to/brick -type f' shows files, whereas before 'find' would only show directories.

But I got a 'rebalance' error in the glusterd.log file after I attached the hot tier:

[2017-08-02 14:09:01.489891] E [MSGID: 106062] [glusterd-utils.c:9182:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index
The message "E [MSGID: 106062] [glusterd-utils.c:9182:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index" repeated 10 times between [2017-08-02 14:09:01.489891] and [2017-08-02 14:09:01.545027]

This is the output of the 'rebalance status' command:

# gluster volume rebalance voldata3 status
     Node   Rebalanced-files   size     scanned   failures   skipped   status        run time in h:m:s
---------   ----------------   ------   -------   --------   -------   -----------   ------------------
localhost   0                  0Bytes   0         0          0         in progress   0:0:0
GFSRV18     0                  0Bytes   0         0          0         in progress   0:0:0
GFSRV20     0                  0Bytes   0         0          0         in progress   0:0:0
GFSRV21     0                  0Bytes   0         0          0         in progress   0:0:0
GFSRV23     0                  0Bytes   0         0          0         in progress   0:0:0
GFSRV17     0                  0Bytes   0         0          0         in progress   0:0:0
GFSRV24     0                  0Bytes   0         0          0         in progress   0:0:0
GFSRV16     0                  0Bytes   0         0          0         in progress   0:0:0
GFSRV15     0                  0Bytes   0         0          0         in progress   0:0:0
GFSRV14     0                  0Bytes   0         0          0         in progress   0:0:0
GFSRV22     0                  0Bytes   0         0          0         in progress   0:0:0
GFSRV19     0                  0Bytes   0         0          0         in progress   0:0:0
volume rebalance: voldata3: success

and this is the 'tier status' output:

# gluster volume tier voldata3 status
Node        Promoted files   Demoted files   Status
---------   --------------   -------------   -----------
localhost   0                0               in progress
GFSRV18     0                0               in progress
GFSRV20     0                0               in progress
GFSRV21     0                0               in progress
GFSRV23     0                0               in progress
GFSRV17     0                0               in progress
GFSRV24     0                0               in progress
GFSRV16     0                0               in progress
GFSRV15     0                0               in progress
GFSRV14     0                0               in progress
GFSRV22     0                0               in progress
GFSRV19     0                0               in progress
Tiering Migration Functionality: voldata3: success

'vol status' shows one active task:

Task Status of Volume voldata3
------------------------------------------------------------------------------
Task   : Tier migration
ID     : c4c33b04-2a1e-4e53-b1f5-a96ec6d9d851
Status : in progress

No errors are reported in the 'voldata3-tier-<uuid>.log' file.

I'll keep monitoring it for a few days - I expect to see some 'cooled' data moving to the cold tier. The commands I'm using to keep an eye on it are sketched below.
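
For reference, here is roughly how I'm watching it (the brick path is a placeholder, and I'm assuming the tier log sits in the usual /var/log/glusterfs directory):

# gluster volume tier voldata3 status
# find /path/to/brick -type f | head
# grep ' E ' /var/log/glusterfs/voldata3-tier-*.log | tail

The first command shows the Promoted/Demoted counters, the second confirms files are actually landing on a hot-tier (SSD) brick, and the grep pulls any error-level lines out of the tier log.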

Thank you.

On Tue, Aug 1, 2017 at 1:32 AM, Hari Gowtham <hgowtham@redhat.com> wrote:

Hi,

You have missed the log files.

Can you attach them?


On Mon, Jul 31, 2017 at 7:22 PM, Dmitri Chebotarov <4dimach@gmail.com> wrote:
> Hi
>
> At this point I have already detached the Hot Tier volume to run a rebalance. Many
> volume settings only take effect for new data (or after a rebalance), so I
> thought maybe this was the case with the Hot Tier as well. Once the rebalance
> finishes, I'll re-attach the hot tier.
>
> cluster.write-freq-threshold and cluster.read-freq-threshold control the number
> of times data is read/written before it is moved to the hot tier. In my case both are
> set to '2', so I didn't think I needed to disable
> performance.io-cache/quick-read as well. Will give it a try.
>
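> For completeness, these were set with the usual 'gluster volume set' commands,
> roughly like this (volume name and values are the ones from the 'volume info'
> output below; record-counters is what gives the freq thresholds data to act on,
> as far as I understand):
>
> # gluster volume set home features.record-counters on
> # gluster volume set home cluster.write-freq-threshold 2
> # gluster volume set home cluster.read-freq-threshold 2
>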
> Here is the volume info (no hot tier at this time):
>
> ~]# gluster v info home
>
> Volume Name: home
> Type: Disperse
> Volume ID: 4583a3cf-4deb-4707-bd0d-e7defcb1c39b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (8 + 4) = 12
> Transport-type: tcp
> Bricks:
> Brick1: MMR01:/rhgs/b0/data
> Brick2: MMR02:/rhgs/b0/data
> Brick3: MMR03:/rhgs/b0/data
> Brick4: MMR04:/rhgs/b0/data
> Brick5: MMR05:/rhgs/b0/data
> Brick6: MMR06:/rhgs/b0/data
> Brick7: MMR07:/rhgs/b0/data
> Brick8: MMR08:/rhgs/b0/data
> Brick9: MMR09:/rhgs/b0/data
> Brick10: MMR10:/rhgs/b0/data
> Brick11: MMR11:/rhgs/b0/data
> Brick12: MMR12:/rhgs/b0/data
> Options Reconfigured:
> diagnostics.client-log-level: CRITICAL
> cluster.write-freq-threshold: 2
> cluster.read-freq-threshold: 2
> features.record-counters: on
> nfs.disable: on
> performance.readdir-ahead: enable
> transport.address-family: inet
> client.event-threads: 4
> server.event-threads: 4
> cluster.lookup-optimize: on
> cluster.readdir-optimize: on
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 10000
> cluster.data-self-heal-algorithm: full
> features.cache-invalidation: on
> features.cache-invalidation-timeout: 600
> performance.stat-prefetch: on
> performance.cache-invalidation: on
> performance.md-cache-timeout: 600
> network.inode-lru-limit: 50000
> performance.write-behind-window-size: 1MB
> performance.client-io-threads: on
> performance.read-ahead: disable
> performance.cache-size: 256MB
> performance.io-thread-count: 16
> performance.strict-o-direct: on
> network.ping-timeout: 30
> network.remote-dio: disable
> user.cifs: off
> features.quota: on
> features.inode-quota: on
> features.quota-deem-statfs: on
>
> ~]# gluster v get home performance.io-cache
> performance.io-cache    on
>
> ~]# gluster v get home performance.quick-read
> performance.quick-read  on
>
> Thank you.
>
> On Mon, Jul 31, 2017 at 5:16 AM, Hari Gowtham <hgowtham@redhat.com> wrote:
>>
>> Hi,
>>
>> Before you try turning off the perf translators, can you send us the
>> following, so we can make sure that nothing else has gone wrong?
>>
>> Can you send us the log files for the tier (it would be better if you
>> attach the other logs too), the version of gluster you are using, the
>> client, and the output of:
>> gluster v info
>> gluster v get v1 performance.io-cache
>> gluster v get v1 performance.quick-read
>>
>> Do send us this and we will let you know what should be done,
>> as reads should also cause promotion.
>>
>>
>> On Mon, Jul 31, 2017 at 2:21 PM, Hari Gowtham <hgowtham@redhat.com> wrote:
>> > For the tier daemon to migrate files on reads, a few performance
>> > translators have to be turned off.
>> > By default the quick-read and io-cache translators are turned on. You
>> > can turn them off so that the files will be migrated on reads, as
>> > sketched below.
>> >
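>> > A minimal sketch, assuming the standard 'gluster volume set' syntax and
>> > using 'v1' as a placeholder volume name:
>> >
>> > # gluster volume set v1 performance.quick-read off
>> > # gluster volume set v1 performance.io-cache off
>> >
>> > The idea is that with these client-side caches off, reads actually reach
>> > the bricks and get counted, so the tier daemon can notice them and
>> > promote the files.
>> >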
>> > On Mon, Jul 31, 2017 at 11:34 AM, Hari Gowtham <hgowtham@redhat.com>
>> > wrote:
>> >> Hi,
>> >>
>> >> If it is just reads, then the tier daemon won't migrate the files to
>> >> the hot tier.
>> >> If you create a file or write to a file, that file will be made
>> >> available on the hot tier.
>> >>
>> >>
>> >> On Mon, Jul 31, 2017 at 11:06 AM, Nithya Balachandran
>> >> <nbalacha@redhat.com> wrote:
>> >>> Milind and Hari,
>> >>>
>> >>> Can you please take a look at this?
>> >>>
>> >>> Thanks,
>> >>> Nithya
>> >>>
>> >>> On 31 July 2017 at 05:12, Dmitri Chebotarov <4dimach@gmail.com> wrote:
>> >>>>
>> >>>> Hi
>> >>>>
>> >>>> I'm looking for advice on the hot tier feature - how can I tell if
>> >>>> the hot tier is working?
>> >>>>
>> >>>> I've attached a replicated-distributed hot tier to an EC volume.
>> >>>> Yet, I don't think it's working; at least I don't see any files
>> >>>> directly on the bricks (only the folder structure). The 'status'
>> >>>> command shows all 0s and 'in progress' for all servers.
>> >>>>
>> >>>> ~]# gluster volume tier home status
>> >>>> Node        Promoted files   Demoted files   Status
>> >>>> ---------   --------------   -------------   -----------
>> >>>> localhost   0                0               in progress
>> >>>> MMR11       0                0               in progress
>> >>>> MMR08       0                0               in progress
>> >>>> MMR03       0                0               in progress
>> >>>> MMR02       0                0               in progress
>> >>>> MMR07       0                0               in progress
>> >>>> MMR06       0                0               in progress
>> >>>> MMR09       0                0               in progress
>> >>>> MMR12       0                0               in progress
>> >>>> MMR10       0                0               in progress
>> >>>> MMR05       0                0               in progress
>> >>>> MMR04       0                0               in progress
>> >>>> Tiering Migration Functionality: home: success
>> >>>>
>> >>>>
>> >>>> I have a folder with .yml files (Ansible) on the gluster volume,
>> >>>> which, as I understand it, is 'cache friendly'.
>> >>>> No matter how many times I read the files, nothing is moved to the
>> >>>> hot tier bricks.
>> >>>>
>> >>>> Thank you.
>> >>>>
>> >>>> _______________________________________________
>> >>>> Gluster-users mailing list
>> >>>> Gluster-users@gluster.org
>> >>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>> >>>
>> >>>
>> >>>
>> >>> _______________________________________________
>> >>> Gluster-users mailing list
>> >>> Gluster-users@gluster.org
>> >>> http://lists.gluster.org/mailman/listinfo/gluster-users
>> >>
>> >>
>> >>
>> >> --
>> >> Regards,
>> >> Hari Gowtham.
>> >
>> >
>> >
>> > --
>> > Regards,
>> > Hari Gowtham.
>>
>>
>>
>> --
>> Regards,
>> Hari Gowtham.
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users



--
Regards,
Hari Gowtham.