<div dir="ltr">Hi<div><br></div><div>At this point I already detached Hot Tier volume to run rebalance. Many volume settings only take effect for the new data (or rebalance), so I thought may this was the case with Hot Tier as well. Once rebalance finishes, I'll re-attache hot tier.</div><div><br></div><div><div>cluster.write-freq-threshold and cluster.read-freq-threshold control number of times data is read/write before it moved to hot tier. In my case both are set to '2', I didn't think I needed to disable performance.io-cache/quick-read as well. Will give it a try.</div></div><div><br></div><div>Here is the volume info (no hot tier at this time)</div><div><br></div><div><div>~]# gluster v info home</div><div><br></div><div>Volume Name: home</div><div>Type: Disperse</div><div>Volume ID: 4583a3cf-4deb-4707-bd0d-e7defcb1c39b</div><div>Status: Started</div><div>Snapshot Count: 0</div><div>Number of Bricks: 1 x (8 + 4) = 12</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: MMR01:/rhgs/b0/data</div><div>Brick2: MMR02:/rhgs/b0/data</div><div>Brick3: MMR03:/rhgs/b0/data</div><div>Brick4: MMR04:/rhgs/b0/data</div><div>Brick5: MMR05:/rhgs/b0/data</div><div>Brick6: MMR06:/rhgs/b0/data</div><div>Brick7: MMR07:/rhgs/b0/data</div><div>Brick8: MMR08:/rhgs/b0/data</div><div>Brick9: MMR09:/rhgs/b0/data</div><div>Brick10: MMR10:/rhgs/b0/data</div><div>Brick11: MMR11:/rhgs/b0/data</div><div>Brick12: MMR12:/rhgs/b0/data</div><div>Options Reconfigured:</div><div>diagnostics.client-log-level: CRITICAL</div><div>cluster.write-freq-threshold: 2</div><div>cluster.read-freq-threshold: 2</div><div>features.record-counters: on</div><div>nfs.disable: on</div><div>performance.readdir-ahead: enable</div><div>transport.address-family: inet</div><div>client.event-threads: 4</div><div>server.event-threads: 4</div><div>cluster.lookup-optimize: on</div><div>cluster.readdir-optimize: on</div><div>cluster.locking-scheme: granular</div><div>cluster.shd-max-threads: 8</div><div>cluster.shd-wait-qlength: 10000</div><div>cluster.data-self-heal-algorithm: full</div><div>features.cache-invalidation: on</div><div>features.cache-invalidation-timeout: 600</div><div>performance.stat-prefetch: on</div><div>performance.cache-invalidation: on</div><div>performance.md-cache-timeout: 600</div><div>network.inode-lru-limit: 50000</div><div>performance.write-behind-window-size: 1MB</div><div>performance.client-io-threads: on</div><div>performance.read-ahead: disable</div><div>performance.cache-size: 256MB</div><div>performance.io-thread-count: 16</div><div>performance.strict-o-direct: on</div><div>network.ping-timeout: 30</div><div>network.remote-dio: disable</div><div>user.cifs: off</div><div>features.quota: on</div><div>features.inode-quota: on</div><div>features.quota-deem-statfs: on</div></div><div><br></div><div><div>~]# gluster v get home performance.io-cache</div><div>performance.io-cache on<br></div><div><br></div><div>~]# gluster v get home performance.quick-read</div><div>performance.quick-read on<br></div></div><div><br></div><div>Thank you.</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jul 31, 2017 at 5:16 AM, Hari Gowtham <span dir="ltr"><<a href="mailto:hgowtham@redhat.com" target="_blank">hgowtham@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>
<br>
Before you try turning off the perf translators, can you send us the following,<br>
so we can make sure nothing else has gone wrong?<br>
<br>
Can you send us the log files for the tier (it would be better if you attach<br>
the other logs too), the version of gluster you are using on the server and<br>
the client, and the output of:<br>
gluster v info<br>
gluster v get v1 performance.io-cache<br>
gluster v get v1 performance.quick-read<br>
<br>
Do send us this and we will let you know what should be done,<br>
as reads should also cause promotion.<br>
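<br>
For example, something like this would cover the versions and the logs (the<br>
log directory below is the usual default; the exact tier log file name can<br>
vary by version):<br>
<br>
gluster --version<br>
glusterfs --version   # run on the client<br>
ls /var/log/glusterfs/   # default log dir; tier log is usually named after the volume<br>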
<div class="HOEnZb"><div class="h5"><br>
<br>
On Mon, Jul 31, 2017 at 2:21 PM, Hari Gowtham <<a href="mailto:hgowtham@redhat.com">hgowtham@redhat.com</a>> wrote:<br>
> For the tier daemon to migrate files on reads, a few performance<br>
> translators have to be turned off.<br>
> By default the quick-read and io-cache performance translators are turned<br>
> on. You can turn them off so that the files will be migrated for reads.<br>
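><br>
> For example (v1 below is a placeholder for your volume name):<br>
><br>
> gluster volume set v1 performance.quick-read off   # v1 = your volume name<br>
> gluster volume set v1 performance.io-cache off<br>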
><br>
> On Mon, Jul 31, 2017 at 11:34 AM, Hari Gowtham <<a href="mailto:hgowtham@redhat.com">hgowtham@redhat.com</a>> wrote:<br>
>> Hi,<br>
>><br>
>> If it was just reads, then the tier daemon won't migrate the files to the<br>
>> hot tier. If you create a file or write to a file, that file will be made<br>
>> available on the hot tier.<br>
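>><br>
>> For example, writing a new file through a client mount should place it on<br>
>> the hot tier (the mount point below is just an example):<br>
>><br>
>> echo test > /mnt/v1/newfile.txt   # /mnt/v1 is an example mount point<br>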
>><br>
>><br>
>> On Mon, Jul 31, 2017 at 11:06 AM, Nithya Balachandran<br>
>> <<a href="mailto:nbalacha@redhat.com">nbalacha@redhat.com</a>> wrote:<br>
>>> Milind and Hari,<br>
>>><br>
>>> Can you please take a look at this?<br>
>>><br>
>>> Thanks,<br>
>>> Nithya<br>
>>><br>
>>> On 31 July 2017 at 05:12, Dmitri Chebotarov <<a href="mailto:4dimach@gmail.com">4dimach@gmail.com</a>> wrote:<br>
>>>><br>
>>>> Hi<br>
>>>><br>
>>>> I'm looking for advice on the hot tier feature - how can I tell if the<br>
>>>> hot tier is working?<br>
>>>><br>
>>>> I've attached a replicated-distributed hot tier to an EC volume.<br>
>>>> Yet I don't think it's working: I don't see any files directly on the<br>
>>>> hot tier bricks (only the folder structure), and the 'status' command<br>
>>>> shows all 0s and 'In progress' for all servers.<br>
>>>><br>
>>>> ~]# gluster volume tier home status<br>
>>>> Node Promoted files Demoted files Status<br>
>>>> --------- --------- --------- ---------<br>
>>>> localhost 0 0 in progress<br>
>>>> MMR11 0 0 in progress<br>
>>>> MMR08 0 0 in progress<br>
>>>> MMR03 0 0 in progress<br>
>>>> MMR02 0 0 in progress<br>
>>>> MMR07 0 0 in progress<br>
>>>> MMR06 0 0 in progress<br>
>>>> MMR09 0 0 in progress<br>
>>>> MMR12 0 0 in progress<br>
>>>> MMR10 0 0 in progress<br>
>>>> MMR05 0 0 in progress<br>
>>>> MMR04 0 0 in progress<br>
>>>> Tiering Migration Functionality: home: success<br>
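>>>><br>
>>>> (To check for files I ran something like this on each server; the hot<br>
>>>> tier brick path below is only an example from my setup:<br>
>>>><br>
>>>> ~]# find /rhgs/hot/data -type f   # example hot tier brick path<br>
>>>><br>
>>>> and it comes back empty - only the directory structure is there.)<br>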
>>>><br>
>>>><br>
>>>> I have a folder with .yml files (Ansible) on the gluster volume, which,<br>
>>>> as I understand it, is 'cache friendly'.<br>
>>>> No matter how many times I read the files, nothing is moved to the hot<br>
>>>> tier bricks.<br>
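>>>><br>
>>>> For example, I re-read them in a loop along these lines (the mount point<br>
>>>> and path below are from my setup):<br>
>>>><br>
>>>> ~]# for i in 1 2 3; do cat /mnt/home/ansible/*.yml > /dev/null; done   # /mnt/home = client mount<br>
>>>><br>
>>>> and the promoted-files counters above stay at 0.<br>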
>>>><br>
>>>> Thank you.<br>
>>>><br>
>>><br>
>>><br>
>>><br>
>><br>
>><br>
>><br>
>> --<br>
>> Regards,<br>
>> Hari Gowtham.<br>
><br>
><br>
><br>
> --<br>
> Regards,<br>
> Hari Gowtham.<br>
<br>
<br>
<br>
</div></div><span class="HOEnZb"><font color="#888888">--<br>
Regards,<br>
Hari Gowtham.<br>
</font></span></blockquote></div><br></div>