[Gluster-users] Hot Tier
Hari Gowtham
hgowtham at redhat.com
Tue Aug 1 05:32:15 UTC 2017
Hi,
It looks like you missed attaching the log files.
Can you attach them?
On Mon, Jul 31, 2017 at 7:22 PM, Dmitri Chebotarov <4dimach at gmail.com> wrote:
> Hi
>
> At this point I have already detached the Hot Tier volume to run a
> rebalance. Many volume settings only take effect for new data (or after
> a rebalance), so I thought maybe this was the case with the Hot Tier as
> well. Once the rebalance finishes, I'll re-attach the hot tier.
>
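> For the record, the plan is roughly the following (the replica count and
> the hot-tier brick paths below are placeholders, not my real layout):
>
> ~]# gluster volume rebalance home status
> ~]# gluster volume tier home attach replica 2 MMR01:/rhgs/ht/data MMR02:/rhgs/ht/data
>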
> cluster.write-freq-threshold and cluster.read-freq-threshold control the
> number of times data is read/written before it is moved to the hot tier.
> In my case both are set to '2'; I didn't think I needed to disable
> performance.io-cache/quick-read as well. I will give it a try.
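>
> For reference, the thresholds were set with the usual volume-set syntax:
>
> ~]# gluster volume set home cluster.write-freq-threshold 2
> ~]# gluster volume set home cluster.read-freq-threshold 2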
>
> Here is the volume info (no hot tier at this time)
>
> ~]# gluster v info home
>
> Volume Name: home
> Type: Disperse
> Volume ID: 4583a3cf-4deb-4707-bd0d-e7defcb1c39b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (8 + 4) = 12
> Transport-type: tcp
> Bricks:
> Brick1: MMR01:/rhgs/b0/data
> Brick2: MMR02:/rhgs/b0/data
> Brick3: MMR03:/rhgs/b0/data
> Brick4: MMR04:/rhgs/b0/data
> Brick5: MMR05:/rhgs/b0/data
> Brick6: MMR06:/rhgs/b0/data
> Brick7: MMR07:/rhgs/b0/data
> Brick8: MMR08:/rhgs/b0/data
> Brick9: MMR09:/rhgs/b0/data
> Brick10: MMR10:/rhgs/b0/data
> Brick11: MMR11:/rhgs/b0/data
> Brick12: MMR12:/rhgs/b0/data
> Options Reconfigured:
> diagnostics.client-log-level: CRITICAL
> cluster.write-freq-threshold: 2
> cluster.read-freq-threshold: 2
> features.record-counters: on
> nfs.disable: on
> performance.readdir-ahead: enable
> transport.address-family: inet
> client.event-threads: 4
> server.event-threads: 4
> cluster.lookup-optimize: on
> cluster.readdir-optimize: on
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 10000
> cluster.data-self-heal-algorithm: full
> features.cache-invalidation: on
> features.cache-invalidation-timeout: 600
> performance.stat-prefetch: on
> performance.cache-invalidation: on
> performance.md-cache-timeout: 600
> network.inode-lru-limit: 50000
> performance.write-behind-window-size: 1MB
> performance.client-io-threads: on
> performance.read-ahead: disable
> performance.cache-size: 256MB
> performance.io-thread-count: 16
> performance.strict-o-direct: on
> network.ping-timeout: 30
> network.remote-dio: disable
> user.cifs: off
> features.quota: on
> features.inode-quota: on
> features.quota-deem-statfs: on
>
> ~]# gluster v get home performance.io-cache
> performance.io-cache on
>
> ~]# gluster v get home performance.quick-read
> performance.quick-read on
>
> Thank you.
>
> On Mon, Jul 31, 2017 at 5:16 AM, Hari Gowtham <hgowtham at redhat.com> wrote:
>>
>> Hi,
>>
>> Before you try turning off the perf translators, can you send us the
>> following, so we can make sure nothing else has gone wrong?
>>
>> Can you send us the log files for tier (it would be better if you attach
>> the other logs too), the version of gluster you are using on the server
>> and the client, and the output of:
>> gluster v info
>> gluster v get v1 performance.io-cache
>> gluster v get v1 performance.quick-read
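>>
>> The tier logs live under /var/log/glusterfs/ on the server nodes; the
>> exact file name can vary by version, so something like the following
>> should locate them:
>>
>> ls /var/log/glusterfs/*tier*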
>>
>> Do send us this and we will let you know what should be done, as reads
>> should also cause promotion.
>>
>>
>> On Mon, Jul 31, 2017 at 2:21 PM, Hari Gowtham <hgowtham at redhat.com> wrote:
>> > For the tier daemon to migrate files on reads, a few performance
>> > translators have to be turned off.
>> > By default the quick-read and io-cache translators are turned on. You
>> > can turn them off so that files will be migrated on reads.
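>> >
>> > Something along these lines (with v1 as a stand-in for your volume
>> > name):
>> >
>> > gluster volume set v1 performance.quick-read off
>> > gluster volume set v1 performance.io-cache off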
>> >
>> > On Mon, Jul 31, 2017 at 11:34 AM, Hari Gowtham <hgowtham at redhat.com>
>> > wrote:
>> >> Hi,
>> >>
>> >> If it was just reads, then the tier daemon won't migrate the files
>> >> to the hot tier.
>> >> If you create a file or write to a file, that file will be made
>> >> available on the hot tier.
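>> >>
>> >> For instance, a small write from a client mount should land the file
>> >> on the hot tier (the mount path here is just an example):
>> >>
>> >> dd if=/dev/zero of=/mnt/home/tier-test bs=1M count=1
>> >>
>> >> After that, the file should show up on one of the hot tier bricks.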
>> >>
>> >>
>> >> On Mon, Jul 31, 2017 at 11:06 AM, Nithya Balachandran
>> >> <nbalacha at redhat.com> wrote:
>> >>> Milind and Hari,
>> >>>
>> >>> Can you please take a look at this?
>> >>>
>> >>> Thanks,
>> >>> Nithya
>> >>>
>> >>> On 31 July 2017 at 05:12, Dmitri Chebotarov <4dimach at gmail.com> wrote:
>> >>>>
>> >>>> Hi
>> >>>>
>> >>>> I'm looking for advice on the hot tier feature - how can I tell if
>> >>>> the hot tier is working?
>> >>>>
>> >>>> I've attached a replicated-distributed hot tier to an EC volume.
>> >>>> Yet I don't think it's working: at least, I don't see any files
>> >>>> directly on the bricks (only the folder structure). The 'status'
>> >>>> command shows all 0s and 'in progress' for all servers.
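>> >>>>
>> >>>> As far as I can tell, a file's brick location can also be checked
>> >>>> from a client mount via the pathinfo xattr (the mount path here is
>> >>>> an example):
>> >>>>
>> >>>> getfattr -n trusted.glusterfs.pathinfo /mnt/home/somefile.yml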
>> >>>>
>> >>>> ~]# gluster volume tier home status
>> >>>> Node         Promoted files    Demoted files    Status
>> >>>> ---------    ---------         ---------        ---------
>> >>>> localhost    0                 0                in progress
>> >>>> MMR11        0                 0                in progress
>> >>>> MMR08        0                 0                in progress
>> >>>> MMR03        0                 0                in progress
>> >>>> MMR02        0                 0                in progress
>> >>>> MMR07        0                 0                in progress
>> >>>> MMR06        0                 0                in progress
>> >>>> MMR09        0                 0                in progress
>> >>>> MMR12        0                 0                in progress
>> >>>> MMR10        0                 0                in progress
>> >>>> MMR05        0                 0                in progress
>> >>>> MMR04        0                 0                in progress
>> >>>> Tiering Migration Functionality: home: success
>> >>>>
>> >>>>
>> >>>> I have a folder with .yml files (Ansible) on the gluster volume,
>> >>>> which as I understand it is 'cache friendly'.
>> >>>> No matter how many times I read the files, nothing is moved to the
>> >>>> hot tier bricks.
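>> >>>>
>> >>>> For example, repeated reads like this (the path is illustrative)
>> >>>> produce no promotions:
>> >>>>
>> >>>> for i in $(seq 1 10); do cat /mnt/home/playbook.yml > /dev/null; done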
>> >>>>
>> >>>> Thank you.
>> >>>>
>> >>>> _______________________________________________
>> >>>> Gluster-users mailing list
>> >>>> Gluster-users at gluster.org
>> >>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>> >>>
>> >>
>> >>
>> >>
>> >> --
>> >> Regards,
>> >> Hari Gowtham.
>> >
>> >
>> >
>> > --
>> > Regards,
>> > Hari Gowtham.
>>
>>
>>
>> --
>> Regards,
>> Hari Gowtham.
>
>
--
Regards,
Hari Gowtham.