[Gluster-users] Hot Tier

Hari Gowtham hgowtham at redhat.com
Thu Aug 3 05:19:56 UTC 2017


Hi,
We will look into the "failed to get index" error.
It shouldn't affect normal operation. Do let us know if you face any
other issues.

Regards,
Hari.

On 02-Aug-2017 11:55 PM, "Dmitri Chebotarov" <4dimach at gmail.com> wrote:

Hello

I reattached the hot tier to a new, empty EC volume and started copying data
to the volume.
The good news is that I can now see files on the SSD bricks (hot tier):
'find /path/to/brick -type f' shows files, whereas before 'find' would only
show directories.
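
For reference, the reattach step was roughly the following (a sketch; the
replica count and the SSD brick hosts/paths are placeholders, not my exact
command):

# gluster volume tier voldata3 attach replica 2 <ssd-host1>:/path/to/ssd/brick <ssd-host2>:/path/to/ssd/brick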

But I got a 'rebalance' error in the glusterd.log file after I attached the
hot tier.

[2017-08-02 14:09:01.489891] E [MSGID: 106062] [glusterd-utils.c:9182:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index
The message "E [MSGID: 106062] [glusterd-utils.c:9182:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index" repeated 10 times between [2017-08-02 14:09:01.489891] and [2017-08-02 14:09:01.545027]

This is the output of the 'rebalance status' command:

# gluster volume rebalance voldata3 status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes             0             0             0          in progress            0:0:0
                                 GFSRV18                0        0Bytes             0             0             0          in progress            0:0:0
                                 GFSRV20                0        0Bytes             0             0             0          in progress            0:0:0
                                 GFSRV21                0        0Bytes             0             0             0          in progress            0:0:0
                                 GFSRV23                0        0Bytes             0             0             0          in progress            0:0:0
                                 GFSRV17                0        0Bytes             0             0             0          in progress            0:0:0
                                 GFSRV24                0        0Bytes             0             0             0          in progress            0:0:0
                                 GFSRV16                0        0Bytes             0             0             0          in progress            0:0:0
                                 GFSRV15                0        0Bytes             0             0             0          in progress            0:0:0
                                 GFSRV14                0        0Bytes             0             0             0          in progress            0:0:0
                                 GFSRV22                0        0Bytes             0             0             0          in progress            0:0:0
                                 GFSRV19                0        0Bytes             0             0             0          in progress            0:0:0
volume rebalance: voldata3: success

and 'tier status' output:

# gluster volume tier voldata3 status
Node                 Promoted files       Demoted files        Status
---------            ---------            ---------            ---------
localhost            0                    0                    in progress
GFSRV18              0                    0                    in progress
GFSRV20              0                    0                    in progress
GFSRV21              0                    0                    in progress
GFSRV23              0                    0                    in progress
GFSRV17              0                    0                    in progress
GFSRV24              0                    0                    in progress
GFSRV16              0                    0                    in progress
GFSRV15              0                    0                    in progress
GFSRV14              0                    0                    in progress
GFSRV22              0                    0                    in progress
GFSRV19              0                    0                    in progress
Tiering Migration Functionality: voldata3: success

'vol status' shows one active task:

Task Status of Volume voldata3
------------------------------------------------------------------------------
Task                 : Tier migration
ID                   : c4c33b04-2a1e-4e53-b1f5-a96ec6d9d851
Status               : in progress


No errors are reported in the 'voldata3-tier-<uuid>.log' file.

I'll keep monitoring it for a few days. I expect to see some 'cooled' data
moving to the cold tier.
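
If demotion looks slow, these are the tiering knobs I understand control it
(a sketch based on my reading of the tiering docs; option names and defaults
may vary by Gluster version):

# gluster volume get voldata3 cluster.tier-demote-frequency
# gluster volume get voldata3 cluster.watermark-hi
# gluster volume get voldata3 cluster.watermark-low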

Thank you.

On Tue, Aug 1, 2017 at 1:32 AM, Hari Gowtham <hgowtham at redhat.com> wrote:

> Hi,
>
> It looks like the log files were not attached.
>
> Can you attach them?
>
>
> On Mon, Jul 31, 2017 at 7:22 PM, Dmitri Chebotarov <4dimach at gmail.com>
> wrote:
> > Hi
> >
> > At this point I have already detached the hot tier volume to run a
> > rebalance. Many volume settings only take effect for new data (or after a
> > rebalance), so I thought maybe this was the case with the hot tier as
> > well. Once the rebalance finishes, I'll re-attach the hot tier.
> >
> > cluster.write-freq-threshold and cluster.read-freq-threshold control the
> > number of times data is read/written before it is moved to the hot tier.
> > In my case both are set to '2'; I didn't think I needed to disable
> > performance.io-cache/quick-read as well. I will give it a try.
> >
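> > For reference, a minimal sketch of how I understand those options were set
> > (using the 'home' volume from the info below; the values match what is
> > already in my config):
> >
> > # gluster volume set home features.record-counters on
> > # gluster volume set home cluster.write-freq-threshold 2
> > # gluster volume set home cluster.read-freq-threshold 2
> >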
> > Here is the volume info (no hot tier at this time)
> >
> > ~]# gluster v info home
> >
> > Volume Name: home
> > Type: Disperse
> > Volume ID: 4583a3cf-4deb-4707-bd0d-e7defcb1c39b
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 1 x (8 + 4) = 12
> > Transport-type: tcp
> > Bricks:
> > Brick1: MMR01:/rhgs/b0/data
> > Brick2: MMR02:/rhgs/b0/data
> > Brick3: MMR03:/rhgs/b0/data
> > Brick4: MMR04:/rhgs/b0/data
> > Brick5: MMR05:/rhgs/b0/data
> > Brick6: MMR06:/rhgs/b0/data
> > Brick7: MMR07:/rhgs/b0/data
> > Brick8: MMR08:/rhgs/b0/data
> > Brick9: MMR09:/rhgs/b0/data
> > Brick10: MMR10:/rhgs/b0/data
> > Brick11: MMR11:/rhgs/b0/data
> > Brick12: MMR12:/rhgs/b0/data
> > Options Reconfigured:
> > diagnostics.client-log-level: CRITICAL
> > cluster.write-freq-threshold: 2
> > cluster.read-freq-threshold: 2
> > features.record-counters: on
> > nfs.disable: on
> > performance.readdir-ahead: enable
> > transport.address-family: inet
> > client.event-threads: 4
> > server.event-threads: 4
> > cluster.lookup-optimize: on
> > cluster.readdir-optimize: on
> > cluster.locking-scheme: granular
> > cluster.shd-max-threads: 8
> > cluster.shd-wait-qlength: 10000
> > cluster.data-self-heal-algorithm: full
> > features.cache-invalidation: on
> > features.cache-invalidation-timeout: 600
> > performance.stat-prefetch: on
> > performance.cache-invalidation: on
> > performance.md-cache-timeout: 600
> > network.inode-lru-limit: 50000
> > performance.write-behind-window-size: 1MB
> > performance.client-io-threads: on
> > performance.read-ahead: disable
> > performance.cache-size: 256MB
> > performance.io-thread-count: 16
> > performance.strict-o-direct: on
> > network.ping-timeout: 30
> > network.remote-dio: disable
> > user.cifs: off
> > features.quota: on
> > features.inode-quota: on
> > features.quota-deem-statfs: on
> >
> > ~]# gluster v get home  performance.io-cache
> > performance.io-cache                    on
> >
> > ~]# gluster v get home  performance.quick-read
> > performance.quick-read                  on
> >
> > Thank you.
> >
> > On Mon, Jul 31, 2017 at 5:16 AM, Hari Gowtham <hgowtham at redhat.com> wrote:
> >>
> >> Hi,
> >>
> >> Before you try turning off the perf translators, can you send us the
> >> following, so we can make sure that nothing else has gone wrong?
> >>
> >> Can you send us the tier log files (it would be better if you attach the
> >> other logs too), the version of Gluster you are using, the client
> >> version, and the output of:
> >> gluster v info
> >> gluster v get v1 performance.io-cache
> >> gluster v get v1 performance.quick-read
> >>
> >> Do send us this and then we will let you know what should be done,
> >> as reads should also cause promotion.
> >>
> >>
> >> > On Mon, Jul 31, 2017 at 2:21 PM, Hari Gowtham <hgowtham at redhat.com> wrote:
> >> > For the tier daemon to migrate files based on reads, a few performance
> >> > translators have to be turned off.
> >> > By default the quick-read and io-cache translators are turned on. You
> >> > can turn them off so that files will be migrated on reads.
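> >> >
> >> > A minimal sketch of turning them off (assuming your volume is named
> >> > 'home'; substitute your own volume name):
> >> > # gluster volume set home performance.quick-read off
> >> > # gluster volume set home performance.io-cache off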
> >> >
> >> >> On Mon, Jul 31, 2017 at 11:34 AM, Hari Gowtham <hgowtham at redhat.com> wrote:
> >> >> Hi,
> >> >>
> >> >> If it was just reads, then the tier daemon won't migrate the files to
> >> >> the hot tier.
> >> >> If you create a file or write to a file, that file will be made
> >> >> available on the hot tier.
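> >> >>
> >> >> As a quick check (a sketch; '/mnt/home' and '/path/to/ssd/brick' are
> >> >> placeholders for your client mount point and one hot tier brick):
> >> >> # echo test > /mnt/home/tier-test.txt
> >> >> # find /path/to/ssd/brick -name tier-test.txt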
> >> >>
> >> >>
> >> >> On Mon, Jul 31, 2017 at 11:06 AM, Nithya Balachandran
> >> >> <nbalacha at redhat.com> wrote:
> >> >>> Milind and Hari,
> >> >>>
> >> >>> Can you please take a look at this?
> >> >>>
> >> >>> Thanks,
> >> >>> Nithya
> >> >>>
> >> >>> On 31 July 2017 at 05:12, Dmitri Chebotarov <4dimach at gmail.com> wrote:
> >> >>>>
> >> >>>> Hi
> >> >>>>
> >> >>>> I'm looking for advice on the hot tier feature - how can I tell if
> >> >>>> the hot tier is working?
> >> >>>>
> >> >>>> I've attached a replicated-distributed hot tier to an EC volume.
> >> >>>> Yet I don't think it's working; at least I don't see any files
> >> >>>> directly on the bricks (only the folder structure). The 'status'
> >> >>>> command shows all 0s and 'in progress' for all servers.
> >> >>>>
> >> >>>> ~]# gluster volume tier home status
> >> >>>> Node                 Promoted files       Demoted files        Status
> >> >>>> ---------            ---------            ---------            ---------
> >> >>>> localhost            0                    0                    in progress
> >> >>>> MMR11                0                    0                    in progress
> >> >>>> MMR08                0                    0                    in progress
> >> >>>> MMR03                0                    0                    in progress
> >> >>>> MMR02                0                    0                    in progress
> >> >>>> MMR07                0                    0                    in progress
> >> >>>> MMR06                0                    0                    in progress
> >> >>>> MMR09                0                    0                    in progress
> >> >>>> MMR12                0                    0                    in progress
> >> >>>> MMR10                0                    0                    in progress
> >> >>>> MMR05                0                    0                    in progress
> >> >>>> MMR04                0                    0                    in progress
> >> >>>> Tiering Migration Functionality: home: success
> >> >>>>
> >> >>>>
> >> >>>> I have a folder with .yml files (Ansible) on the Gluster volume,
> >> >>>> which, as I understand it, is 'cache friendly'.
> >> >>>> No matter how many times I read the files, nothing is moved to the
> >> >>>> hot tier bricks.
> >> >>>>
> >> >>>> Thank you.
> >> >>>>
> >> >>>> _______________________________________________
> >> >>>> Gluster-users mailing list
> >> >>>> Gluster-users at gluster.org
> >> >>>> http://lists.gluster.org/mailman/listinfo/gluster-users
> >> >>>
> >> >>>
> >> >>>
> >> >>> _______________________________________________
> >> >>> Gluster-users mailing list
> >> >>> Gluster-users at gluster.org
> >> >>> http://lists.gluster.org/mailman/listinfo/gluster-users
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Regards,
> >> >> Hari Gowtham.
> >> >
> >> >
> >> >
> >> > --
> >> > Regards,
> >> > Hari Gowtham.
> >>
> >>
> >>
> >> --
> >> Regards,
> >> Hari Gowtham.
> >
> >
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> --
> Regards,
> Hari Gowtham.
>