[Gluster-users] Geo-Replication issue

Kotresh Hiremath Ravishankar khiremat at redhat.com
Thu Sep 6 09:31:38 UTC 2018


Hi Krishna,

Could you come online in the #gluster channel on freenode? That would be faster.

On Thu, Sep 6, 2018 at 1:45 PM, Krishna Verma <kverma at cadence.com> wrote:

> Hi Kotresh,
>
>
>
> [root at gluster-poc-noida repvol]# tailf /var/log/glusterfs/glusterd.log
>
> [2018-09-06 07:57:03.443256] W [MSGID: 106028] [glusterd-geo-rep.c:2568:glusterd_get_statefile_name] 0-management: Config file (/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_gluster/gsyncd.conf) missing. Looking for template config file (/var/lib/glusterd/geo-replication/gsyncd_template.conf) [No such file or directory]
>
> [2018-09-06 07:57:03.443339] I [MSGID: 106294] [glusterd-geo-rep.c:2577:glusterd_get_statefile_name] 0-management: Using default config template(/var/lib/glusterd/geo-replication/gsyncd_template.conf).
>
> [2018-09-06 07:57:03.512014] E [MSGID: 106028] [glusterd-geo-rep.c:3577:glusterd_op_stage_gsync_set] 0-management: Geo-replication session between glusterdist and gluster-poc-sj::gluster does not exist.. statefile = /var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_gluster/monitor.status [No such file or directory]
>
> [2018-09-06 07:57:03.512049] E [MSGID: 106322] [glusterd-geo-rep.c:3778:glusterd_op_stage_gsync_set] 0-management: Geo-replication session between glusterdist and gluster-poc-sj::gluster does not exist.
>
> [2018-09-06 07:57:03.512063] E [MSGID: 106301] [glusterd-syncop.c:1352:gd_stage_op_phase] 0-management: Staging of operation 'Volume Geo-replication' failed on localhost : Geo-replication session between glusterdist and gluster-poc-sj::gluster does not exist.
>
> [2018-09-06 07:57:24.869113] E [MSGID: 106316] [glusterd-geo-rep.c:2761:glusterd_verify_slave] 0-management: Not a valid slave
>
> [2018-09-06 07:57:24.869289] E [MSGID: 106316] [glusterd-geo-rep.c:3152:glusterd_op_stage_gsync_create] 0-management: gluster-poc-sj::gluster is not a valid slave volume. Error: Unable to mount and fetch slave volume details. Please check the log: /var/log/glusterfs/geo-replication/gverify-slavemnt.log
>
> [2018-09-06 07:57:24.869313] E [MSGID: 106301] [glusterd-syncop.c:1352:gd_stage_op_phase] 0-management: Staging of operation 'Volume Geo-replication Create' failed on localhost : Unable to mount and fetch slave volume details. Please check the log: /var/log/glusterfs/geo-replication/gverify-slavemnt.log
>
> [2018-09-06 07:56:38.421045] I [MSGID: 106308] [glusterd-geo-rep.c:4881:glusterd_get_gsync_status_mst_slv] 0-management: geo-replication status glusterdist gluster-poc-sj::gluster : session is not active
>
> [2018-09-06 07:56:38.486229] I [MSGID: 106028] [glusterd-geo-rep.c:4903:glusterd_get_gsync_status_mst_slv] 0-management: /var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_gluster/monitor.status statefile not present. [No such file or directory]
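>
> The "Unable to mount and fetch slave volume details" error above comes from the gverify step, which mounts the slave volume from the master to validate it. A minimal way to reproduce that check by hand from gluster-poc-noida (a sketch, assuming the slave volume is glusterdist and the gluster ports are reachable on gluster-poc-sj):
>
> # try the same FUSE mount gverify performs internally
> mkdir -p /mnt/slave-check
> mount -t glusterfs gluster-poc-sj:/glusterdist /mnt/slave-check
> # if the mount fails, the reason should also appear in the gverify log
> tail -n 50 /var/log/glusterfs/geo-replication/gverify-slavemnt.log
> umount /mnt/slave-check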
>
>
>
> /Krishna
>
>
>
> *From:* Kotresh Hiremath Ravishankar <khiremat at redhat.com>
> *Sent:* Thursday, September 6, 2018 1:20 PM
> *To:* Krishna Verma <kverma at cadence.com>
> *Cc:* Gluster Users <gluster-users at gluster.org>
> *Subject:* Re: [Gluster-users] Geo-Replication issue
>
>
>
> EXTERNAL MAIL
>
> Hi Krishna,
>
> The glusterd log file would help here.
>
> Thanks,
>
> Kotresh HR
>
>
>
> On Thu, Sep 6, 2018 at 1:02 PM, Krishna Verma <kverma at cadence.com> wrote:
>
> Hi All,
>
>
>
> I am seeing an issue with a geo-replicated distributed gluster volume. The session status shows only one peer node instead of two, and I am unable to stop, start, or delete the session.
>
>
>
> Status of the geo-replicated distributed gluster volume “glusterdist”:
>
> [root at gluster-poc-noida ~]# gluster volume status glusterdist
>
> Status of volume: glusterdist
> Gluster process                                     TCP Port  RDMA Port  Online  Pid
> -------------------------------------------------------------------------------------
> Brick gluster-poc-noida:/data/gluster-dist/distvol  49154     0          Y       23138
> Brick noi-poc-gluster:/data/gluster-dist/distvol    49154     0          Y       14637
>
> Task Status of Volume glusterdist
> -------------------------------------------------------------------------------------
> There are no active volume tasks
>
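> Both bricks show Online with valid PIDs, so the underlying volume itself looks healthy; the problem is with the geo-replication session metadata rather than the volume. As a quick cross-check that the two master nodes still see each other (standard gluster CLI, shown only as a sanity step):
>
> [root at gluster-poc-noida ~]# gluster peer status
> [root at gluster-poc-noida ~]# gluster pool list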
>
>
> Geo-replication session status:
>
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status
>
> MASTER NODE        MASTER VOL     MASTER BRICK                  SLAVE USER    SLAVE                          SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED
> -----------------------------------------------------------------------------------------------------------------------------------------------------------------
> noi-poc-gluster    glusterdist    /data/gluster-dist/distvol    root          gluster-poc-sj::glusterdist    N/A           Stopped    N/A             N/A
>
> [root at gluster-poc-noida ~]#
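>
> Note that only noi-poc-gluster appears in the session status; gluster-poc-noida is missing entirely, which matches the missing-statefile errors in the glusterd.log excerpt earlier in the thread. A minimal check on each master node (directory name taken from the statefile path reported by glusterd.log; adjust it if the slave volume name differs):
>
> # the per-session directory with gsyncd.conf and monitor.status
> # should exist on every master node
> ls -l /var/lib/glusterd/geo-replication/
> ls -l /var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_gluster/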
>
>
>
> Can’t stop/start/delete the session:
>
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist stop
> Staging failed on localhost. Please check the log file for more details.
> geo-replication command failed
>
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist stop force
> pid-file entry mising in config file and template config file.
> geo-replication command failed
>
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist delete
> Staging failed on localhost. Please check the log file for more details.
> geo-replication command failed
>
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist start
> Staging failed on localhost. Please check the log file for more details.
> geo-replication command failed
>
> [root at gluster-poc-noida ~]#
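>
> Since both the session config file and the statefile are missing, one possible recovery path, sketched here under the assumption that passwordless SSH from the master nodes to gluster-poc-sj is still configured, is to recreate the session metadata with a forced create and then start it:
>
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist create push-pem force
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist start
>
> The create will keep failing while the "Unable to mount and fetch slave volume details" error persists, so the slave mount problem has to be resolved first.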
>
>
>
> gsyncd.log errors:
>
> [2018-09-06 06:17:21.757195] I [monitor(monitor):269:monitor] Monitor: worker died before establishing connection    brick=/data/gluster-dist/distvol
>
> [2018-09-06 06:17:32.312093] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker    brick=/data/gluster-dist/distvol    slave_node=gluster-poc-sj
>
> [2018-09-06 06:17:32.441817] I [monitor(monitor):261:monitor] Monitor: Changelog Agent died, Aborting Worker    brick=/data/gluster-dist/distvol
>
> [2018-09-06 06:17:32.442193] I [monitor(monitor):279:monitor] Monitor: worker died in startup phase    brick=/data/gluster-dist/distvol
>
> [2018-09-06 06:17:43.1177] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker    brick=/data/gluster-dist/distvol    slave_node=gluster-poc-sj
>
> [2018-09-06 06:17:43.137794] I [monitor(monitor):261:monitor] Monitor: Changelog Agent died, Aborting Worker    brick=/data/gluster-dist/distvol
>
> [2018-09-06 06:17:43.138214] I [monitor(monitor):279:monitor] Monitor: worker died in startup phase    brick=/data/gluster-dist/distvol
>
> [2018-09-06 06:17:53.144072] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker    brick=/data/gluster-dist/distvol    slave_node=gluster-poc-sj
>
> [2018-09-06 06:17:53.276853] I [monitor(monitor):261:monitor] Monitor: Changelog Agent died, Aborting Worker    brick=/data/gluster-dist/distvol
>
> [2018-09-06 06:17:53.277327] I [monitor(monitor):279:monitor] Monitor: worker died in startup phase    brick=/data/gluster-dist/distvol
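>
> The workers here die in the startup phase before ever connecting to the slave. When the master-side gsyncd.log is this terse, the slave side often carries the detail; standard log locations to check (assuming default install paths) would be:
>
> # on the master nodes: per-session gsyncd worker logs
> ls -l /var/log/glusterfs/geo-replication/
> # on the slave node gluster-poc-sj: slave-side gsyncd logs
> ls -l /var/log/glusterfs/geo-replication-slaves/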
>
>
>
> Could anyone please help?
>
>
>
> /Krishna
>
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
>
> --
>
> Thanks and Regards,
>
> Kotresh H R
>



-- 
Thanks and Regards,
Kotresh H R