[Gluster-users] Upgrade to 4.1.2 geo-replication does not work
Kotresh Hiremath Ravishankar
khiremat at redhat.com
Mon Sep 3 07:14:29 UTC 2018
Hi Krishna,
The log is not complete. If you are retrying, could you please try it on
4.1.3 and share the logs?
Thanks,
Kotresh HR
On Mon, Sep 3, 2018 at 12:42 PM, Krishna Verma <kverma at cadence.com> wrote:
> Hi Kotresh,
>
>
>
> Please find the log files attached.
>
>
>
> Request you to please have a look.
>
>
>
> /Krishna
>
>
>
>
>
>
>
> *From:* Kotresh Hiremath Ravishankar <khiremat at redhat.com>
> *Sent:* Monday, September 3, 2018 10:19 AM
>
> *To:* Krishna Verma <kverma at cadence.com>
> *Cc:* Sunny Kumar <sunkumar at redhat.com>; Gluster Users <
> gluster-users at gluster.org>
> *Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> EXTERNAL MAIL
>
> Hi Krishna,
>
> Indexing is the feature used by the hybrid crawl, which only makes the crawl
> faster. It has nothing to do with the missing data sync.
>
> Could you please share the complete log file of the session where the
> issue is encountered ?
>
> Thanks,
>
> Kotresh HR
>
>
>
> On Mon, Sep 3, 2018 at 9:33 AM, Krishna Verma <kverma at cadence.com> wrote:
>
> Hi Kotresh/Support,
>
>
>
> Request your help to get this fixed. My slave is not syncing with the master.
> Only when I turn indexing off and restart the session does the file show up
> at the slave, and even then it is blank, with zero size.
>
>
>
> At master: file size is 5.8 GB.
>
>
>
> [root at gluster-poc-noida distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
>
> 5.8G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
>
> [root at gluster-poc-noida distvol]#
>
>
>
> But at the slave, after turning indexing off, restarting the session, and
> waiting for 2 days, it shows only 4.9 GB copied.
>
>
>
> [root at gluster-poc-sj distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
>
> 4.9G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
>
> [root at gluster-poc-sj distvol]#
>
>
>
> Similarly, I tested with a smaller file of only 1.2 GB; it is still showing
> size “0” at the slave after days of waiting.
>
>
>
> At Master:
>
>
>
> [root at gluster-poc-noida distvol]# du -sh rflowTestInt18.08-b001.t.Z
>
> 1.2G rflowTestInt18.08-b001.t.Z
>
> [root at gluster-poc-noida distvol]#
>
>
>
> At Slave:
>
>
>
> [root at gluster-poc-sj distvol]# du -sh rflowTestInt18.08-b001.t.Z
>
> 0 rflowTestInt18.08-b001.t.Z
>
> [root at gluster-poc-sj distvol]#
>
>
>
> Below is my distributed volume info:
>
>
>
> [root at gluster-poc-noida distvol]# gluster volume info glusterdist
>
>
>
> Volume Name: glusterdist
>
> Type: Distribute
>
> Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
>
> Status: Started
>
> Snapshot Count: 0
>
> Number of Bricks: 2
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: gluster-poc-noida:/data/gluster-dist/distvol
>
> Brick2: noi-poc-gluster:/data/gluster-dist/distvol
>
> Options Reconfigured:
>
> changelog.changelog: on
>
> geo-replication.ignore-pid-check: on
>
> geo-replication.indexing: on
>
> transport.address-family: inet
>
> nfs.disable: on
>
> [root at gluster-poc-noida distvol]#
>
>
>
> Please help to fix this; I believe this is not normal gluster rsync behavior.
>
>
>
> /Krishna
>
> *From:* Krishna Verma
> *Sent:* Friday, August 31, 2018 12:42 PM
> *To:* 'Kotresh Hiremath Ravishankar' <khiremat at redhat.com>
> *Cc:* Sunny Kumar <sunkumar at redhat.com>; Gluster Users <
> gluster-users at gluster.org>
> *Subject:* RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> Hi Kotresh,
>
>
>
> I have tested geo-replication over distributed volumes with a 2*2 gluster
> setup.
>
>
>
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status
>
>
>
> MASTER NODE          MASTER VOL     MASTER BRICK                  SLAVE USER    SLAVE                          SLAVE NODE         STATUS    CRAWL STATUS       LAST_SYNCED
> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------
> gluster-poc-noida    glusterdist    /data/gluster-dist/distvol    root          gluster-poc-sj::glusterdist    gluster-poc-sj     Active    Changelog Crawl    2018-08-31 10:28:19
> noi-poc-gluster      glusterdist    /data/gluster-dist/distvol    root          gluster-poc-sj::glusterdist    gluster-poc-sj2    Active    History Crawl      N/A
>
> [root at gluster-poc-noida ~]#
>
>
>
> Now at the client I copied an 848 MB file from local disk to the master
> mounted volume and it took only 1 minute and 15 seconds. That's great….
>
>
>
> But even after waiting for 2 hrs I was unable to see that file at the slave
> site. Then I again erased the indexing by running “gluster volume set
> glusterdist indexing off” and restarted the session. Magically I received
> the file at the slave instantly after doing this.
>
>
>
> Why do I need to set “indexing off” every time for data to appear at the
> slave site? Is there any fix/workaround for it?
>
>
>
> /Krishna
>
>
>
>
>
> *From:* Kotresh Hiremath Ravishankar <khiremat at redhat.com>
> *Sent:* Friday, August 31, 2018 10:10 AM
> *To:* Krishna Verma <kverma at cadence.com>
> *Cc:* Sunny Kumar <sunkumar at redhat.com>; Gluster Users <
> gluster-users at gluster.org>
> *Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> EXTERNAL MAIL
>
>
>
>
>
> On Thu, Aug 30, 2018 at 3:51 PM, Krishna Verma <kverma at cadence.com> wrote:
>
> Hi Kotresh,
>
>
>
> Yes, this includes the time taken to write the 1GB file to the master.
> Geo-rep was not stopped while the data was being copied to the master.
>
>
>
> This way, you can't really measure how much time geo-rep took.
>
>
>
>
>
> But now I am in trouble. My PuTTY session timed out while data was copying
> to the master and geo-replication was active. After I restarted the PuTTY
> session, my master data is not syncing with the slave. Its LAST_SYNCED time
> is 1 hr behind the current time.
>
>
>
> I restarted geo-rep and also deleted and re-created the session, but its
> “LAST_SYNCED” time stays the same.
>
>
>
> Unless geo-rep is Faulty, it would be processing/syncing. You should check
> the logs for any errors.
>
>
>
>
>
> Please help in this.
>
>
>
> …. It's better if the gluster volume has a higher distribute count like 3*3
> or 4*3 :- Are you referring to creating a distributed volume with 3 master
> nodes and 3 slave nodes?
>
>
>
> Yes, that's correct. Please do the test with this. I recommend running the
> actual workload for which you are planning to use gluster instead of
> copying a 1GB file and testing.
>
>
>
>
>
>
>
> /krishna
>
>
>
> *From:* Kotresh Hiremath Ravishankar <khiremat at redhat.com>
> *Sent:* Thursday, August 30, 2018 3:20 PM
>
>
> *To:* Krishna Verma <kverma at cadence.com>
> *Cc:* Sunny Kumar <sunkumar at redhat.com>; Gluster Users <
> gluster-users at gluster.org>
> *Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> EXTERNAL MAIL
>
>
>
>
>
> On Thu, Aug 30, 2018 at 1:52 PM, Krishna Verma <kverma at cadence.com> wrote:
>
> Hi Kotresh,
>
>
>
> After fixing the library link on node "noi-poc-gluster", the status of one
> master node is “Active” and the other is “Passive”. Can I set up both
> masters as “Active”?
>
>
>
> Nope, since it's a replica, it's redundant to sync the same files from two
> nodes. Both replicas can't be Active.
>
>
>
>
>
> Also, when I copy a 1GB file from the gluster client to the master gluster
> volume which is replicated with the slave volume, it took 35 minutes and
> 49 seconds. Is there any way to reduce the time taken to rsync the data?
>
>
>
> How did you measure this time? Does this include the time taken for you to
> write the 1GB file to the master?
>
> There are two aspects to consider while measuring this.
>
>
>
> 1. Time to write 1GB to master
>
> 2. Time for geo-rep to transfer 1GB to slave.
>
>
>
> In your case, since the setup is 1*2 and only one geo-rep worker is
> Active, step 2 above equals the time for step 1 plus network transfer time.
>
>
>
> You can measure time in two scenarios
>
> 1. If geo-rep is started while the data is still being written to master.
> It's one way.
>
> 2. Or stop geo-rep until the 1GB file is written to master and then start
> geo-rep to get actual geo-rep time.
>
>
>
> To improve replicating speed,
>
> 1. You can play around with rsync options depending on the kind of I/O
>
> and configure the same for geo-rep as it also uses rsync internally.
>
> 2. It's better if gluster volume has more distribute count like 3*3 or 4*3
>
> It will help in two ways.
>
> 1. The files gets distributed on master to multiple bricks
>
> 2. So above will help geo-rep as files on multiple bricks are
> synced in parallel (multiple Actives)
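>
> For point 1 above, geo-rep exposes the rsync flags through the session
> config; a sketch against the session from this thread (the flag values are
> illustrative only, not a tested recommendation — verify the `rsync-options`
> key name against your gluster version's documentation):

```shell
# Set extra rsync flags for the geo-rep session used in this thread.
# Flag values are illustrative; tune them for your I/O pattern.
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist \
    config rsync-options "--compress --inplace"

# Show the currently configured value
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist \
    config rsync-options
```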
>
>
>
> NOTE: The gluster master server and one client are in Noida, India.
>
> The gluster slave server and one client are in the USA.
>
>
>
> Our goal is for data sent from the Noida gluster client to reach the USA
> gluster client in minimum time. Please suggest the best approach to
> achieve it.
>
>
>
> [root at noi-dcops ~]# date ; rsync -avh --progress /tmp/gentoo_root.img /glusterfs/ ; date
>
> Thu Aug 30 12:26:26 IST 2018
>
> sending incremental file list
>
> gentoo_root.img
>
> 1.07G 100% 490.70kB/s 0:35:36 (xfr#1, to-chk=0/1)
>
>
>
> Is this the I/O time to write to the master volume?
>
>
>
> sent 1.07G bytes received 35 bytes 499.65K bytes/sec
>
> total size is 1.07G speedup is 1.00
>
> Thu Aug 30 13:02:15 IST 2018
>
> [root at noi-dcops ~]#
>
>
>
>
>
>
>
> [root at gluster-poc-noida gluster]# gluster volume geo-replication status
>
>
>
> MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                              SLAVE NODE        STATUS     CRAWL STATUS       LAST_SYNCED
> -------------------------------------------------------------------------------------------------------------------------------------------------------------------
> gluster-poc-noida    glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Active     Changelog Crawl    2018-08-30 13:42:18
> noi-poc-gluster      glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Passive    N/A                N/A
>
> [root at gluster-poc-noida gluster]#
>
>
>
> Thanks in advance for your all time support.
>
>
>
> /Krishna
>
>
>
> *From:* Kotresh Hiremath Ravishankar <khiremat at redhat.com>
> *Sent:* Thursday, August 30, 2018 10:51 AM
>
>
> *To:* Krishna Verma <kverma at cadence.com>
> *Cc:* Sunny Kumar <sunkumar at redhat.com>; Gluster Users <
> gluster-users at gluster.org>
> *Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> EXTERNAL MAIL
>
> Did you fix the library link on node "noi-poc-gluster" as well?
> If not, please fix it. Please share the geo-rep log from this node if it's
> a different issue.
>
> -Kotresh HR
>
>
>
> On Thu, Aug 30, 2018 at 12:17 AM, Krishna Verma <kverma at cadence.com>
> wrote:
>
> Hi Kotresh,
>
>
>
> Thank you so much for your input. Geo-replication is now showing “Active”
> at least for 1 master node. But it is still in a Faulty state for the 2nd
> master server.
>
>
>
> Below is the detail.
>
>
>
> [root at gluster-poc-noida glusterfs]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status
>
>
>
> MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE        STATUS    CRAWL STATUS       LAST_SYNCED
> ------------------------------------------------------------------------------------------------------------------------------------------------------------
> gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    gluster-poc-sj    Active    Changelog Crawl    2018-08-29 23:56:06
> noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A               Faulty    N/A                N/A
>
>
>
>
>
> [root at gluster-poc-noida glusterfs]# gluster volume status
>
> Status of volume: glusterep
>
> Gluster process                              TCP Port    RDMA Port    Online    Pid
> -----------------------------------------------------------------------------------
> Brick gluster-poc-noida:/data/gluster/gv0    49152       0            Y         22463
> Brick noi-poc-gluster:/data/gluster/gv0      49152       0            Y         19471
> Self-heal Daemon on localhost                N/A         N/A          Y         32087
> Self-heal Daemon on noi-poc-gluster          N/A         N/A          Y         6272
>
> Task Status of Volume glusterep
> -----------------------------------------------------------------------------------
> There are no active volume tasks
>
>
>
>
>
>
>
> [root at gluster-poc-noida glusterfs]# gluster volume info
>
>
>
> Volume Name: glusterep
>
> Type: Replicate
>
> Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
>
> Status: Started
>
> Snapshot Count: 0
>
> Number of Bricks: 1 x 2 = 2
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: gluster-poc-noida:/data/gluster/gv0
>
> Brick2: noi-poc-gluster:/data/gluster/gv0
>
> Options Reconfigured:
>
> transport.address-family: inet
>
> nfs.disable: on
>
> performance.client-io-threads: off
>
> geo-replication.indexing: on
>
> geo-replication.ignore-pid-check: on
>
> changelog.changelog: on
>
> [root at gluster-poc-noida glusterfs]#
>
>
>
> Could you please help me with that also?
>
>
>
> It would be really a great help from your side.
>
>
>
> /Krishna
>
> *From:* Kotresh Hiremath Ravishankar <khiremat at redhat.com>
> *Sent:* Wednesday, August 29, 2018 10:47 AM
>
>
> *To:* Krishna Verma <kverma at cadence.com>
> *Cc:* Sunny Kumar <sunkumar at redhat.com>; Gluster Users <
> gluster-users at gluster.org>
> *Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> EXTERNAL MAIL
>
> Answer inline
>
>
>
> On Tue, Aug 28, 2018 at 4:28 PM, Krishna Verma <kverma at cadence.com> wrote:
>
> Hi Kotresh,
>
>
>
> I created the links before. Below are the details.
>
>
>
> [root at gluster-poc-noida ~]# ls -l /usr/lib64 | grep libgfch
>
> lrwxrwxrwx 1 root root 30 Aug 28 14:59 libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
>
>
>
> The link created points to the wrong library. Please fix it:
>
>
>
> #cd /usr/lib64
>
> #rm libgfchangelog.so
>
> #ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so
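>
> The commands above can be rehearsed in a scratch directory first; a sketch
> (the real target directory is /usr/lib64, and the empty file below stands
> in for the packaged library):

```python
import os
import tempfile

# Rehearse the symlink fix in a scratch directory (real target: /usr/lib64).
libdir = tempfile.mkdtemp()

# Stand-in for the real, versioned library file shipped by the package.
open(os.path.join(libdir, "libgfchangelog.so.0.0.1"), "w").close()

# Create the unversioned development-name link pointing at the real file,
# mirroring: ln -s libgfchangelog.so.0.0.1 libgfchangelog.so
os.symlink("libgfchangelog.so.0.0.1",
           os.path.join(libdir, "libgfchangelog.so"))

print(os.readlink(os.path.join(libdir, "libgfchangelog.so")))
# -> libgfchangelog.so.0.0.1
```

On the real system, follow this with `ldconfig` so the loader cache is refreshed.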
>
>
>
> lrwxrwxrwx 1 root root 23 Aug 23 23:35 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
>
> -rwxr-xr-x 1 root root 63384 Jul 24 19:11 libgfchangelog.so.0.0.1
>
> [root at gluster-poc-noida ~]# locate libgfchangelog.so
>
> /usr/lib64/libgfchangelog.so.0
>
> /usr/lib64/libgfchangelog.so.0.0.1
>
> [root at gluster-poc-noida ~]#
>
>
>
> Does it look like what we need, or do I need to create any more links? And
> how do I get the “libgfchangelog.so” file if it is missing?
>
>
>
> /Krishna
>
>
>
> *From:* Kotresh Hiremath Ravishankar <khiremat at redhat.com>
> *Sent:* Tuesday, August 28, 2018 4:22 PM
> *To:* Krishna Verma <kverma at cadence.com>
> *Cc:* Sunny Kumar <sunkumar at redhat.com>; Gluster Users <
> gluster-users at gluster.org>
>
>
> *Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> EXTERNAL MAIL
>
> Hi Krishna,
>
> As per the output shared, I don't see the file "libgfchangelog.so" which
> is what is required.
>
> I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is
> present in "/usr/lib64/".
>
> If not, create a symlink similar to "libgfchangelog.so.0".
>
>
>
> It should be something like below.
>
>
>
> #ls -l /usr/lib64 | grep libgfch
> -rwxr-xr-x. 1 root root   1078 Aug 28 05:56 libgfchangelog.la
> lrwxrwxrwx. 1 root root     23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
> lrwxrwxrwx. 1 root root     23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
> -rwxr-xr-x. 1 root root 336888 Aug 28 05:56 libgfchangelog.so.0.0.1
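>
> For context on why the bare name matters: the geo-rep worker loads the
> changelog library through Python's ctypes by name, so the dynamic loader
> must be able to resolve it. A minimal sketch of the failure mode (the
> library name below is a stand-in for whatever name the agent requests):

```python
import ctypes

# The geo-rep changelog agent loads the library via ctypes by name; if the
# dynamic loader cannot resolve that name, the OSError seen in gsyncd.log
# ("cannot open shared object file") is raised.
try:
    ctypes.CDLL("libgfchangelog.so")
    print("library resolved")
except OSError as err:
    print("load failed:", err)
```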
>
>
>
> On Tue, Aug 28, 2018 at 4:04 PM, Krishna Verma <kverma at cadence.com> wrote:
>
> Hi Kotresh,
>
>
>
> Thanks for the response, I did that also but nothing changed.
>
>
>
> [root at gluster-poc-noida ~]# ldconfig /usr/lib64
>
> [root at gluster-poc-noida ~]# ldconfig -p | grep libgfchangelog
>
>         libgfchangelog.so.0 (libc6,x86-64) => /usr/lib64/libgfchangelog.so.0
>
> [root at gluster-poc-noida ~]#
>
>
>
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
> Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
>
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
> Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
>
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status
>
>
>
> MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
> ------------------------------------------------------------------------------------------------------------------------------------------------------
> gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
> noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
>
> [root at gluster-poc-noida ~]#
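>
> When scripting around this, the status table can be scanned for Faulty
> workers; a minimal, naive sketch (assumes the whitespace-separated layout
> shown above, which breaks if a field ever contains spaces):

```python
def faulty_workers(status_text):
    """Return (master node, master brick) pairs whose worker is Faulty.

    Naive parse of `gluster volume geo-replication ... status` output:
    each data row is split on whitespace, as in the listing above.
    """
    faulty = []
    for line in status_text.splitlines():
        parts = line.split()
        if "Faulty" in parts:
            faulty.append((parts[0], parts[2]))
    return faulty

sample = """\
gluster-poc-noida    glusterep    /data/gluster/gv0    root    gluster-poc-sj::glusterep    N/A    Faulty    N/A    N/A
noi-poc-gluster      glusterep    /data/gluster/gv0    root    gluster-poc-sj::glusterep    N/A    Faulty    N/A    N/A"""

print(faulty_workers(sample))
# -> [('gluster-poc-noida', '/data/gluster/gv0'), ('noi-poc-gluster', '/data/gluster/gv0')]
```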
>
>
>
> /Krishna
>
>
>
> *From:* Kotresh Hiremath Ravishankar <khiremat at redhat.com>
> *Sent:* Tuesday, August 28, 2018 4:00 PM
> *To:* Sunny Kumar <sunkumar at redhat.com>
> *Cc:* Krishna Verma <kverma at cadence.com>; Gluster Users <
> gluster-users at gluster.org>
>
>
> *Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> EXTERNAL MAIL
>
> Hi Krishna,
>
> Since your libraries are in /usr/lib64, you should be doing
>
> #ldconfig /usr/lib64
>
> Confirm that below command lists the library
>
> #ldconfig -p | grep libgfchangelog
>
>
>
>
>
> On Tue, Aug 28, 2018 at 3:52 PM, Sunny Kumar <sunkumar at redhat.com> wrote:
>
> can you do ldconfig /usr/local/lib and share the output of ldconfig -p
> /usr/local/lib | grep libgf
>
> On Tue, Aug 28, 2018 at 3:45 PM Krishna Verma <kverma at cadence.com> wrote:
> >
> > Hi Sunny,
> >
> > I made the changes given in the patch and restarted the geo-replication session. But the same errors appear in the logs again.
> >
> > I am attaching the config files and logs here.
> >
> >
> > [root at gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
> > Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
> > [root at gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep delete
> > Deleting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
> > [root at gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep create push-pem force
> > Creating geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
> > [root at gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
> > geo-replication start failed for glusterep gluster-poc-sj::glusterep
> > geo-replication command failed
> > [root at gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
> > geo-replication start failed for glusterep gluster-poc-sj::glusterep
> > geo-replication command failed
> > [root at gluster-poc-noida ~]# vim /usr/libexec/glusterfs/python/syncdaemon/repce.py
> > [root at gluster-poc-noida ~]# systemctl restart glusterd
> > [root at gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
> > Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
> > [root at gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status
> >
> > MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
> > ----------------------------------------------------------------------------------------------------------------------------------------------------
> > gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
> > noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
> > [root at gluster-poc-noida ~]#
> >
> >
> > /Krishna.
> >
> > -----Original Message-----
> > From: Sunny Kumar <sunkumar at redhat.com>
> > Sent: Tuesday, August 28, 2018 3:17 PM
> > To: Krishna Verma <kverma at cadence.com>
> > Cc: gluster-users at gluster.org
> > Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
> >
> > EXTERNAL MAIL
> >
> >
> > With same log message ?
> >
> > Can you please verify that the patch
> > https://review.gluster.org/#/c/glusterfs/+/20207/ is present; if not,
> > can you please apply it,
> > and try symlinking: ln -s /usr/lib64/libgfchangelog.so.0 /usr/lib64/libgfchangelog.so.
> >
> > Please share the log also.
> >
> > Regards,
> > Sunny
> > On Tue, Aug 28, 2018 at 3:02 PM Krishna Verma <kverma at cadence.com>
> wrote:
> > >
> > > Hi Sunny,
> > >
> > > Thanks for your response, I tried both, but still I am getting the
> same error.
> > >
> > >
> > > [root at noi-poc-gluster ~]# ldconfig /usr/lib
> > > [root at noi-poc-gluster ~]# ln -s /usr/lib64/libgfchangelog.so.1 /usr/lib64/libgfchangelog.so
> > > [root at noi-poc-gluster ~]# ls -l /usr/lib64/libgfchangelog.so
> > > lrwxrwxrwx. 1 root root 30 Aug 28 14:59 /usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
> > >
> > > /Krishna
> > >
> > > -----Original Message-----
> > > From: Sunny Kumar <sunkumar at redhat.com>
> > > Sent: Tuesday, August 28, 2018 2:55 PM
> > > To: Krishna Verma <kverma at cadence.com>
> > > Cc: gluster-users at gluster.org
> > > Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> > > work
> > >
> > > EXTERNAL MAIL
> > >
> > >
> > > Hi Krish,
> > >
> > > You can run -
> > > #ldconfig /usr/lib
> > >
> > > If that still does not solves your problem you can do manual symlink
> > > like - ln -s /usr/lib64/libgfchangelog.so.1
> > > /usr/lib64/libgfchangelog.so
> > >
> > > Thanks,
> > > Sunny Kumar
> > > On Tue, Aug 28, 2018 at 1:47 PM Krishna Verma <kverma at cadence.com>
> wrote:
> > > >
> > > > Hi
> > > >
> > > >
> > > >
> > > > I am getting below error in gsyncd.log
> > > >
> > > >
> > > >
> > > > OSError: libgfchangelog.so: cannot open shared object file: No such file or directory
> > > >
> > > > [2018-08-28 07:19:41.446785] E [repce(worker /data/gluster/gv0):197:__call__] RepceClient: call failed call=26469:139794524604224:1535440781.44 method=init error=OSError
> > > >
> > > > [2018-08-28 07:19:41.447041] E [syncdutils(worker /data/gluster/gv0):330:log_raise_exception] <top>: FAIL:
> > > >
> > > > Traceback (most recent call last):
> > > >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
> > > >     func(args)
> > > >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 72, in subcmd_worker
> > > >     local.service_loop(remote)
> > > >   File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1236, in service_loop
> > > >     changelog_agent.init()
> > > >   File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 216, in __call__
> > > >     return self.ins(self.meth, *a)
> > > >   File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 198, in __call__
> > > >     raise res
> > > > OSError: libgfchangelog.so: cannot open shared object file: No such file or directory
> > > >
> > > > [2018-08-28 07:19:41.457555] I [repce(agent /data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching EOF.
> > > >
> > > > [2018-08-28 07:19:42.440184] I [monitor(monitor):272:monitor] Monitor: worker died in startup phase brick=/data/gluster/gv0
> > > >
> > > >
> > > >
> > > > Below are my file locations:
> > > >
> > > >
> > > >
> > > > /usr/lib64/libgfchangelog.so.0
> > > >
> > > > /usr/lib64/libgfchangelog.so.0.0.1
> > > >
> > > >
> > > >
> > > > What can I do to fix it?
> > > >
> > > >
> > > >
> > > > /Krish
> > > >
> > > > _______________________________________________
> > > > Gluster-users mailing list
> > > > Gluster-users at gluster.org
> > > > https://lists.gluster.org/mailman/listinfo/gluster-users
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
>
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
>
> --
>
> Thanks and Regards,
>
> Kotresh H R
--
Thanks and Regards,
Kotresh H R