[Gluster-users] Upgrade to 4.1.2 geo-replication does not work
Krishna Verma
kverma at cadence.com
Thu Sep 6 03:38:48 UTC 2018
Hi Kotresh,
Did you get a chance to look into this?
For the replicated gluster volume, the master is still not syncing with the slave.
At Master:
[root at gluster-poc-noida ~]# du -sh /repvol/rflowTestInt18.08-b001.t.Z
1.2G /repvol/rflowTestInt18.08-b001.t.Z
[root at gluster-poc-noida ~]#
At Slave:
[root at gluster-poc-sj ~]# du -sh /repvol/rflowTestInt18.08-b001.t.Z
du: cannot access ‘/repvol/rflowTestInt18.08-b001.t.Z’: No such file or directory
[root at gluster-poc-sj ~]#
The file has not reached the slave.
/Krishna
From: Krishna Verma
Sent: Monday, September 3, 2018 4:41 PM
To: 'Kotresh Hiremath Ravishankar' <khiremat at redhat.com>
Cc: Sunny Kumar <sunkumar at redhat.com>; Gluster Users <gluster-users at gluster.org>
Subject: RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
Hi Kotresh,
Gluster Master Site Servers : gluster-poc-noida and noi-poc-gluster
Gluster Slave site servers: gluster-poc-sj and gluster-poc-sj2
Master Client : noi-foreman02
Slave Client: sj-kverma
Step 1: Created a 10 GB LVM partition on all 4 Gluster nodes (2 master + 2 slave), formatted it as ext4, and mounted it on each server.
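For reference, the commands for this step were along the lines below (the VG and LV names are inferred from the device path in the df output that follows and may differ):
lvcreate -L 10G -n gluster-vol-dist centos
mkfs.ext4 /dev/centos/gluster-vol-dist
mkdir -p /data/gluster-dist
mount /dev/centos/gluster-vol-dist /data/gluster-dist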
[root at gluster-poc-noida distvol]# df -hT /data/gluster-dist
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-gluster--vol--dist ext4 9.8G 847M 8.4G 9% /data/gluster-dist
[root at gluster-poc-noida distvol]#
Step 2: Created the trusted storage pools as below:
At Master:
[root at gluster-poc-noida distvol]# gluster peer status
Number of Peers: 1
Hostname: noi-poc-gluster
Uuid: 01316459-b5c8-461d-ad25-acc17a82e78f
State: Peer in Cluster (Connected)
[root at gluster-poc-noida distvol]#
At Slave:
[root at gluster-poc-sj ~]# gluster peer status
Number of Peers: 1
Hostname: gluster-poc-sj2
Uuid: 6ba85bfe-cd74-4a76-a623-db687f7136fa
State: Peer in Cluster (Connected)
[root at gluster-poc-sj ~]#
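For completeness, these pools would have been formed with one peer probe per site, for example:
At Master: gluster peer probe noi-poc-gluster
At Slave: gluster peer probe gluster-poc-sj2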
Step 3: Created the distributed volumes as below:
At Master: “gluster volume create glusterdist gluster-poc-noida:/data/gluster-dist/distvol noi-poc-gluster:/data/gluster-dist/distvol”
[root at gluster-poc-noida distvol]# gluster volume info glusterdist
Volume Name: glusterdist
Type: Distribute
Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster-dist/distvol
Brick2: noi-poc-gluster:/data/gluster-dist/distvol
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
nfs.disable: on
[root at gluster-poc-noida distvol]#
At Slave: “gluster volume create glusterdist gluster-poc-sj:/data/gluster-dist/distvol gluster-poc-sj2:/data/gluster-dist/distvol”
Volume Name: glusterdist
Type: Distribute
Volume ID: a982da53-a3d7-4b5a-be77-df85f584610d
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-sj:/data/gluster-dist/distvol
Brick2: gluster-poc-sj2:/data/gluster-dist/distvol
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
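Each volume was then started (a step not captured above) with:
gluster volume start glusterdist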
Step 4: Gluster geo-replication configuration
On all Gluster nodes: “yum install glusterfs-geo-replication.x86_64”
On the master node where I created the session:
ssh-keygen
ssh-copy-id root at gluster-poc-sj
cp /root/.ssh/id_rsa.pub /var/lib/glusterd/geo-replication/secret.pem.pub
scp /var/lib/glusterd/geo-replication/secret.pem* root at gluster-poc-sj:/var/lib/glusterd/geo-replication/
On Slave Node:
ln -s /usr/libexec/glusterfs/gsyncd /nonexistent/gsyncd
On Master Node:
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist create push-pem force
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist start
[root at gluster-poc-noida distvol]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj Active Changelog Crawl 2018-08-31 13:12:58
noi-poc-gluster glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj2 Active History Crawl N/A
[root at gluster-poc-noida distvol]#
On the Gluster client node at the master site:
yum install -y glusterfs-client
mkdir /distvol
mount -t glusterfs gluster-poc-noida:/glusterdist /distvol
[root at noi-foreman02 ~]# df -hT /distvol
Filesystem Type Size Used Avail Use% Mounted on
gluster-poc-noida:/glusterdist fuse.glusterfs 20G 9.6G 9.1G 52% /distvol
[root at noi-foreman02 ~]#
On the Gluster client node at the slave site:
yum install -y glusterfs-client
mkdir /distvol
mount -t glusterfs gluster-poc-sj:/glusterdist /distvol
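Optionally, to make these client mounts persist across reboots, an /etc/fstab entry along these lines could be used (a sketch, not one of the steps performed above):
gluster-poc-sj:/glusterdist /distvol glusterfs defaults,_netdev 0 0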
Now, to test the geo-replication setup:
I copied the file below from the client at the master site:
[root at noi-foreman02 distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
5.8G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[root at noi-foreman02 distvol]#
But over the last three days, only 5.4 GB of it has synced to the slave:
[root at sj-kverma distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
5.4G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[root at sj-kverma distvol]#
I also tested with another file of only 1 GB copied from the master client, and it still shows a size of 0 at the slave client after 3 days.
/Krishna
From: Kotresh Hiremath Ravishankar <khiremat at redhat.com>
Sent: Monday, September 3, 2018 3:17 PM
To: Krishna Verma <kverma at cadence.com>
Cc: Sunny Kumar <sunkumar at redhat.com>; Gluster Users <gluster-users at gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
Hi Krishna,
I see no errors in the shared logs. The only error messages I see are during geo-rep stop, which is expected.
Could you share the steps you used to create the geo-rep setup?
Thanks,
Kotresh HR
On Mon, Sep 3, 2018 at 1:02 PM, Krishna Verma <kverma at cadence.com> wrote:
Hi Kotresh,
Below is the output of the gsyncd.log file generated on my master server.
I am using version 4.1.3 on all my gluster nodes.
[root at gluster-poc-noida distvol]# gluster --version | grep glusterfs
glusterfs 4.1.3
[root at gluster-poc-noida distvol]# cat /var/log/glusterfs/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.log
[2018-09-03 04:01:52.424609] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 04:01:52.526323] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:41.326411] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:49.676120] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:50.406042] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:56:52.847537] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:03.778448] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:25.86958] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:25.855273] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:58:09.294239] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:59:39.255487] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:59:39.355753] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:00:26.311767] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:03:29.205226] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:03:30.131258] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:10:34.679677] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:10:35.653928] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:26:24.438854] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:26:25.495117] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.159113] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.216475] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.932451] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.988286] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.992789] E [syncdutils(worker /data/gluster-dist/distvol):305:log_raise_exception] <top>: connection to peer is broken
[2018-09-03 07:27:26.994750] E [syncdutils(worker /data/gluster-dist/distvol):801:errlog] Popen: command returned error cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-X8iHv1/86bcbaf188167a3859c3267081671312.sock gluster-poc-sj /nonexistent/gsyncd slave glusterdist gluster-poc-sj::glusterdist --master-node gluster-poc-noida --master-node-id 098c16c6-8dff-490a-a2e8-c8cb328fcbb3 --master-brick /data/gluster-dist/distvol --local-node gluster-poc-sj --local-node-id e54f2759-4c56-40dd-89e1-e10c3037d48b --slave-timeout 120 --slave-log-level INFO --slave-gluster-log-level INFO --slave-gluster-command-dir /usr/local/sbin/ error=255
[2018-09-03 07:27:26.994971] E [syncdutils(worker /data/gluster-dist/distvol):805:logerr] Popen: ssh> Killed by signal 15.
[2018-09-03 07:27:27.7174] I [repce(agent /data/gluster-dist/distvol):80:service_loop] RepceServer: terminating on reaching EOF.
[2018-09-03 07:27:27.15156] I [gsyncdstatus(monitor):244:set_worker_status] GeorepStatus: Worker Status Change status=Faulty
[2018-09-03 07:27:28.52725] I [gsyncd(monitor-status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:28.64521] I [subcmds(monitor-status):19:subcmd_monitor_status] <top>: Monitor Status Change status=Stopped
[2018-09-03 07:27:35.345937] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:35.444247] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:36.181122] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:36.281459] I [gsyncd(monitor):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:39.782480] I [gsyncdstatus(monitor):244:set_worker_status] GeorepStatus: Worker Status Change status=Initializing...
[2018-09-03 07:27:40.321157] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker brick=/data/gluster-dist/distvol slave_node=gluster-poc-sj
[2018-09-03 07:27:40.376172] I [gsyncd(agent /data/gluster-dist/distvol):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:40.377144] I [changelogagent(agent /data/gluster-dist/distvol):72:__init__] ChangelogAgent: Agent listining...
[2018-09-03 07:27:40.378150] I [gsyncd(worker /data/gluster-dist/distvol):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:40.391185] I [resource(worker /data/gluster-dist/distvol):1377:connect_remote] SSH: Initializing SSH connection between master and slave...
[2018-09-03 07:27:43.752819] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:43.848619] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:45.365627] I [resource(worker /data/gluster-dist/distvol):1424:connect_remote] SSH: SSH connection between master and slave established. duration=4.9743
[2018-09-03 07:27:45.365866] I [resource(worker /data/gluster-dist/distvol):1096:connect] GLUSTER: Mounting gluster volume locally...
[2018-09-03 07:27:46.388974] I [resource(worker /data/gluster-dist/distvol):1119:connect] GLUSTER: Mounted gluster volume duration=1.0230
[2018-09-03 07:27:46.389206] I [subcmds(worker /data/gluster-dist/distvol):70:subcmd_worker] <top>: Worker spawn successful. Acknowledging back to monitor
[2018-09-03 07:27:48.401196] I [master(worker /data/gluster-dist/distvol):1593:register] _GMaster: Working dir path=/var/lib/misc/gluster/gsyncd/glusterdist_gluster-poc-sj_glusterdist/data-gluster-dist-distvol
[2018-09-03 07:27:48.401477] I [resource(worker /data/gluster-dist/distvol):1282:service_loop] GLUSTER: Register time time=1535959668
[2018-09-03 07:27:49.176095] I [gsyncdstatus(worker /data/gluster-dist/distvol):277:set_active] GeorepStatus: Worker Status Change status=Active
[2018-09-03 07:27:49.177079] I [gsyncdstatus(worker /data/gluster-dist/distvol):249:set_worker_crawl_status] GeorepStatus: Crawl Status Change status=History Crawl
[2018-09-03 07:27:49.177339] I [master(worker /data/gluster-dist/distvol):1507:crawl] _GMaster: starting history crawl turns=1 stime=(1535701378, 0) entry_stime=(1535701378, 0) etime=1535959669
[2018-09-03 07:27:50.179210] I [master(worker /data/gluster-dist/distvol):1536:crawl] _GMaster: slave's time stime=(1535701378, 0)
[2018-09-03 07:27:51.300096] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:51.399027] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:52.510271] I [master(worker /data/gluster-dist/distvol):1944:syncjob] Syncer: Sync Time Taken duration=1.6146 num_files=1 job=2 return_code=0
[2018-09-03 07:27:52.514487] I [master(worker /data/gluster-dist/distvol):1374:process] _GMaster: Entry Time Taken MKD=0 MKN=0 LIN=0 SYM=0 REN=1 RMD=0 CRE=0 duration=0.2745 UNL=0
[2018-09-03 07:27:52.514615] I [master(worker /data/gluster-dist/distvol):1384:process] _GMaster: Data/Metadata Time Taken SETA=1 SETX=0 meta_duration=0.2691 data_duration=1.7883 DATA=1 XATT=0
[2018-09-03 07:27:52.514844] I [master(worker /data/gluster-dist/distvol):1394:process] _GMaster: Batch Completed changelog_end=1535701379entry_stime=(1535701378, 0) changelog_start=1535701379 stime=(1535701378, 0) duration=2.3353 num_changelogs=1 mode=history_changelog
[2018-09-03 07:27:52.515224] I [master(worker /data/gluster-dist/distvol):1552:crawl] _GMaster: finished history crawl endtime=1535959662 stime=(1535701378, 0) entry_stime=(1535701378, 0)
[2018-09-03 07:28:01.706876] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:28:01.803858] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:28:03.521949] I [master(worker /data/gluster-dist/distvol):1507:crawl] _GMaster: starting history crawl turns=2 stime=(1535701378, 0) entry_stime=(1535701378, 0) etime=1535959683
[2018-09-03 07:28:03.523086] I [master(worker /data/gluster-dist/distvol):1552:crawl] _GMaster: finished history crawl endtime=1535959677 stime=(1535701378, 0) entry_stime=(1535701378, 0)
[2018-09-03 07:28:04.62274] I [gsyncdstatus(worker /data/gluster-dist/distvol):249:set_worker_crawl_status] GeorepStatus: Crawl Status Change status=Changelog Crawl
[root at gluster-poc-noida distvol]#
From: Kotresh Hiremath Ravishankar <khiremat at redhat.com>
Sent: Monday, September 3, 2018 12:44 PM
To: Krishna Verma <kverma at cadence.com>
Cc: Sunny Kumar <sunkumar at redhat.com>; Gluster Users <gluster-users at gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
Hi Krishna,
The log is not complete. If you are re-trying, could you please try it out on 4.1.3 and share the logs?
Thanks,
Kotresh HR
On Mon, Sep 3, 2018 at 12:42 PM, Krishna Verma <kverma at cadence.com> wrote:
Hi Kotresh,
Please find the log files attached.
Request you to please have a look.
/Krishna
From: Kotresh Hiremath Ravishankar <khiremat at redhat.com>
Sent: Monday, September 3, 2018 10:19 AM
To: Krishna Verma <kverma at cadence.com>
Cc: Sunny Kumar <sunkumar at redhat.com>; Gluster Users <gluster-users at gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
Hi Krishna,
Indexing is the feature used by the hybrid crawl, which only makes the crawl faster. It has nothing to do with the missing data sync.
Could you please share the complete log file of the session where the issue is encountered?
Thanks,
Kotresh HR
On Mon, Sep 3, 2018 at 9:33 AM, Krishna Verma <kverma at cadence.com> wrote:
Hi Kotresh/Support,
Request your help to get this fixed. My slave is not syncing with the master. Only when I restart the session after turning indexing off does the file appear at the slave, and even then it is blank, with zero size.
At the master, the file size is 5.8 GB:
[root at gluster-poc-noida distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
5.8G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[root at gluster-poc-noida distvol]#
But at the slave, after turning indexing off, restarting the session, and waiting for 2 days, it shows only 4.9 GB copied:
[root at gluster-poc-sj distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
4.9G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[root at gluster-poc-sj distvol]#
Similarly, I tested with a small file of only 1.2 GB, which still shows a size of “0” at the slave after days of waiting.
At Master:
[root at gluster-poc-noida distvol]# du -sh rflowTestInt18.08-b001.t.Z
1.2G rflowTestInt18.08-b001.t.Z
[root at gluster-poc-noida distvol]#
At Slave:
[root at gluster-poc-sj distvol]# du -sh rflowTestInt18.08-b001.t.Z
0 rflowTestInt18.08-b001.t.Z
[root at gluster-poc-sj distvol]#
Below is my distributed volume info:
[root at gluster-poc-noida distvol]# gluster volume info glusterdist
Volume Name: glusterdist
Type: Distribute
Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster-dist/distvol
Brick2: noi-poc-gluster:/data/gluster-dist/distvol
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
nfs.disable: on
[root at gluster-poc-noida distvol]#
Please help to fix this; I believe this is not normal behavior for gluster's rsync.
/Krishna
From: Krishna Verma
Sent: Friday, August 31, 2018 12:42 PM
To: 'Kotresh Hiremath Ravishankar' <khiremat at redhat.com>
Cc: Sunny Kumar <sunkumar at redhat.com>; Gluster Users <gluster-users at gluster.org>
Subject: RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
Hi Kotresh,
I have tested geo-replication over distributed volumes with a 2*2 gluster setup.
[root at gluster-poc-noida ~]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj Active Changelog Crawl 2018-08-31 10:28:19
noi-poc-gluster glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj2 Active History Crawl N/A
[root at gluster-poc-noida ~]#
Now, at the client, I copied an 848 MB file from the local disk to the master mounted volume, and it took only 1 minute and 15 seconds. That's great.
But even after waiting for 2 hours, I was unable to see that file at the slave site. Then I again cleared the indexing with “gluster volume set glusterdist indexing off” and restarted the session. Magically, the file arrived at the slave instantly after I did this.
Why do I need to turn indexing off every time for data to appear at the slave site? Is there any fix or workaround for this? (The exact sequence I run is sketched below.)
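For reference, the workaround sequence described above is:
gluster volume set glusterdist indexing off
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist stop
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist start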
/Krishna
From: Kotresh Hiremath Ravishankar <khiremat at redhat.com>
Sent: Friday, August 31, 2018 10:10 AM
To: Krishna Verma <kverma at cadence.com>
Cc: Sunny Kumar <sunkumar at redhat.com>; Gluster Users <gluster-users at gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
On Thu, Aug 30, 2018 at 3:51 PM, Krishna Verma <kverma at cadence.com> wrote:
Hi Kotresh,
Yes, this includes the time taken to write the 1 GB file to the master. Geo-rep was not stopped while the data was being copied to the master.
This way, you can't really measure how much time geo-rep took.
But now I am in trouble. My PuTTY session timed out while data was copying to the master and geo-replication was active. After I restarted the PuTTY session, my master data is not syncing with the slave; its LAST_SYNCED time is 1 hour behind the current time.
I restarted geo-rep and also deleted and re-created the session, but its LAST_SYNCED time is the same.
Unless geo-rep is Faulty, it should be processing/syncing. You should check the logs for any errors.
Please help in this.
Regarding “It's better if the gluster volume has a higher distribute count, like 3*3 or 4*3”: are you referring to creating a distributed volume with 3 master nodes and 3 slave nodes?
Yes, that's correct. Please do the test with this. I recommend running the actual workload for which you are planning to use gluster instead of copying a 1 GB file and testing. (A sketch of such a volume layout is below.)
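For illustration, a 3*3 (distribute 3, replica 3) master volume would be created along these lines; the hostnames and brick paths here are hypothetical:
gluster volume create glustervol replica 3 \
    node1:/data/brick node2:/data/brick node3:/data/brick \
    node4:/data/brick node5:/data/brick node6:/data/brick \
    node7:/data/brick node8:/data/brick node9:/data/brick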
/krishna
From: Kotresh Hiremath Ravishankar <khiremat at redhat.com>
Sent: Thursday, August 30, 2018 3:20 PM
To: Krishna Verma <kverma at cadence.com>
Cc: Sunny Kumar <sunkumar at redhat.com>; Gluster Users <gluster-users at gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
On Thu, Aug 30, 2018 at 1:52 PM, Krishna Verma <kverma at cadence.com> wrote:
Hi Kotresh,
After fixing the library link on node "noi-poc-gluster", the status of one master node is “Active” and the other is “Passive”. Can I set up both masters as “Active”?
Nope. Since it's a replica, it would be redundant to sync the same files from two nodes, so both replicas can't be Active.
Also, when I copy a 1 GB file from the gluster client to the master gluster volume, which is geo-replicated to the slave volume, it took 35 minutes and 49 seconds. Is there any way to reduce the time taken to rsync the data?
How did you measure this time? Does this include the time taken for you to write the 1 GB file to the master?
There are two aspects to consider while measuring this.
1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.
In your case, since the setup is 1*2 and only one geo-rep worker is Active, step 2 above equals the time for step 1 plus the network transfer time.
You can measure time in two scenarios
1. Start geo-rep while the data is still being written to the master. That's one way.
2. Or keep geo-rep stopped until the 1 GB file is fully written to the master, then start geo-rep to measure the actual geo-rep time.
To improve replicating speed,
1. You can play around with rsync options, depending on the kind of I/O, and configure the same for geo-rep, as it also uses rsync internally (see the config sketch after this list).
2. It's better if the gluster volume has a higher distribute count, like 3*3 or 4*3.
This helps in two ways:
1. Files get distributed across multiple bricks on the master.
2. That in turn helps geo-rep, as files on multiple bricks are synced in parallel (multiple Actives).
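As an example of item 1, rsync options can be set per geo-rep session with the config command (the option shown is illustrative, not a recommendation):
gluster volume geo-replication glusterep gluster-poc-sj::glusterep config rsync-options "--compress"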
NOTE: The gluster master servers and one client are in Noida, India.
The gluster slave servers and one client are in the USA.
Our goal is for data written at the Noida gluster client to reach the USA gluster client in minimum time. Please suggest the best approach to achieve this.
[root at noi-dcops ~]# date ; rsync -avh --progress /tmp/gentoo_root.img /glusterfs/ ; date
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
1.07G 100% 490.70kB/s 0:35:36 (xfr#1, to-chk=0/1)
Is this the I/O time to write to the master volume?
sent 1.07G bytes received 35 bytes 499.65K bytes/sec
total size is 1.07G speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
[root at noi-dcops ~]#
[root at gluster-poc-noida gluster]# gluster volume geo-replication status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-30 13:42:18
noi-poc-gluster glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Passive N/A N/A
[root at gluster-poc-noida gluster]#
Thanks in advance for all your support.
/Krishna
From: Kotresh Hiremath Ravishankar <khiremat at redhat.com>
Sent: Thursday, August 30, 2018 10:51 AM
To: Krishna Verma <kverma at cadence.com>
Cc: Sunny Kumar <sunkumar at redhat.com>; Gluster Users <gluster-users at gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
Did you fix the library link on node "noi-poc-gluster" as well?
If not, please fix it. Please share the geo-rep log from this node if it's a different issue.
-Kotresh HR
On Thu, Aug 30, 2018 at 12:17 AM, Krishna Verma <kverma at cadence.com> wrote:
Hi Kotresh,
Thank you so much for your input. Geo-replication is now showing “Active” for at least one master node, but it is still in a Faulty state for the second master server.
Below are the details.
[root at gluster-poc-noida glusterfs]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-29 23:56:06
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
[root at gluster-poc-noida glusterfs]# gluster volume status
Status of volume: glusterep
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gluster-poc-noida:/data/gluster/gv0 49152 0 Y 22463
Brick noi-poc-gluster:/data/gluster/gv0 49152 0 Y 19471
Self-heal Daemon on localhost N/A N/A Y 32087
Self-heal Daemon on noi-poc-gluster N/A N/A Y 6272
Task Status of Volume glusterep
------------------------------------------------------------------------------
There are no active volume tasks
[root at gluster-poc-noida glusterfs]# gluster volume info
Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
[root at gluster-poc-noida glusterfs]#
Could you please help me with that as well?
It would be a great help.
/Krishna
From: Kotresh Hiremath Ravishankar <khiremat at redhat.com>
Sent: Wednesday, August 29, 2018 10:47 AM
To: Krishna Verma <kverma at cadence.com>
Cc: Sunny Kumar <sunkumar at redhat.com>; Gluster Users <gluster-users at gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
Answer inline
On Tue, Aug 28, 2018 at 4:28 PM, Krishna Verma <kverma at cadence.com> wrote:
Hi Kotresh,
I created the links before. Below is the detail.
[root at gluster-poc-noida ~]# ls -l /usr/lib64 | grep libgfch
lrwxrwxrwx 1 root root 30 Aug 28 14:59 libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
The link created is pointing to the wrong library. Please fix it:
#cd /usr/lib64
#rm libgfchangelog.so
#ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so
lrwxrwxrwx 1 root root 23 Aug 23 23:35 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x 1 root root 63384 Jul 24 19:11 libgfchangelog.so.0.0.1
[root at gluster-poc-noida ~]# locate libgfchangelog.so
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
[root at gluster-poc-noida ~]#
Does this look like what we need, or do I need to create more links? And how do I get the “libgfchangelog.so” file if it is missing?
/Krishna
From: Kotresh Hiremath Ravishankar <khiremat at redhat.com>
Sent: Tuesday, August 28, 2018 4:22 PM
To: Krishna Verma <kverma at cadence.com>
Cc: Sunny Kumar <sunkumar at redhat.com>; Gluster Users <gluster-users at gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so", which is what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is present in "/usr/lib64/".
If not, create a symlink similar to "libgfchangelog.so.0".
It should be something like below.
#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root 1078 Aug 28 05:56 libgfchangelog.la
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root 336888 Aug 28 05:56 libgfchangelog.so.0.0.1
On Tue, Aug 28, 2018 at 4:04 PM, Krishna Verma <kverma at cadence.com> wrote:
Hi Kotresh,
Thanks for the response. I did that too, but nothing changed.
[root at gluster-poc-noida ~]# ldconfig /usr/lib64
[root at gluster-poc-noida ~]# ldconfig -p | grep libgfchangelog
libgfchangelog.so.0 (libc6,x86-64) => /usr/lib64/libgfchangelog.so.0
[root at gluster-poc-noida ~]#
[root at gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
[root at gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
[root at gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
[root at gluster-poc-noida ~]#
/Krishna
From: Kotresh Hiremath Ravishankar <khiremat at redhat.com>
Sent: Tuesday, August 28, 2018 4:00 PM
To: Sunny Kumar <sunkumar at redhat.com>
Cc: Krishna Verma <kverma at cadence.com>; Gluster Users <gluster-users at gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that the below command lists the library:
#ldconfig -p | grep libgfchangelog
On Tue, Aug 28, 2018 at 3:52 PM, Sunny Kumar <sunkumar at redhat.com> wrote:
Can you do ldconfig /usr/local/lib and share the output of ldconfig -p /usr/local/lib | grep libgf
On Tue, Aug 28, 2018 at 3:45 PM Krishna Verma <kverma at cadence.com> wrote:
>
> Hi Sunny,
>
> I made the changes given in the patch and restarted the geo-replication session, but again the same errors appear in the logs.
>
> I have attaching the config files and logs here.
>
>
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
> Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep delete
> Deleting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep create push-pem force
> Creating geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
> geo-replication start failed for glusterep gluster-poc-sj::glusterep
> geo-replication command failed
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
> geo-replication start failed for glusterep gluster-poc-sj::glusterep
> geo-replication command failed
> [root at gluster-poc-noida ~]# vim /usr/libexec/glusterfs/python/syncdaemon/repce.py
> [root at gluster-poc-noida ~]# systemctl restart glusterd
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
> Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
> [root at gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status
>
> MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
> -----------------------------------------------------------------------------------------------------------------------------------------------------
> gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
> noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
> [root at gluster-poc-noida ~]#
>
>
> /Krishna.
>
> -----Original Message-----
> From: Sunny Kumar <sunkumar at redhat.com>
> Sent: Tuesday, August 28, 2018 3:17 PM
> To: Krishna Verma <kverma at cadence.com>
> Cc: gluster-users at gluster.org
> Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
>
> With the same log message?
>
> Can you please verify that the patch https://review.gluster.org/#/c/glusterfs/+/20207/ is present; if not, can you please apply it
> and try symlinking: ln -s /usr/lib64/libgfchangelog.so.0 /usr/lib64/libgfchangelog.so.
>
> Please share the log also.
>
> Regards,
> Sunny
> On Tue, Aug 28, 2018 at 3:02 PM Krishna Verma <kverma at cadence.com> wrote:
> >
> > Hi Sunny,
> >
> > Thanks for your response, I tried both, but still I am getting the same error.
> >
> >
> > [root at noi-poc-gluster ~]# ldconfig /usr/lib
> > [root at noi-poc-gluster ~]#
> >
> > [root at noi-poc-gluster ~]# ln -s /usr/lib64/libgfchangelog.so.1 /usr/lib64/libgfchangelog.so
> > [root at noi-poc-gluster ~]# ls -l /usr/lib64/libgfchangelog.so
> > lrwxrwxrwx. 1 root root 30 Aug 28 14:59 /usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
> >
> > /Krishna
> >
> > -----Original Message-----
> > From: Sunny Kumar <sunkumar at redhat.com>
> > Sent: Tuesday, August 28, 2018 2:55 PM
> > To: Krishna Verma <kverma at cadence.com>
> > Cc: gluster-users at gluster.org
> > Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
> >
> > Hi Krish,
> >
> > You can run -
> > #ldconfig /usr/lib
> >
> > If that still does not solve your problem, you can create a manual symlink
> > like: ln -s /usr/lib64/libgfchangelog.so.1 /usr/lib64/libgfchangelog.so
> >
> > Thanks,
> > Sunny Kumar
> > On Tue, Aug 28, 2018 at 1:47 PM Krishna Verma <kverma at cadence.com> wrote:
> > >
> > > Hi
> > >
> > >
> > >
> > > I am getting the below error in gsyncd.log:
> > >
> > >
> > >
> > > OSError: libgfchangelog.so: cannot open shared object file: No such file or directory
> > >
> > > [2018-08-28 07:19:41.446785] E [repce(worker /data/gluster/gv0):197:__call__] RepceClient: call failed call=26469:139794524604224:1535440781.44 method=init error=OSError
> > >
> > > [2018-08-28 07:19:41.447041] E [syncdutils(worker /data/gluster/gv0):330:log_raise_exception] <top>: FAIL:
> > >
> > > Traceback (most recent call last):
> > >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
> > >     func(args)
> > >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 72, in subcmd_worker
> > >     local.service_loop(remote)
> > >   File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1236, in service_loop
> > >     changelog_agent.init()
> > >   File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 216, in __call__
> > >     return self.ins(self.meth, *a)
> > >   File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 198, in __call__
> > >     raise res
> > > OSError: libgfchangelog.so: cannot open shared object file: No such file or directory
> > >
> > > [2018-08-28 07:19:41.457555] I [repce(agent /data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching EOF.
> > >
> > > [2018-08-28 07:19:42.440184] I [monitor(monitor):272:monitor] Monitor: worker died in startup phase brick=/data/gluster/gv0
> > >
> > >
> > >
> > > Below are my file locations:
> > >
> > >
> > >
> > > /usr/lib64/libgfchangelog.so.0
> > >
> > > /usr/lib64/libgfchangelog.so.0.0.1
> > >
> > >
> > >
> > > What can I do to fix it?
> > >
> > >
> > >
> > > /Krish
> > >
> > > _______________________________________________
> > > Gluster-users mailing list
> > > Gluster-users at gluster.org
> > > https://lists.gluster.org/mailman/listinfo/gluster-users
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Thanks and Regards,
Kotresh H R