[Gluster-users] Geo-Replication in "FAULTY" state after files are added to master volume: gsyncd worker crashed in syncdutils with "OSError: [Errno 22] Invalid argument"

Boubacar Cisse cboubacar at gmail.com
Sat Feb 23 22:28:59 UTC 2019


Hello all,

I'm having trouble getting gluster geo-replication to work on Ubuntu 18.04
(Bionic). Gluster version is 5.3. I'm able to successfully create the
geo-replication session, but its status goes from "Initializing" to "Faulty"
in a loop after the session is started. I've created a bug report with all the
necessary information at https://bugzilla.redhat.com/show_bug.cgi?id=1680324
Any assistance or tips for fixing this issue would be greatly appreciated.

5/ Log entries
[MASTER SERVER GEO REP LOG]
root@media01:/var/log/glusterfs/geo-replication/gfs1_media03_gfs1# cat gsyncd.log
[2019-02-23 21:36:43.851184] I
[gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status
Change status=Initializing...
[2019-02-23 21:36:43.851489] I [monitor(monitor):157:monitor] Monitor:
starting gsyncd worker brick=/gfs1-data/brick slave_node=media03
[2019-02-23 21:36:43.856857] D [monitor(monitor):228:monitor] Monitor:
Worker would mount volume privately
[2019-02-23 21:36:43.895652] I [gsyncd(agent /gfs1-data/brick):308:main]
<top>: Using session config file
path=/var/lib/glusterd/geo-replication/gfs1_media03_gfs1/gsyncd.conf
[2019-02-23 21:36:43.896118] D [subcmds(agent
/gfs1-data/brick):103:subcmd_agent] <top>: RPC FD rpc_fd='8,11,10,9'
[2019-02-23 21:36:43.896435] I [changelogagent(agent
/gfs1-data/brick):72:__init__] ChangelogAgent: Agent listining...
[2019-02-23 21:36:43.897432] I [gsyncd(worker /gfs1-data/brick):308:main]
<top>: Using session config file
path=/var/lib/glusterd/geo-replication/gfs1_media03_gfs1/gsyncd.conf
[2019-02-23 21:36:43.904604] I [resource(worker
/gfs1-data/brick):1366:connect_remote] SSH: Initializing SSH connection
between master and slave...
[2019-02-23 21:36:43.905631] D [repce(worker /gfs1-data/brick):196:push]
RepceClient: call 22733:140323447641920:1550957803.9055686
__repce_version__() ...
[2019-02-23 21:36:45.751853] D [repce(worker
/gfs1-data/brick):216:__call__] RepceClient: call
22733:140323447641920:1550957803.9055686 __repce_version__ -> 1.0
[2019-02-23 21:36:45.752202] D [repce(worker /gfs1-data/brick):196:push]
RepceClient: call 22733:140323447641920:1550957805.7521348 version() ...
[2019-02-23 21:36:45.785690] D [repce(worker
/gfs1-data/brick):216:__call__] RepceClient: call
22733:140323447641920:1550957805.7521348 version -> 1.0
[2019-02-23 21:36:45.786081] D [repce(worker /gfs1-data/brick):196:push]
RepceClient: call 22733:140323447641920:1550957805.7860181 pid() ...
[2019-02-23 21:36:45.820014] D [repce(worker
/gfs1-data/brick):216:__call__] RepceClient: call
22733:140323447641920:1550957805.7860181 pid -> 24141
[2019-02-23 21:36:45.820337] I [resource(worker
/gfs1-data/brick):1413:connect_remote] SSH: SSH connection between master
and slave established. duration=1.9156
[2019-02-23 21:36:45.820520] I [resource(worker
/gfs1-data/brick):1085:connect] GLUSTER: Mounting gluster volume locally...
[2019-02-23 21:36:45.837300] D [resource(worker
/gfs1-data/brick):859:inhibit] DirectMounter: auxiliary glusterfs mount in
place
[2019-02-23 21:36:46.843754] D [resource(worker
/gfs1-data/brick):933:inhibit] DirectMounter: auxiliary glusterfs mount
prepared
[2019-02-23 21:36:46.844113] I [resource(worker
/gfs1-data/brick):1108:connect] GLUSTER: Mounted gluster volume
duration=1.0234
[2019-02-23 21:36:46.844283] I [subcmds(worker
/gfs1-data/brick):80:subcmd_worker] <top>: Worker spawn successful.
Acknowledging back to monitor
[2019-02-23 21:36:46.844623] D [master(worker
/gfs1-data/brick):101:gmaster_builder] <top>: setting up change detection
mode mode=xsync
[2019-02-23 21:36:46.844768] D [monitor(monitor):271:monitor] Monitor:
worker(/gfs1-data/brick) connected
[2019-02-23 21:36:46.846079] D [master(worker
/gfs1-data/brick):101:gmaster_builder] <top>: setting up change detection
mode mode=changelog
[2019-02-23 21:36:46.847300] D [master(worker
/gfs1-data/brick):101:gmaster_builder] <top>: setting up change detection
mode mode=changeloghistory
[2019-02-23 21:36:46.884938] D [repce(worker /gfs1-data/brick):196:push]
RepceClient: call 22733:140323447641920:1550957806.8848307 version() ...
[2019-02-23 21:36:46.885751] D [repce(worker
/gfs1-data/brick):216:__call__] RepceClient: call
22733:140323447641920:1550957806.8848307 version -> 1.0
[2019-02-23 21:36:46.886019] D [master(worker
/gfs1-data/brick):774:setup_working_dir] _GMaster: changelog working dir
/var/lib/misc/gluster/gsyncd/gfs1_media03_gfs1/gfs1-data-brick
[2019-02-23 21:36:46.886212] D [repce(worker /gfs1-data/brick):196:push]
RepceClient: call 22733:140323447641920:1550957806.8861625 init() ...
[2019-02-23 21:36:46.892709] D [repce(worker
/gfs1-data/brick):216:__call__] RepceClient: call
22733:140323447641920:1550957806.8861625 init -> None
[2019-02-23 21:36:46.892794] D [repce(worker /gfs1-data/brick):196:push]
RepceClient: call 22733:140323447641920:1550957806.892774
register('/gfs1-data/brick',
'/var/lib/misc/gluster/gsyncd/gfs1_media03_gfs1/gfs1-data-brick',
'/var/log/glusterfs/geo-replication/gfs1_media03_gfs1/changes-gfs1-data-brick.log',
8, 5) ...
[2019-02-23 21:36:48.896220] D [repce(worker
/gfs1-data/brick):216:__call__] RepceClient: call
22733:140323447641920:1550957806.892774 register -> None
[2019-02-23 21:36:48.896590] D [master(worker
/gfs1-data/brick):774:setup_working_dir] _GMaster: changelog working dir
/var/lib/misc/gluster/gsyncd/gfs1_media03_gfs1/gfs1-data-brick
[2019-02-23 21:36:48.896823] D [master(worker
/gfs1-data/brick):774:setup_working_dir] _GMaster: changelog working dir
/var/lib/misc/gluster/gsyncd/gfs1_media03_gfs1/gfs1-data-brick
[2019-02-23 21:36:48.897012] D [master(worker
/gfs1-data/brick):774:setup_working_dir] _GMaster: changelog working dir
/var/lib/misc/gluster/gsyncd/gfs1_media03_gfs1/gfs1-data-brick
[2019-02-23 21:36:48.897159] I [master(worker
/gfs1-data/brick):1603:register] _GMaster: Working dir
path=/var/lib/misc/gluster/gsyncd/gfs1_media03_gfs1/gfs1-data-brick
[2019-02-23 21:36:48.897512] I [resource(worker
/gfs1-data/brick):1271:service_loop] GLUSTER: Register time time=1550957808
[2019-02-23 21:36:48.898130] D [repce(worker /gfs1-data/brick):196:push]
RepceClient: call 22733:140322604570368:1550957808.898032 keep_alive(None,)
...
[2019-02-23 21:36:48.907820] D [master(worker
/gfs1-data/brick):536:crawlwrap] _GMaster: primary master with volume id
f720f1cb-16de-47a4-b1da-49d348736b53 ...
[2019-02-23 21:36:48.932170] D [repce(worker
/gfs1-data/brick):216:__call__] RepceClient: call
22733:140322604570368:1550957808.898032 keep_alive -> 1
[2019-02-23 21:36:49.77565] I [gsyncdstatus(worker
/gfs1-data/brick):281:set_active] GeorepStatus: Worker Status Change
status=Active
[2019-02-23 21:36:49.201132] I [gsyncdstatus(worker
/gfs1-data/brick):253:set_worker_crawl_status] GeorepStatus: Crawl Status
Change status=History Crawl
[2019-02-23 21:36:49.201822] I [master(worker /gfs1-data/brick):1517:crawl]
_GMaster: starting history crawl turns=1 stime=(1550858209, 637241)
etime=1550957809 entry_stime=None
[2019-02-23 21:36:49.202147] D [repce(worker /gfs1-data/brick):196:push]
RepceClient: call 22733:140323447641920:1550957809.202051
history('/gfs1-data/brick/.glusterfs/changelogs', 1550858209, 1550957809,
3) ...
[2019-02-23 21:36:49.203344] D [repce(worker
/gfs1-data/brick):216:__call__] RepceClient: call
22733:140323447641920:1550957809.202051 history -> (0, 1550957807)
[2019-02-23 21:36:49.203582] D [repce(worker /gfs1-data/brick):196:push]
RepceClient: call 22733:140323447641920:1550957809.2035315 history_scan()
...
[2019-02-23 21:36:49.204280] D [repce(worker
/gfs1-data/brick):216:__call__] RepceClient: call
22733:140323447641920:1550957809.2035315 history_scan -> 1
[2019-02-23 21:36:49.204572] D [repce(worker /gfs1-data/brick):196:push]
RepceClient: call 22733:140323447641920:1550957809.2045026
history_getchanges() ...
[2019-02-23 21:36:49.205424] D [repce(worker
/gfs1-data/brick):216:__call__] RepceClient: call
22733:140323447641920:1550957809.2045026 history_getchanges ->
['/var/lib/misc/gluster/gsyncd/gfs1_media03_gfs1/gfs1-data-brick/.history/.processing/CHANGELOG.1550858215']
[2019-02-23 21:36:49.205678] I [master(worker /gfs1-data/brick):1546:crawl]
_GMaster: slave's time stime=(1550858209, 637241)
[2019-02-23 21:36:49.205953] D [master(worker
/gfs1-data/brick):1454:changelogs_batch_process] _GMaster: processing
changes
batch=['/var/lib/misc/gluster/gsyncd/gfs1_media03_gfs1/gfs1-data-brick/.history/.processing/CHANGELOG.1550858215']
[2019-02-23 21:36:49.206196] D [master(worker
/gfs1-data/brick):1289:process] _GMaster: processing change
changelog=/var/lib/misc/gluster/gsyncd/gfs1_media03_gfs1/gfs1-data-brick/.history/.processing/CHANGELOG.1550858215
[2019-02-23 21:36:49.206844] D [master(worker
/gfs1-data/brick):1170:process_change] _GMaster: entries: []
[2019-02-23 21:36:49.295979] E [syncdutils(worker
/gfs1-data/brick):338:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 322, in main
    func(args)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/subcmds.py", line 82, in subcmd_worker
    local.service_loop(remote)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1277, in service_loop
    g3.crawlwrap(oneshot=True)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 599, in crawlwrap
    self.crawl()
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 1555, in crawl
    self.changelogs_batch_process(changes)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 1455, in changelogs_batch_process
    self.process(batch)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 1290, in process
    self.process_change(change, done, retry)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 1229, in process_change
    st = lstat(go[0])
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/syncdutils.py", line 564, in lstat
    return errno_wrap(os.lstat, [e], [ENOENT], [ESTALE, EBUSY])
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/syncdutils.py", line 546, in errno_wrap
    return call(*arg)
OSError: [Errno 22] Invalid argument: '.gfid/00000000-0000-0000-0000-000000000001'
[2019-02-23 21:36:49.323695] I [repce(agent
/gfs1-data/brick):97:service_loop] RepceServer: terminating on reaching EOF.
[2019-02-23 21:36:49.849243] I [monitor(monitor):278:monitor] Monitor:
worker died in startup phase brick=/gfs1-data/brick
[2019-02-23 21:36:49.896026] I
[gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status
Change status=Faulty
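Reading the traceback above: `lstat()` goes through `errno_wrap`, which (per the call shown) ignores ENOENT and retries ESTALE/EBUSY, so EINVAL (errno 22) falls into neither list, propagates up, and the monitor marks the worker Faulty. The snippet below is a simplified sketch of that wrapper logic to illustrate why errno 22 is fatal here; it is not the exact syncdutils implementation.

```python
import errno
import os
import time

def errno_wrap(call, arg=[], ignore_errno=[], retry_errno=[]):
    # Simplified sketch of gsyncd's syncdutils.errno_wrap (not the exact
    # upstream code): some errnos are swallowed, some are retried with a
    # short sleep, and everything else propagates to the caller.
    retries = 0
    while True:
        try:
            return call(*arg)
        except OSError as ex:
            if ex.errno in ignore_errno:
                return ex.errno   # caller treats this as "file is gone"
            if ex.errno not in retry_errno or retries >= 5:
                raise             # EINVAL is in neither list, so the
                                  # exception crashes the worker
            retries += 1
            time.sleep(0.25)
```

Note that `.gfid/<gfid>` is a virtual path only meaningful relative to an auxiliary gluster mount created with `aux-gfid-mount` (see the `gluster-params` config below), so an EINVAL on it suggests the aux mount itself is rejecting the gfid lookup.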


[SLAVE SERVER GEO REP LOG]
root@media03:/var/log/glusterfs/geo-replication-slaves/gfs1_media03_gfs1# cat gsyncd.log
[2019-02-23 21:39:10.407784] W [gsyncd(slave
media01/gfs1-data/brick):304:main] <top>: Session config file not exists,
using the default config
path=/var/lib/glusterd/geo-replication/gfs1_media03_gfs1/gsyncd.conf
[2019-02-23 21:39:10.414549] I [resource(slave
media01/gfs1-data/brick):1085:connect] GLUSTER: Mounting gluster volume
locally...
[2019-02-23 21:39:10.472665] D [resource(slave
media01/gfs1-data/brick):859:inhibit] MountbrokerMounter: auxiliary
glusterfs mount in place
[2019-02-23 21:39:11.555885] D [resource(slave
media01/gfs1-data/brick):926:inhibit] MountbrokerMounter: Lazy umount done:
/var/mountbroker-root/mb_hive/mntBkK4D5
[2019-02-23 21:39:11.556459] D [resource(slave
media01/gfs1-data/brick):933:inhibit] MountbrokerMounter: auxiliary
glusterfs mount prepared
[2019-02-23 21:39:11.556585] I [resource(slave
media01/gfs1-data/brick):1108:connect] GLUSTER: Mounted gluster volume
duration=1.1420
[2019-02-23 21:39:11.556830] I [resource(slave
media01/gfs1-data/brick):1135:service_loop] GLUSTER: slave listening
[2019-02-23 21:39:15.55945] I [repce(slave
media01/gfs1-data/brick):97:service_loop] RepceServer: terminating on
reaching EOF.


6/ OS and Gluster Info
[MASTER OS INFO]
root@media01:/var/run/gluster# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic


[SLAVE OS INFO]
root@media03:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic


[MASTER GLUSTER VERSION]
root@media01:/var/run/gluster# glusterfs --version
glusterfs 5.3
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.


[SLAVE GLUSTER VERSION]
root@media03:~# glusterfs --version
glusterfs 5.3
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.


7/ Master and Slave Servers Config
[MASTER /etc/glusterfs/glusterd.vol]
root@media01:/var/run/gluster# cat /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option ping-timeout 0
    option event-threads 1
    option rpc-auth-allow-insecure on
#   option lock-timer 180
#   option transport.address-family inet6
#   option base-port 49152
#   option max-port  65535
end-volume


[SLAVE /etc/glusterfs/glusterd.vol]
root@media03:~# cat /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option ping-timeout 0
    option event-threads 1
    option mountbroker-root /var/mountbroker-root
    option geo-replication-log-group geo-group
    option mountbroker-geo-replication.geo-user gfs2,gfs1
    option rpc-auth-allow-insecure on
    #   option lock-timer 180
    #   option transport.address-family inet6
    #   option base-port 49152
    #   option max-port  65535
end-volume


[MASTER /etc/glusterfs/gsyncd.conf]
root@media01:/var/run/gluster# cat /etc/glusterfs/gsyncd.conf
[__meta__]
version = 4.0

[master-bricks]
configurable=false

[slave-bricks]
configurable=false

[master-volume-id]
configurable=false

[slave-volume-id]
configurable=false

[master-replica-count]
configurable=false
type=int
value=1

[master-disperse-count]
configurable=false
type=int
value=1

[glusterd-workdir]
value = /var/lib/glusterd

[gluster-logdir]
value = /var/log/glusterfs

[gluster-rundir]
value = /var/run/gluster

[gsyncd-miscdir]
value = /var/lib/misc/gluster/gsyncd

[stime-xattr-prefix]
value=

[checkpoint]
value=0
help=Set Checkpoint
validation=unixtime
type=int

[gluster-cli-options]
value=
help=Gluster CLI Options

[pid-file]
value=${gluster_rundir}/gsyncd-${master}-${primary_slave_host}-${slavevol}.pid
configurable=false
template = true
help=PID file path

[state-file]
value=${glusterd_workdir}/geo-replication/${master}_${primary_slave_host}_${slavevol}/monitor.status
configurable=false
template=true
help=Status File path

[georep-session-working-dir]
value=${glusterd_workdir}/geo-replication/${master}_${primary_slave_host}_${slavevol}/
template=true
help=Session Working directory
configurable=false

[access-mount]
value=false
type=bool
validation=bool
help=Do not lazy unmount the master volume. This allows admin to access the
mount for debugging.

[slave-access-mount]
value=false
type=bool
validation=bool
help=Do not lazy unmount the slave volume. This allows admin to access the
mount for debugging.

[isolated-slaves]
value=
help=List of Slave nodes which are isolated

[changelog-batch-size]
# Max size of Changelogs to process per batch, Changelogs Processing is
# not limited by the number of changelogs but instead based on
# size of the changelog file, One sample changelog file size was 145408
# with ~1000 CREATE and ~1000 DATA. 5 such files in one batch is 727040
# If geo-rep worker crashes while processing a batch, it has to retry only
# that batch since stime will get updated after each batch.
value=727040
help=Max size of Changelogs to process per batch.
type=int

[slave-timeout]
value=120
type=int
help=Timeout in seconds for Slave Gsyncd. If no activity from master for
this timeout, Slave gsyncd will be disconnected. Set Timeout to zero to
skip this check.

[connection-timeout]
value=60
type=int
help=Timeout for mounts

[replica-failover-interval]
value=1
type=int
help=Minimum time interval in seconds for passive worker to become Active

[changelog-archive-format]
value=%%Y%%m
help=Processed changelogs will be archived in working directory. Pattern
for archive file

[use-meta-volume]
value=false
type=bool
help=Use this to set Active Passive mode to meta-volume.

[meta-volume-mnt]
value=/var/run/gluster/shared_storage
help=Meta Volume or Shared Volume mount path

[allow-network]
value=

[change-interval]
value=5
type=int

[use-tarssh]
value=false
type=bool
help=Use sync-mode as tarssh

[remote-gsyncd]
value=/usr/lib/x86_64-linux-gnu/glusterfs/gsyncd
help=If SSH keys are not secured with gsyncd prefix then use this
configuration to set the actual path of gsyncd(Usually
/usr/libexec/glusterfs/gsyncd)

[gluster-command-dir]
value=/usr/sbin
help=Directory where Gluster binaries exist on master

[slave-gluster-command-dir]
value=/usr/sbin
help=Directory where Gluster binaries exist on slave

[gluster-params]
value = aux-gfid-mount acl
help=Parameters for Gluster Geo-rep mount in Master

[slave-gluster-params]
value = aux-gfid-mount acl
help=Parameters for Gluster Geo-rep mount in Slave

[ignore-deletes]
value = false
type=bool
help=Do not sync deletes in Slave

[special-sync-mode]
# tunables for failover/failback mechanism:
# None   - gsyncd behaves as normal
# blind  - gsyncd works with xtime pairs to identify
#          candidates for synchronization
# wrapup - same as normal mode but does not assign
#          xtimes to orphaned files
# see crawl() for usage of the above tunables
value =
help=

[gfid-conflict-resolution]
value = true
validation=bool
type=bool
help=Disables automatic gfid conflict resolution while syncing

[working-dir]
value = ${gsyncd_miscdir}/${master}_${primary_slave_host}_${slavevol}/
template=true
configurable=false
help=Working directory for storing Changelogs

[change-detector]
value=changelog
help=Change detector
validation=choice
allowed_values=changelog,xsync

[cli-log-file]
value=${gluster_logdir}/geo-replication/cli.log
template=true
configurable=false

[cli-log-level]
value=DEBUG
help=Set CLI Log Level
validation=choice
allowed_values=ERROR,INFO,WARNING,DEBUG

[log-file]
value=${gluster_logdir}/geo-replication/${master}_${primary_slave_host}_${slavevol}/gsyncd.log
configurable=false
template=true

[changelog-log-file]
value=${gluster_logdir}/geo-replication/${master}_${primary_slave_host}_${slavevol}/changes-${local_id}.log
configurable=false
template=true

[gluster-log-file]
value=${gluster_logdir}/geo-replication/${master}_${primary_slave_host}_${slavevol}/mnt-${local_id}.log
template=true
configurable=false

[slave-log-file]
value=${gluster_logdir}/geo-replication-slaves/${master}_${primary_slave_host}_${slavevol}/gsyncd.log
template=true
configurable=false

[slave-gluster-log-file]
value=${gluster_logdir}/geo-replication-slaves/${master}_${primary_slave_host}_${slavevol}/mnt-${master_node}-${master_brick_id}.log
template=true
configurable=false

[slave-gluster-log-file-mbr]
value=${gluster_logdir}/geo-replication-slaves/${master}_${primary_slave_host}_${slavevol}/mnt-mbr-${master_node}-${master_brick_id}.log
template=true
configurable=false

[log-level]
value=DEBUG
help=Set Log Level
validation=choice
allowed_values=ERROR,INFO,WARNING,DEBUG

[gluster-log-level]
value=DEBUG
help=Set Gluster mount Log Level
validation=choice
allowed_values=ERROR,INFO,WARNING,DEBUG

[changelog-log-level]
value=DEBUG
help=Set Changelog Log Level
validation=choice
allowed_values=ERROR,INFO,WARNING,DEBUG

[slave-log-level]
value=DEBUG
help=Set Slave Gsyncd Log Level
validation=choice
allowed_values=ERROR,INFO,WARNING,DEBUG

[slave-gluster-log-level]
value=DEBUG
help=Set Slave Gluster mount Log Level
validation=choice
allowed_values=ERROR,INFO,WARNING,DEBUG

[ssh-port]
value=2202
validation=int
help=Set SSH port
type=int

[ssh-command]
value=ssh
help=Set ssh binary path
validation=execpath

[tar-command]
value=tar
help=Set tar command path
validation=execpath

[ssh-options]
value = -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
${glusterd_workdir}/geo-replication/secret.pem
template=true

[ssh-options-tar]
value = -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
${glusterd_workdir}/geo-replication/tar_ssh.pem
template=true

[gluster-command]
value=gluster
help=Set gluster binary path
validation=execpath

[sync-jobs]
value=3
help=Number of Syncer jobs
validation=minmax
min=1
max=100
type=int

[rsync-command]
value=rsync
help=Set rsync command path
validation=execpath

[rsync-options]
value=

[rsync-ssh-options]
value=

[rsync-opt-ignore-missing-args]
value=true
type=bool

[rsync-opt-existing]
value=true
type=bool

[log-rsync-performance]
value=false
help=Log Rsync performance
validation=bool
type=bool

[use-rsync-xattrs]
value=false
type=bool

[sync-xattrs]
value=true
type=bool

[sync-acls]
value=true
type=bool

[max-rsync-retries]
value=10
type=int

[state_socket_unencoded]
# Unused, For backward compatibility
value=


[SLAVE /etc/glusterfs/gsyncd.conf]
root@media03:~# cat /etc/glusterfs/gsyncd.conf
[__meta__]
version = 4.0

[master-bricks]
configurable=false

[slave-bricks]
configurable=false

[master-volume-id]
configurable=false

[slave-volume-id]
configurable=false

[master-replica-count]
configurable=false
type=int
value=1

[master-disperse-count]
configurable=false
type=int
value=1

[glusterd-workdir]
value = /var/lib/glusterd

[gluster-logdir]
value = /var/log/glusterfs

[gluster-rundir]
value = /var/run/gluster

[gsyncd-miscdir]
value = /var/lib/misc/gluster/gsyncd

[stime-xattr-prefix]
value=

[checkpoint]
value=0
help=Set Checkpoint
validation=unixtime
type=int

[gluster-cli-options]
value=
help=Gluster CLI Options

[pid-file]
value=${gluster_rundir}/gsyncd-${master}-${primary_slave_host}-${slavevol}.pid
configurable=false
template = true
help=PID file path

[state-file]
value=${glusterd_workdir}/geo-replication/${master}_${primary_slave_host}_${slavevol}/monitor.status
configurable=false
template=true
help=Status File path

[georep-session-working-dir]
value=${glusterd_workdir}/geo-replication/${master}_${primary_slave_host}_${slavevol}/
template=true
help=Session Working directory
configurable=false

[access-mount]
value=false
type=bool
validation=bool
help=Do not lazy unmount the master volume. This allows admin to access the
mount for debugging.

[slave-access-mount]
value=false
type=bool
validation=bool
help=Do not lazy unmount the slave volume. This allows admin to access the
mount for debugging.

[isolated-slaves]
value=
help=List of Slave nodes which are isolated

[changelog-batch-size]
# Max size of Changelogs to process per batch, Changelogs Processing is
# not limited by the number of changelogs but instead based on
# size of the changelog file, One sample changelog file size was 145408
# with ~1000 CREATE and ~1000 DATA. 5 such files in one batch is 727040
# If geo-rep worker crashes while processing a batch, it has to retry only
# that batch since stime will get updated after each batch.
value=727040
help=Max size of Changelogs to process per batch.
type=int

[slave-timeout]
value=120
type=int
help=Timeout in seconds for Slave Gsyncd. If no activity from master for
this timeout, Slave gsyncd will be disconnected. Set Timeout to zero to
skip this check.

[connection-timeout]
value=60
type=int
help=Timeout for mounts

[replica-failover-interval]
value=1
type=int
help=Minimum time interval in seconds for passive worker to become Active

[changelog-archive-format]
value=%%Y%%m
help=Processed changelogs will be archived in working directory. Pattern
for archive file

[use-meta-volume]
value=false
type=bool
help=Use this to set Active Passive mode to meta-volume.

[meta-volume-mnt]
value=/var/run/gluster/shared_storage
help=Meta Volume or Shared Volume mount path

[allow-network]
value=

[change-interval]
value=5
type=int

[use-tarssh]
value=false
type=bool
help=Use sync-mode as tarssh

[remote-gsyncd]
value=/usr/lib/x86_64-linux-gnu/glusterfs/gsyncd
help=If SSH keys are not secured with gsyncd prefix then use this
configuration to set the actual path of gsyncd(Usually
/usr/libexec/glusterfs/gsyncd)

[gluster-command-dir]
value=/usr/sbin
help=Directory where Gluster binaries exist on master

[slave-gluster-command-dir]
value=/usr/sbin
help=Directory where Gluster binaries exist on slave

[gluster-params]
value = aux-gfid-mount acl
help=Parameters for Gluster Geo-rep mount in Master

[slave-gluster-params]
value = aux-gfid-mount acl
help=Parameters for Gluster Geo-rep mount in Slave

[ignore-deletes]
value = false
type=bool
help=Do not sync deletes in Slave

[special-sync-mode]
# tunables for failover/failback mechanism:
# None   - gsyncd behaves as normal
# blind  - gsyncd works with xtime pairs to identify
#          candidates for synchronization
# wrapup - same as normal mode but does not assign
#          xtimes to orphaned files
# see crawl() for usage of the above tunables
value =
help=

[gfid-conflict-resolution]
value = true
validation=bool
type=bool
help=Disables automatic gfid conflict resolution while syncing

[working-dir]
value = ${gsyncd_miscdir}/${master}_${primary_slave_host}_${slavevol}/
template=true
configurable=false
help=Working directory for storing Changelogs

[change-detector]
value=changelog
help=Change detector
validation=choice
allowed_values=changelog,xsync

[cli-log-file]
value=${gluster_logdir}/geo-replication/cli.log
template=true
configurable=false

[cli-log-level]
value=DEBUG
help=Set CLI Log Level
validation=choice
allowed_values=ERROR,INFO,WARNING,DEBUG

[log-file]
value=${gluster_logdir}/geo-replication/${master}_${primary_slave_host}_${slavevol}/gsyncd.log
configurable=false
template=true

[changelog-log-file]
value=${gluster_logdir}/geo-replication/${master}_${primary_slave_host}_${slavevol}/changes-${local_id}.log
configurable=false
template=true

[gluster-log-file]
value=${gluster_logdir}/geo-replication/${master}_${primary_slave_host}_${slavevol}/mnt-${local_id}.log
template=true
configurable=false

[slave-log-file]
value=${gluster_logdir}/geo-replication-slaves/${master}_${primary_slave_host}_${slavevol}/gsyncd.log
template=true
configurable=false

[slave-gluster-log-file]
value=${gluster_logdir}/geo-replication-slaves/${master}_${primary_slave_host}_${slavevol}/mnt-${master_node}-${master_brick_id}.log
template=true
configurable=false

[slave-gluster-log-file-mbr]
value=${gluster_logdir}/geo-replication-slaves/${master}_${primary_slave_host}_${slavevol}/mnt-mbr-${master_node}-${master_brick_id}.log
template=true
configurable=false

[log-level]
value=DEBUG
help=Set Log Level
validation=choice
allowed_values=ERROR,INFO,WARNING,DEBUG

[gluster-log-level]
value=DEBUG
help=Set Gluster mount Log Level
validation=choice
allowed_values=ERROR,INFO,WARNING,DEBUG

[changelog-log-level]
value=DEBUG
help=Set Changelog Log Level
validation=choice
allowed_values=ERROR,INFO,WARNING,DEBUG

[slave-log-level]
value=DEBUG
help=Set Slave Gsyncd Log Level
validation=choice
allowed_values=ERROR,INFO,WARNING,DEBUG

[slave-gluster-log-level]
value=DEBUG
help=Set Slave Gluster mount Log Level
validation=choice
allowed_values=ERROR,INFO,WARNING,DEBUG

[ssh-port]
value=2202
validation=int
help=Set SSH port
type=int

[ssh-command]
value=ssh
help=Set ssh binary path
validation=execpath

[tar-command]
value=tar
help=Set tar command path
validation=execpath

[ssh-options]
value = -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
${glusterd_workdir}/geo-replication/secret.pem
template=true

[ssh-options-tar]
value = -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
${glusterd_workdir}/geo-replication/tar_ssh.pem
template=true

[gluster-command]
value=gluster
help=Set gluster binary path
validation=execpath

[sync-jobs]
value=3
help=Number of Syncer jobs
validation=minmax
min=1
max=100
type=int

[rsync-command]
value=rsync
help=Set rsync command path
validation=execpath

[rsync-options]
value=

[rsync-ssh-options]
value=

[rsync-opt-ignore-missing-args]
value=true
type=bool

[rsync-opt-existing]
value=true
type=bool

[log-rsync-performance]
value=false
help=Log Rsync performance
validation=bool
type=bool

[use-rsync-xattrs]
value=false
type=bool

[sync-xattrs]
value=true
type=bool

[sync-acls]
value=true
type=bool

[max-rsync-retries]
value=10
type=int

[state_socket_unencoded]
# Unused, For backward compatibility
value=


8/ Master volume status
root@media01:/var/run/gluster# gluster volume status
Status of volume: gfs1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick media01:/gfs1-data/brick              49153     0          Y       8366
Brick media02:/gfs1-data/brick              49153     0          Y       5560
Self-heal Daemon on localhost               N/A       N/A        Y       9170
Bitrot Daemon on localhost                  N/A       N/A        Y       9186
Scrubber Daemon on localhost                N/A       N/A        Y       9212
Self-heal Daemon on media02                 N/A       N/A        Y       6034
Bitrot Daemon on media02                    N/A       N/A        Y       6050
Scrubber Daemon on media02                  N/A       N/A        Y       6076

Task Status of Volume gfs1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: gfs2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick media01:/gfs2-data/brick              49154     0          Y       8460
Brick media02:/gfs2-data/brick              49154     0          Y       5650
Self-heal Daemon on localhost               N/A       N/A        Y       9170
Bitrot Daemon on localhost                  N/A       N/A        Y       9186
Scrubber Daemon on localhost                N/A       N/A        Y       9212
Self-heal Daemon on media02                 N/A       N/A        Y       6034
Bitrot Daemon on media02                    N/A       N/A        Y       6050
Scrubber Daemon on media02                  N/A       N/A        Y       6076

Task Status of Volume gfs2
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: gluster_shared_storage
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick media02:/var/lib/glusterd/ss_brick    49152     0          Y       2767
Brick media01:/var/lib/glusterd/ss_brick    49152     0          Y       3288
Self-heal Daemon on localhost               N/A       N/A        Y       9170
Self-heal Daemon on media02                 N/A       N/A        Y       6034

Task Status of Volume gluster_shared_storage
------------------------------------------------------------------------------
There are no active volume tasks


9/ Master gluster config
root at media01:/var/run/gluster# gluster volume info

Volume Name: gfs1
Type: Replicate
Volume ID: f720f1cb-16de-47a4-b1da-49d348736b53
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: media01:/gfs1-data/brick
Brick2: media02:/gfs1-data/brick
Options Reconfigured:
geo-replication.ignore-pid-check: on
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
changelog.changelog: on
geo-replication.indexing: on
encryption.data-key-size: 512
encryption.master-key: /var/lib/glusterd/vols/gfs1/gfs1-encryption.key
performance.open-behind: off
performance.write-behind: off
performance.quick-read: off
features.encryption: on
server.ssl: on
client.ssl: on
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
features.utime: on
performance.ctime-invalidation: on
cluster.lookup-optimize: on
cluster.self-heal-daemon: on
server.allow-insecure: on
cluster.ensure-durability: on
cluster.nufa: enable
auth.allow: *
auth.ssl-allow: *
features.bitrot: on
features.scrub: Active
cluster.enable-shared-storage: enable

Volume Name: gfs2
Type: Replicate
Volume ID: 3b506d7f-26cc-47e1-85f0-5e4047b3a526
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: media01:/gfs2-data/brick
Brick2: media02:/gfs2-data/brick
Options Reconfigured:
geo-replication.ignore-pid-check: on
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
changelog.changelog: on
geo-replication.indexing: on
encryption.data-key-size: 512
encryption.master-key: /var/lib/glusterd/vols/gfs2/gfs2-encryption.key
performance.open-behind: off
performance.write-behind: off
performance.quick-read: off
features.encryption: on
server.ssl: on
client.ssl: on
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
features.utime: on
performance.ctime-invalidation: on
cluster.lookup-optimize: on
cluster.self-heal-daemon: on
server.allow-insecure: on
cluster.ensure-durability: on
cluster.nufa: enable
auth.allow: *
auth.ssl-allow: *
features.bitrot: on
features.scrub: Active
cluster.enable-shared-storage: enable

Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: 1aa8c5c9-a950-490a-8e7f-486d06fe68fa
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: media02:/var/lib/glusterd/ss_brick
Brick2: media01:/var/lib/glusterd/ss_brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable


10/ Slave gluster config
root at media03:~# gluster volume info

Volume Name: gfs1
Type: Distribute
Volume ID: 45f73890-72f2-48a7-84e5-3bc87d995b62
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: media03:/gfs1-data/brick
Options Reconfigured:
features.scrub: Active
features.bitrot: on
auth.ssl-allow: *
auth.allow: *
cluster.nufa: enable
cluster.ensure-durability: on
server.allow-insecure: on
cluster.lookup-optimize: on
performance.ctime-invalidation: on
features.utime: on
transport.address-family: inet
nfs.disable: on
client.ssl: on
server.ssl: on
features.encryption: on
performance.quick-read: off
performance.write-behind: off
performance.open-behind: off
encryption.master-key: /var/lib/glusterd/vols/gfs1/gfs1-encryption.key
encryption.data-key-size: 512
geo-replication.indexing: on
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
features.shard: disable

Volume Name: gfs2
Type: Distribute
Volume ID: 98f4619a-c0c8-4fa0-b467-98ada511375a
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: media03:/gfs2-data/brick
Options Reconfigured:
features.scrub: Active
features.bitrot: on
auth.ssl-allow: *
auth.allow: *
cluster.nufa: enable
cluster.ensure-durability: on
server.allow-insecure: on
cluster.lookup-optimize: on
performance.ctime-invalidation: on
features.utime: on
transport.address-family: inet
nfs.disable: on
client.ssl: on
server.ssl: on
features.encryption: on
performance.quick-read: off
performance.write-behind: off
performance.open-behind: off
encryption.master-key: /var/lib/glusterd/vols/gfs2/gfs2-encryption.key
encryption.data-key-size: 512
geo-replication.indexing: on
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
features.shard: disable

-Boubacar Cisse