<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    Thanks, Felix - looking through some more of the logs, I may have
    found the reason...<br>
    <br>
    From
/var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/mnt-data-storage_a-storage.log<br>
    <tt><br>
    [2020-10-05 18:13:35.736838] E [fuse-bridge.c:4288:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage<br>
    [2020-10-05 18:18:53.885591] E [fuse-bridge.c:4288:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage<br>
    [2020-10-05 18:22:14.405234] E [fuse-bridge.c:4288:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage<br>
    [2020-10-05 18:25:53.971679] E [fuse-bridge.c:4288:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage<br>
    [2020-10-05 18:31:44.571557] E [fuse-bridge.c:4288:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage<br>
    [2020-10-05 18:36:36.508772] E [fuse-bridge.c:4288:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage<br>
    [2020-10-05 18:40:10.401055] E [fuse-bridge.c:4288:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage<br>
    [2020-10-05 18:42:57.833536] E [fuse-bridge.c:4288:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage<br>
    [2020-10-05 18:45:19.691953] E [fuse-bridge.c:4288:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage<br>
    [2020-10-05 18:48:26.478532] E [fuse-bridge.c:4288:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage<br>
    [2020-10-05 18:52:24.466914] E [fuse-bridge.c:4288:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage</tt><br>
    <br>
    <br>
    The slave nodes are running Gluster on top of ZFS, and I had already
    configured POSIX ACLs - is there something else missing to make
    extended attributes work with ZFS? <br>
    <br>
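    As a sanity check, something like this (run as root on a slave node,
    directly against the brick path) should confirm whether the trusted.*
    xattr namespace that gluster uses actually works on the backend -
    just a sketch, the test file name and value are arbitrary:<br>
    <br>
    <tt># create a scratch file on the brick and exercise a trusted.* xattr<br>
    touch /pcic-backup01-zpool/brick/xattr-test<br>
    setfattr -n trusted.test -v check /pcic-backup01-zpool/brick/xattr-test<br>
    getfattr -n trusted.test /pcic-backup01-zpool/brick/xattr-test  # expect: trusted.test="check"<br>
    rm /pcic-backup01-zpool/brick/xattr-test</tt><br>
    <br>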
    <tt>[root@pcic-backup01 ~]# gluster volume info<br>
       <br>
      Volume Name: pcic-backup<br>
      Type: Distribute<br>
      Volume ID: 7af8a424-f4b6-4405-bba1-0dbafb0fa231<br>
      Status: Started<br>
      Snapshot Count: 0<br>
      Number of Bricks: 2<br>
      Transport-type: tcp<br>
      Bricks:<br>
      Brick1: 10.0.231.81:/pcic-backup01-zpool/brick<br>
      Brick2: 10.0.231.82:/pcic-backup02-zpool/brick<br>
      Options Reconfigured:<br>
      network.ping-timeout: 10<br>
      performance.cache-size: 256MB<br>
      server.event-threads: 4<br>
      client.event-threads: 4<br>
      cluster.lookup-optimize: on<br>
      performance.parallel-readdir: on<br>
      performance.readdir-ahead: on<br>
      features.quota-deem-statfs: on<br>
      features.inode-quota: on<br>
      features.quota: on<br>
      transport.address-family: inet<br>
      nfs.disable: on<br>
      features.read-only: off<br>
      performance.open-behind: off<br>
      <br>
      <br>
      [root@pcic-backup01 ~]# zfs get acltype pcic-backup01-zpool</tt><tt><br>
    </tt><tt>NAME                 PROPERTY  VALUE     SOURCE</tt><tt><br>
    </tt><tt>pcic-backup01-zpool  acltype   posixacl  local<br>
      <br>
      [root@pcic-backup01 ~]# grep "pcic-backup0" /proc/mounts<br>
      pcic-backup01-zpool /pcic-backup01-zpool zfs rw,seclabel,xattr,posixacl 0 0<br>
      <br>
    </tt><tt><br>
    </tt><tt>[root@pcic-backup02 ~]# zfs get acltype pcic-backup02-zpool</tt><tt><br>
    </tt><tt>NAME                 PROPERTY  VALUE     SOURCE</tt><tt><br>
    </tt><tt>pcic-backup02-zpool  acltype   posixacl  local<br>
      <br>
      [root@pcic-backup02 ~]# grep "pcic-backup0" /proc/mounts<br>
      pcic-backup02-zpool /pcic-backup02-zpool zfs rw,seclabel,xattr,posixacl 0 0<br>
      <br>
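    </tt>One thing I haven't confirmed is the ZFS xattr property itself -
    /proc/mounts shows xattr enabled, but I don't think it shows whether
    it's the directory-based format or system-attribute-based storage
    (xattr=sa), which the Gluster-on-ZFS docs generally recommend for
    bricks. Something like this on each slave should show it:<br>
    <br>
    <tt># "on" means directory-based xattrs; "sa" stores them as system attributes<br>
    zfs get xattr pcic-backup01-zpool</tt><br>
    <br>
    <tt>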
    </tt>Thanks,<br>
     -Matthew<br>
    <br>
    <br>
    <div class="moz-signature"><font size="-1">
        <p>--<br>
          Matthew Benstead<br>
          <font size="-2">System Administrator<br>
            <a href="https://pacificclimate.org/">Pacific Climate
              Impacts Consortium</a><br>
            University of Victoria, UH1<br>
            PO Box 1800, STN CSC<br>
            Victoria, BC, V8W 2Y2<br>
            Phone: +1-250-721-8432<br>
            Email: <a class="moz-txt-link-abbreviated" href="mailto:matthewb@uvic.ca">matthewb@uvic.ca</a></font></p>
      </font>
    </div>
    <div class="moz-cite-prefix">On 10/5/20 1:39 AM, Felix Kölzow wrote:<br>
    </div>
    <blockquote type="cite"
      cite="mid:9e8f3994-5116-1f3e-5dc0-bda19bba1f1d@gmx.de">Dear
      Matthew,
      <br>
      <br>
      <br>
      Can you provide more information regarding the geo-replication
      brick
      <br>
      logs?
      <br>
      <br>
      These files are also located in:
      <br>
      <br>
/var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/
      <br>
      <br>
      <br>
      Usually, these log files are more helpful for figuring out the root
      cause
      <br>
      of the error.
      <br>
      <br>
      Additionally, it is also worth looking at the log files on the
      slave side.
      <br>
      <br>
      <br>
      Regards,
      <br>
      <br>
      Felix
      <br>
      <br>
      <br>
      On 01/10/2020 23:08, Matthew Benstead wrote:
      <br>
      <blockquote type="cite">Hello,
        <br>
        <br>
        I'm looking for some help with a geo-replication error in my
        Gluster
        <br>
        7/CentOS 7 setup. Replication progress has basically stopped,
        and the
        <br>
        replication status keeps switching between Active and Faulty.
        <br>
        <br>
        The gsyncd log has errors like "Operation not permitted" and
        "incomplete
        <br>
        sync" - I'm not sure how to proceed with troubleshooting this.
        <br>
        <br>
        The log is below - it basically just repeats. From:
        <br>
/var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.log
        <br>
        <br>
        [2020-10-01 20:52:15.291923] I [master(worker /data/storage_a/storage):1991:syncjob] Syncer: Sync Time Taken    duration=32.8466        num_files=1749  job=3   return_code=23
        <br>
        [2020-10-01 20:52:18.700062] I [master(worker /data/storage_c/storage):1991:syncjob] Syncer: Sync Time Taken    duration=43.1210        num_files=3167  job=6   return_code=23
        <br>
        [2020-10-01 20:52:23.383234] W [master(worker /data/storage_c/storage):1393:process] _GMaster: incomplete sync, retrying changelogs     files=['XSYNC-CHANGELOG.1601585397']
        <br>
        [2020-10-01 20:52:28.537657] E [repce(worker /data/storage_b/storage):213:__call__] RepceClient: call failed    call=258187:140538843596608:1601585515.63       method=entry_ops       error=OSError
        <br>
        [2020-10-01 20:52:28.538064] E [syncdutils(worker /data/storage_b/storage):339:log_raise_exception] &lt;top&gt;: FAIL:
        <br>
        Traceback (most recent call last):
        <br>
          File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 332, in main
        <br>
            func(args)
        <br>
          File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 86, in subcmd_worker
        <br>
            local.service_loop(remote)
        <br>
          File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1308, in service_loop
        <br>
            g1.crawlwrap(oneshot=True, register_time=register_time)
        <br>
          File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 602, in crawlwrap
        <br>
            self.crawl()
        <br>
          File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1682, in crawl
        <br>
            self.process([item[1]], 0)
        <br>
          File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1327, in process
        <br>
            self.process_change(change, done, retry)
        <br>
          File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1221, in process_change
        <br>
            failures = self.slave.server.entry_ops(entries)
        <br>
          File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 232, in __call__
        <br>
            return self.ins(self.meth, *a)
        <br>
          File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 214, in __call__
        <br>
            raise res
        <br>
        OSError: [Errno 1] Operation not permitted
        <br>
        [2020-10-01 20:52:28.570316] I [repce(agent /data/storage_b/storage):96:service_loop] RepceServer: terminating on reaching EOF.
        <br>
        [2020-10-01 20:52:28.613603] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change     status=Faulty
        <br>
        [2020-10-01 20:52:29.619797] I [master(worker /data/storage_c/storage):1991:syncjob] Syncer: Sync Time Taken    duration=5.6458 num_files=455   job=3   return_code=23
        <br>
        [2020-10-01 20:52:38.286245] I [master(worker /data/storage_c/storage):1991:syncjob] Syncer: Sync Time Taken    duration=14.1824        num_files=1333  job=2   return_code=23
        <br>
        [2020-10-01 20:52:38.628156] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change     status=Initializing...
        <br>
        [2020-10-01 20:52:38.628325] I [monitor(monitor):159:monitor] Monitor: starting gsyncd worker   brick=/data/storage_b/storage   slave_node=10.0.231.82
        <br>
        [2020-10-01 20:52:38.684736] I [gsyncd(agent /data/storage_b/storage):318:main] &lt;top&gt;: Using session config file    path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
        <br>
        [2020-10-01 20:52:38.687213] I [gsyncd(worker /data/storage_b/storage):318:main] &lt;top&gt;: Using session config file    path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
        <br>
        [2020-10-01 20:52:38.687401] I [changelogagent(agent /data/storage_b/storage):72:__init__] ChangelogAgent: Agent listining...
        <br>
        [2020-10-01 20:52:38.703295] I [resource(worker /data/storage_b/storage):1386:connect_remote] SSH: Initializing SSH connection between master and slave...
        <br>
        [2020-10-01 20:52:40.388372] I [resource(worker /data/storage_b/storage):1435:connect_remote] SSH: SSH connection between master and slave established.   duration=1.6849
        <br>
        [2020-10-01 20:52:40.388582] I [resource(worker /data/storage_b/storage):1105:connect] GLUSTER: Mounting gluster volume locally...
        <br>
        [2020-10-01 20:52:41.501105] I [resource(worker /data/storage_b/storage):1128:connect] GLUSTER: Mounted gluster volume   duration=1.1123
        <br>
        [2020-10-01 20:52:41.501405] I [subcmds(worker /data/storage_b/storage):84:subcmd_worker] &lt;top&gt;: Worker spawn successful. Acknowledging back to monitor
        <br>
        [2020-10-01 20:52:43.531146] I [master(worker /data/storage_b/storage):1640:register] _GMaster: Working dir    path=/var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_b-storage
        <br>
        [2020-10-01 20:52:43.533953] I [resource(worker /data/storage_b/storage):1291:service_loop] GLUSTER: Register time     time=1601585563
        <br>
        [2020-10-01 20:52:43.547092] I [gsyncdstatus(worker /data/storage_b/storage):281:set_active] GeorepStatus: Worker Status Change    status=Active
        <br>
        [2020-10-01 20:52:43.561920] I [gsyncdstatus(worker /data/storage_b/storage):253:set_worker_crawl_status] GeorepStatus: Crawl Status Change     status=History Crawl
        <br>
        [2020-10-01 20:52:43.562184] I [master(worker /data/storage_b/storage):1554:crawl] _GMaster: starting history crawl     turns=1 stime=None      entry_stime=None        etime=1601585563
        <br>
        [2020-10-01 20:52:43.562269] I [resource(worker /data/storage_b/storage):1307:service_loop] GLUSTER: No stime available, using xsync crawl
        <br>
        [2020-10-01 20:52:43.569799] I [master(worker /data/storage_b/storage):1670:crawl] _GMaster: starting hybrid crawl      stime=None
        <br>
        [2020-10-01 20:52:43.573528] I [gsyncdstatus(worker /data/storage_b/storage):253:set_worker_crawl_status] GeorepStatus: Crawl Status Change     status=Hybrid Crawl
        <br>
        [2020-10-01 20:52:44.370985] I [master(worker /data/storage_c/storage):1991:syncjob] Syncer: Sync Time Taken    duration=20.4307        num_files=2609  job=5   return_code=23
        <br>
        [2020-10-01 20:52:49.431854] W [master(worker /data/storage_c/storage):1393:process] _GMaster: incomplete sync, retrying changelogs     files=['XSYNC-CHANGELOG.1601585397']
        <br>
        [2020-10-01 20:52:54.801500] I [master(worker /data/storage_a/storage):1991:syncjob] Syncer: Sync Time Taken    duration=72.7492        num_files=4227  job=2   return_code=23
        <br>
        [2020-10-01 20:52:56.766547] I [master(worker /data/storage_a/storage):1991:syncjob] Syncer: Sync Time Taken    duration=74.3569        num_files=4674  job=5   return_code=23
        <br>
        [2020-10-01 20:53:18.853333] I [master(worker /data/storage_c/storage):1991:syncjob] Syncer: Sync Time Taken    duration=28.7125        num_files=4397  job=3   return_code=23
        <br>
        [2020-10-01 20:53:21.224921] W [master(worker /data/storage_a/storage):1393:process] _GMaster: incomplete sync, retrying changelogs     files=['CHANGELOG.1601044033', 'CHANGELOG.1601044048', 'CHANGELOG.1601044063', 'CHANGELOG.1601044078', 'CHANGELOG.1601044093', 'CHANGELOG.1601044108', 'CHANGELOG.1601044123']
        <br>
        [2020-10-01 20:53:22.134536] I [master(worker /data/storage_a/storage):1991:syncjob] Syncer: Sync Time Taken    duration=0.2159 num_files=3     job=3   return_code=23
        <br>
        [2020-10-01 20:53:25.615712] I [master(worker /data/storage_b/storage):1681:crawl] _GMaster: processing xsync changelog    path=/var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_b-storage/xsync/XSYNC-CHANGELOG.1601585563
        <br>
        [2020-10-01 20:53:25.634970] W [master(worker /data/storage_c/storage):1393:process] _GMaster: incomplete sync, retrying changelogs     files=['XSYNC-CHANGELOG.1601585397']
        <br>
        <br>
        Geo-replication status - see it change from Active to Faulty
        between these two runs:
        <br>
        <br>
        [root@storage01 ~]# gluster volume geo-replication status
        <br>
        <br>
        MASTER NODE    MASTER VOL    MASTER BRICK               SLAVE USER    SLAVE                                        SLAVE NODE     STATUS    CRAWL STATUS       LAST_SYNCED
        <br>
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
        <br>
        10.0.231.91    storage       /data/storage_a/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    10.0.231.81    Active    Changelog Crawl    2020-09-25 07:26:57
        <br>
        10.0.231.91    storage       /data/storage_c/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    10.0.231.82    Active    Hybrid Crawl       N/A
        <br>
        10.0.231.91    storage       /data/storage_b/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    10.0.231.82    Active    Hybrid Crawl       N/A
        <br>
        10.0.231.92    storage       /data/storage_b/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    10.0.231.82    Active    History Crawl      2020-09-23 01:56:05
        <br>
        10.0.231.92    storage       /data/storage_a/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    10.0.231.82    Active    Hybrid Crawl       N/A
        <br>
        10.0.231.92    storage       /data/storage_c/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    10.0.231.81    Active    Hybrid Crawl       N/A
        <br>
        10.0.231.93    storage       /data/storage_c/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    10.0.231.81    Active    Changelog Crawl    2020-09-25 06:55:57
        <br>
        10.0.231.93    storage       /data/storage_b/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    10.0.231.81    Active    Hybrid Crawl       N/A
        <br>
        10.0.231.93    storage       /data/storage_a/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    10.0.231.81    Active    Hybrid Crawl       N/A
        <br>
        <br>
        [root@storage01 ~]# gluster volume geo-replication status
        <br>
        <br>
        MASTER NODE    MASTER VOL    MASTER BRICK               SLAVE USER    SLAVE                                        SLAVE NODE     STATUS    CRAWL STATUS       LAST_SYNCED
        <br>
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
        <br>
        10.0.231.91    storage       /data/storage_a/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    10.0.231.81    Active    Changelog Crawl    2020-09-25 07:26:57
        <br>
        10.0.231.91    storage       /data/storage_c/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    10.0.231.82    Active    Hybrid Crawl       N/A
        <br>
        10.0.231.91    storage       /data/storage_b/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A            Faulty    N/A                N/A
        <br>
        10.0.231.92    storage       /data/storage_b/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    10.0.231.82    Active    History Crawl      2020-09-23 01:58:05
        <br>
        10.0.231.92    storage       /data/storage_a/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    10.0.231.82    Active    Hybrid Crawl       N/A
        <br>
        10.0.231.92    storage       /data/storage_c/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A            Faulty    N/A                N/A
        <br>
        10.0.231.93    storage       /data/storage_c/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    10.0.231.81    Active    Changelog Crawl    2020-09-25 06:58:56
        <br>
        10.0.231.93    storage       /data/storage_b/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    10.0.231.81    Active    Hybrid Crawl       N/A
        <br>
        10.0.231.93    storage       /data/storage_a/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A            Faulty    N/A                N/A
        <br>
        <br>
        <br>
        Cluster information: (Note - I disabled performance.open-behind to
        work
        <br>
        around <a class="moz-txt-link-freetext" href="https://github.com/gluster/glusterfs/issues/1440">https://github.com/gluster/glusterfs/issues/1440</a>)
        <br>
        <br>
        [root@storage01 ~]# gluster --version | head -1; cat
        <br>
        /etc/centos-release; uname -r
        <br>
        glusterfs 7.7
        <br>
        CentOS Linux release 7.8.2003 (Core)
        <br>
        3.10.0-1127.10.1.el7.x86_64
        <br>
        <br>
        [root@storage01 ~]# df -h /storage2/
        <br>
        Filesystem            Size  Used Avail Use% Mounted on
        <br>
        10.0.231.91:/storage  328T  228T  100T  70% /storage2
        <br>
        <br>
        [root@storage01 ~]# gluster volume info
        <br>
        <br>
        Volume Name: storage
        <br>
        Type: Distributed-Replicate
        <br>
        Volume ID: cf94a8f2-324b-40b3-bf72-c3766100ea99
        <br>
        Status: Started
        <br>
        Snapshot Count: 0
        <br>
        Number of Bricks: 3 x (2 + 1) = 9
        <br>
        Transport-type: tcp
        <br>
        Bricks:
        <br>
        Brick1: 10.0.231.91:/data/storage_a/storage
        <br>
        Brick2: 10.0.231.92:/data/storage_b/storage
        <br>
        Brick3: 10.0.231.93:/data/storage_c/storage (arbiter)
        <br>
        Brick4: 10.0.231.92:/data/storage_a/storage
        <br>
        Brick5: 10.0.231.93:/data/storage_b/storage
        <br>
        Brick6: 10.0.231.91:/data/storage_c/storage (arbiter)
        <br>
        Brick7: 10.0.231.93:/data/storage_a/storage
        <br>
        Brick8: 10.0.231.91:/data/storage_b/storage
        <br>
        Brick9: 10.0.231.92:/data/storage_c/storage (arbiter)
        <br>
        Options Reconfigured:
        <br>
        changelog.changelog: on
        <br>
        geo-replication.ignore-pid-check: on
        <br>
        geo-replication.indexing: on
        <br>
        network.ping-timeout: 10
        <br>
        features.inode-quota: on
        <br>
        features.quota: on
        <br>
        nfs.disable: on
        <br>
        features.quota-deem-statfs: on
        <br>
        storage.fips-mode-rchecksum: on
        <br>
        performance.readdir-ahead: on
        <br>
        performance.parallel-readdir: on
        <br>
        cluster.lookup-optimize: on
        <br>
        client.event-threads: 4
        <br>
        server.event-threads: 4
        <br>
        performance.cache-size: 256MB
        <br>
        performance.open-behind: off
        <br>
        <br>
        Thanks,
        <br>
          -Matthew
        <br>
      </blockquote>
    </blockquote>
    <br>
  </body>
</html>