[Gluster-users] geo-replication sync issue

Etem Bayoğlu etembayoglu at gmail.com
Wed Mar 18 11:41:15 UTC 2020


Yes, I had tried that. What I observed in my case is that the glusterfs crawler
never exited from a specific directory that had already been synced; it kept
crawling that directory endlessly, like an infinite loop. I tried many things
as time went on, so I gave up and switched to NFS + rsync for now. This issue
was getting frustrating.


Thanks to the community for the help. ;)

On 18 Mar 2020 Wed at 09:00 Kotresh Hiremath Ravishankar <
khiremat at redhat.com> wrote:

> Could you try disabling xattr syncing and check?
>
> gluster vol geo-rep <mastervol> <slavehost>::<slavevol> config sync-xattrs
> false
>
> On Fri, Mar 13, 2020 at 1:42 AM Strahil Nikolov <hunter86_bg at yahoo.com>
> wrote:
>
>> On March 12, 2020 9:41:45 AM GMT+02:00, "Etem Bayoğlu" <
>> etembayoglu at gmail.com> wrote:
>> >Hello again,
>> >
>> >These are gsyncd.log from master on DEBUG level. It tells entering
>> >directory, synced files , and gfid information
>> >
>> >[2020-03-12 07:18:16.702286] D [master(worker
>> >/srv/media-storage):324:regjob] _GMaster: synced
>> >file=.gfid/358fe62c-c7e8-449a-90dd-1cc1a3b7a346
>> >[2020-03-12 07:18:16.702420] D [master(worker
>> >/srv/media-storage):324:regjob] _GMaster: synced
>> >file=.gfid/04eb63e3-7fcb-45d2-9f29-6292a5072adb
>> >[2020-03-12 07:18:16.702574] D [master(worker
>> >/srv/media-storage):324:regjob] _GMaster: synced
>> >file=.gfid/4363e521-d81a-4a0f-bfa4-5ee6b92da2b4
>> >[2020-03-12 07:18:16.702704] D [master(worker
>> >/srv/media-storage):324:regjob] _GMaster: synced
>> >file=.gfid/bed30509-2c5f-4c77-b2f9-81916a99abd9
>> >[2020-03-12 07:18:16.702828] D [master(worker
>> >/srv/media-storage):324:regjob] _GMaster: synced
>> >file=.gfid/d86f44cc-3001-4bdf-8bae-6bed2a9c8381
>> >[2020-03-12 07:18:16.702950] D [master(worker
>> >/srv/media-storage):324:regjob] _GMaster: synced
>> >file=.gfid/da40d429-d89e-4dc9-9dda-07922d87b3c8
>> >[2020-03-12 07:18:16.703075] D [master(worker
>> >/srv/media-storage):324:regjob] _GMaster: synced
>> >file=.gfid/befc5e03-b7a1-43dc-b6c2-0a186019b6d5
>> >[2020-03-12 07:18:16.703198] D [master(worker
>> >/srv/media-storage):324:regjob] _GMaster: synced
>> >file=.gfid/4e66035f-99f9-4802-b876-2e01686d18f2
>> >[2020-03-12 07:18:16.703378] D [master(worker
>> >/srv/media-storage):324:regjob] _GMaster: synced
>> >file=.gfid/d1295b51-e461-4766-b504-8e9a941a056f
>> >[2020-03-12 07:18:16.719875] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/06-02/1557813
>> >[2020-03-12 07:18:17.72679] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/06-02/1557205
>> >[2020-03-12 07:18:17.297362] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/06-02/1556880
>> >[2020-03-12 07:18:17.488224] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/06-02/1557769
>> >[2020-03-12 07:18:17.730181] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/06-02/1557028
>> >[2020-03-12 07:18:17.869410] I [gsyncd(config-get):318:main] <top>:
>> >Using
>> >session config file
>>
>> >path=/var/lib/glusterd/geo-replication/media-storage_slave-node_dr-media/gsyncd.conf
>> >[2020-03-12 07:18:18.65431] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/06-02/1558442
>> >[2020-03-12 07:18:18.352381] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/06-02/1557391
>> >[2020-03-12 07:18:18.374876] I [gsyncd(config-get):318:main] <top>:
>> >Using
>> >session config file
>>
>> >path=/var/lib/glusterd/geo-replication/media-storage_slave-node_dr-media/gsyncd.conf
>> >[2020-03-12 07:18:18.482299] I [gsyncd(config-set):318:main] <top>:
>> >Using
>> >session config file
>>
>> >path=/var/lib/glusterd/geo-replication/media-storage_slave-nodem_dr-media/gsyncd.conf
>> >[2020-03-12 07:18:18.507585] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/06-02/1558577
>> >[2020-03-12 07:18:18.576061] I [gsyncd(config-get):318:main] <top>:
>> >Using
>> >session config file
>>
>> >path=/var/lib/glusterd/geo-replication/media-storage_slave-node_dr-media/gsyncd.conf
>> >[2020-03-12 07:18:18.582772] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/06-02/1556831
>> >[2020-03-12 07:18:18.684170] I [gsyncd(config-get):318:main] <top>:
>> >Using
>> >session config file
>>
>> >path=/var/lib/glusterd/geo-replication/media-storage_slave-node_dr-media/gsyncd.conf
>> >[2020-03-12 07:18:18.691845] E [syncdutils(worker
>> >/srv/media-storage):312:log_raise_exception] <top>: connection to peer
>> >is
>> >broken
>> >[2020-03-12 07:18:18.692106] E [syncdutils(worker
>> >/srv/media-storage):312:log_raise_exception] <top>: connection to peer
>> >is
>> >broken
>> >[2020-03-12 07:18:18.694910] E [syncdutils(worker
>> >/srv/media-storage):822:errlog] Popen: command returned error cmd=ssh
>> >-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
>> >/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto
>> >-S
>> >/tmp/gsyncd-aux-ssh-WaMqpG/241afba5343394352fc3f9c251909232.sock
>> >slave-node
>> >/nonexistent/gsyncd slave media-storage slave-node::dr-media
>> >--master-node
>> >master-node --master-node-id 023cdb20-2737-4278-93c2-0927917ee314
>> >--master-brick /srv/media-storage --local-node slave-node
>> >--local-node-id
>> >cf34fc96-a08a-49c2-b8eb-a3df5a05f757 --slave-timeout 120
>> >--slave-log-level
>> >DEBUG --slave-gluster-log-level INFO --slave-gluster-command-dir
>> >/usr/sbin
>> >--master-dist-count 1 error=255
>> >[2020-03-12 07:18:18.701545] E [syncdutils(worker
>> >/srv/media-storage):826:logerr] Popen: ssh> Killed by signal 15.
>> >[2020-03-12 07:18:18.721456] I [repce(agent
>> >/srv/media-storage):96:service_loop] RepceServer: terminating on
>> >reaching
>> >EOF.
>> >[2020-03-12 07:18:18.778527] I
>> >[gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker
>> >Status
>> >Change status=Faulty
>> >[2020-03-12 07:18:19.791198] I [gsyncd(config-get):318:main] <top>:
>> >Using
>> >session config file
>>
>> >path=/var/lib/glusterd/geo-replication/media-storage_slave-node_dr-media/gsyncd.conf
>> >[2020-03-12 07:18:19.931610] I [gsyncd(monitor):318:main] <top>: Using
>> >session config file
>>
>> >path=/var/lib/glusterd/geo-replication/media-storage_slave-node_dr-media/gsyncd.conf
>> >[2020-03-12 07:18:20.97094] D [monitor(monitor):375:distribute] <top>:
>> >master bricks: [{'host': 'master-node', 'uuid':
>> >'023cdb20-2737-4278-93c2-0927917ee314', 'dir': '/srv/media-storage'}]
>> >[2020-03-12 07:18:20.97544] D [monitor(monitor):385:distribute] <top>:
>> >slave SSH gateway: slave-node
>> >[2020-03-12 07:18:20.597170] D [monitor(monitor):405:distribute] <top>:
>> >slave bricks: [{'host': 'slave-node', 'uuid':
>> >'cf34fc96-a08a-49c2-b8eb-a3df5a05f757', 'dir': '/data/dr-media'}]
>> >[2020-03-12 07:18:20.598018] D [syncdutils(monitor):907:is_hot]
>> >Volinfo:
>> >brickpath: 'master-node:/srv/media-storage'
>> >[2020-03-12 07:18:20.599000] D [monitor(monitor):419:distribute] <top>:
>> >worker specs: [({'host': 'master-node', 'uuid':
>> >'023cdb20-2737-4278-93c2-0927917ee314', 'dir': '/srv/media-storage'},
>> >('root at slave-node', 'cf34fc96-a08a-49c2-b8eb-a3df5a05f757'), '1',
>> >False)]
>> >[2020-03-12 07:18:20.609170] I
>> >[gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker
>> >Status
>> >Change status=Initializing...
>> >[2020-03-12 07:18:20.609390] I [monitor(monitor):159:monitor] Monitor:
>> >starting gsyncd worker brick=/srv/media-storage slave_node=slave-node
>> >[2020-03-12 07:18:20.616580] D [monitor(monitor):230:monitor] Monitor:
>> >Worker would mount volume privately
>> >[2020-03-12 07:18:20.698721] I [gsyncd(agent
>> >/srv/media-storage):318:main]
>> ><top>: Using session config file
>>
>> >path=/var/lib/glusterd/geo-replication/media-storage_slave-node_dr-media/gsyncd.conf
>> >[2020-03-12 07:18:20.700041] D [subcmds(agent
>> >/srv/media-storage):107:subcmd_agent] <top>: RPC FD rpc_fd='5,11,10,9'
>> >[2020-03-12 07:18:20.700860] I [changelogagent(agent
>> >/srv/media-storage):72:__init__] ChangelogAgent: Agent listining...
>> >[2020-03-12 07:18:20.706031] I [gsyncd(worker
>> >/srv/media-storage):318:main]
>> ><top>: Using session config file
>>
>> >path=/var/lib/glusterd/geo-replication/media-storage_slave-node_dr-media/gsyncd.conf
>> >[2020-03-12 07:18:20.727953] I [resource(worker
>> >/srv/media-storage):1386:connect_remote] SSH: Initializing SSH
>> >connection
>> >between master and slave...
>> >[2020-03-12 07:18:20.763588] D [repce(worker
>> >/srv/media-storage):195:push]
>> >RepceClient: call 31602:140109226157888:1583997500.76
>> >__repce_version__()
>> >...
>> >[2020-03-12 07:18:22.261354] D [repce(worker
>> >/srv/media-storage):215:__call__] RepceClient: call
>> >31602:140109226157888:1583997500.76 __repce_version__ -> 1.0
>> >[2020-03-12 07:18:22.261629] D [repce(worker
>> >/srv/media-storage):195:push]
>> >RepceClient: call 31602:140109226157888:1583997502.26 version() ...
>> >[2020-03-12 07:18:22.269346] D [repce(worker
>> >/srv/media-storage):215:__call__] RepceClient: call
>> >31602:140109226157888:1583997502.26 version -> 1.0
>> >[2020-03-12 07:18:22.269610] D [repce(worker
>> >/srv/media-storage):195:push]
>> >RepceClient: call 31602:140109226157888:1583997502.27 pid() ...
>> >[2020-03-12 07:18:22.278055] D [repce(worker
>> >/srv/media-storage):215:__call__] RepceClient: call
>> >31602:140109226157888:1583997502.27 pid -> 50215
>> >[2020-03-12 07:18:22.278273] I [resource(worker
>> >/srv/media-storage):1435:connect_remote] SSH: SSH connection between
>> >master
>> >and slave established. duration=1.5501
>> >[2020-03-12 07:18:22.278470] I [resource(worker
>> >/srv/media-storage):1105:connect] GLUSTER: Mounting gluster volume
>> >locally...
>> >[2020-03-12 07:18:22.345194] D [resource(worker
>> >/srv/media-storage):879:inhibit] DirectMounter: auxiliary glusterfs
>> >mount
>> >in place
>> >[2020-03-12 07:18:23.352816] D [resource(worker
>> >/srv/media-storage):953:inhibit] DirectMounter: auxiliary glusterfs
>> >mount
>> >prepared
>> >[2020-03-12 07:18:23.353075] I [resource(worker
>> >/srv/media-storage):1128:connect] GLUSTER: Mounted gluster volume
>> >duration=1.0744
>> >[2020-03-12 07:18:23.353261] I [subcmds(worker
>> >/srv/media-storage):84:subcmd_worker] <top>: Worker spawn successful.
>> >Acknowledging back to monitor
>> >[2020-03-12 07:18:23.353602] D [master(worker
>> >/srv/media-storage):104:gmaster_builder] <top>: setting up change
>> >detection
>> >mode mode=xsync
>> >[2020-03-12 07:18:23.353697] D [monitor(monitor):273:monitor] Monitor:
>> >worker(/srv/media-storage) connected
>> >[2020-03-12 07:18:23.354954] D [master(worker
>> >/srv/media-storage):104:gmaster_builder] <top>: setting up change
>> >detection
>> >mode mode=changelog
>> >[2020-03-12 07:18:23.356203] D [master(worker
>> >/srv/media-storage):104:gmaster_builder] <top>: setting up change
>> >detection
>> >mode mode=changeloghistory
>> >[2020-03-12 07:18:23.359487] D [repce(worker
>> >/srv/media-storage):195:push]
>> >RepceClient: call 31602:140109226157888:1583997503.36 version() ...
>> >[2020-03-12 07:18:23.360182] D [repce(worker
>> >/srv/media-storage):215:__call__] RepceClient: call
>> >31602:140109226157888:1583997503.36 version -> 1.0
>> >[2020-03-12 07:18:23.360398] D [master(worker
>> >/srv/media-storage):777:setup_working_dir] _GMaster: changelog working
>> >dir
>>
>> >/var/lib/misc/gluster/gsyncd/media-storage_slave-node_dr-media/srv-media-storage
>> >[2020-03-12 07:18:23.360609] D [repce(worker
>> >/srv/media-storage):195:push]
>> >RepceClient: call 31602:140109226157888:1583997503.36 init() ...
>> >[2020-03-12 07:18:23.380367] D [repce(worker
>> >/srv/media-storage):215:__call__] RepceClient: call
>> >31602:140109226157888:1583997503.36 init -> None
>> >[2020-03-12 07:18:23.380626] D [repce(worker
>> >/srv/media-storage):195:push]
>> >RepceClient: call 31602:140109226157888:1583997503.38
>> >register('/srv/media-storage',
>>
>> >'/var/lib/misc/gluster/gsyncd/media-storage_slave-node.com_dr-media/srv-media-storage',
>>
>> >'/var/log/glusterfs/geo-replication/media-storage_slave-node_dr-media/changes-srv-media-storage.log',
>> >7, 5) ...
>> >[2020-03-12 07:18:25.384763] D [repce(worker
>> >/srv/media-storage):215:__call__] RepceClient: call
>> >31602:140109226157888:1583997503.38 register -> None
>> >[2020-03-12 07:18:25.384988] D [master(worker
>> >/srv/media-storage):777:setup_working_dir] _GMaster: changelog working
>> >dir
>>
>> >/var/lib/misc/gluster/gsyncd/media-storage_slave-node_dr-media/srv-media-storage
>> >[2020-03-12 07:18:25.385189] D [master(worker
>> >/srv/media-storage):777:setup_working_dir] _GMaster: changelog working
>> >dir
>>
>> >/var/lib/misc/gluster/gsyncd/media-storage_slave-node_dr-media/srv-media-storage
>> >[2020-03-12 07:18:25.385427] D [master(worker
>> >/srv/media-storage):777:setup_working_dir] _GMaster: changelog working
>> >dir
>>
>> >/var/lib/misc/gluster/gsyncd/media-storage_slave-node_dr-media/srv-media-storage
>> >[2020-03-12 07:18:25.385597] I [master(worker
>> >/srv/media-storage):1640:register] _GMaster: Working dir
>>
>> >path=/var/lib/misc/gluster/gsyncd/media-storage_slave-node_dr-media/srv-media-storage
>> >[2020-03-12 07:18:25.395163] I [resource(worker
>> >/srv/media-storage):1291:service_loop] GLUSTER: Register time
>> >time=1583997505
>> >[2020-03-12 07:18:25.395838] D [repce(worker
>> >/srv/media-storage):195:push]
>> >RepceClient: call 31602:140108435007232:1583997505.4 keep_alive(None,)
>> >...
>> >[2020-03-12 07:18:25.399630] D [master(worker
>> >/srv/media-storage):539:crawlwrap] _GMaster: primary master with volume
>> >id
>> >458c5247-1fb1-4e4a-88ec-6e3933f1cf3b ...
>> >[2020-03-12 07:18:25.403887] I [gsyncdstatus(worker
>> >/srv/media-storage):281:set_active] GeorepStatus: Worker Status Change
>> >status=Active
>> >[2020-03-12 07:18:25.404815] D [repce(worker
>> >/srv/media-storage):215:__call__] RepceClient: call
>> >31602:140108435007232:1583997505.4 keep_alive -> 1
>> >[2020-03-12 07:18:25.405322] I [gsyncdstatus(worker
>> >/srv/media-storage):253:set_worker_crawl_status] GeorepStatus: Crawl
>> >Status
>> >Change status=History Crawl
>> >[2020-03-12 07:18:25.405780] I [master(worker
>> >/srv/media-storage):1554:crawl] _GMaster: starting history crawl
>> >turns=1
>> >stime=None entry_stime=None etime=1583997505
>> >[2020-03-12 07:18:25.405967] I [resource(worker
>> >/srv/media-storage):1307:service_loop] GLUSTER: No stime available,
>> >using
>> >xsync crawl
>> >[2020-03-12 07:18:25.406529] D [repce(worker
>> >/srv/media-storage):195:push]
>> >RepceClient: call 31602:140108426614528:1583997505.41 keep_alive(None,)
>> >...
>> >[2020-03-12 07:18:25.408334] D [master(worker
>> >/srv/media-storage):539:crawlwrap] _GMaster: primary master with volume
>> >id
>> >458c5247-1fb1-4e4a-88ec-6e3933f1cf3b ...
>> >[2020-03-12 07:18:25.411999] I [master(worker
>> >/srv/media-storage):1670:crawl] _GMaster: starting hybrid crawl
>> >stime=None
>> >[2020-03-12 07:18:25.412503] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering .
>> >[2020-03-12 07:18:25.414274] I [gsyncdstatus(worker
>> >/srv/media-storage):253:set_worker_crawl_status] GeorepStatus: Crawl
>> >Status
>> >Change status=Hybrid Crawl
>> >[2020-03-12 07:18:25.415570] D [repce(worker
>> >/srv/media-storage):215:__call__] RepceClient: call
>> >31602:140108426614528:1583997505.41 keep_alive -> 2
>> >[2020-03-12 07:18:25.418721] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering ./api
>> >[2020-03-12 07:18:25.419502] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering ./api/media
>> >[2020-03-12 07:18:25.420666] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering ./api/media/listing
>> >[2020-03-12 07:18:25.592319] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018
>> >[2020-03-12 07:18:25.603673] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20
>> >[2020-03-12 07:18:25.623689] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528270
>> >[2020-03-12 07:18:25.650356] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528967
>> >[2020-03-12 07:18:25.689970] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528854
>> >[2020-03-12 07:18:25.728863] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528754
>> >[2020-03-12 07:18:25.770190] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528462
>> >[2020-03-12 07:18:25.797228] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528991
>> >[2020-03-12 07:18:25.809954] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1529014
>> >[2020-03-12 07:18:25.822673] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528358
>> >[2020-03-12 07:18:25.845856] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1529072
>> >[2020-03-12 07:18:25.856015] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528678
>> >[2020-03-12 07:18:25.858510] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528857
>> >[2020-03-12 07:18:25.869515] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528699
>> >[2020-03-12 07:18:25.893805] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528348
>> >[2020-03-12 07:18:25.916162] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528582
>> >[2020-03-12 07:18:25.943183] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528548
>> >[2020-03-12 07:18:25.956763] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528241
>> >[2020-03-12 07:18:25.978117] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528864
>> >[2020-03-12 07:18:25.990802] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528730
>> >[2020-03-12 07:18:26.1740] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528367
>> >[2020-03-12 07:18:26.24030] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1529063
>> >[2020-03-12 07:18:26.46653] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1529058
>> >[2020-03-12 07:18:26.76448] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528954
>> >[2020-03-12 07:18:26.96426] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528466
>> >[2020-03-12 07:18:26.112611] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528960
>> >[2020-03-12 07:18:26.134823] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528620
>> >[2020-03-12 07:18:26.157533] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528943
>> >[2020-03-12 07:18:26.190406] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528274
>> >[2020-03-12 07:18:26.222803] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528249
>> >[2020-03-12 07:18:26.232880] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528711
>> >[2020-03-12 07:18:26.253369] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528413
>> >[2020-03-12 07:18:26.259948] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528853
>> >[2020-03-12 07:18:26.278692] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528803
>> >[2020-03-12 07:18:26.295695] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528831
>> >[2020-03-12 07:18:26.322766] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528510
>> >[2020-03-12 07:18:26.344959] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1529041
>> >[2020-03-12 07:18:26.356956] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528637
>> >[2020-03-12 07:18:26.364748] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528519
>> >[2020-03-12 07:18:26.369580] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528740
>> >[2020-03-12 07:18:26.382257] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528990
>> >[2020-03-12 07:18:26.393202] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528896
>> >[2020-03-12 07:18:26.414587] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528808
>> >[2020-03-12 07:18:26.424684] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1529026
>> >[2020-03-12 07:18:26.452891] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528508
>> >[2020-03-12 07:18:26.471034] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1529049
>> >[2020-03-12 07:18:26.473328] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1529056
>> >[2020-03-12 07:18:26.490430] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528928
>> >[2020-03-12 07:18:26.500461] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528516
>> >[2020-03-12 07:18:26.515748] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528658
>> >[2020-03-12 07:18:26.543150] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528800
>> >[2020-03-12 07:18:26.564454] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528671
>> >[2020-03-12 07:18:26.591042] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528529
>> >[2020-03-12 07:18:26.611500] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528502
>> >[2020-03-12 07:18:26.617207] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1529006
>> >[2020-03-12 07:18:26.629002] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528899
>> >[2020-03-12 07:18:26.644531] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528482
>> >[2020-03-12 07:18:26.666290] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528623
>> >[2020-03-12 07:18:26.676486] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528424
>> >[2020-03-12 07:18:26.687612] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528825
>> >[2020-03-12 07:18:26.695039] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528756
>> >[2020-03-12 07:18:26.706976] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1529062
>> >[2020-03-12 07:18:26.721376] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1529033
>> >[2020-03-12 07:18:26.742405] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528988
>> >[2020-03-12 07:18:26.743782] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528439
>> >[2020-03-12 07:18:26.749468] D [master(worker
>> >/srv/media-storage):1792:Xcrawl] _GMaster: entering
>> >./api/media/listing/2018/05-20/1528347
>> >[2020-03-12 07:18:27.416842] I [master(worker
>> >/srv/media-storage):1681:crawl] _GMaster: processing xsync changelog
>>
>> >path=/var/lib/misc/gluster/gsyncd/media-storage_slave-node_dr-media/srv-media-storage/xsync/XSYNC-CHANGELOG.1583997505
>> >[2020-03-12 07:18:27.417128] D [master(worker
>> >/srv/media-storage):1326:process] _GMaster: processing change
>>
>> >changelog=/var/lib/misc/gluster/gsyncd/media-storage_slave-node_dr-media/srv-media-storage/xsync/XSYNC-CHANGELOG.1583997505
>> >[2020-03-12 07:18:27.660609] D [master(worker
>> >/srv/media-storage):1205:process_change] _GMaster: entries: [{'uid': 0,
>> >'skip_entry': False, 'gfid': '55c74c7f-3609-44c4-b0d1-0e7032413169',
>> >'gid':
>> >0, 'mode': 16895, 'entry':
>> >'.gfid/00000000-0000-0000-0000-000000000001/konga', 'op': 'MKDIR'},
>> >{'uid':
>> >0, 'skip_entry': False, 'gfid': '77dc4f2c-3be2-4cc7-88ed-49fb4be12f73',
>> >'gid': 0, 'mode': 16877, 'entry':
>> >'.gfid/00000000-0000-0000-0000-000000000001/zingat-comment', 'op':
>> >'MKDIR'}, {'uid': 0, 'skip_entry': False, 'gfid':
>> >'d1e67bf6-1495-42b0-b299-46f1e8fa75cd', 'gid': 0, 'mode': 16877,
>> >'entry':
>> >'.gfid/00000000-0000-0000-0000-000000000001/zfotobot', 'op': 'MKDIR'},
>> >{'uid': 0, 'skip_entry': False, 'gfid':
>> >'4d4e189a-9c8f-4137-ab49-23801b532788', 'gid': 0, 'mode': 16895,
>> >'entry':
>> >'.gfid/00000000-0000-0000-0000-000000000001/elastics', 'op': 'MKDIR'},
>> >{'uid': 0, 'skip_entry': False, 'gfid':
>> >'438c462a-74a3-48b4-95f8-7fdd18497c63', 'gid': 0, 'mode': 16895,
>> >'entry':
>> >'.gfid/00000000-0000-0000-0000-000000000001/zingrawler', 'op':
>> >'MKDIR'},
>> >{'uid': 0, 'skip_entry': False, 'gfid':
>> >'7543966d-e153-424f-a311-4cab0adbd765', 'gid': 0, 'mode': 16895,
>> >'entry':
>> >'.gfid/00000000-0000-0000-0000-000000000001/notification', 'op':
>> >'MKDIR'},
>> >{'uid': 0, 'skip_entry': False, 'gfid':
>> >'894b04c4-f006-4fdc-a089-4961e483077f', 'gid': 0, 'mode': 16877,
>> >'entry':
>> >'.gfid/00000000-0000-0000-0000-000000000001/zschools', 'op': 'MKDIR'},
>> >{'uid': 0, 'skip_entry': False, 'gfid':
>> >'74464ff5-d285-45f6-8417-638c313244b6', 'gid': 0, 'mode': 16895,
>> >'entry':
>> >'.gfid/00000000-0000-0000-0000-000000000001/zingat-akademi', 'op':
>> >'MKDIR'}, {'uid': 0, 'skip_entry': False, 'gfid':
>> >'63c30efd-0222-432f-a3b8-09dd5bd5173c', 'gid': 0, 'mode': 16877,
>> >'entry':
>> >'.gfid/00000000-0000-0000-0000-000000000001/webc', 'op': 'MKDIR'},
>> >{'uid':
>> >0, 'skip_entry': False, 'gfid': 'c1d865d8-48f1-4d69-a02a-b06cae6e7776',
>> >'gid': 0, 'mode': 16877, 'entry':
>> >'.gfid/00000000-0000-0000-0000-000000000001/comment-ratings', 'op':
>> >'MKDIR'}, {'uid': 0, 'skip_entry': False, 'gfid':
>> >'09569e91-6223-4320-9a12-af7ad9f6e433', 'gid': 0, 'mode': 16895,
>> >'entry':
>> >'.gfid/00000000-0000-0000-0000-000000000001/airflow', 'op': 'MKDIR'},
>> >{'uid': 0, 'skip_entry': False, 'gfid':
>> >'8aeb9a6f-b223-4835-bccf-a11393b15bbc', 'gid': 0, 'mode': 16877,
>> >'entry':
>> >'.gfid/00000000-0000-0000-0000-000000000001/api', 'op': 'MKDIR'},
>> >{'uid':
>> >0, 'skip_entry': False, 'gfid': '001e77e9-de31-4d73-8ac6-f03c1ae556b2',
>> >'gid': 0, 'mode': 16877, 'entry':
>> >'.gfid/8aeb9a6f-b223-4835-bccf-a11393b15bbc/media', 'op': 'MKDIR'},
>> >{'uid':
>> >0, 'skip_entry': False, 'gfid': '6bdf6593-8d18-42c0-a52c-dc344a2ded46',
>> >'gid': 0, 'mode': 33188, 'entry':
>> >'.gfid/001e77e9-de31-4d73-8ac6-f03c1ae556b2/testforgluster', 'op':
>> >'MKNOD'}, {'stat': {'atime': 1583997329.2596, 'gid': 997, 'mtime':
>> >1507210680.0, 'uid': 998, 'mode': 41471}, 'skip_entry': False, 'gfid':
>> >'3282ed41-377e-472b-b4f4-96a62fafe709', 'link': '/srv/autofs/media',
>> >'entry': '.gfid/001e77e9-de31-4d73-8ac6-f03c1ae556b2/media', 'op':
>> >'SYMLINK'}, {'uid': 0, 'skip_entry': False, 'gfid':
>> >'a29c9172-3b72-4175-9f1c-d1458434a7f0', 'gid': 0, 'mode': 16877,
>> >'entry':
>> >'.gfid/001e77e9-de31-4d73-8ac6-f03c1ae556b2/listing', 'op': 'MKDIR'},
>> >{'uid': 0, 'skip_entry': False, 'gfid':
>> >'5c392153-cc7c-4479-a801-9afdad0ecbcf', 'gid': 0, 'mode': 16877,
>> >'entry':
>> >'.gfid/a29c9172-3b72-4175-9f1c-d1458434a7f0/95762', 'op': 'MKDIR'},
>> >{'uid': 0, 'skip_entry': False, 'gfid': '52dc2abe-4848-47b4-903e-b43cb5f2cec1', 'gid': 0, 'mode': 16877, 'entry': '.gfid/a29c9172-3b72-4175-9f1c-d1458434a7f0/50117', 'op': 'MKDIR'},
>> >{'uid': 0, 'skip_entry': False, 'gfid': 'f6a211b6-fdff-454e-956c-d4ba7e324164', 'gid': 0, 'mode': 16877, 'entry': '.gfid/a29c9172-3b72-4175-9f1c-d1458434a7f0/4993', 'op': 'MKDIR'},
>> >{'uid': 0, 'skip_entry': False, 'gfid': '5bfafa60-522c-4e0f-93df-07343c2fb943', 'gid': 0, 'mode': 16877, 'entry': '.gfid/a29c9172-3b72-4175-9f1c-d1458434a7f0/54531', 'op': 'MKDIR'},
>> >{'uid': 0, 'skip_entry': False, 'gfid': '80e50209-75b8-4e95-bb45-ff3ab5f879bb', 'gid': 0, 'mode': 16877, 'entry': '.gfid/a29c9172-3b72-4175-9f1c-d1458434a7f0/12715', 'op': 'MKDIR'},
>> >{'uid': 0, 'skip_entry': False, 'gfid': '15b61eac-7b00-4af6-9322-1c75589cd692', 'gid': 0, 'mode': 16877, 'entry': '.gfid/a29c9172-3b72-4175-9f1c-d1458434a7f0/21321', 'op': 'MKDIR'},
>> >{'uid': 0, 'skip_entry': False, 'gfid': 'acfbb711-376f-4c66-bd5a-de0991073d27', 'gid': 0, 'mode': 16877, 'entry': '.gfid/a29c9172-3b72-4175-9f1c-d1458434a7f0/87093', 'op': 'MKDIR'},
>> >{'uid': 0, 'skip_entry': False, 'gfid': 'f54b0d9f-a0aa-479e-91ca-8c1b2d651ae6', 'gid': 0, 'mode': 16877, 'entry': '.gfid/a29c9172-3b72-4175-9f1c-d1458434a7f0/51424', 'op': 'MKDIR'},
>> >{'uid': 0, 'skip_entry': False, 'gfid': '9b4a16b8-1f60-47b2-90ff-6c9ae44269a8', 'gid': 0, 'mode': 16877, 'entry': '.gfid/a29c9172-3b72-4175-9f1c-d1458434a7f0/20729', 'op': 'MKDIR'},
>> >{'uid': 0, 'skip_entry': False, 'gfid': '6d4ccfdf-5815-4b28-9cba-cbb55ee610e0', 'gid': 0, 'mode': 16877, 'entry': '.gfid/a29c9172-3b72-4175-9f1c-d1458434a7f0/12066', 'op': 'MKDIR'},
>> >{'uid': 0, 'skip_entry': False, 'gfid': 'b682e238-6ce1-4699-9159-162c835cc25c', 'gid': 0, 'mode': 16877, 'entry': '.gfid/a29c9172-3b72-4175-9f1c-d1458434a7f0/93511', 'op': 'MKDIR'},
>> >{'uid': 0, 'skip_entry': False, 'gfid': 'd7d4f2fb-7eb7-4a07-9beb-d06cd080ea3c', 'gid': 0, 'mode': 16877, 'entry': '.gfid/a29c9172-3b72-4175-9f1c-d1458434a7f0/16643', 'op': 'MKDIR'},
>> >{'uid': 0, 'skip_entry': False, 'gfid': '0cc499c4-be50-49f1-a7e4-7dfdd331fd48', 'gid': 0, 'mode': 16877, 'entry': '.gfid/a29c9172-3b72-4175-9f1c-d1458434a7f0/54979', 'op': 'MKDIR'},
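[Editor's note: for readers decoding these entry records, each dict is one pending entry operation the worker replays on the slave, and `mode` is a raw `st_mode` integer (16877 == 0o40755, i.e. a directory with rwxr-xr-x permissions). A minimal Python sketch; the dict values are copied verbatim from the log above:

```python
import stat

# One geo-rep entry operation as dumped in the gsyncd log above.
entry = {
    'uid': 0, 'gid': 0, 'skip_entry': False,
    'gfid': '52dc2abe-4848-47b4-903e-b43cb5f2cec1',
    'mode': 16877,
    'entry': '.gfid/a29c9172-3b72-4175-9f1c-d1458434a7f0/50117',
    'op': 'MKDIR',
}

# 16877 == 0o40755: the S_IFDIR type bit plus 0o755 permission bits.
assert stat.S_ISDIR(entry['mode'])
print(oct(stat.S_IMODE(entry['mode'])))  # 0o755

# 'entry' is <parent-gfid>/<basename> relative to the .gfid aux mount.
parent_gfid, name = entry['entry'].split('/')[1:3]
print(parent_gfid, name)
```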
>> >
>> >Etem Bayoğlu <etembayoglu at gmail.com>, 12 Mar 2020 Per, 10:13 tarihinde
>> >şunu
>> >yazdı:
>> >
>> >> Hi,
>> >>
>> >> here my slave node logs at the time sync stopped:
>> >>
>> >> [2020-03-08 03:33:01.489559] I
>> >[glusterfsd-mgmt.c:2282:mgmt_getspec_cbk]
>> >> 0-glusterfs: No change in volfile,continuing
>> >> [2020-03-08 03:33:01.489298] I [MSGID: 100011]
>> >> [glusterfsd.c:1679:reincarnate] 0-glusterfsd: Fetching the volume
>> >file from
>> >> server...
>> >> [2020-03-08 09:49:37.991177] I [fuse-bridge.c:6083:fuse_thread_proc]
>> >> 0-fuse: initiating unmount of /tmp/gsyncd-aux-mount-l3PR6o
>> >> [2020-03-08 09:49:37.993978] W [glusterfsd.c:1596:cleanup_and_exit]
>> >> (-->/lib64/libpthread.so.0(+0x7e65) [0x7f2f9f70ce65]
>> >> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x55cc67c20625]
>> >> -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x55cc67c2048b] ) 0-:
>> >> received signum (15), shutting down
>> >> [2020-03-08 09:49:37.994012] I [fuse-bridge.c:6871:fini] 0-fuse:
>> >> Unmounting '/tmp/gsyncd-aux-mount-l3PR6o'.
>> >> [2020-03-08 09:49:37.994022] I [fuse-bridge.c:6876:fini] 0-fuse:
>> >Closing
>> >> fuse connection to '/tmp/gsyncd-aux-mount-l3PR6o'.
>> >> [2020-03-08 09:49:50.302806] I [MSGID: 100030]
>> >[glusterfsd.c:2867:main]
>> >> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version
>> >7.3
>> >> (args: /usr/sbin/glusterfs --aux-gfid-mount --acl --log-level=INFO
>> >>
>>
>> >--log-file=/var/log/glusterfs/geo-replication-slaves/media-storage_slave-node_dr-media/mnt-master-node-srv-media-storage.log
>> >> --volfile-server=localhost --volfile-id=dr-media --client-pid=-1
>> >> /tmp/gsyncd-aux-mount-1AQBe4)
>> >> [2020-03-08 09:49:50.311167] I [glusterfsd.c:2594:daemonize]
>> >0-glusterfs:
>> >> Pid of current running process is 55522
>> >> [2020-03-08 09:49:50.352351] I [MSGID: 101190]
>> >> [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started
>> >thread
>> >> with index 0
>> >> [2020-03-08 09:49:50.352416] I [MSGID: 101190]
>> >> [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started
>> >thread
>> >> with index 1
>> >> [2020-03-08 09:49:50.373248] I [MSGID: 114020] [client.c:2436:notify]
>> >> 0-dr-media-client-0: parent translators are ready, attempting connect
>> >on
>> >> transport
>> >> Final graph:
>> >>
>> >>
>>
>> >+------------------------------------------------------------------------------+
>> >>   1: volume dr-media-client-0
>> >>   2:     type protocol/client
>> >>   3:     option ping-timeout 42
>> >>   4:     option remote-host slave-node
>> >>   5:     option remote-subvolume /data/dr-media
>> >>   6:     option transport-type socket
>> >>   7:     option transport.address-family inet
>> >>   8:     option username 4aafadfa-6ccb-4c2f-920c-1f37ed9eef34
>> >>   9:     option password a8c0f88b-2621-4038-8f65-98068ea71bb0
>> >>  10:     option transport.socket.ssl-enabled off
>> >>  11:     option transport.tcp-user-timeout 0
>> >>  12:     option transport.socket.keepalive-time 20
>> >>  13:     option transport.socket.keepalive-interval 2
>> >>  14:     option transport.socket.keepalive-count 9
>> >>  15:     option send-gids true
>> >>  16: end-volume
>> >>  17:
>> >>  18: volume dr-media-dht
>> >>  19:     type cluster/distribute
>> >>  20:     option lock-migration off
>> >>  21:     option force-migration off
>> >>  22:     subvolumes dr-media-client-0
>> >>  23: end-volume
>> >>  24:
>> >>  25: volume dr-media-write-behind
>> >>  26:     type performance/write-behind
>> >>  27:     option cache-size 8MB
>> >>  28:     option aggregate-size 1MB
>> >>  29:     subvolumes dr-media-dht
>> >>  30: end-volume
>> >>  31:
>> >>  32: volume dr-media-read-ahead
>> >>  33:     type performance/read-ahead
>> >>  34:     subvolumes dr-media-write-behind
>> >>  35: end-volume
>> >>  36:
>> >>  37: volume dr-media-readdir-ahead
>> >>  38:     type performance/readdir-ahead
>> >>  39:     option parallel-readdir off
>> >>  40:     option rda-request-size 131072
>> >>  41:     option rda-cache-limit 10MB
>> >>  42:     subvolumes dr-media-read-ahead
>> >>  43: end-volume
>> >>  44:
>> >>  45: volume dr-media-io-cache
>> >>  46:     type performance/io-cache
>> >>  47:     option cache-size 256MB
>> >>  48:     subvolumes dr-media-readdir-ahead
>> >>  49: end-volume
>> >>  50:
>> >>  51: volume dr-media-open-behind
>> >>  52:     type performance/open-behind
>> >>  53:     subvolumes dr-media-io-cache
>> >>  54: end-volume
>> >>  55:
>> >>  56: volume dr-media-quick-read
>> >>  57:     type performance/quick-read
>> >>  58:     option cache-size 256MB
>> >>  59:     subvolumes dr-media-open-behind
>> >>  60: end-volume
>> >>  61:
>> >>  62: volume dr-media-md-cache
>> >>  63:     type performance/md-cache
>> >>  64:     option cache-posix-acl true
>> >>  65:     subvolumes dr-media-quick-read
>> >>  66: end-volume
>> >>  67:
>> >>  68: volume dr-media-io-threads
>> >>  69:     type performance/io-threads
>> >>  70:     subvolumes dr-media-md-cache
>> >>  71: end-volume
>> >>  72:
>> >>  73: volume dr-media
>> >>  74:     type debug/io-stats
>> >>  75:     option log-level INFO
>> >>  76:     option threads 16
>> >>  77:     option latency-measurement off
>> >>  78:     option count-fop-hits off
>> >>  79:     option global-threading off
>> >>  80:     subvolumes dr-media-io-threads
>> >>  81: end-volume
>> >>  82:
>> >>  83: volume posix-acl-autoload
>> >>  84:     type system/posix-acl
>> >>  85:     subvolumes dr-media
>> >>  86: end-volume
>> >>  87:
>> >>  88: volume gfid-access-autoload
>> >>  89:     type features/gfid-access
>> >>  90:     subvolumes posix-acl-autoload
>> >>  91: end-volume
>> >>  92:
>> >>  93: volume meta-autoload
>> >>  94:     type meta
>> >>  95:     subvolumes gfid-access-autoload
>> >>  96: end-volume
>> >>  97:
>> >>
>> >>
>>
>> >+------------------------------------------------------------------------------+
>> >> [2020-03-08 09:49:50.388102] I [rpc-clnt.c:1963:rpc_clnt_reconfig]
>> >> 0-dr-media-client-0: changing port to 49152 (from 0)
>> >> [2020-03-08 09:49:50.388132] I [socket.c:865:__socket_shutdown]
>> >> 0-dr-media-client-0: intentional socket shutdown(12)
>> >> [2020-03-08 09:49:50.401512] I [MSGID: 114057]
>> >> [client-handshake.c:1375:select_server_supported_programs]
>> >> 0-dr-media-client-0: Using Program GlusterFS 4.x v1, Num (1298437),
>> >Version
>> >> (400)
>> >> [2020-03-08 09:49:50.401765] W [dict.c:999:str_to_data]
>> >> (-->/usr/lib64/glusterfs/7.3/xlator/protocol/client.so(+0x381d4)
>> >> [0x7f7f63daa1d4] -->/lib64/libglusterfs.so.0(dict_set_str+0x16)
>> >> [0x7f7f76b3a2f6] -->/lib64/libglusterfs.so.0(str_to_data+0x71)
>> >> [0x7f7f76b36c11] ) 0-dict: value is NULL [Invalid argument]
>> >> [2020-03-08 09:49:50.401783] I [MSGID: 114006]
>> >> [client-handshake.c:1236:client_setvolume] 0-dr-media-client-0:
>> >failed to
>> >> set process-name in handshake msg
>> >> [2020-03-08 09:49:50.404115] I [MSGID: 114046]
>> >> [client-handshake.c:1105:client_setvolume_cbk] 0-dr-media-client-0:
>> >> Connected to dr-media-client-0, attached to remote volume
>> >'/data/dr-media'.
>> >> [2020-03-08 09:49:50.405761] I [fuse-bridge.c:5166:fuse_init]
>> >> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24
>> >kernel
>> >> 7.22
>> >> [2020-03-08 09:49:50.405780] I [fuse-bridge.c:5777:fuse_graph_sync]
>> >> 0-fuse: switched to graph 0
>> >> [2020-03-08 11:49:00.933168] E
>> >[fuse-bridge.c:227:check_and_dump_fuse_W]
>> >> (-->
>> >/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7f7f76b438ea] (-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7f7f6def1221]
>> >(-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7f7f6def2998]
>> >(-->
>> >> /lib64/libpthread.so.0(+0x7e65)[0x7f7f75984e65] (-->
>> >> /lib64/libc.so.6(clone+0x6d)[0x7f7f7524a88d] ))))) 0-glusterfs-fuse:
>> >> writing to fuse device failed: No such file or directory
>> >> [2020-03-08 11:53:29.822876] E
>> >[fuse-bridge.c:227:check_and_dump_fuse_W]
>> >> (-->
>> >/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7f7f76b438ea] (-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7f7f6def1221]
>> >(-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7f7f6def2998]
>> >(-->
>> >> /lib64/libpthread.so.0(+0x7e65)[0x7f7f75984e65] (-->
>> >> /lib64/libc.so.6(clone+0x6d)[0x7f7f7524a88d] ))))) 0-glusterfs-fuse:
>> >> writing to fuse device failed: No such file or directory
>> >> [2020-03-08 12:00:46.656170] E
>> >[fuse-bridge.c:227:check_and_dump_fuse_W]
>> >> (-->
>> >/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7f7f76b438ea] (-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7f7f6def1221]
>> >(-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7f7f6def2998]
>> >(-->
>> >> /lib64/libpthread.so.0(+0x7e65)[0x7f7f75984e65] (-->
>> >> /lib64/libc.so.6(clone+0x6d)[0x7f7f7524a88d] ))))) 0-glusterfs-fuse:
>> >> writing to fuse device failed: No such file or directory
>> >>
>> >>
>> >> master node logs here:
>> >>
>> >> [2020-03-08 09:49:38.115108] E
>> >[fuse-bridge.c:227:check_and_dump_fuse_W]
>> >> (-->
>> >/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fb5ab2aa8ea] (-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fb5a265c221]
>> >(-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8b3a)[0x7fb5a265cb3a]
>> >(-->
>> >> /lib64/libpthread.so.0(+0x7e25)[0x7fb5aa0ebe25] (-->
>> >> /lib64/libc.so.6(clone+0x6d)[0x7fb5a99b4bad] ))))) 0-glusterfs-fuse:
>> >> writing to fuse device failed: No such file or directory
>> >> [2020-03-08 09:49:38.932935] I [fuse-bridge.c:6083:fuse_thread_proc]
>> >> 0-fuse: initiating unmount of /tmp/gsyncd-aux-mount-Xy8taN
>> >> [2020-03-08 09:49:38.947634] W [glusterfsd.c:1596:cleanup_and_exit]
>> >> (-->/lib64/libpthread.so.0(+0x7e25) [0x7fb5aa0ebe25]
>> >> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x557ccdf55625]
>> >> -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x557ccdf5548b] ) 0-:
>> >> received signum (15), shutting down
>> >> [2020-03-08 09:49:38.947684] I [fuse-bridge.c:6871:fini] 0-fuse:
>> >> Unmounting '/tmp/gsyncd-aux-mount-Xy8taN'.
>> >> [2020-03-08 09:49:38.947704] I [fuse-bridge.c:6876:fini] 0-fuse:
>> >Closing
>> >> fuse connection to '/tmp/gsyncd-aux-mount-Xy8taN'.
>> >> [2020-03-08 09:49:51.545529] I [MSGID: 100030]
>> >[glusterfsd.c:2867:main]
>> >> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version
>> >7.3
>> >> (args: /usr/sbin/glusterfs --aux-gfid-mount --acl --log-level=INFO
>> >>
>>
>> >--log-file=/var/log/glusterfs/geo-replication/media-storage_slave-node_dr-media/mnt-srv-media-storage.log
>> >> --volfile-server=localhost --volfile-id=media-storage --client-pid=-1
>> >> /tmp/gsyncd-aux-mount-XT9WfC)
>> >> [2020-03-08 09:49:51.559518] I [glusterfsd.c:2594:daemonize]
>> >0-glusterfs:
>> >> Pid of current running process is 18484
>> >> [2020-03-08 09:49:51.645473] I [MSGID: 101190]
>> >> [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started
>> >thread
>> >> with index 0
>> >> [2020-03-08 09:49:51.645624] I [MSGID: 101190]
>> >> [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started
>> >thread
>> >> with index 1
>> >> [2020-03-08 09:49:51.701470] I [MSGID: 114020] [client.c:2436:notify]
>> >> 0-media-storage-client-0: parent translators are ready, attempting
>> >connect
>> >> on transport
>> >> [2020-03-08 09:49:51.702908] I [rpc-clnt.c:1963:rpc_clnt_reconfig]
>> >> 0-media-storage-client-0: changing port to 49152 (from 0)
>> >> [2020-03-08 09:49:51.702961] I [socket.c:865:__socket_shutdown]
>> >> 0-media-storage-client-0: intentional socket shutdown(12)
>> >> [2020-03-08 09:49:51.703807] I [MSGID: 114057]
>> >> [client-handshake.c:1375:select_server_supported_programs]
>> >> 0-media-storage-client-0: Using Program GlusterFS 4.x v1, Num
>> >(1298437),
>> >> Version (400)
>> >> [2020-03-08 09:49:51.704147] W [dict.c:999:str_to_data]
>> >> (-->/usr/lib64/glusterfs/7.3/xlator/protocol/client.so(+0x381d4)
>> >> [0x7fc5c01031d4] -->/lib64/libglusterfs.so.0(dict_set_str+0x16)
>> >> [0x7fc5ce8d82f6] -->/lib64/libglusterfs.so.0(str_to_data+0x71)
>> >> [0x7fc5ce8d4c11] ) 0-dict: value is NULL [Invalid argument]
>> >> [2020-03-08 09:49:51.704178] I [MSGID: 114006]
>> >> [client-handshake.c:1236:client_setvolume] 0-media-storage-client-0:
>> >failed
>> >> to set process-name in handshake msg
>> >> [2020-03-08 09:49:51.705982] I [MSGID: 114046]
>> >> [client-handshake.c:1105:client_setvolume_cbk]
>> >0-media-storage-client-0:
>> >> Connected to media-storage-client-0, attached to remote volume
>> >> '/srv/media-storage'.
>> >> [2020-03-08 09:49:51.707627] I [fuse-bridge.c:5166:fuse_init]
>> >> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24
>> >kernel
>> >> 7.22
>> >> [2020-03-08 09:49:51.707658] I [fuse-bridge.c:5777:fuse_graph_sync]
>> >> 0-fuse: switched to graph 0
>> >> [2020-03-08 09:56:26.875082] E [fuse-bridge.c:4188:fuse_xattr_cbk]
>> >> 0-glusterfs-fuse: extended attribute not supported by the backend
>> >storage
>> >> [2020-03-08 10:12:35.190809] E [fuse-bridge.c:4188:fuse_xattr_cbk]
>> >> 0-glusterfs-fuse: extended attribute not supported by the backend
>> >storage
>> >> [2020-03-08 10:25:06.240795] E [fuse-bridge.c:4188:fuse_xattr_cbk]
>> >> 0-glusterfs-fuse: extended attribute not supported by the backend
>> >storage
>> >> [2020-03-08 10:40:33.946794] E [fuse-bridge.c:4188:fuse_xattr_cbk]
>> >> 0-glusterfs-fuse: extended attribute not supported by the backend
>> >storage
>> >> [2020-03-08 10:43:50.459247] E
>> >[fuse-bridge.c:227:check_and_dump_fuse_W]
>> >> (-->
>> >/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fc5ce8e18ea] (-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fc5c5c93221]
>> >(-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fc5c5c94998]
>> >(-->
>> >> /lib64/libpthread.so.0(+0x7e25)[0x7fc5cd722e25] (-->
>> >> /lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ))))) 0-glusterfs-fuse:
>> >> writing to fuse device failed: No such file or directory
>> >> [2020-03-08 10:55:27.034947] E [fuse-bridge.c:4188:fuse_xattr_cbk]
>> >> 0-glusterfs-fuse: extended attribute not supported by the backend
>> >storage
>> >> [2020-03-08 11:05:53.483207] E [fuse-bridge.c:4188:fuse_xattr_cbk]
>> >> 0-glusterfs-fuse: extended attribute not supported by the backend
>> >storage
>> >> [2020-03-08 11:26:01.492270] E [fuse-bridge.c:4188:fuse_xattr_cbk]
>> >> 0-glusterfs-fuse: extended attribute not supported by the backend
>> >storage
>> >> [2020-03-08 11:32:56.618737] E
>> >[fuse-bridge.c:227:check_and_dump_fuse_W]
>> >> (-->
>> >/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fc5ce8e18ea] (-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fc5c5c93221]
>> >(-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fc5c5c94998]
>> >(-->
>> >> /lib64/libpthread.so.0(+0x7e25)[0x7fc5cd722e25] (-->
>> >> /lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ))))) 0-glusterfs-fuse:
>> >> writing to fuse device failed: No such file or directory
>> >> [2020-03-08 11:37:50.475099] E
>> >[fuse-bridge.c:227:check_and_dump_fuse_W]
>> >> (-->
>> >/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fc5ce8e18ea] (-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fc5c5c93221]
>> >(-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fc5c5c94998]
>> >(-->
>> >> /lib64/libpthread.so.0(+0x7e25)[0x7fc5cd722e25] (-->
>> >> /lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ))))) 0-glusterfs-fuse:
>> >> writing to fuse device failed: No such file or directory
>> >> [2020-03-08 11:40:54.362173] E [fuse-bridge.c:4188:fuse_xattr_cbk]
>> >> 0-glusterfs-fuse: extended attribute not supported by the backend
>> >storage
>> >> [2020-03-08 11:42:35.859423] E
>> >[fuse-bridge.c:227:check_and_dump_fuse_W]
>> >> (-->
>> >/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fc5ce8e18ea] (-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fc5c5c93221]
>> >(-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fc5c5c94998]
>> >(-->
>> >> /lib64/libpthread.so.0(+0x7e25)[0x7fc5cd722e25] (-->
>> >> /lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ))))) 0-glusterfs-fuse:
>> >> writing to fuse device failed: No such file or directory
>> >> [2020-03-08 11:44:24.906383] E
>> >[fuse-bridge.c:227:check_and_dump_fuse_W]
>> >> (-->
>> >/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fc5ce8e18ea] (-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fc5c5c93221]
>> >(-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fc5c5c94998]
>> >(-->
>> >> /lib64/libpthread.so.0(+0x7e25)[0x7fc5cd722e25] (-->
>> >> /lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ))))) 0-glusterfs-fuse:
>> >> writing to fuse device failed: No such file or directory
>> >> [2020-03-08 11:47:45.474723] E
>> >[fuse-bridge.c:227:check_and_dump_fuse_W]
>> >> (-->
>> >/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fc5ce8e18ea] (-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fc5c5c93221]
>> >(-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fc5c5c94998]
>> >(-->
>> >> /lib64/libpthread.so.0(+0x7e25)[0x7fc5cd722e25] (-->
>> >> /lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ))))) 0-glusterfs-fuse:
>> >> writing to fuse device failed: No such file or directory
>> >> [2020-03-08 11:50:58.127202] E
>> >[fuse-bridge.c:227:check_and_dump_fuse_W]
>> >> (-->
>> >/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fc5ce8e18ea] (-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fc5c5c93221]
>> >(-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fc5c5c94998]
>> >(-->
>> >> /lib64/libpthread.so.0(+0x7e25)[0x7fc5cd722e25] (-->
>> >> /lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ))))) 0-glusterfs-fuse:
>> >> writing to fuse device failed: No such file or directory
>> >> [2020-03-08 11:52:55.616968] E
>> >[fuse-bridge.c:227:check_and_dump_fuse_W]
>> >> (-->
>> >/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fc5ce8e18ea] (-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fc5c5c93221]
>> >(-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fc5c5c94998]
>> >(-->
>> >> /lib64/libpthread.so.0(+0x7e25)[0x7fc5cd722e25] (-->
>> >> /lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ))))) 0-glusterfs-fuse:
>> >> writing to fuse device failed: No such file or directory
>> >> [2020-03-08 11:56:24.039211] E [fuse-bridge.c:4188:fuse_xattr_cbk]
>> >> 0-glusterfs-fuse: extended attribute not supported by the backend
>> >storage
>> >> [2020-03-08 11:57:56.031648] E
>> >[fuse-bridge.c:227:check_and_dump_fuse_W]
>> >> (-->
>> >/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fc5ce8e18ea] (-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fc5c5c93221]
>> >(-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fc5c5c94998]
>> >(-->
>> >> /lib64/libpthread.so.0(+0x7e25)[0x7fc5cd722e25] (-->
>> >> /lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ))))) 0-glusterfs-fuse:
>> >> writing to fuse device failed: No such file or directory
>> >> [2020-03-08 12:06:19.686974] E
>> >[fuse-bridge.c:227:check_and_dump_fuse_W]
>> >> (-->
>> >/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fc5ce8e18ea] (-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fc5c5c93221]
>> >(-->
>> >>
>> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fc5c5c94998]
>> >(-->
>> >> /lib64/libpthread.so.0(+0x7e25)[0x7fc5cd722e25] (-->
>> >> /lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ))))) 0-glusterfs-fuse:
>> >> writing to fuse device failed: No such file or directory
>> >> [2020-03-08 12:12:41.888889] E [fuse-bridge.c:4188:fuse_xattr_cbk]
>> >> 0-glusterfs-fuse: extended attribute not supported by the backend
>> >storage
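[Editor's note: the recurring "extended attribute not supported by the backend storage" errors suggest verifying that the brick's filesystem actually accepts user xattrs. A hypothetical probe, assuming a Linux host; point it at the slave brick path, e.g. /data/dr-media:

```python
import os
import tempfile

def supports_user_xattr(path):
    """Try to set and read back a throwaway user xattr on a temp file under `path`."""
    if not hasattr(os, 'setxattr'):  # the xattr syscalls are Linux-only in os
        return False
    fd, tmp = tempfile.mkstemp(dir=path)
    try:
        os.setxattr(tmp, 'user.georep_probe', b'1')
        return os.getxattr(tmp, 'user.georep_probe') == b'1'
    except OSError:
        return False  # filesystem (or mount options) reject user xattrs
    finally:
        os.close(fd)
        os.unlink(tmp)

print(supports_user_xattr('.'))
```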
>> >>
>> >>
>> >>
>> >> I have seen nothing but these "No such file or directory" errors in
>> >> the logs for 5 days. I shifted the log level to DEBUG and will send
>> >> those logs as well.
>> >>
>> >> Strahil Nikolov <hunter86_bg at yahoo.com>, 12 Mar 2020 Per, 07:55
>> >tarihinde
>> >> şunu yazdı:
>> >>
>> >>> On March 11, 2020 10:17:05 PM GMT+02:00, "Etem Bayoğlu" <
>> >>> etembayoglu at gmail.com> wrote:
>> >>> >Hi Strahil,
>> >>> >
>> >>> >Thank you for your response. when I tail logs on both master and
>> >slave
>> >>> >I
>> >>> >get this:
>> >>> >
>> >>> >on slave, from
>> >>> >/var/log/glusterfs/geo-replication-slaves/<geo-session>/mnt-XXX.log
>> >>> >file:
>> >>> >
>> >>> >[2020-03-11 19:53:32.721509] E
>> >>> >[fuse-bridge.c:227:check_and_dump_fuse_W]
>> >>> >(-->
>> >/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7f78e10488ea]
>> >>> >(-->
>> >>>
>> >>/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7f78d83f6221]
>> >>> >(-->
>> >>>
>> >>/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7f78d83f7998]
>> >>> >(-->
>> >>> >/lib64/libpthread.so.0(+0x7e65)[0x7f78dfe89e65] (-->
>> >>> >/lib64/libc.so.6(clone+0x6d)[0x7f78df74f88d] )))))
>> >0-glusterfs-fuse:
>> >>> >writing to fuse device failed: No such file or directory
>> >>> >[2020-03-11 19:53:32.723758] E
>> >>> >[fuse-bridge.c:227:check_and_dump_fuse_W]
>> >>> >(-->
>> >/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7f78e10488ea]
>> >>> >(-->
>> >>>
>> >>/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7f78d83f6221]
>> >>> >(-->
>> >>>
>> >>/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7f78d83f7998]
>> >>> >(-->
>> >>> >/lib64/libpthread.so.0(+0x7e65)[0x7f78dfe89e65] (-->
>> >>> >/lib64/libc.so.6(clone+0x6d)[0x7f78df74f88d] )))))
>> >0-glusterfs-fuse:
>> >>> >writing to fuse device failed: No such file or directory
>> >>> >
>> >>> >on master,
>> >>> >from /var/log/glusterfs/geo-replication/<geo-session>/mnt-XXX.log
>> >file:
>> >>> >
>> >>> >[2020-03-11 19:40:55.872002] E [fuse-bridge.c:4188:fuse_xattr_cbk]
>> >>> >0-glusterfs-fuse: extended attribute not supported by the backend
>> >>> >storage
>> >>> >[2020-03-11 19:40:58.389748] E
>> >>> >[fuse-bridge.c:227:check_and_dump_fuse_W]
>> >>> >(-->
>> >/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7f1f4b9108ea]
>> >>> >(-->
>> >>>
>> >>/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7f1f42cc2221]
>> >>> >(-->
>> >>>
>> >>/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7f1f42cc3998]
>> >>> >(-->
>> >>> >/lib64/libpthread.so.0(+0x7e25)[0x7f1f4a751e25] (-->
>> >>> >/lib64/libc.so.6(clone+0x6d)[0x7f1f4a01abad] )))))
>> >0-glusterfs-fuse:
>> >>> >writing to fuse device failed: No such file or directory
>> >>> >[2020-03-11 19:41:08.214591] E
>> >>> >[fuse-bridge.c:227:check_and_dump_fuse_W]
>> >>> >(-->
>> >/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7f1f4b9108ea]
>> >>> >(-->
>> >>>
>> >>/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7f1f42cc2221]
>> >>> >(-->
>> >>>
>> >>/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7f1f42cc3998]
>> >>> >(-->
>> >>> >/lib64/libpthread.so.0(+0x7e25)[0x7f1f4a751e25] (-->
>> >>> >/lib64/libc.so.6(clone+0x6d)[0x7f1f4a01abad] )))))
>> >0-glusterfs-fuse:
>> >>> >writing to fuse device failed: No such file or directory
>> >>> >[2020-03-11 19:53:59.275469] E
>> >>> >[fuse-bridge.c:227:check_and_dump_fuse_W]
>> >>> >(-->
>> >/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7f1f4b9108ea]
>> >>> >(-->
>> >>>
>> >>/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7f1f42cc2221]
>> >>> >(-->
>> >>>
>> >>/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7f1f42cc3998]
>> >>> >(-->
>> >>> >/lib64/libpthread.so.0(+0x7e25)[0x7f1f4a751e25] (-->
>> >>> >/lib64/libc.so.6(clone+0x6d)[0x7f1f4a01abad] )))))
>> >0-glusterfs-fuse:
>> >>> >writing to fuse device failed: No such file or directory
>> >>> >
>> >>> >####################gsyncd.log outputs:######################
>> >>> >
>> >>> >from slave:
>> >>> >[2020-03-11 08:55:16.384085] I [repce(slave
>> >>> >master-node/srv/media-storage):96:service_loop] RepceServer:
>> >>> >terminating on
>> >>> >reaching EOF.
>> >>> >[2020-03-11 08:57:55.87364] I [resource(slave
>> >>> >master-node/srv/media-storage):1105:connect] GLUSTER: Mounting
>> >gluster
>> >>> >volume locally...
>> >>> >[2020-03-11 08:57:56.171372] I [resource(slave
>> >>> >master-node/srv/media-storage):1128:connect] GLUSTER: Mounted
>> >gluster
>> >>> >volume duration=1.0837
>> >>> >[2020-03-11 08:57:56.173346] I [resource(slave
>> >>> >master-node/srv/media-storage):1155:service_loop] GLUSTER: slave
>> >>> >listening
>> >>> >
>> >>> >from master:
>> >>> >[2020-03-11 20:08:55.145453] I [master(worker
>> >>> >/srv/media-storage):1991:syncjob] Syncer: Sync Time Taken
>> >>> >duration=134.9987 num_files=4661 job=2 return_code=0
>> >>> >[2020-03-11 20:08:55.285871] I [master(worker
>> >>> >/srv/media-storage):1421:process] _GMaster: Entry Time Taken MKD=83
>> >>> >MKN=8109 LIN=0 SYM=0 REN=0 RMD=0 CRE=0 duration=17.0358 UNL=0
>> >>> >[2020-03-11 20:08:55.286082] I [master(worker
>> >>> >/srv/media-storage):1431:process] _GMaster: Data/Metadata Time
>> >Taken
>> >>> >SETA=83 SETX=0 meta_duration=0.9334 data_duration=135.2497
>> >DATA=8109
>> >>> >XATT=0
>> >>> >[2020-03-11 20:08:55.286410] I [master(worker
>> >>> >/srv/media-storage):1441:process] _GMaster: Batch Completed
>> >>> >changelog_end=1583917610 entry_stime=None
>> >changelog_start=1583917610
>> >>> >stime=None duration=153.5185 num_changelogs=1 mode=xsync
>> >>> >[2020-03-11 20:08:55.315442] I [master(worker
>> >>> >/srv/media-storage):1681:crawl] _GMaster: processing xsync
>> >changelog
>> >>>
>> >>>
>>
>> >>path=/var/lib/misc/gluster/gsyncd/media-storage_daredevil01.zingat.com_dr-media/srv-media-storage/xsync/XSYNC-CHANGELOG.1583917613
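[Editor's note: the "Sync Time Taken" lines above carry enough data to gauge batch throughput, which is useful when judging whether a crawl has truly stalled. A small sketch; the regex is an assumption about the exact log format, and the sample line is assembled from the log above:

```python
import re

line = ("[2020-03-11 20:08:55.145453] I [master(worker /srv/media-storage):1991:syncjob] "
        "Syncer: Sync Time Taken duration=134.9987 num_files=4661 job=2 return_code=0")

# Pull the batch duration and file count out of the syncjob stats line.
m = re.search(r'duration=([\d.]+)\s+num_files=(\d+)', line)
duration, num_files = float(m.group(1)), int(m.group(2))
print(f'{num_files / duration:.1f} files/s')  # ~34.5 files/s for this batch
```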
>> >>> >
>> >>> >
>> >>> >Thank you..
>> >>> >
>> >>> >Strahil Nikolov <hunter86_bg at yahoo.com>, 11 Mar 2020 Çar, 12:28
>> >>> >tarihinde
>> >>> >şunu yazdı:
>> >>> >
>> >>> >> On March 11, 2020 10:09:27 AM GMT+02:00, "Etem Bayoğlu" <
>> >>> >> etembayoglu at gmail.com> wrote:
>> >>> >> >Hello community,
>> >>> >> >
>> >>> >> >I've set up a GlusterFS geo-replication node for disaster recovery.
>> >>> >> >I manage about 10TB of media data on a Gluster volume and want to
>> >>> >> >sync all of it to a remote location over the WAN. So I created a
>> >>> >> >slave volume at the remote disaster-recovery site and started a
>> >>> >> >geo-rep session. Data transferred fine up to about 800GB, but
>> >>> >> >syncing has been stalled for three days, even though geo-rep status
>> >>> >> >shows Active and Hybrid Crawl. No data is being sent. I've
>> >>> >> >recreated the session and restarted it, but the result is the same.
>> >>> >> >
>> >>> >> ># gluster volume geo-rep status
>> >>> >> >
>> >>> >> >MASTER NODE    MASTER VOL       MASTER BRICK          SLAVE USER    SLAVE                         SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
>> >>> >> >-----------------------------------------------------------------------------------------------------------------------------------------------------
>> >>> >> >master-node    media-storage    /srv/media-storage    root          ssh://slave-node::dr-media    slave-node    Active    Hybrid Crawl    N/A
>> >>> >> >
>> >>> >> >Any idea? please. Thank you.
>> >>> >>
>> >>> >> Hi Etem,
>> >>> >>
>> >>> >> Have you checked the logs on both source and destination? Maybe
>> >>> >> they can hint at what the issue is.
>> >>> >>
>> >>> >> Best Regards,
>> >>> >> Strahil Nikolov
>> >>> >>
>> >>>
>> >>> Hi Etem,
>> >>>
>> >>> Nothing obvious....
>> >>> I don't like this one:
>> >>>
>> >>> [2020-03-11 19:53:32.721509] E
>> >>> >[fuse-bridge.c:227:check_and_dump_fuse_W]
>> >>> >(-->
>> >/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7f78e10488ea]
>> >>> >(-->
>> >>>
>> >>/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7f78d83f6221]
>> >>> >(-->
>> >>>
>> >>/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7f78d83f7998]
>> >>> >(-->
>> >>> >/lib64/libpthread.so.0(+0x7e65)[0x7f78dfe89e65] (-->
>> >>> >/lib64/libc.so.6(clone+0x6d)[0x7f78df74f88d] )))))
>> >0-glusterfs-fuse:
>> >>> >writing to fuse device failed: No such file or directory
>> >>>
>> >>> Can you check the health of the slave volume (split-brains, brick
>> >>> status, etc.)?
>> >>>
>> >>> Maybe you can check the logs to find exactly when the master stopped
>> >>> replicating, and then check the slave's logs at that exact time.
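[Editor's note: correlating the two logs is easy to script, since every glusterfs log line starts with a bracketed UTC timestamp. A hedged sketch; the two sample lines are taken from the logs quoted above:

```python
from datetime import datetime

def log_time(line):
    """Parse the leading '[YYYY-MM-DD HH:MM:SS.ffffff]' timestamp (UTC)."""
    return datetime.strptime(line[1:line.index(']')], '%Y-%m-%d %H:%M:%S.%f')

master = '[2020-03-08 09:49:38.115108] E [fuse-bridge.c:227:check_and_dump_fuse_W] ...'
slave  = '[2020-03-08 09:49:37.993978] W [glusterfsd.c:1596:cleanup_and_exit] ...'

# A small delta means the master-side error and the slave-side shutdown line up.
delta = log_time(master) - log_time(slave)
print(delta.total_seconds())  # ~0.12s apart
```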
>> >>>
>> >>> Also, you can increase the log level on the slave and then recreate
>> >the
>> >>> georep.
>> >>> For details, check:
>> >>>
>> >>>
>> >>>
>> >
>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/configuring_the_log_level
>> >>>
>> >>> P.S.: Trace/debug can fill up your /var/log, so enable them for a
>> >short
>> >>> period of time.
>> >>>
>> >>> Best Regards,
>> >>> Strahil Nikolov
>> >>>
>> >>
>>
>> Hi Etem,
>>
>> The log seems to end abruptly, while gluster is building the list of
>> what should be done on the slave.
>> Does it really end there?
>>
>> Best Regards,
>> Strahil Nikolov
>>
> ________
>>
>>
>>
>> Community Meeting Calendar:
>>
>> Schedule -
>> Every Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://bluejeans.com/441850968
>>
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
> --
> Thanks and Regards,
> Kotresh H R
>


More information about the Gluster-users mailing list