[Bugs] [Bug 1579615] [geo-rep]: [Errno 39] Directory not empty
bugzilla at redhat.com
Fri May 18 03:34:48 UTC 2018
https://bugzilla.redhat.com/show_bug.cgi?id=1579615
--- Comment #1 from Raghavendra G <rgowdapp at redhat.com> ---
Description of problem:
=======================
Ran automated test cases with a 3x3 master volume and a 3x3 slave volume
(rsync + FUSE). The geo-rep status was stuck in history crawl, with some
workers' status 'FAULTY'.
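For reference, the per-worker status can be inspected with the standard
geo-rep status command (placeholders stand in for the actual names):

[root@master]# gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> status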
[root@master]# gluster v info
The worker crashed with 'Directory not empty':
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 210, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 802, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1676, in service_loop
    g3.crawlwrap(oneshot=True)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 597, in crawlwrap
    self.crawl()
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1470, in crawl
    self.changelogs_batch_process(changes)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1370, in changelogs_batch_process
    self.process(batch)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1204, in process
    self.process_change(change, done, retry)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1114, in process_change
    failures = self.slave.server.entry_ops(entries)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 228, in __call__
    return self.ins(self.meth, *a)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 210, in __call__
    raise res
OSError: [Errno 39] Directory not empty: '.gfid/b6c0b18a-8a5a-408b-88ec-a01fb88c8bfe/level46'
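For context on the errno: entry_ops() on the slave ultimately attempts an
rmdir-style removal for directory entries, and a POSIX rmdir fails with
ENOTEMPTY (errno 39) when the directory still has children. A minimal,
standalone Python sketch (throwaway paths, not gsyncd internals) reproduces
the same OSError:

    import errno
    import os
    import tempfile

    # Throwaway directory standing in for the slave's view of the sync target.
    root = tempfile.mkdtemp()
    parent = os.path.join(root, "level46")
    os.mkdir(parent)
    open(os.path.join(parent, "stale-child"), "w").close()

    try:
        os.rmdir(parent)  # fails: the directory still has an entry
    except OSError as e:
        assert e.errno == errno.ENOTEMPTY  # errno 39 on Linux
        print(e)  # [Errno 39] Directory not empty: '/tmp/.../level46'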
Version-Release number of selected component (if applicable):
=============================================================
[root@master]# rpm -qa | grep gluster
glusterfs-server-3.12.2-8.el7rhgs.x86_64
glusterfs-api-3.12.2-8.el7rhgs.x86_64
glusterfs-rdma-3.12.2-8.el7rhgs.x86_64
glusterfs-cli-3.12.2-8.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-libs-3.12.2-8.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-8.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.2.x86_64
vdsm-gluster-4.19.43-2.3.el7rhgs.noarch
glusterfs-events-3.12.2-8.el7rhgs.x86_64
glusterfs-3.12.2-8.el7rhgs.x86_64
glusterfs-fuse-3.12.2-8.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-8.el7rhgs.x86_64
python2-gluster-3.12.2-8.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
How reproducible:
=================
1/1
Actual results:
===============
The worker crashed with 'Directory not empty' tracebacks, which flooded the
logs.
Expected results:
=================
There should be no crash.
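One plausible direction, sketched for illustration only (the helper name and
the shape of the failure list below are hypothetical, not the actual patch):
treat ENOTEMPTY as a retriable per-entry failure that is reported back to the
master for a later changelog replay, instead of letting the OSError propagate
through repce and take the worker faulty.

    import errno
    import os

    # Errors that typically resolve on a later replay, e.g. once the
    # directory's children have themselves been synced away.
    RETRIABLE = {errno.ENOTEMPTY, errno.ENOENT, errno.EBUSY}

    def safe_rmdir(path, failures):
        """Attempt rmdir; record retriable errors instead of raising.

        'failures' stands in for the per-entry failure list that
        entry_ops() returns to the master, so the entry can be retried
        rather than crashing the worker.
        """
        try:
            os.rmdir(path)
        except OSError as e:
            if e.errno in RETRIABLE:
                failures.append((path, e.errno))  # defer for retry
            else:
                raise  # genuinely unexpected; surface it

    # Usage sketch: a non-empty 'failures' list would be sent back
    # to the master for reprocessing on the next crawl.
    # failures = []
    # safe_rmdir('.gfid/<gfid>/level46', failures)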