[Gluster-users] geo-rep: -1 (Directory not empty) warning - STATUS Faulty

Aravinda avishwan at redhat.com
Wed Sep 14 06:46:03 UTC 2016


Thanks, but I couldn't find the output of one log file. To get the log file path, run:

gluster volume geo-replication <MASTER> <SLAVEHOST>::<SLAVEVOL> config log_file
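
For the volumes in this thread that would look roughly like the command below. The slave host and slave volume are taken from the log you shared; the master volume name "cloud-pro" is only a guess on my part, so substitute your actual master volume name:

gluster volume geo-replication cloud-pro gfs1geo.domain.tld::cloud-pro-geo config log_file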

regards
Aravinda

On Wednesday 14 September 2016 12:02 PM, ML mail wrote:
> Dear Aravinda,
>
> As requested, I have attached to this mail a file containing the last 100-300 lines of the three log files from the requested directory on the master node.
>
> Let me know if you did not receive the file; I am not sure whether attachments make it through this mailing list.
>
> Regards,
> ML
>
>
>
>
> On Wednesday, September 14, 2016 6:14 AM, Aravinda <avishwan at redhat.com> wrote:
> Please share the logs from the Master node which is Faulty
> (/var/log/glusterfs/geo-replication/<mastervol>_<slavehost>_<slavevol>/*.log)
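>
> For example, running something like the following on the faulty master node and pasting the output should be enough (the directory name under geo-replication depends on your actual master volume, slave host and slave volume):
>
> tail -n 300 /var/log/glusterfs/geo-replication/<mastervol>_<slavehost>_<slavevol>/*.log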
>
> regards
> Aravinda
>
>
> On Wednesday 14 September 2016 01:10 AM, ML mail wrote:
>> Hi,
>>
>> I just discovered that one of my replicated GlusterFS volumes is not being geo-replicated to my slave node (STATUS Faulty). The log file on the geo-rep slave node indicates an error about a directory that is apparently not empty. Below you will find the full log output for this problem, which is repeated every 5 seconds.
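>>
>> For reference, the Faulty state is what the geo-replication status command reports; a minimal example of checking it, assuming a master volume named cloud-pro (placeholder, not our real name), would be:
>>
>> gluster volume geo-replication cloud-pro gfs1geo.domain.tld::cloud-pro-geo status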
>>
>> I am using GlusterFS 3.7.12 on Debian 8 with FUSE mount on the clients.
>>
>>
>> How can I fix this issue?
>>
>> Thanks in advance for your help
>>
>> Regards
>> ML
>>
>>
>> Log File: /var/log/glusterfs/geo-replication-slaves/d99af2fa-439b-4a21-bf3a-38f3849f87ec:gluster%3A%2F%2F127.0.0.1%3Acloud-pro-geo.gluster.log
>>
>> [2016-09-13 19:30:52.098881] I [MSGID: 100030] [glusterfsd.c:2338:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.12 (args: /usr/sbin/glusterfs --aux-gfid-mount --acl --log-file=/var/log/glusterfs/geo-replication-slaves/d99af2fa-439b-4a21-bf3a-38f3849f87ec:gluster%3A%2F%2F127.0.0.1%3Acloud-pro-geo.gluster.log --volfile-server=localhost --volfile-id=cloud-pro-geo --client-pid=-1 /tmp/gsyncd-aux-mount-X9XX0v)
>> [2016-09-13 19:30:52.109030] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
>> [2016-09-13 19:30:52.111565] I [graph.c:269:gf_add_cmdline_options] 0-cloud-pro-geo-md-cache: adding option 'cache-posix-acl' for volume 'cloud-pro-geo-md-cache' with value 'true'
>> [2016-09-13 19:30:52.113344] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
>> [2016-09-13 19:30:52.113710] I [MSGID: 114020] [client.c:2106:notify] 0-cloud-pro-geo-client-0: parent translators are ready, attempting connect on transport
>> Final graph:
>> +------------------------------------------------------------------------------+
>> 1: volume cloud-pro-geo-client-0
>> 2:     type protocol/client
>> 3:     option ping-timeout 42
>> 4:     option remote-host gfs1geo.domain.tld
>> 5:     option remote-subvolume /data/cloud-pro-geo/brick
>> 6:     option transport-type socket
>> 7:     option username 92671588-f829-4c03-a80f-6299b059452e
>> 8:     option password e1d425d4-dfe7-477e-8dc1-3704c9c9df83
>> 9:     option send-gids true
>> 10: end-volume
>> 11:
>> 12: volume cloud-pro-geo-dht
>> 13:     type cluster/distribute
>> 14:     subvolumes cloud-pro-geo-client-0
>> 15: end-volume
>> 16:
>> 17: volume cloud-pro-geo-write-behind
>> 18:     type performance/write-behind
>> 19:     subvolumes cloud-pro-geo-dht
>> 20: end-volume
>> 21:
>> 22: volume cloud-pro-geo-read-ahead
>> 23:     type performance/read-ahead
>> 24:     subvolumes cloud-pro-geo-write-behind
>> 25: end-volume
>> 26:
>> 27: volume cloud-pro-geo-readdir-ahead
>> 28:     type performance/readdir-ahead
>> 29:     subvolumes cloud-pro-geo-read-ahead
>> 30: end-volume
>> 31:
>> 32: volume cloud-pro-geo-io-cache
>> 33:     type performance/io-cache
>> 34:     subvolumes cloud-pro-geo-readdir-ahead
>> 35: end-volume
>> 36:
>> 37: volume cloud-pro-geo-quick-read
>> 38:     type performance/quick-read
>> 39:     subvolumes cloud-pro-geo-io-cache
>> 40: end-volume
>> 41:
>> 42: volume cloud-pro-geo-open-behind
>> 43:     type performance/open-behind
>> 44:     subvolumes cloud-pro-geo-quick-read
>> 45: end-volume
>> 46:
>> 47: volume cloud-pro-geo-md-cache
>> 48:     type performance/md-cache
>> 49:     option cache-posix-acl true
>> 50:     subvolumes cloud-pro-geo-open-behind
>> 51: end-volume
>> 52:
>> 53: volume cloud-pro-geo
>> 54:     type debug/io-stats
>> 55:     option log-level INFO
>> 56:     option latency-measurement off
>> 57:     option count-fop-hits off
>> 58:     subvolumes cloud-pro-geo-md-cache
>> 59: end-volume
>> 60:
>> 61: volume posix-acl-autoload
>> 62:     type system/posix-acl
>> 63:     subvolumes cloud-pro-geo
>> 64: end-volume
>> 65:
>> 66: volume gfid-access-autoload
>> 67:     type features/gfid-access
>> 68:     subvolumes posix-acl-autoload
>> 69: end-volume
>> 70:
>> 71: volume meta-autoload
>> 72:     type meta
>> 73:     subvolumes gfid-access-autoload
>> 74: end-volume
>> 75:
>> +------------------------------------------------------------------------------+
>> [2016-09-13 19:30:52.115096] I [rpc-clnt.c:1868:rpc_clnt_reconfig] 0-cloud-pro-geo-client-0: changing port to 49154 (from 0)
>> [2016-09-13 19:30:52.121610] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-cloud-pro-geo-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
>> [2016-09-13 19:30:52.121911] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-cloud-pro-geo-client-0: Connected to cloud-pro-geo-client-0, attached to remote volume '/data/cloud-pro-geo/brick'.
>> [2016-09-13 19:30:52.121933] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-cloud-pro-geo-client-0: Server and Client lk-version numbers are not same, reopening the fds
>> [2016-09-13 19:30:52.128072] I [fuse-bridge.c:5172:fuse_graph_setup] 0-fuse: switched to graph 0
>> [2016-09-13 19:30:52.128139] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-cloud-pro-geo-client-0: Server lk version = 1
>> [2016-09-13 19:30:52.128258] I [fuse-bridge.c:4083:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.23
>> [2016-09-13 19:30:57.474905] I [MSGID: 109066] [dht-rename.c:1568:dht_rename] 0-cloud-pro-geo-dht: renaming /.gfid/f7eb9d21-d39a-4dd6-941c-46d430e18aa2/File 2016.xlsx.ocTransferId1333449197.part (hash=cloud-pro-geo-client-0/cache=cloud-pro-geo-client-0) => /.gfid/f7eb9d21-d39a-4dd6-941c-46d430e18aa2/File 2016.xlsx (hash=cloud-pro-geo-client-0/cache=cloud-pro-geo-client-0)
>> [2016-09-13 19:30:57.475649] W [fuse-bridge.c:1787:fuse_rename_cbk] 0-glusterfs-fuse: 25: /.gfid/f7eb9d21-d39a-4dd6-941c-46d430e18aa2/File 2016.xlsx.ocTransferId1333449197.part -> /.gfid/f7eb9d21-d39a-4dd6-941c-46d430e18aa2/File 2016.xlsx => -1 (Directory not empty)
>> [2016-09-13 19:30:57.506563] I [fuse-bridge.c:5013:fuse_thread_proc] 0-fuse: unmounting /tmp/gsyncd-aux-mount-X9XX0v
>> [2016-09-13 19:30:57.507269] W [glusterfsd.c:1251:cleanup_and_exit] (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x80a4) [0x7efc7498b0a4] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7efc75beb725] -->/usr/sbin/glusterfs(cleanup_and_exit+0x57) [0x7efc75beb5a7] ) 0-: received signum (15), shutting down
>> [2016-09-13 19:30:57.507293] I [fuse-bridge.c:5720:fini] 0-fuse: Unmounting '/tmp/gsyncd-aux-mount-X9XX0v'.
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users


