[Gluster-users] geo-replication not syncing files...

Wade Fitzpatrick wade.fitzpatrick at ladbrokes.com.au
Wed Nov 11 01:38:40 UTC 2015


Your ssh commands connect to port 2503 - is sshd actually listening on that 
port on the slaves?
Does it use privilege separation?
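A quick sanity check from one of the master nodes would be to run the same 
ssh invocation by hand, reusing the key and port from your config output 
below (the target host and the trailing command here are just examples):

    ssh -p 2503 -oPasswordAuthentication=no -oStrictHostKeyChecking=no \
        -i /var/lib/glusterd/geo-replication/secret.pem root@gluster-wien-02 /usr/sbin/gluster --version

and on the slave side confirm that sshd is bound to 2503, e.g. with 
"ss -tln | grep 2503" or the netstat equivalent.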

Don't force the change_detector to changelog before the initial xsync crawl 
has completed.
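If you just want to see what the session is using right now, reading the 
option back is harmless, e.g.:

    gluster volume geo-replication ger-ber-01 ssh://gluster-wien-02::aut-wien-01 config change_detector

As far as I know gsyncd switches from the xsync crawl to changelog on its 
own once the initial crawl has finished.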

The warning "fuse: xlator does not implement release_cbk" was fixed in 
3.6.0alpha1, but it looks like the fix could easily be backported:
https://github.com/gluster/glusterfs/commit/bca9eab359710eb3b826c6441126e2e56f774df5
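
If you build your own packages, cherry-picking that commit onto a 
release-3.5 checkout should be straightforward; untested, but roughly:

    git clone https://github.com/gluster/glusterfs.git && cd glusterfs
    git checkout release-3.5
    git cherry-pick bca9eab359710eb3b826c6441126e2e56f774df5

The warning itself looks harmless though, so I wouldn't treat it as the 
cause of the missing sync.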

On 11/11/2015 3:20 AM, Dietmar Putz wrote:
> Hi all,
>
> I need some help with a geo-replication issue...
> I recently upgraded two 6-node distributed-replicated Gluster clusters from
> Ubuntu 12.04.5 LTS to 14.04.3 LTS and from GlusterFS 3.4.7 to 3.5.6.
> Since then, geo-replication does not start syncing; it has remained in the
> state shown in the 'status detail' output below for about 48 hours.
>
> I followed the upgrade instructions for an existing geo-replication setup:
> http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5
>
> The master_gfid_file.txt was created and applied to the slave volume, and
> geo-replication was started with the 'force' option.
> In the gluster.log on the slave I find thousands of messages like
> ".../.gfid/1abb953b-aa9d-4fa3-9a72-415204057572 => -1 (Operation not permitted)"
> and no files are synced.
>
> I'm not sure what's going on, and since about 40 TB of data were already
> replicated by the old 3.4.7 setup I'm hesitant to experiment...
> So I have some questions... maybe somebody can give me some hints:
>
> 1. As shown in the example below, the trusted.gfid of the same file differs
> between the master and slave volumes. As far as I understood the upgrade
> howto, after applying the master_gfid_file.txt on the slave the gfids
> should be identical on master and slave... is that right?
> 2. As shown in the config below, the change_detector is 'xsync'. Somewhere
> I read that xsync is used for the initial replication and that it switches
> to 'changelog' later, once the entire sync is done. Should I try to set the
> change_detector to 'changelog'; does that make sense...?
>
> Any other ideas that could help me solve this problem...?
>
> Best regards,
> Dietmar
>
>
>
>
> [ 11:10:01 ] - root at gluster-ger-ber-09  ~ $glusterfs --version
> glusterfs 3.5.6 built on Sep 16 2015 15:27:30
> ...
> [ 11:11:37 ] - root at gluster-ger-ber-09  ~ $cat /var/lib/glusterd/glusterd.info | grep operating-version
> operating-version=30501
>
>
> [ 10:55:35 ] - root at gluster-ger-ber-09  ~ $gluster volume geo-replication ger-ber-01 ssh://gluster-wien-02::aut-wien-01 status detail
>
> MASTER NODE           MASTER VOL    MASTER BRICK       SLAVE                                 STATUS         CHECKPOINT STATUS    CRAWL STATUS    FILES SYNCD    FILES PENDING    BYTES PENDING    DELETES PENDING    FILES SKIPPED
> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> gluster-ger-ber-09    ger-ber-01    /gluster-export    gluster-wien-05-int::aut-wien-01      Active         N/A                  Hybrid Crawl    0              8191             0                0                  0
> gluster-ger-ber-11    ger-ber-01    /gluster-export    ssh://gluster-wien-02::aut-wien-01    Not Started    N/A                  N/A             N/A            N/A              N/A              N/A                N/A
> gluster-ger-ber-10    ger-ber-01    /gluster-export    ssh://gluster-wien-02::aut-wien-01    Not Started    N/A                  N/A             N/A            N/A              N/A              N/A                N/A
> gluster-ger-ber-12    ger-ber-01    /gluster-export    ssh://gluster-wien-02::aut-wien-01    Not Started    N/A                  N/A             N/A            N/A              N/A              N/A                N/A
> gluster-ger-ber-07    ger-ber-01    /gluster-export    ssh://gluster-wien-02::aut-wien-01    Not Started    N/A                  N/A             N/A            N/A              N/A              N/A                N/A
> gluster-ger-ber-08    ger-ber-01    /gluster-export    gluster-wien-04-int::aut-wien-01      Passive        N/A                  N/A             0              0                0                  0                0
> [ 10:55:48 ] - root at gluster-ger-ber-09  ~ $
>
>
> [ 10:56:56 ] - root at gluster-ger-ber-09  ~ $gluster volume geo-replication ger-ber-01 ssh://gluster-wien-02::aut-wien-01 config
> special_sync_mode: partial
> state_socket_unencoded: /var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-02_aut-wien-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01.socket
> gluster_log_file: /var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01.gluster.log
> ssh_command: ssh -p 2503 -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
> ignore_deletes: true
> change_detector: xsync
> ssh_command_tar: ssh -p 2503 -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
> working_dir: /var/run/gluster/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01
> remote_gsyncd: /nonexistent/gsyncd
> log_file: /var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01.log
> socketdir: /var/run
> state_file: /var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-02_aut-wien-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01.status
> state_detail_file: /var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-02_aut-wien-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01-detail.status
> session_owner: 6a071cfa-b150-4f0b-b1ed-96ab5d4bd671
> gluster_command_dir: /usr/sbin/
> pid_file: /var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-02_aut-wien-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01.pid
> georep_session_working_dir: /var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-02_aut-wien-01/
> gluster_params: aux-gfid-mount
> volume_id: 6a071cfa-b150-4f0b-b1ed-96ab5d4bd671
> [ 11:10:01 ] - root at gluster-ger-ber-09  ~ $
>
>
>
> [ 12:45:34 ] - root at gluster-wien-05 /var/log/glusterfs/geo-replication-slaves $tail -f 6a071cfa-b150-4f0b-b1ed-96ab5d4bd671\:gluster%3A%2F%2F127.0.0.1%3Aaut-wien-01.gluster.log
> [2015-11-10 12:59:16.097932] W [fuse-bridge.c:1942:fuse_create_cbk] 0-glusterfs-fuse: 54267: /.gfid/1abb953b-aa9d-4fa3-9a72-415204057572 => -1 (Operation not permitted)
> [2015-11-10 12:59:16.098044] W [defaults.c:1381:default_release] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.6/xlator/mount/fuse.so(+0xfb4d) [0x7fc9cd104b4d] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.6/xlator/mount/fuse.so(free_fuse_state+0x85) [0x7fc9cd0fab95] (-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(fd_unref+0x10e) [0x7fc9cf52ec9e]))) 0-fuse: xlator does not implement release_cbk
> ...
>
>
> grep 1abb953b-aa9d-4fa3-9a72-415204057572 master_gfid_file.txt
> 1abb953b-aa9d-4fa3-9a72-415204057572 1050/hyve/364/14158.mp4
>
> putz at sdn-de-gate-01:~/central$ ./mycommand.sh -H gluster-ger,gluster-wien -c "getfattr -m . -d -e hex /gluster-export/1050/hyve/364/14158.mp4"
> ...
> master volume :
> -----------------------------------------------------
> Host : gluster-ger-ber-09-int
> # file: gluster-export/1050/hyve/364/14158.mp4
> trusted.afr.ger-ber-01-client-6=0x000000000000000000000000
> trusted.afr.ger-ber-01-client-7=0x000000000000000000000000
> trusted.gfid=0x1abb953baa9d4fa39a72415204057572
> trusted.glusterfs.6a071cfa-b150-4f0b-b1ed-96ab5d4bd671.xtime=0x54bff5c40008dd7f 
>
> -----------------------------------------------------
> Host : gluster-ger-ber-10-int
> # file: gluster-export/1050/hyve/364/14158.mp4
> trusted.afr.ger-ber-01-client-6=0x000000000000000000000000
> trusted.afr.ger-ber-01-client-7=0x000000000000000000000000
> trusted.gfid=0x1abb953baa9d4fa39a72415204057572
> trusted.glusterfs.6a071cfa-b150-4f0b-b1ed-96ab5d4bd671.xtime=0x54bff5c40008dd7f 
>
> ...
> slave volume :
> Host : gluster-wien-04
> # file: gluster-export/1050/hyve/364/14158.mp4
> trusted.afr.aut-wien-01-client-2=0x000000000000000000000000
> trusted.afr.aut-wien-01-client-3=0x000000000000000000000000
> trusted.gfid=0x129ba62c3d214b34beb366fb1e2c8e4b
> trusted.glusterfs.6a071cfa-b150-4f0b-b1ed-96ab5d4bd671.xtime=0x54bff5c40008dd7f 
>
> -----------------------------------------------------
> Host : gluster-wien-05
> # file: gluster-export/1050/hyve/364/14158.mp4
> trusted.afr.aut-wien-01-client-2=0x000000000000000000000000
> trusted.afr.aut-wien-01-client-3=0x000000000000000000000000
> trusted.gfid=0x129ba62c3d214b34beb366fb1e2c8e4b
> trusted.glusterfs.6a071cfa-b150-4f0b-b1ed-96ab5d4bd671.xtime=0x54bff5c40008dd7f 
>
> -----------------------------------------------------
> ...
> putz at sdn-de-gate-01:~/central$
>
>
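
Regarding your question 1: one way to spot-check whether the gfid sync from 
the upgrade guide actually landed on the slave, reusing the file from your 
own paste, would be:

    grep 1050/hyve/364/14158.mp4 master_gfid_file.txt
    getfattr -n trusted.gfid -e hex /gluster-export/1050/hyve/364/14158.mp4    # on a slave brick

In the output you pasted the slave bricks still report 
trusted.gfid=0x129ba62c... while the master has 0x1abb953b..., so at least 
for that file the gfids were not brought in line.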


