[Gluster-users] Replication failed again...

Julien Groselle julien.groselle at gmail.com
Wed Aug 17 07:56:31 UTC 2011


Hi Pranith,

I think my rebalance problem is a FUSE problem...
I don't like using FUSE, and it was the only reason I didn't want to set up a
GlusterFS architecture, but it wasn't my decision to make :)

When I run gluster volume rebalance REP_SVG start, step 1 (fixing the
layout) begins, and after maybe 2 days the rebalance fails.
The log contains only FUSE errors... see my first mail for details.
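
For reference, the repeated self-heal errors can be tallied with a quick filter. A minimal sketch (the sample log lines below mirror the format quoted later in this thread; /tmp/rep_svg_sample.log is an illustrative stand-in for the real client log at /var/log/glusterfs/etc-glusterd-mount-REP_SVG.log):

```shell
# Write a small sample log in the same format as the errors quoted in
# this thread (illustrative data, not the real log).
cat > /tmp/rep_svg_sample.log <<'EOF'
[2011-08-13 18:36:47.252272] E [afr-self-heal-entry.c:1085:afr_sh_entry_impunge_newfile_cbk] 0-REP_SVG-replicate-0: creation of /Sauvegarde/a/diameter on REP_SVG-client-1 failed (File exists)
[2011-08-13 18:36:47.252467] E [afr-self-heal-entry.c:1085:afr_sh_entry_impunge_newfile_cbk] 0-REP_SVG-replicate-0: creation of /Sauvegarde/a/help on REP_SVG-client-1 failed (File exists)
[2011-08-13 18:36:47.258712] W [fuse-bridge.c:2499:fuse_xattr_cbk] 0-glusterfs-fuse: GETXATTR(trusted.distribute.fix.layout) /Sauvegarde/a/diameter => -1 (No such file or directory)
EOF
# Count only the E-level self-heal creation failures, ignoring the
# FUSE-level warnings:
grep -c 'failed (File exists)' /tmp/rep_svg_sample.log
```

Running the same grep against the real client log would show how many entries the self-heal is repeatedly failing on.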

I downloaded the FUSE tarball from the Gluster website:
http://download.gluster.com/pub/gluster/glusterfs/fuse/fuse-2.7.4glfs11.tar.gz

Does anyone have a solution?
For now I'm looking for another solution, because we need our backups to
be safe.

Best regards

*Julien Groselle*

2011/8/17 Pranith Kumar K <pranithk at gluster.com>

> Hi Julien,
>      Why did the rebalance operation fail? Could you let us know what
> steps led to this situation?
>
> Pranith
>
>
> On 08/16/2011 12:35 PM, Julien Groselle wrote:
>
> Hello,
>
> I'm back with my replication issues! :(
> I can't understand why it replicates 3 TB and then no more...
>
> We have 2 servers, each with its own storage array, and I installed GlusterFS
> to replicate between them...
> As far as I understand, there is just one command to use?
>
>  toomba:~# gluster peer status
> Number of Peers: 1
>
> Hostname: kaiserstuhl-svg
> Uuid: 5b79b4bc-c8d2-48d4-bd43-37991197ab47
> State: Peer in Cluster (Connected)
>
> I have reconfigured some of the default parameters and tried the diff
> algorithm for self-heal:
>
>  toomba:~# gluster volume info all
>
>  Volume Name: REP_SVG
> Type: Replicate
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: toomba-svg:/storage/backup
>  Brick2: kaiserstuhl-svg:/storage/backup
> Options Reconfigured:
> cluster.self-heal-window-size: 64
> cluster.data-self-heal-algorithm: diff
> diagnostics.client-log-level: WARNING
> performance.write-behind-window-size: 1MB
> performance.cache-size: 32MB
> diagnostics.brick-log-level: WARNING
>
> And I just use this command:
> gluster volume rebalance REP_SVG start
>
> And after 2 or 3 days of "fix layout", I always get the same message:
>  toomba:~# gluster volume rebalance REP_SVG status
> rebalance failed
>
> Here are the error messages:
>  toomba:~# tail /var/log/glusterfs/etc-glusterd-mount-REP_SVG.log
> [2011-08-13 18:36:47.252272] E
> [afr-self-heal-entry.c:1085:afr_sh_entry_impunge_newfile_cbk]
> 0-REP_SVG-replicate-0: creation of /Sauvegarde/
> chaiten.coe.int/usrshare/20110721/wireshark/diameter on REP_SVG-client-1
> failed (File exists)
> [2011-08-13 18:36:47.252467] E
> [afr-self-heal-entry.c:1085:afr_sh_entry_impunge_newfile_cbk]
> 0-REP_SVG-replicate-0: creation of /Sauvegarde/
> chaiten.coe.int/usrshare/20110721/wireshark/help on REP_SVG-client-1
> failed (File exists)
> [2011-08-13 18:36:47.252692] E
> [afr-self-heal-entry.c:1085:afr_sh_entry_impunge_newfile_cbk]
> 0-REP_SVG-replicate-0: creation of /Sauvegarde/
> chaiten.coe.int/usrshare/20110721/wireshark/radius on REP_SVG-client-1
> failed (File exists)
> [2011-08-13 18:36:47.253069] E
> [afr-self-heal-entry.c:1085:afr_sh_entry_impunge_newfile_cbk]
> 0-REP_SVG-replicate-0: creation of /Sauvegarde/
> chaiten.coe.int/usrshare/20110721/wireshark/wimaxasncp on REP_SVG-client-1
> failed (File exists)
> [2011-08-13 18:36:47.253255] E
> [afr-self-heal-entry.c:1085:afr_sh_entry_impunge_newfile_cbk]
> 0-REP_SVG-replicate-0: creation of /Sauvegarde/
> chaiten.coe.int/usrshare/20110721/wireshark/tpncp on REP_SVG-client-1
> failed (File exists)
> [2011-08-13 18:36:47.253400] E
> [afr-self-heal-entry.c:1085:afr_sh_entry_impunge_newfile_cbk]
> 0-REP_SVG-replicate-0: creation of /Sauvegarde/
> chaiten.coe.int/usrshare/20110721/wireshark/dtds on REP_SVG-client-1
> failed (File exists)
> [2011-08-13 18:36:47.253758] E
> [afr-self-heal-entry.c:1085:afr_sh_entry_impunge_newfile_cbk]
> 0-REP_SVG-replicate-0: creation of /Sauvegarde/
> chaiten.coe.int/usrshare/20110721/wireshark/init.lua on REP_SVG-client-1
> failed (File exists)
> [2011-08-13 18:36:47.258712] W [fuse-bridge.c:2499:fuse_xattr_cbk]
> 0-glusterfs-fuse: 235921125: GETXATTR(trusted.distribute.fix.layout)
> /Sauvegarde/chaiten.coe.int/usrshare/20110721/wireshark/diameter => -1 (No
> such file or directory)
> [2011-08-13 18:36:47.271186] W [fuse-bridge.c:582:fuse_fd_cbk]
> 0-glusterfs-fuse: 235921126: OPENDIR() /Sauvegarde/
> chaiten.coe.int/usrshare/20110721/wireshark/diameter => -1 (No such file
> or directory)
> [2011-08-13 18:36:47.985683] W [glusterfsd.c:712:cleanup_and_exit]
> (-->/lib/libc.so.6(clone+0x6d) [0x7f0abb79d02d]
> (-->/lib/libpthread.so.0(+0x68ba) [0x7f0abba358ba]
> (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x40536d]))) 0-: received
> signum (15), shutting down
>
>  toomba:~# tail /var/log/glusterfs/bricks/storage-backup.log
> [2011-08-13 18:36:47.255532] W [inode.c:1035:inode_path]
> 0-/storage/backup/inode: no dentry for non-root inode 346366485:
> 47275fbe-a755-40a0-af3c-9545ea0361e9
> [2011-08-13 18:36:47.257356] W [inode.c:1035:inode_path]
> 0-/storage/backup/inode: no dentry for non-root inode 346366485:
> 47275fbe-a755-40a0-af3c-9545ea0361e9
> [2011-08-13 18:36:47.257807] W [inode.c:1035:inode_path]
> 0-/storage/backup/inode: no dentry for non-root inode 346366485:
> 47275fbe-a755-40a0-af3c-9545ea0361e9
> [2011-08-13 18:36:47.258342] E [posix.c:3209:posix_getxattr]
> 0-REP_SVG-posix: listxattr failed on /storage/backup/diameter: No such file
> or directory
> [2011-08-13 18:36:47.271009] E [posix.c:957:posix_opendir] 0-REP_SVG-posix:
> opendir failed on /diameter: No such file or directory
>  [2011-08-13 18:36:48.168309] W
> [socket.c:1494:__socket_proto_state_machine] 0-tcp.REP_SVG-server: reading
> from socket failed. Error (Transport endpoint is not connected), peer (
> 192.168.250.58:1022)
> [2011-08-13 18:36:48.194917] W [inode.c:1035:inode_path]
> 0-/storage/backup/inode: no dentry for non-root inode 431358804:
> 8f5a01c3-5d35-4b86-8a93-2957403360b4
> [2011-08-13 18:36:48.224864] W [inode.c:1035:inode_path]
> 0-/storage/backup/inode: no dentry for non-root inode 371344980:
> c17bbf87-112a-4c45-a481-d923b5780d11
> [2011-08-13 18:36:48.225068] W [inode.c:1035:inode_path]
> 0-/storage/backup/inode: no dentry for non-root inode 299767723:
> d8fff55d-80ae-4311-9864-d39daa52c2b8
> [2011-08-13 18:36:48.225176] W [inode.c:1035:inode_path]
> 0-/storage/backup/inode: no dentry for non-root inode 306335003:
> d7d8dcd2-02ab-4d6a-99ec-c20aa481ef6f
>
> Please tell me that I missed an option and that you have THE solution...
> Because I have tried so many parameters, and every time the replication
> has failed...
>
>  Let me know if you need more information.
>
>  Best regards.
>
> *Julien Groselle*
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
>


More information about the Gluster-users mailing list