[Gluster-users] V3.0 and rsync crash

Harshavardhana harsha at gluster.com
Tue Dec 15 00:39:33 UTC 2009


Hi Larry,

    Your configuration is currently not supported, so you may encounter
issues like the one described in your mail below. The namespace and
scheduler options you are passing to cluster/distribute belong to the old
cluster/unify translator and are not used by distribute; the configuration
below drops them and runs distribute on the client side instead.

    I would suggest the following configuration for your setup:

volume vol1
 type storage/posix                          # POSIX FS translator
 option directory /mnt/glusterfs/vol1        # Export this directory
end-volume

volume vol2
 type storage/posix
 option directory /mnt/glusterfs/vol2
end-volume

## Add network serving capability to the above bricks
volume server
 type protocol/server
 option transport-type tcp                       # For TCP/IP transport
 subvolumes vol1 vol2
 option auth.addr.vol1.allow 10.0.0.*            # access to volume
 option auth.addr.vol2.allow 10.0.0.*
end-volume



client config file:



volume brick1
 type protocol/client
 option transport-type tcp            # for TCP/IP transport
 option remote-host gfs001            # IP address of the remote volume
 option remote-subvolume vol1         # name of the remote volume
end-volume

volume brick2
 type protocol/client
 option transport-type tcp            # for TCP/IP transport
 option remote-host gfs001            # IP address of the remote volume
 option remote-subvolume vol2         # name of the remote volume
end-volume

volume bricks
 type cluster/distribute
 subvolumes brick1 brick2
end-volume
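
To bring this up, commands along these lines should work (the volfile
paths and mount point below are assumptions, adjust them to your layout):

 # on gfs001, start glusterfsd with the server volfile above
 glusterfsd -f /etc/glusterfs/glusterfsd.vol

 # on the client, mount the distributed volume using the client volfile
 glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/storage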


Let us know how this works. I will also file a bug to track the crash.

Regards
--
Harshavardhana
Gluster - http://www.gluster.com


On Tue, Dec 15, 2009 at 5:49 AM, Larry Bates <larry.bates at vitalesafe.com> wrote:

>  Sure.  When Ctrl-C is pressed on the client (to terminate rsync), the logs
> show:
>
> Client log tail:
>
> [2009-12-14 08:45:40] N [fuse-bridge.c:2931:fuse_init] glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.8
> [2009-12-14 18:15:45] E [saved-frames.c:165:saved_frames_unwind] storage: forced unwinding frame type(1) op(UNLINK)
> [2009-12-14 18:15:45] W [fuse-bridge.c:1207:fuse_unlink_cbk] glusterfs-fuse: 8046528: UNLINK() /storage/blobdata/20/40/.49f4020ad397eb689df4e83770a9.8Zirsl => -1 (Transport endpoint is not connected)
> [2009-12-14 18:15:45] W [fuse-bridge.c:1167:fuse_err_cbk] glusterfs-fuse: 8046529: FLUSH() ERR => -1 (Transport endpoint is not connected)
> [2009-12-14 18:15:45] N [client-protocol.c:6972:notify] storage: disconnected
> [2009-12-14 18:15:45] E [socket.c:760:socket_connect_finish] storage: connection to 10.0.0.91:6996 failed (Connection refused)
> [2009-12-14 18:15:45] E [socket.c:760:socket_connect_finish] storage: connection to 10.0.0.91:6996 failed (Connection refused)
>
>
> Server log tail:
>
> [2009-12-14 08:52:57] N [server-protocol.c:5809:mop_setvolume] server: accepted client from 10.0.0.71:1021
> [2009-12-14 08:52:57] N [server-protocol.c:5809:mop_setvolume] server: accepted client from 10.0.0.71:1020
> [2009-12-14 09:51:45] W [posix.c:246:posix_lstat_with_gen] brick1: Access to /mnt/glusterfs/vol1//.. (on dev 2304) is crossing device (2052)
> [2009-12-14 09:51:45] W [posix.c:246:posix_lstat_with_gen] brick2: Access to /mnt/glusterfs/vol2//.. (on dev 2304) is crossing device (2068)
> pending frames:
> frame : type(1) op(UNLINK)
>
> patchset: 2.0.1-886-g8379edd
> signal received: 11
> time of crash: 2009-12-14 18:23:08
> configuration details:
> argp 1
> backtrace 1
> dlfcn 1
> fdatasync 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 3.0.0
> /lib64/libc.so.6[0x35702302d0]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so[0x2ad7fd3820d3]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_unlink_cbk+0x265)[0x2ad7fd383d75]
> /usr/lib64/glusterfs/3.0.0/xlator/cluster/distribute.so(dht_unlink_cbk+0x1d5)[0x2ad7fd1619fa]
> /usr/lib64/glusterfs/3.0.0/xlator/storage/posix.so(posix_unlink+0x6cc)[0x2ad7fcf39b9f]
> /usr/lib64/glusterfs/3.0.0/xlator/cluster/distribute.so(dht_unlink+0x530)[0x2ad7fd16a54f]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_unlink_resume+0x17e)[0x2ad7fd389a81]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve_done+0x59)[0x2ad7fd395970]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve_all+0xea)[0x2ad7fd395a61]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve+0xce)[0x2ad7fd395910]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve_all+0xc5)[0x2ad7fd395a3c]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve_entry+0xb1)[0x2ad7fd395559]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve+0x7d)[0x2ad7fd3958bf]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve_all+0x76)[0x2ad7fd3959ed]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(resolve_and_resume+0x50)[0x2ad7fd395af9]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_unlink+0x115)[0x2ad7fd389bfd]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(protocol_server_interpret+0x1d9)[0x2ad7fd392d1f]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(protocol_server_pollin+0x69)[0x2ad7fd393ebf]
> /usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(notify+0x130)[0x2ad7fd3940ce]
> /usr/lib64/libglusterfs.so.0(xlator_notify+0xf5)[0x2ad7fc47959b]
> /usr/lib64/glusterfs/3.0.0/transport/socket.so(socket_event_poll_in+0x40)[0x2aaaaaaaed9a]
> /usr/lib64/glusterfs/3.0.0/transport/socket.so(socket_event_handler+0xb7)[0x2aaaaaaaf08f]
> /usr/lib64/libglusterfs.so.0[0x2ad7fc49e185]
> /usr/lib64/libglusterfs.so.0[0x2ad7fc49e35a]
> /usr/lib64/libglusterfs.so.0(event_dispatch+0x73)[0x2ad7fc49e670]
> glusterfs(main+0xe88)[0x405e10]
> /lib64/libc.so.6(__libc_start_main+0xf4)[0x357021d994]
> glusterfs[0x4025d9]
> ---------
>
>
>
> From: harshavardhanacool at gmail.com [mailto:harshavardhanacool at gmail.com]
> On Behalf Of Harshavardhana
> Sent: Monday, December 14, 2009 4:22 PM
> To: Larry Bates
> Cc: gluster-users at gluster.org
> Subject: Re: [Gluster-users] V3.0 and rsync crash
>
>
>
> Hi Larry,
>
>       Can you give us more info with log files?
>
> Regards
> --
> Harshavardhana
> Gluster - http://www.gluster.com
>
>  On Mon, Dec 14, 2009 at 8:25 PM, Larry Bates <larry.bates at vitalesafe.com>
> wrote:
>
> I'm a newbie and am setting up glusterFS for the first time.  Right now I
> have a single server, single client setup that seemed to be working
> properly on V2.09.
>
> Just upgraded from 2.09 to 3.0 and am noticing the following problem:
>
> Server and client setup is working and glusterFS is mounting on the
> client properly.
>
> Start an rsync job to synchronize files between local storage and the
> glusterFS volume.
>
> Interrupting the rsync job with Ctrl-C crashes the server.  Restarting
> the server and client is required.
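>
> (The job is essentially of this shape; the paths are placeholders, not
> the real ones:)
>
>   rsync -av /local/storage/ /mnt/glusterfs/
>   ^C   <-- interrupting here is what crashes the server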
>
>
>
> server config file:
>
>
>
> volume brick1
>  type storage/posix                          # POSIX FS translator
>  option directory /mnt/glusterfs/vol1        # Export this directory
> end-volume
>
> volume brick2
>  type storage/posix
>  option directory /mnt/glusterfs/vol2
> end-volume
>
> volume ns
>  type storage/posix
>  option directory /mnt/glusterfs-ns
> end-volume
>
> volume bricks
>  type cluster/distribute
>  option namespace ns
>  option scheduler alu  # adaptive least usage scheduler
>  subvolumes brick1 brick2
> end-volume
>
> ## Add network serving capability to above unified bricks
> volume server
>  type protocol/server
>  option transport-type tcp                       # For TCP/IP transport
>  #option transport.socket.listen-port 6996       # Default is 6996
>  #option client-volume-filename /etc/glusterfs/glusterfs-client.vol
>  subvolumes bricks
>  option auth.addr.bricks.allow 10.0.0.*          # access to volume
> end-volume
>
>
>
> client config file:
>
>
>
> volume storage
>  type protocol/client
>  option transport-type tcp            # for TCP/IP transport
>  option remote-host gfs001            # IP address of the remote volume
>  option remote-subvolume bricks       # name of the remote volume
> end-volume
>
>
>
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
>

