[Gluster-devel] Segmentation fault while running the client

Anand Avati avati at zresearch.com
Mon Dec 17 17:05:02 UTC 2007


Please specify "type protocol/client" in volume globalns. The parser already warns about it ('"type" not specified for volume globalns'), so no translator gets loaded for that volume, and unify's init then crashes in xlator_tree_init when it tries to initialize the namespace node.
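For reference, with the type added and the remote-host/subvolume values taken from your own spec, the namespace volume would look like this:

volume globalns
  type protocol/client
  option transport-type tcp/client     # for TCP/IP transport
  option remote-host 10.78.36.204      # IP address of the remote brick
  option remote-subvolume brick-ns     # namespace volume exported by the server
end-volume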

avati

2007/12/17, Luca <raskolnikoff77 at yahoo.it>:
>
> Hello everyone,
>         I'm trying to set up a clustered file system with Gluster, but
> when I try to mount the file system I get a segmentation fault.
>
> My configuration currently has one server and one client (in the future
> it will have multiple servers and multiple clients); here it is:
>
> ========================= SERVER =====================================
> ### Export volume "brick" with the contents of "/mnt/localspace/gluster/fs".
> volume brick
>   type storage/posix
>   option directory /mnt/localspace/gluster/fs
> end-volume
>
> volume brick-ns
>   type storage/posix
>   option directory /mnt/localspace/gluster/ns
> end-volume
>
> ### Add network serving capability to above brick.
> volume server
>   type protocol/server
>   option transport-type tcp/server     # For TCP/IP transport
>   option client-volume-filename /etc/glusterfs/glusterfs-client.vol
>   subvolumes brick brick-ns
>   option auth.ip.brick.allow 10.78.36.* # Allow access to "brick" volume
> end-volume
>
> ========================= CLIENT =====================================
> ### File: /etc/glusterfs-client.vol - GlusterFS Client Volume Specification
> ### Add client feature and attach to remote subvolume of server1
>
> volume client3
>   type protocol/client
>   option transport-type tcp/client     # for TCP/IP transport
>   option remote-host 10.78.36.204      # IP address of the remote brick
>   option remote-subvolume brick        # name of the remote volume
> end-volume
>
> volume globalns
>     option transport-type tcp/client
>     option remote-host 10.78.36.204
>     option remote-subvolume brick-ns
> end-volume
>
> ### Add unify feature over the client subvolume(s). Associate an
> ### appropriate scheduler that matches your I/O demand.
> volume unify
>   type cluster/unify
>   subvolumes client3
>   option namespace globalns
>   ### ** Round Robin (RR) Scheduler **
>   option scheduler rr
>   option rr.limits.min-free-disk 4GB
>   option rr.refresh-interval 10
> end-volume
> =============================================================
>
> Am I making some silly mistake?
> This is what I get from running gdb:
>
> Starting program: /usr/local/test/sbin/glusterfs --no-daemon
> --log-file=/dev/stdout --log-level=DEBUG
> --spec-file=/etc/glusterfs/glusterfs-client.vol /mnt/space/
> [Thread debugging using libthread_db enabled]
> [New Thread 46912501653584 (LWP 5998)]
> 2007-12-17 15:49:27 D [glusterfs.c:131:get_spec_fp] glusterfs: loading
> spec from /etc/glusterfs/glusterfs-client.vol
> 2007-12-17 15:49:27 W [fuse-bridge.c:2100:fuse_transport_notify]
> glusterfs-fuse: Ignoring notify event 4
> [New Thread 1084229952 (LWP 6002)]
> 2007-12-17 15:49:27 D [spec.y:116:new_section] libglusterfs/parser: New
> node for 'client3'
> 2007-12-17 15:49:27 D [spec.y:132:section_type] libglusterfs/parser:
> Type:client3:protocol/client
> 2007-12-17 15:49:27 D [xlator.c:102:xlator_set_type]
> libglusterfs/xlator: attempt to load type protocol/client
> 2007-12-17 15:49:27 D [xlator.c:109:xlator_set_type]
> libglusterfs/xlator: attempt to load
> file /usr/lib64/glusterfs/1.3.7/xlator/protocol/client.so
> 2007-12-17 15:49:27 D [spec.y:152:section_option] libglusterfs/parser:
> Option:client3:transport-type:tcp/client
> 2007-12-17 15:49:27 D [spec.y:152:section_option] libglusterfs/parser:
> Option:client3:remote-host:10.78.36.204
> 2007-12-17 15:49:27 D [spec.y:152:section_option] libglusterfs/parser:
> Option:client3:remote-subvolume:brick
> 2007-12-17 15:49:27 D [spec.y:216:section_end] libglusterfs/parser:
> end:client3
> 2007-12-17 15:49:27 D [spec.y:116:new_section] libglusterfs/parser: New
> node for 'globalns'
> 2007-12-17 15:49:27 D [spec.y:152:section_option] libglusterfs/parser:
> Option:globalns:transport-type:tcp/client
> 2007-12-17 15:49:27 D [spec.y:152:section_option] libglusterfs/parser:
> Option:globalns:remote-host:10.78.36.204
> 2007-12-17 15:49:27 D [spec.y:152:section_option] libglusterfs/parser:
> Option:globalns:remote-subvolume:brick-ns
> 2007-12-17 15:49:27 E [spec.y:211:section_end] libglusterfs/parser:
> "type" not specified for volume globalns
> 2007-12-17 15:49:27 D [spec.y:116:new_section] libglusterfs/parser: New
> node for 'unify'
> 2007-12-17 15:49:27 D [spec.y:132:section_type] libglusterfs/parser:
> Type:unify:cluster/unify
> 2007-12-17 15:49:27 D [xlator.c:102:xlator_set_type]
> libglusterfs/xlator: attempt to load type cluster/unify
> 2007-12-17 15:49:27 D [xlator.c:109:xlator_set_type]
> libglusterfs/xlator: attempt to load
> file /usr/lib64/glusterfs/1.3.7/xlator/cluster/unify.so
> 2007-12-17 15:49:27 D [spec.y:201:section_sub] liglusterfs/parser:
> child:unify->client3
> 2007-12-17 15:49:27 D [spec.y:152:section_option] libglusterfs/parser:
> Option:unify:namespace:globalns
> 2007-12-17 15:49:27 D [spec.y:152:section_option] libglusterfs/parser:
> Option:unify:scheduler:rr
> 2007-12-17 15:49:27 D [spec.y:152:section_option] libglusterfs/parser:
> Option:unify:rr.limits.min-free-disk:4GB
> 2007-12-17 15:49:27 D [spec.y:152:section_option] libglusterfs/parser:
> Option:unify:rr.refresh-interval:10
> 2007-12-17 15:49:27 D [spec.y:216:section_end] libglusterfs/parser:
> end:unify
> 2007-12-17 15:49:27 W [inode.c:1099:inode_table_new] fuse: creating new
> inode table with lru_limit=1024, sizeof(inode_t)=156
> 2007-12-17 15:49:27 D [inode.c:559:__create_inode] fuse/inode: create
> inode(1)
> 2007-12-17 15:49:27 D [inode.c:351:__active_inode] fuse/inode:
> activating inode(1), lru=0/1024
> 2007-12-17 15:49:27 D [client-protocol.c:4549:init] client3: missing
> 'inode-lru-limit'. defaulting to 1000
> 2007-12-17 15:49:27 D [client-protocol.c:4566:init] client3: defaulting
> transport-timeout to 108
> 2007-12-17 15:49:27 D [transport.c:83:transport_load]
> libglusterfs/transport: attempt to load type tcp/client
> 2007-12-17 15:49:27 D [transport.c:88:transport_load]
> libglusterfs/transport: attempt to load
> file /usr/lib64/glusterfs/1.3.7/transport/tcp/client.so
> 2007-12-17 15:49:27 D [unify.c:3887:init] unify: namespace node
> specified as globalns
> 2007-12-17 15:49:27 D [scheduler.c:36:get_scheduler]
> libglusterfs/scheduler: attempt to load file rr.so
>
> 2007-12-17 15:49:27 D [unify.c:3905:init] unify: Child node count is 1
>
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 46912501653584 (LWP 5998)]
> 0x0000000000000000 in ?? ()
> (gdb) >>>>>>>>>>>>>>>> backtrace <<<<<<<<<<<<<<<<<
> #0  0x0000000000000000 in ?? ()
> #1  0x00002aaaaaacc11d in xlator_tree_init ()
> from /usr/lib64/libglusterfs.so.0
> #2  0x00002aaaab10c972 in init ()
>    from /usr/lib64/glusterfs/1.3.7/xlator/cluster/unify.so
> #3  0x00002aaaaaacc0e5 in xlator_search_by_name ()
>    from /usr/lib64/libglusterfs.so.0
> #4  0x00002aaaaaacc0b9 in xlator_search_by_name ()
>    from /usr/lib64/libglusterfs.so.0
> #5  0x00002aaaaaacc10a in xlator_tree_init ()
> from /usr/lib64/libglusterfs.so.0
> #6  0x000000000040a0e0 in fuse_init (data=0x60e660, conn=0x60f1bc)
>     at fuse-bridge.c:1876
> #7  0x0000003fb0e11157 in fuse_lowlevel_new_compat ()
> from /lib64/libfuse.so.2
> #8  0x000000000040a96b in fuse_transport_notify (xl=0x60e420, event=2,
>     data=0x60e660) at fuse-bridge.c:2128
> #9  0x00002aaaaaad1636 in sys_epoll_iteration ()
>    from /usr/lib64/libglusterfs.so.0
> #10 0x00002aaaaaad10c5 in poll_iteration ()
> from /usr/lib64/libglusterfs.so.0
> #11 0x00000000004037c5 in main (argc=6, argv=0x7fff26097c68) at
> glusterfs.c:388
> (gdb)
>
> Thanks,
> Luca
>



-- 
If I traveled to the end of the rainbow
As Dame Fortune did intend,
Murphy would be there to tell me
The pot's at the other end.


