[Gluster-users] Error : gluster2.0.3rc1 with fuse2.8 in kernel2.6.30 , help !!!!!

Harshavardhana harsha at gluster.com
Mon Jul 6 09:18:32 UTC 2009


Eagleeyes,

   I think you are running glusterfs against two different fuse API
versions. The fuse API version for the 2.6.30 kernel is not compatible with
the one for 2.6.16-21. I would suggest using the same fuse API version for
glusterfs on all clients. Could I have a few details:

1. dmesg | grep -i fuse (on each client)
2. grep -i FUSE_MINOR_VERSION /usr/include/fuse/fuse_common.h (on each
client)
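
If it helps, here is a rough way to collect both of these from every client
in one go (a minimal sketch, assuming the client hostnames are listed one per
line in a hypothetical file called clients.txt and that ssh access to the
clients is available; adjust as needed):

    #!/bin/sh
    # Gather the kernel-side and library-side FUSE version info from each client.
    # clients.txt is assumed to hold one client hostname per line.
    while read -r host; do
        echo "=== $host ==="
        # -n keeps ssh from consuming the rest of clients.txt on stdin
        ssh -n "$host" 'dmesg | grep -i fuse'
        ssh -n "$host" 'grep -i FUSE_MINOR_VERSION /usr/include/fuse/fuse_common.h'
    done < clients.txt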

Regards
--
Harshavardhana
Z Research Inc http://www.zresearch.com/


On Mon, Jul 6, 2009 at 12:31 PM, eagleeyes <eagleeyes at 126.com> wrote:

>  Hi,
>
>   1.    I use glusterfs 2.0.3rc2 with fuse init (API version 7.11) on SUSE
> sp10, kernel 2.6.30.
> There were some error logs:
>  pending frames:
> frame : type(1) op(WRITE)
> frame : type(1) op(READ)
> frame : type(1) op(READ)
> frame : type(1) op(READ)
>  patchset: 65524f58b29f0b813549412ba6422711a505f5d8
> signal received: 11
> configuration details:argp 1
> backtrace 1
> dlfcn 1
> fdatasync 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 2.0.3rc2
> [0xffffe400]
> /usr/local/lib/libfuse.so.2(fuse_session_process+0x26)[0xb752fb56]
> /lib/glusterfs/2.0.3rc2/xlator/mount/fuse.so[0xb755de25]
> /lib/libpthread.so.0[0xb7f0d2ab]
> /lib/libc.so.6(__clone+0x5e)[0xb7ea4a4e]
> ---------
>   2. Using glusterfs 2.0.3rc2 with fuse init (API version 7.6) on suse
> sp10, kernel 2.6.16.21-0.8-smp: when I expanded the dht volume from four
> subvolumes to six and then ran "rm *" in the gluster directory, there were
> some errors:
>
> [2009-07-06 22:56:23] W [fuse-bridge.c:921:fuse_unlink_cbk] glusterfs-fuse: 1636: RMDIR() /scheduler => -1 (No such file or directory)
>
> [2009-07-06 22:56:23] W [fuse-bridge.c:921:fuse_unlink_cbk] glusterfs-fuse: 1643: RMDIR() /transport => -1 (No such file or directory)
>
> [2009-07-06 22:56:23] W [fuse-bridge.c:921:fuse_unlink_cbk] glusterfs-fuse: 1655: RMDIR() /xlators/cluster => -1 (No such file or directory)
>
> [2009-07-06 22:56:23] W [fuse-bridge.c:921:fuse_unlink_cbk] glusterfs-fuse: 1666: RMDIR() /xlators/debug => -1 (No such file or directory)
> [2009-07-06 22:56:23] W [fuse-bridge.c:921:fuse_unlink_cbk] glusterfs-fuse: 1677: RMDIR() /xlators/mount => -1 (No such file or directory)
>
>
>   Also, new files were not written into the new volumes after the expansion.
>
>
>
>
> 2009-07-06
> ------------------------------
>  eagleeyes
> ------------------------------
> *From:* Anand Avati
> *Sent:* 2009-07-06  12:09:13
> *To:* eagleeyes
> *Cc:* gluster-users
> *Subject:* Re: [Gluster-users] Error : gluster2.0.3rc1 with fuse2.8 in
> kernel2.6.30 ,help !!!!!
>   Please use 2.0.3 stable, or upgrade to the next rc2 until then. This
> has been fixed in rc2.
>  Avati
>  On Mon, Jul 6, 2009 at 8:31 AM, eagleeyes<eagleeyes at 126.com> wrote:
> > Hi,
> >    I use glusterfs 2.0.3rc1 with fuse 2.8 in kernel 2.6.30 (SUSE Linux
> > Enterprise Server 10 SP1 with kernel 2.6.30). The mount message was:
> >
> > /dev/hda4 on /data type reiserfs (rw,user_xattr)
>
> > glusterfs-client.vol.dht on /home type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)
> >
> >
> >
>
> >  There was an error when I ran "touch 111" in the gluster directory; the
> > error was:
> >  /home: Transport endpoint is not connected
> >
> > pending frames:
> > patchset: e0db4ff890b591a58332994e37ce6db2bf430213
> > signal received: 11
> > configuration details:argp 1
> > backtrace 1
> > dlfcn 1
> > fdatasync 1
> > libpthread 1
> > llistxattr 1
> > setfsid 1
> > spinlock 1
> > epoll.h 1
> > xattr.h 1
> > st_atim.tv_nsec 1
> > package-string: glusterfs 2.0.3rc1
> > [0xffffe400]
> > /lib/glusterfs/2.0.3rc1/xlator/mount/fuse.so[0xb75c6288]
> > /lib/glusterfs/2.0.3rc1/xlator/performance/write-behind.so(wb_create_cbk+0xa7)[0xb75ccad7]
> > /lib/glusterfs/2.0.3rc1/xlator/performance/io-cache.so(ioc_create_cbk+0xde)[0xb7fbe8ae]
> > /lib/glusterfs/2.0.3rc1/xlator/performance/read-ahead.so(ra_create_cbk+0x167)[0xb7fc78b7]
> > /lib/glusterfs/2.0.3rc1/xlator/cluster/dht.so(dht_create_cbk+0xf7)[0xb75e25b7]
> > /lib/glusterfs/2.0.3rc1/xlator/protocol/client.so(client_create_cbk+0x2ad)[0xb76004ad]
> > /lib/glusterfs/2.0.3rc1/xlator/protocol/client.so(protocol_client_interpret+0x1ef)[0xb75ef8ff]
> > /lib/glusterfs/2.0.3rc1/xlator/protocol/client.so(protocol_client_pollin+0xcf)[0xb75efaef]
> > /lib/glusterfs/2.0.3rc1/xlator/protocol/client.so(notify+0x1ec)[0xb75f6ddc]
> > /lib/glusterfs/2.0.3rc1/transport/socket.so(socket_event_poll_in+0x3b)[0xb75b775b]
> > /lib/glusterfs/2.0.3rc1/transport/socket.so(socket_event_handler+0xae)[0xb75b7b8e]
> > /lib/libglusterfs.so.0[0xb7facbda]
> > /lib/libglusterfs.so.0(event_dispatch+0x21)[0xb7fabac1]
> > glusterfs(main+0xc2e)[0x804b6ae]
> > /lib/libc.so.6(__libc_start_main+0xdc)[0xb7e6087c]
> > glusterfs[0x8049c11]
> > ---------
> >
> > The server configuration:
> >
> > gfs1:/ # cat /etc/glusterfs/glusterfsd-sever.vol
> > volume posix1
> >   type storage/posix                   # POSIX FS translator
> >   option directory /data/data1        # Export this directory
> > end-volume
> > volume posix2
> >   type storage/posix                   # POSIX FS translator
> >   option directory /data/data2        # Export this directory
> > end-volume
> > volume posix3
> >   type storage/posix                   # POSIX FS translator
> >   option directory /data/data3        # Export this directory
> > end-volume
> > volume posix4
> >   type storage/posix                   # POSIX FS translator
> >   option directory /data/data4        # Export this directory
> > end-volume
> > volume posix5
> >   type storage/posix                   # POSIX FS translator
> >   option directory /data/data5        # Export this directory
> > end-volume
> > volume posix6
> >   type storage/posix                   # POSIX FS translator
> >   option directory /data/data6        # Export this directory
> > end-volume
> > volume posix7
> >   type storage/posix                   # POSIX FS translator
> >   option directory /data/data7        # Export this directory
> > end-volume
> > volume posix8
> >   type storage/posix                   # POSIX FS translator
> >   option directory /data/data8        # Export this directory
> > end-volume
> > volume brick1
> >   type features/posix-locks
> >   option mandatory-locks on          # enables mandatory locking on all files
> >   subvolumes posix1
> > end-volume
> > volume brick2
> >   type features/posix-locks
> >   option mandatory-locks on          # enables mandatory locking on all files
> >   subvolumes posix2
> > end-volume
> > volume brick3
> >   type features/posix-locks
> >   option mandatory-locks on          # enables mandatory locking on all files
> >   subvolumes posix3
> > end-volume
> > volume brick4
> >   type features/posix-locks
> >   option mandatory-locks on          # enables mandatory locking on all files
> >   subvolumes posix4
> > end-volume
> > volume brick5
> >   type features/posix-locks
> >   option mandatory-locks on          # enables mandatory locking on all files
> >   subvolumes posix5
> > end-volume
> > volume brick6
> >   type features/posix-locks
> >   option mandatory-locks on          # enables mandatory locking on all files
> >   subvolumes posix6
> > end-volume
> > volume brick7
> >   type features/posix-locks
> >   option mandatory-locks on          # enables mandatory locking on all files
> >   subvolumes posix7
> > end-volume
> > volume brick8
> >   type features/posix-locks
> >   option mandatory-locks on          # enables mandatory locking on all files
> >   subvolumes posix8
> > end-volume
> > ### Add network serving capability to above brick.
> > volume server
> >   type protocol/server
> >   option transport-type tcp
> >   option transport.socket.bind-address 172.20.92.240     # Default is to listen on all interfaces
> >   option transport.socket.listen-port 6996              # Default is 6996
> >   subvolumes brick1 brick2 brick3 brick4
> >   option auth.addr.brick1.allow * # Allow access to "brick" volume
> >   option auth.addr.brick2.allow * # Allow access to "brick" volume
> >   option auth.addr.brick3.allow * # Allow access to "brick" volume
> >   option auth.addr.brick4.allow * # Allow access to "brick" volume
> >   option auth.addr.brick5.allow * # Allow access to "brick" volume
> >   option auth.addr.brick6.allow * # Allow access to "brick" volume
> >   option auth.addr.brick7.allow * # Allow access to "brick" volume
> >   option auth.addr.brick8.allow * # Allow access to "brick" volume
> > end-volume
> >
> > The client configuration:
> >
> > gfs1:/ # cat /etc/glusterfs/glusterfs-client.vol.dht
> > volume client1
> >   type protocol/client
> >   option transport-type tcp
> >   option remote-host 172.20.92.240        # IP address of the remote brick2
> >   option remote-port 6996
> >   option remote-subvolume brick1       # name of the remote volume
> > end-volume
> > volume client2
> >   type protocol/client
> >   option transport-type tcp
> >   option remote-host 172.20.92.240         # IP address of the remote brick2
> >   option remote-port 6996
> >   #option transport-timeout 10          # seconds to wait for a reply
> >   option remote-subvolume brick2       # name of the remote volume
> > end-volume
> > volume client3
> >   type protocol/client
> >   option transport-type tcp
> >   option remote-host 172.20.92.240      # IP address of the remote brick2
> >   option remote-port 6996
> >   #option transport-timeout 10          # seconds to wait for a reply
> >   option remote-subvolume brick3       # name of the remote volume
> > end-volume
> > volume client4
> >   type protocol/client
> >   option transport-type tcp
> >   option remote-host 172.20.92.240      # IP address of the remote brick2
> >   option remote-port 6996
> >   #option transport-timeout 10          # seconds to wait for a reply
> >   option remote-subvolume brick4       # name of the remote volume
> > end-volume
> > volume client5
> >   type protocol/client
> >   option transport-type tcp
> >   option remote-host 172.20.92.240      # IP address of the remote brick2
> >   option remote-port 6996
> >   #option transport-timeout 10          # seconds to wait for a reply
> >   option remote-subvolume brick1       # name of the remote volume
> > end-volume
> > volume client6
> >   type protocol/client
> >   option transport-type tcp
> >   option remote-host 172.20.92.240      # IP address of the remote brick2
> >   option remote-port 6996
> >   #option transport-timeout 10          # seconds to wait for a reply
> >   option remote-subvolume brick2       # name of the remote volume
> > end-volume
> > volume client7
> >   type protocol/client
> >   option transport-type tcp
> >   option remote-host 172.20.92.240      # IP address of the remote brick2
> >   option remote-port 6996
> >   #option transport-timeout 10          # seconds to wait for a reply
> >   option remote-subvolume brick3       # name of the remote volume
> > end-volume
> > volume client8
> >   type protocol/client
> >   option transport-type tcp
> >   option remote-host 172.20.92.240      # IP address of the remote brick2
> >   option remote-port 6996
> >   #option transport-timeout 10          # seconds to wait for a reply
> >   option remote-subvolume brick4       # name of the remote volume
> > end-volume
> > #volume afr3
> > #  type cluster/afr
> > #  subvolumes client3 client6
> > #end-volume
> > volume dht
> >   type cluster/dht
> >   option lookup-unhashed yes
> >   subvolumes client1 client2  client3 client4
> > end-volume
> >
> > Could you help me?
> >
> >
> >
> > 2009-07-06
> > ________________________________
> > eagleeyes
> > ________________________________
> > From: Sachidananda
> > Sent: 2009-07-04  11:39:03
> > To: eagleeyes
> > Cc: gluster-users
> > Subject: Re: [Gluster-users] HELP : Files lost after DHT expansion
> > Hi,
> > eagleeyes wrote:
>
> >  > When I updated to gluster2.0.3, after dht expansion, duplicate directories
> >  > appeared in the gluster directory. Why?
> >  >
> >  > client configure
> >  > volume dht
> >  >   type cluster/dht
> >  >   option lookup-unhashed yes
> >  >   option min-free-disk 10%
> >  >   subvolumes client1 client2  client3 client4 client5 client6 client7
> > client8
> >  >   #subvolumes client1 client2  client3 client4
> >  > end-volume
> >  >
> >  >
> > Can you please send us your server/client volume files?
> > --
> > Sachidananda.
> >
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>
>