[Gluster-users] Gluster (2.0.1 -> git) with fuse 2.8 crashes NFS
Amar Tumballi
amar at gluster.com
Tue Jul 7 22:54:31 UTC 2009
Hi Justice,
Thanks for letting us know this. The crashing behavior with fuse-2.8 should
be fixed by Harsha's patch http://patches.gluster.com/patch/664/
I think the 'bigwrite' support, together with two minor bug fixes that went
into write-behind, is what gave you this performance benefit.
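If you want to see what this touches in a config, the write-behind block in
your client volfile is the relevant piece; a minimal sketch (the values here
are only illustrative starting points, not tuned recommendations):

    volume writeback
      type performance/write-behind
      option cache-size 8MB    # upper bound on write data buffered before flushing
      option flush-behind on   # let flush()/close() return before pending writes finish
      subvolumes readahead     # whichever volume sits directly below write-behind
    end-volume
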
Regards,
Amar
On Tue, Jul 7, 2009 at 3:43 PM, Justice London <jlondon at lawinfo.com> wrote:
> The 2.0.3 release of gluster appears so far to have fixed the crash issue
> I was experiencing. Out of curiosity, which specific patch fixed it?
>
>
>
> Great job either way! It appears that with fuse 2.8 and newer kernels,
> gluster absolutely flies. In a replication environment between two crummy
> testbed machines it’s probably about twice as fast as with fuse 2.7.4!
>
>
>
> Justice London
> jlondon at lawinfo.com
>
> ------------------------------
>
> *From:* gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] *On Behalf Of* Justice London
> *Sent:* Thursday, July 02, 2009 12:33 PM
> *To:* 'Raghavendra G'
> *Cc:* 'gluster-users'; 'Harshavardhana'
>
> *Subject:* Re: [Gluster-users] Gluster (2.0.1 -> git) with fuse 2.8
> crashes NFS
>
>
>
> Sure:
>
>
>
> Server:
>
>
>
> ### Export volume "brick" with the contents of "/home/export" directory.
> volume posix
>   type storage/posix                          # POSIX FS translator
>   option directory /home/gluster/vmglustore   # Export this directory
>   option background-unlink yes
> end-volume
>
> volume locks
>   type features/posix-locks
>   subvolumes posix
> end-volume
>
> volume brick
>   type performance/io-threads
>   option thread-count 32
>   # option autoscaling yes
>   # option min-threads 8
>   # option max-threads 200
>   subvolumes locks
> end-volume
>
> ### Add network serving capability to above brick.
> volume brick-server
>   type protocol/server
>   option transport-type tcp
>   # option transport-type unix
>   # option transport-type ib-sdp
>   # option transport.socket.bind-address 192.168.1.10    # Default is to listen on all interfaces
>   # option transport.socket.listen-port 6996             # Default is 6996
>
>   # option transport-type ib-verbs
>   # option transport.ib-verbs.bind-address 192.168.1.10  # Default is to listen on all interfaces
>   # option transport.ib-verbs.listen-port 6996           # Default is 6996
>   # option transport.ib-verbs.work-request-send-size 131072
>   # option transport.ib-verbs.work-request-send-count 64
>   # option transport.ib-verbs.work-request-recv-size 131072
>   # option transport.ib-verbs.work-request-recv-count 64
>
>   option client-volume-filename /etc/glusterfs/glusterfs.vol
>   subvolumes brick
>   # NOTE: Access to any volume through protocol/server is denied by
>   # default. You need to explicitly grant access through the "auth" option.
>   option auth.addr.brick.allow *   # Allow access to "brick" volume
> end-volume
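>
> For context, this server volfile is what glusterfsd loads at startup;
> judging from the core file further down, it is launched along these lines
> (the volfile path here is a placeholder):
>
>     /usr/local/sbin/glusterfsd -p /var/run/glusterfsd.pid -f /path/to/server.vol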
>
>
>
>
>
> Client:
>
>
>
> ### Add client feature and attach to remote subvolume
> volume remotebrick1
>   type protocol/client
>   option transport-type tcp
>   # option transport-type unix
>   # option transport-type ib-sdp
>   option remote-host 192.168.1.35   # IP address of the remote brick
>   # option transport.socket.remote-port 6996   # default server port is 6996
>
>   # option transport-type ib-verbs
>   # option transport.ib-verbs.remote-port 6996   # default server port is 6996
>   # option transport.ib-verbs.work-request-send-size 1048576
>   # option transport.ib-verbs.work-request-send-count 16
>   # option transport.ib-verbs.work-request-recv-size 1048576
>   # option transport.ib-verbs.work-request-recv-count 16
>
>   # option transport-timeout 30   # seconds to wait for a reply from server for each request
>   option remote-subvolume brick   # name of the remote volume
> end-volume
>
> volume remotebrick2
>   type protocol/client
>   option transport-type tcp
>   option remote-host 192.168.1.36
>   option remote-subvolume brick
> end-volume
>
> volume brick-replicate
>   type cluster/replicate
>   subvolumes remotebrick1 remotebrick2
> end-volume
>
> volume threads
>   type performance/io-threads
>   option thread-count 8
>   # option autoscaling yes
>   # option min-threads 8
>   # option max-threads 200
>   subvolumes brick-replicate
> end-volume
>
> ### Add readahead feature
> volume readahead
>   type performance/read-ahead
>   option page-count 4   # cache per file = (page-count x page-size)
>   option force-atime-update off
>   subvolumes threads
> end-volume
>
> ### Add IO-Cache feature
> #volume iocache
> #  type performance/io-cache
> #  option page-size 1MB
> #  option cache-size 64MB
> #  subvolumes readahead
> #end-volume
>
> ### Add writeback feature
> volume writeback
>   type performance/write-behind
>   option cache-size 8MB
>   option flush-behind on
>   subvolumes readahead
> end-volume
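>
> For completeness, the client volfile above gets mounted with something
> like the following (the mount point is a placeholder, not my actual path):
>
>     glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs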
>
>
>
>
>
> Justice London
> jlondon at lawinfo.com
> ------------------------------
>
> *From:* Raghavendra G [mailto:raghavendra.hg at gmail.com]
> *Sent:* Thursday, July 02, 2009 10:17 AM
> *To:* Justice London
> *Cc:* Harshavardhana; gluster-users
> *Subject:* Re: [Gluster-users] Gluster (2.0.1 -> git) with fuse 2.8
> crashes NFS
>
>
>
> Hi,
>
> Can you send across the volume specification files you are using?
>
> regards,
> Raghavendra.
>
> 2009/6/24 Justice London <jlondon at lawinfo.com>
>
> Here you go. Let me know if you need anything else:
>
> Core was generated by `/usr/local/sbin/glusterfsd
> -p /var/run/glusterfsd.pid -f /etc/glusterfs/gluster'.
> Program terminated with signal 11, Segmentation fault.
> [New process 653]
> [New process 656]
> [New process 687]
> [New process 657]
> [New process 658]
> [New process 659]
> [New process 660]
> [New process 661]
> [New process 662]
> [New process 663]
> [New process 665]
> [New process 666]
> [New process 667]
> [New process 668]
> [New process 669]
> [New process 670]
> [New process 671]
> [New process 672]
> [New process 679]
> [New process 680]
> [New process 681]
> [New process 682]
> [New process 683]
> [New process 684]
> [New process 686]
> [New process 676]
> [New process 685]
> [New process 674]
> [New process 675]
> [New process 677]
> [New process 654]
> [New process 673]
> [New process 678]
> [New process 664]
> #0 0xb808ee9c in __glusterfs_this_location@plt ()
> from /usr/local/lib/libglusterfs.so.0
> (gdb) backtrace
> #0 0xb808ee9c in __glusterfs_this_location@plt ()
> from /usr/local/lib/libglusterfs.so.0
> #1 0xb809b935 in default_fxattrop (frame=0x809cc68, this=0x8055a80,
> fd=0x809ca20, flags=GF_XATTROP_ADD_ARRAY, dict=0x809cac8)
> at defaults.c:1122
> #2 0xb809b930 in default_fxattrop (frame=0x8063570, this=0x8055f80,
> fd=0x809ca20, flags=GF_XATTROP_ADD_ARRAY, dict=0x809cac8)
> at defaults.c:1122
> #3 0xb76b3c35 in server_fxattrop (frame=0x809cc28, bound_xl=0x8055f80,
> hdr=0x8064c88, hdrlen=150, iobuf=0x0) at server-protocol.c:4596
> #4 0xb76a9f1b in protocol_server_interpret (this=0x8056500,
> trans=0x8064698,
> hdr_p=0x8064c88 "", hdrlen=150, iobuf=0x0) at server-protocol.c:7502
> #5 0xb76aa1cc in protocol_server_pollin (this=0x8056500,
> trans=0x8064698)
> at server-protocol.c:7783
> #6 0xb76aa24f in notify (this=0x8056500, event=2, data=0x8064698)
> at server-protocol.c:7839
> #7 0xb809737f in xlator_notify (xl=0x8056500, event=2, data=0x8064698)
> at xlator.c:912
> #8 0xb4ea08dd in socket_event_poll_in (this=0x8064698) at socket.c:713
> #9 0xb4ea099b in socket_event_handler (fd=8, idx=1, data=0x8064698,
> poll_in=1, poll_out=0, poll_err=0) at socket.c:813
> #10 0xb80b168a in event_dispatch_epoll (event_pool=0x8050d58) at
> event.c:804
> #11 0xb80b0471 in event_dispatch (event_pool=0x8051338) at event.c:975
> #12 0x0804b880 in main (argc=5, argv=0xbfae1044) at glusterfsd.c:1263
> Current language: auto; currently asm
>
>
>
> Justice London
> jlondon at lawinfo.com
>
> On Mon, 2009-06-22 at 10:47 +0530, Harshavardhana wrote:
> > Hi Justice,
> >
> > Can you get a backtrace from the segfault through gdb?
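> >
> > Something like the following session should capture it (the core file
> > path is a placeholder; the actual location depends on your system's
> > core_pattern setting):
> >
> >     gdb /usr/local/sbin/glusterfsd /path/to/core
> >     (gdb) backtrace
> >     (gdb) thread apply all backtrace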
>
> >
> > Regards
> > --
> > Harshavardhana
> > Z Research Inc http://www.zresearch.com/
> >
> >
>
> > On Sat, Jun 20, 2009 at 10:47 PM, <jlondon at lawinfo.com> wrote:
> > Sure, the kernel version is 2.6.29 and the fuse release is the
> > just-released 2.8.0-pre3 (although I can use pre2 if needed).
> >
> >
> > Justice London
> > jlondon at lawinfo.com
> >
> > > Hi Justice,
> > >
> > > Certain modifications are required in fuse-extra.c to make
> > > glusterfs work properly with the fuse 2.8.0 release. The glusterfs
> > > 2.0.1 release has not been tested against fuse 2.8.0 and certainly
> > > will not work without those modifications. May I know the kernel
> > > version you are trying to use, and which version of fuse? The pre1
> > > or pre2 release?
> > >
> > > Regards
> > > --
> > > Harshavardhana
> > > Z Research Inc http://www.zresearch.com/
> > >
> > >
> > > On Fri, Jun 19, 2009 at 11:14 PM, Justice London
> > > <jlondon at lawinfo.com>wrote:
> > >
> > >> No matter what I do, I cannot seem to get gluster to stay stable
> > >> when doing any sort of writes to the mount while using gluster in
> > >> combination with fuse 2.8.0-preX and NFS. I tried both unfs3 and the
> > >> standard kernel NFS server, and any sort of data transaction crashes
> > >> gluster immediately. The error log is as follows:
> > >>
> > >>
> > >>
> > >> pending frames:
> > >>
> > >> patchset: git://git.sv.gnu.org/gluster.git
> > >> signal received: 11
> > >> configuration details:
> > >> argp 1
> > >> backtrace 1
> > >> bdb->cursor->get 1
> > >> db.h 1
> > >> dlfcn 1
> > >> fdatasync 1
> > >> libpthread 1
> > >> llistxattr 1
> > >> setfsid 1
> > >> spinlock 1
> > >> epoll.h 1
> > >> xattr.h 1
> > >> st_atim.tv_nsec 1
> > >> package-string: glusterfs 2.0.0git
> > >> [0xf57fe400]
> > >> /usr/local/lib/libglusterfs.so.0(default_fxattrop+0xc0)[0xb7f4d530]
> > >> /usr/local/lib/glusterfs/2.0.0git/xlator/protocol/server.so(server_fxattrop+0x175)[0xb7565af5]
> > >> /usr/local/lib/glusterfs/2.0.0git/xlator/protocol/server.so(protocol_server_interpret+0xbb)[0xb755beeb]
> > >> /usr/local/lib/glusterfs/2.0.0git/xlator/protocol/server.so(protocol_server_pollin+0x9c)[0xb755c19c]
> > >> /usr/local/lib/glusterfs/2.0.0git/xlator/protocol/server.so(notify+0x7f)[0xb755c21f]
> > >> /usr/local/lib/libglusterfs.so.0(xlator_notify+0x3f)[0xb7f4937f]
> > >> /usr/local/lib/glusterfs/2.0.0git/transport/socket.so(socket_event_poll_in+0x3d)[0xb4d528dd]
> > >> /usr/local/lib/glusterfs/2.0.0git/transport/socket.so(socket_event_handler+0xab)[0xb4d5299b]
> > >> /usr/local/lib/libglusterfs.so.0[0xb7f6321a]
> > >> /usr/local/lib/libglusterfs.so.0(event_dispatch+0x21)[0xb7f62001]
> > >> /usr/local/sbin/glusterfsd(main+0xb3b)[0x804b81b]
> > >> /lib/libc.so.6(__libc_start_main+0xe5)[0xb7df3455]
> > >> /usr/local/sbin/glusterfsd[0x8049db1]
> > >>
> > >>
> > >>
> > >> Any ideas on whether there is a solution, or whether one is upcoming
> > >> in either gluster or fuse? Other than the NFS issue, the git version
> > >> of gluster seems to be really, really fast with fuse 2.8.
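> > >>
> > >> In case it helps anyone reproduce this: exporting a FUSE mount
> > >> through the kernel NFS server needs an explicit fsid in /etc/exports.
> > >> A sketch, with placeholder mount point and network:
> > >>
> > >>     /mnt/glusterfs 192.168.1.0/24(rw,sync,fsid=10,no_subtree_check)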
> > >>
> > >>
> > >>
> > >> Justice London
> > >> jlondon at lawinfo.com
> > >>
> > >>
> > >>
> > >>
> > >>
> > >
> >
> >
> >
> >
> >
>
>
>
>
>
>
>
> --
> Raghavendra G
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>
>
--
Regards,
Amar Tumballi