[Gluster-devel] AFR setup with Virtual Servers crashes
Anand Avati
avati at zresearch.com
Thu May 10 17:12:03 UTC 2007
Urban,
please try 'tla get -A gluster at sv.gnu.org glusterfs--mainline--2.4'.
The mainline--2.5 branch is an unstable tree; sorry that it ended up on
the downloads page.
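For reference, here is a rough sequence for pulling the 2.4 branch and
rebuilding. It is only a sketch that reuses the configure flags from your
mail (the target directory name is arbitrary); adjust the prefix and flags
to your setup, and add -g so that future backtraces resolve to source lines:

  # fetch the stable branch instead of mainline--2.5
  tla get -A gluster@sv.gnu.org glusterfs--mainline--2.4 glusterfs-2.4
  cd glusterfs-2.4
  ./autogen.sh
  # server build (same flags as before, plus debug symbols)
  CFLAGS="-g -O3" ./configure --prefix=/usr --sysconfdir=/etc \
      --disable-fuse-client --disable-ibverbs
  make && make install
  # on the client, configure with --disable-server --disable-ibverbs instead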
thanks,
avati
2007/5/10, Urban Loesch <ul at enas.net>:
> Hi Avati,
>
> I tried out the latest code from the repository, but I'm not sure I did
> everything correctly because TLA is new to me.
>
> Downloaded the latest source:
> - # tla get -A gluster at sv.gnu.org glusterfs--mainline--2.5 glusterfs
> - executed "autogen.sh" without errors
> - configured the server and the client as follows:
> - Server: CFLAGS="-O3" ./configure --prefix=/usr --sysconfdir=/etc
> --disable-fuse-client --disable-ibverbs
> - Client: CFLAGS="-O3" ./configure --prefix=/usr --sysconfdir=/etc
> --disable-server --disable-ibverbs
> - # make
> - # make install
>
> Now "glusterfsd -V" shows me the following at both servers.
> glusterfs 1.4.0
> Copyright (c) 2006, 2007 Z RESEARCH Inc. <http://www.zresearch.com>
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> You may redistribute copies of GlusterFS under the terms of the GNU
> General Public License.
>
> And "glusterfs -V" as follows:
> glusterfs 1.4.0
> Copyright (c) 2006, 2007 Z RESEARCH Inc. <http://www.zresearch.com>
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> You may redistribute copies of GlusterFS under the terms of the GNU
> General Public License.
>
> Both servers are working normally and I can mount the volumes on the
> client without problems, but when I try a directory listing (e.g. ls
> -lh on the mounted directory) it shows the contents of the directory and
> then the mounted volume crashes on the client. The servers keep running
> normally.
> 
> It happens every time.
> Here's the core dump of the crash:
> gdb glusterfs -c core.3701
> ..
> Failed to read a valid object file image from memory.
> Core was generated by `glusterfs --no-daemon --log-file=/dev/stdout
> --log-level=DEBUG --spec-file=/etc'.
> Program terminated with signal 11, Segmentation fault.
> #0 0xb7f1a016 in dict_get (this=0x8077050, key=0x8055960 "brick") at
> dict.c:132
> 132 dict.c: No such file or directory.
> in dict.c
> (gdb) bt
> #0 0xb7f1a016 in dict_get (this=0x8077050, key=0x8055960 "brick") at
> dict.c:132
> #1 0xb75a8920 in client_releasedir () from
> /usr/lib/glusterfs/1.4.0/xlator/protocol/client.so
> #2 0xb75a24fe in afr_releasedir () from
> /usr/lib/glusterfs/1.4.0/xlator/cluster/afr.so
> #3 0x08050ac6 in fuse_releasedir ()
> #4 0xb7f0e73f in fuse_reply_err () from /usr/lib/libfuse.so.2
> #5 0xb7f0fa8d in fuse_reply_entry () from /usr/lib/libfuse.so.2
> #6 0xb7f112d6 in fuse_session_process () from /usr/lib/libfuse.so.2
> #7 0x0804a8e5 in fuse_transport_notify ()
> #8 0xb7f210fd in transport_notify (this=0x80553c0, event=1) at
> transport.c:148
> #9 0xb7f21da9 in sys_epoll_iteration (ctx=0xbfdd3a08) at epoll.c:53
> #10 0xb7f211ad in poll_iteration (ctx=0xbfdd3a08) at transport.c:251
> #11 0x0804a1db in main ()
> (gdb) quit
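> If it helps, I can also point gdb at the source tree for a fuller trace,
> roughly like this (assuming the checkout is still under ~/glusterfs; the
> path is a guess on my part):
> 
> (gdb) directory ~/glusterfs/libglusterfs/src
> (gdb) bt full
> (gdb) frame 0
> (gdb) print *this
> 
> ("directory" adds the source search path so dict.c:132 resolves, "bt full"
> prints the locals of each frame, and "print *this" in frame 0 dumps the
> dict that dict_get was handed.)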
>
> Do you have any idea, or have I made some error during compilation?
>
> thanks
> Urban
>
>
> Anand Avati wrote:
> > Urban,
> > this bug has already been fixed in the source repository.
> > thanks,
> > avati
> >
> > 2007/5/10, Urban Loesch <ul at enas.net>:
> >> Hi Avati,
> >>
> >> thanks for your fast answer.
> >>
> >> I am using glusterfs-1.3.0-pre3, downloaded from your server
> >> (http://ftp.zresearch.com/pub/gluster/glusterfs/1.3-pre/).
> >> I will try the latest version from TLA this afternoon and let you know
> >> what happens.
> >>
> >> Here's the backtrace from the core dump
> >> # gdb glusterfsd -c core.15160
> >> ..
> >> Core was generated by `glusterfsd --no-daemon --log-file=/dev/stdout
> >> --log-level=DEBUG'.
> >> Program terminated with signal 11, Segmentation fault.
> >> #0 0xb75d8fd3 in posix_locks_flush () from
> >> /usr/lib/glusterfs/1.3.0-pre3/xlator/features/posix-locks.so
> >> (gdb) bt
> >> #0 0xb75d8fd3 in posix_locks_flush () from
> >> /usr/lib/glusterfs/1.3.0-pre3/xlator/features/posix-locks.so
> >> #1 0xb75d1192 in fop_flush () from
> >> /usr/lib/glusterfs/1.3.0-pre3/xlator/protocol/server.so
> >> #2 0xb75cded7 in proto_srv_notify () from
> >> /usr/lib/glusterfs/1.3.0-pre3/xlator/protocol/server.so
> >> #3 0xb7f54ecd in transport_notify (this=0x804b1a0, event=1) at
> >> transport.c:148
> >> #4 0xb7f55b79 in sys_epoll_iteration (ctx=0xbfbc2ff0) at epoll.c:53
> >> #5 0xb7f54f7d in poll_iteration (ctx=0xbfbc2ff0) at transport.c:251
> >> #6 0x0804924e in main ()
> >>
> >> Yes, it is reproducible. It happens every time I try to start my
> >> virtual server.
> >>
> >> Thanks
> >> Urban
> >>
> >
> >
>
>
--
Anand V. Avati