[Bugs] [Bug 1690769] GlusterFS 5.5 crashes in 1x4 replicate setup.

bugzilla at redhat.com
Wed Mar 20 09:18:57 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1690769

Pranith Kumar K <pkarampu at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
              Flags|                            |needinfo?(atumball at redhat.com)



--- Comment #1 from Pranith Kumar K <pkarampu at redhat.com> ---
(In reply to Amar Tumballi from comment #0)
> Description of problem:
> 
> Looking at the backtraces, this looks like an issue with AFR in a 1x4 setup:
> 
> (gdb) bt
> #0 0x00007f95a054f0e0 in raise () from /lib64/libc.so.6
> #1 0x00007f95a05506c1 in abort () from /lib64/libc.so.6
> #2 0x00007f95a05476fa in __assert_fail_base () from /lib64/libc.so.6
> #3 0x00007f95a0547772 in __assert_fail () from /lib64/libc.so.6
> #4 0x00007f95a08dd0b8 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #5 0x00007f95994f0c9d in afr_frame_return () from
> /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so
> #6 0x00007f9599503ba1 in afr_lookup_cbk () from
> /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so
> #7 0x00007f9599788f3f in client4_0_lookup_cbk () from
> /usr/lib64/glusterfs/5.3/xlator/protocol/client.so
> #8 0x00007f95a1153820 in rpc_clnt_handle_reply () from
> /usr/lib64/libgfrpc.so.0
> #9 0x00007f95a1153b6f in rpc_clnt_notify () from /usr/lib64/libgfrpc.so.0
> #10 0x00007f95a1150063 in rpc_transport_notify () from
> /usr/lib64/libgfrpc.so.0
> #11 0x00007f959aea00b2 in socket_event_handler () from
> /usr/lib64/glusterfs/5.3/rpc-transport/socket.so
> #12 0x00007f95a13e64c3 in event_dispatch_epoll_worker () from
> /usr/lib64/libglusterfs.so.0
> #13 0x00007f95a08da559 in start_thread () from /lib64/libpthread.so.0
> #14 0x00007f95a061181f in clone () from /lib64/libc.so.6
> (gdb) thr 14
> 
> Thread 14 (Thread 0x7f9592ec7700 (LWP 6572)):
> #0 0x00007f95a08e3c4d in __lll_lock_wait () from /lib64/libpthread.so.0
> No symbol table info available.
> #1 0x00007f95a08e68b7 in __lll_lock_elision () from /lib64/libpthread.so.0
> No symbol table info available.
> #2 0x00007f95994f0c9d in afr_frame_return () from
> /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so
> No symbol table info available.
> #3 0x00007f9599503ba1 in afr_lookup_cbk () from
> /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so
> No symbol table info available.
> #4 0x00007f9599788f3f in client4_0_lookup_cbk () from
> /usr/lib64/glusterfs/5.3/xlator/protocol/client.so
> No symbol table info available.
> #5 0x00007f95a1153820 in rpc_clnt_handle_reply () from
> /usr/lib64/libgfrpc.so.0
> No symbol table info available.
> #6 0x00007f95a1153b6f in rpc_clnt_notify () from /usr/lib64/libgfrpc.so.0
> No symbol table info available.
> #7 0x00007f95a1150063 in rpc_transport_notify () from
> /usr/lib64/libgfrpc.so.0
> No symbol table info available.
> #8 0x00007f959aea00b2 in socket_event_handler () from
> /usr/lib64/glusterfs/5.3/rpc-transport/socket.so
> No symbol table info available.
> #9 0x00007f95a13e64c3 in event_dispatch_epoll_worker () from
> /usr/lib64/libglusterfs.so.0
> No symbol table info available.
> #10 0x00007f95a08da559 in start_thread () from /lib64/libpthread.so.0
> No symbol table info available.
> #11 0x00007f95a061181f in clone () from /lib64/libc.so.6
> No symbol table info available.
> 
> 
> Version-Release number of selected component (if applicable):
> 5.5 (and also 5.3, not seen in 3.x)
> 
> How reproducible:
> 100% 

I didn't find any steps to recreate this issue in the mail thread, and I ran
some workloads on a replica-4 volume without hitting it. Do you know what
steps lead to this crash?

> 
> 
> Additional info:
> Please refer to
> https://lists.gluster.org/pipermail/gluster-users/2019-March/036048.html &
> https://lists.gluster.org/pipermail/gluster-users/2019-February/035871.html
