[Bugs] [Bug 1464327] New: glusterfs client crashes when reading a large directory

From: bugzilla at redhat.com
Date: Fri Jun 23 06:04:56 UTC 2017


https://bugzilla.redhat.com/show_bug.cgi?id=1464327

            Bug ID: 1464327
           Summary: glusterfs client crashes when reading a large
                    directory
           Product: GlusterFS
           Version: mainline
         Component: fuse
          Assignee: bugs at gluster.org
          Reporter: csaba at redhat.com
                CC: bugs at gluster.org



Description of problem:

Set up a glusterfs mount with the following parameters (an example setup is sketched after the list):

- performance.client-io-threads = on
- performance.stat-prefetch = on
(these are defaults)
- client is mounted with -oattribute_timeout=0,gid_timeout=0
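
For reference, an illustrative way to arrive at such a setup (the volume name,
server and mount point below are made up; the mount options are copied
verbatim from above):

# the two performance options are defaults, set explicitly here only for clarity
$ gluster volume set testvol performance.client-io-threads on
$ gluster volume set testvol performance.stat-prefetch on
# mount with attribute and gid caching disabled on the client
$ mount -t glusterfs -oattribute_timeout=0,gid_timeout=0 server1:/testvol /mnt/testvol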

Then, when doing READDIRP on a large directory (5000 files are usually enough,
10000 always is), the client crashes.

Version-Release number of selected component (if applicable):

Seems to occur from version 3.8 onwards.

How reproducible:

Deterministically.

Steps to Reproduce:

In the gluster mount, do:

# create 10000 files (the echo just prints a progress counter)
$ for i in `seq 10000`; do echo -ne "\r$i        "; printf "foof%05d\n" $i | xargs touch; done
# list the directory; the resulting READDIRP triggers the crash
$ ls -l | wc -l
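
(If the shell loop is too slow, the same file set can presumably be created in
one pass, assuming GNU coreutils seq:)

$ seq -f 'foof%05.0f' 1 10000 | xargs touch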


Actual results:

Thread 7 "glusterfs" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffff0ee2700 (LWP 13811)]
0x00007ffff7b0743c in __dentry_grep (table=0x7fffec032c80,
parent=0x7fffec030ce0, name=0x7fffec046d88 "foof07345") at inode.c:774
warning: Source file is more recent than executable.
774                     if (tmp->parent == parent && !strcmp (tmp->name, name))
{
(gdb) bt
#0  0x00007ffff7b0743c in __dentry_grep (table=0x7fffec032c80,
parent=0x7fffec030ce0, name=0x7fffec046d88 "foof07345") at inode.c:774
#1  0x00007ffff7b07c92 in __inode_link (inode=0x7fffec042420,
parent=0x7fffec030ce0, name=0x7fffec046d88 "foof07345", iatt=0x7fffec046d08)
    at inode.c:1049
#2  0x00007ffff7b07e72 in inode_link (inode=0x7fffec042420,
parent=0x7fffec030ce0, name=0x7fffec046d88 "foof07345", iatt=0x7fffec046d08)
    at inode.c:1096
#3  0x00007ffff51d0e8a in fuse_readdirp_cbk (frame=0x7fffe00016c0,
cookie=0x7fffe00017d0, this=0x656e60, op_ret=46, op_errno=0,
    entries=0x7fffe4002200, xdata=0x0) at fuse-bridge.c:2944
#4  0x00007fffeb9b7ef2 in io_stats_readdirp_cbk (frame=0x7fffe00017d0,
cookie=0x7fffe00018e0, this=0x7fffec010240, op_ret=46, op_errno=0,
    buf=0x7fffe4002200, xdata=0x0) at io-stats.c:2132
#5  0x00007ffff7b9bbd5 in default_readdirp_cbk (frame=0x7fffe00018e0,
cookie=0x7fffe40039d0, this=0x7fffec00ec70, op_ret=46, op_errno=0,
    entries=0x7fffe4002200, xdata=0x0) at defaults.c:1403
#6  0x00007fffebdf6fda in mdc_readdirp_cbk (frame=0x7fffe40039d0,
cookie=0x7fffe4001320, this=0x7fffec00d680, op_ret=46, op_errno=0,
    entries=0x7fffe4002200, xdata=0x0) at md-cache.c:2393
#7  0x00007ffff0224698 in dht_readdirp_cbk (frame=0x7fffe4001320,
cookie=0x7fffec009610, this=0x7fffec00bfe0, op_ret=23, op_errno=0,
    orig_entries=0x7ffff0ee19e0, xdata=0x0) at dht-common.c:5230
#8  0x00007ffff04ad7ca in client3_3_readdirp_cbk (req=0x7fffec02f1b0,
iov=0x7fffec02f1f0, count=1, myframe=0x7fffec03d610)
    at client-rpc-fops.c:2580
#9  0x00007ffff78ba533 in rpc_clnt_handle_reply (clnt=0x7fffec02c900,
pollin=0x7fffec002b60) at rpc-clnt.c:778
#10 0x00007ffff78baace in rpc_clnt_notify (trans=0x7fffec02cad0,
mydata=0x7fffec02c930, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7fffec002b60)
    at rpc-clnt.c:971
#11 0x00007ffff78b6bc5 in rpc_transport_notify (this=0x7fffec02cad0,
event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7fffec002b60)
    at rpc-transport.c:538
#12 0x00007ffff279f265 in socket_event_poll_in (this=0x7fffec02cad0,
notify_handled=_gf_true) at socket.c:2315
#13 0x00007ffff279f884 in socket_event_handler (fd=11, idx=2, gen=1,
data=0x7fffec02cad0, poll_in=1, poll_out=0, poll_err=0) at socket.c:2467
#14 0x00007ffff7b61fed in event_dispatch_epoll_handler (event_pool=0x64f080,
event=0x7ffff0ee1ea0) at event-epoll.c:572
#15 0x00007ffff7b622c1 in event_dispatch_epoll_worker (data=0x693760) at
event-epoll.c:648
#16 0x00007ffff693f5ba in start_thread (arg=0x7ffff0ee2700) at
pthread_create.c:333
#17 0x00007ffff62177cd in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:109
(gdb)

Expected results:

10002

Additional info:
