[Gluster-users] Geo-replication broken in 3.4 alpha2?

Csaba Henk csaba at redhat.com
Mon Mar 25 17:14:32 UTC 2013


On 2013-03-21, Csaba Henk <csaba at redhat.com> wrote:
>
> This behavior is confirmed -- it's exactly reproducible.
>
> I'll try to get back to you tomorrow with an update. If that doesn't happen (because I'm not getting any cleverer...)
> then I can chime back only after the 4th of April; I'll be on leave.

So what happens:

- gsyncd does a listxattr on its aux gluster mount (w/ client-pid=-1)

- on the special code path taken for client-pid=-1 clients, we get:

(gdb) bt
#0  internal_fnmatch (pattern=0x7f29db6b5b2a "*.selinux*", string=0x18441a0 "security.selinux", string_end=0x18441b0 "", no_leading_period=4, flags=4, ends=0x0, alloca_used=0) at fnmatch_loop.c:49
#1  0x00007f29dceef8e3 in __fnmatch (pattern=0x7f29db6b5b2a "*.selinux*", string=0x18441a0 "security.selinux", flags=4) at fnmatch.c:451
#2  0x00007f29db6a8e2c in fuse_filter_xattr (key=0x18441a0 "security.selinux") at fuse-bridge.c:3015
#3  0x00007f29ddbc2ae4 in dict_keys_join (value=0x0, size=0, dict=<value optimized out>, filter_fn=0x7f29db6a8df0 <fuse_filter_xattr>) at dict.c:1183
#4  0x00007f29db6b0083 in fuse_xattr_cbk (frame=0x7f29dbc679fc, cookie=<value optimized out>, this=0x1759050, op_ret=0, op_errno=0, dict=0x7f29dbaa1b1c, xdata=0x0) at fuse-bridge.c:3064
#5  0x00007f29d6138446 in io_stats_getxattr_cbk (frame=0x7f29dbe72638, cookie=<value optimized out>, this=<value optimized out>, op_ret=0, op_errno=0, dict=0x7f29dbaa1b1c, xdata=0x0) at io-stats.c:1640
#6  0x00007f29d6344ad1 in mdc_getxattr_cbk (frame=0x7f29dbe72230, cookie=<value optimized out>, this=<value optimized out>, op_ret=0, op_errno=0, xattr=<value optimized out>, xdata=0x0) at md-cache.c:1658
#7  0x00007f29d6fa6fcd in dht_getxattr_cbk (frame=0x7f29dbe722dc, cookie=<value optimized out>, this=<value optimized out>, op_ret=<value optimized out>, op_errno=0, xattr=<value optimized out>, xdata=0x0)
    at dht-common.c:2041
#8  0x00007f29d71e3f58 in afr_getxattr_cbk (frame=0x7f29dbe72388, cookie=<value optimized out>, this=<value optimized out>, op_ret=0, op_errno=0, dict=0x7f29dbaa1ba8, xdata=0x0) at afr-inode-read.c:621
#9  0x00007f29d7452828 in client3_3_getxattr_cbk (req=<value optimized out>, iov=<value optimized out>, count=<value optimized out>, myframe=0x7f29dbe7202c) at client-rpc-fops.c:1115
#10 0x00007f29dd9a6df5 in rpc_clnt_handle_reply (clnt=0x17f2ab0, pollin=0x1774c60) at rpc-clnt.c:771
#11 0x00007f29dd9a79d7 in rpc_clnt_notify (trans=<value optimized out>, mydata=0x17f2ae0, event=<value optimized out>, data=<value optimized out>) at rpc-clnt.c:890
#12 0x00007f29dd9a3338 in rpc_transport_notify (this=<value optimized out>, event=<value optimized out>, data=<value optimized out>) at rpc-transport.c:495
#13 0x00007f29d96872d4 in socket_event_poll_in (this=0x18024e0) at socket.c:2118
#14 0x00007f29d968742d in socket_event_handler (fd=<value optimized out>, idx=<value optimized out>, data=0x18024e0, poll_in=1, poll_out=0, poll_err=0) at socket.c:2230
#15 0x00007f29ddc093e7 in event_dispatch_epoll_handler (event_pool=0x17583c0) at event-epoll.c:384
#16 event_dispatch_epoll (event_pool=0x17583c0) at event-epoll.c:445
#17 0x0000000000406676 in main (argc=6, argv=0x7fff6b3bfa58) at glusterfsd.c:1902
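
Frames #0-#2 show why this particular key gets dropped: for a client-pid=-1
(gsyncd) client, fuse_filter_xattr() runs the key through fnmatch() against
"*.selinux*", and "security.selinux" matches, so the filter returns 1. Checking
just the fnmatch() part in isolation (standalone sketch, not glusterfs code;
flags=4 in the trace should be FNM_PERIOD in glibc):

// fnmatch-check.c (standalone sketch)

#include <stdio.h>
#include <fnmatch.h>

int
main (void)
{
        /* same pattern and key as in frames #0-#2 above */
        int ret = fnmatch ("*.selinux*", "security.selinux", FNM_PERIOD);

        /* fnmatch() returns 0 on a match, which is what makes
         * fuse_filter_xattr() report the key as to-be-filtered */
        printf ("fnmatch returned %d (%s)\n",
                ret, ret == 0 ? "match" : "no match");

        return 0;
}

////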

Here at frame #3, in dict_keys_join(), we end up in an infinite loop: for the
"security.selinux" key handed to fuse_filter_xattr() (the filter_fn callback of
dict_keys_join()), the filter returns 1,

and thus:

// dict.c

int
dict_keys_join (void *value, int size, dict_t *dict,
                int (*filter_fn)(char *k))
{
        int          len = 0;
        data_pair_t *pairs = NULL;
        data_pair_t *next  = NULL;

        pairs = dict->members_list;
        while (pairs) {
                next = pairs->next;

                if (filter_fn && filter_fn (pairs->key))
                        continue;

                ...

                pairs = next;
        }

////

we never get out of the while loop here, because the continue fires before pairs is advanced to next.
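
To make the loop shape concrete, here is a tiny standalone model of the same
iteration pattern (not glusterfs code; names and keys are made up for
illustration) where the cursor advance is never skipped, so a filtered key
cannot stall the walk:

// filter-loop-model.c (standalone sketch, not the actual fix)

#include <stdio.h>
#include <string.h>

struct pair {
        const char  *key;
        struct pair *next;
};

/* stand-in for fuse_filter_xattr(): drop keys containing "selinux" */
static int
filter_selinux (const char *key)
{
        return strstr (key, "selinux") != NULL;
}

int
main (void)
{
        struct pair c = { "user.foo",         NULL };
        struct pair b = { "security.selinux", &c   };
        struct pair a = { "trusted.gfid",     &b   };
        struct pair *pairs = &a;

        while (pairs) {
                struct pair *next = pairs->next;

                /* unlike the buggy loop above, the advance below is
                 * never skipped when a key is filtered out */
                if (!filter_selinux (pairs->key))
                        printf ("keeping %s\n", pairs->key);

                pairs = next;
        }

        return 0;
}

////

The buggy version is the same walk, except that pairs = next is only reachable
when the key is not filtered, which is exactly what the trace above ends up
spinning on.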

Haven't you noticed your CPU burning? ;)

Csaba



