[Bugs] [Bug 1273387] FUSE clients in a container environment hang and do not recover post losing connections to all bricks

bugzilla at redhat.com bugzilla at redhat.com
Mon Oct 26 10:45:10 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1273387



--- Comment #3 from Vijay Bellur <vbellur at redhat.com> ---
COMMIT: http://review.gluster.org/12402 committed in master by Raghavendra G
(rgowdapp at redhat.com) 
------
commit 4f65f894ab1c19618383ba212dc0f0df48675823
Author: Raghavendra G <rgowdapp at redhat.com>
Date:   Tue Oct 20 16:27:14 2015 +0530

    mount/fuse: use a queue instead of a pipe to communicate with the
    thread doing inode/entry invalidations.

    Writing to a pipe can block if the pipe is full. This can lead to
    deadlocks in some situations. Consider the following situation:

    1. The kernel sends a write on an inode. The client is waiting for a
       response to the write from a brick.
    2. A lookup happens on behalf of a different application/thread on
       the same inode. In response, md-cache (mdc) tries to invalidate
       the inode.
    3. fuse_invalidate_inode is called. It writes an invalidation request
       to the pipe. Another thread, which reads from this pipe, writes
       the request to /dev/fuse. The invalidate code in the fuse kernel
       module tries to acquire a lock on all pages of the inode and is
       blocked, as a write is in progress on the same inode (step 1).
    4. Now the poller thread is blocked in the invalidate notification
       and cannot receive any messages from the same socket (on which the
       lookup response came). But the client is expecting a response to
       the write on that same socket (again step 1), so we have a
       deadlock.
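
    To make step 3 concrete, here is a minimal standalone sketch (not
    GlusterFS code; it only assumes ordinary Linux/POSIX pipe semantics)
    showing that a pipe has a finite capacity and that a write to a full
    pipe either blocks or, with O_NONBLOCK, fails with EAGAIN:

        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            int fds[2];
            char byte = 0;
            ssize_t total = 0;

            if (pipe(fds) == -1)
                return 1;

            /* Non-blocking write end, so this demo errors out instead of
             * hanging the way the invalidation writer does in this bug. */
            fcntl(fds[1], F_SETFL, O_NONBLOCK);

            while (write(fds[1], &byte, 1) == 1)
                total++;

            if (errno == EAGAIN)
                printf("pipe filled after %zd bytes; a blocking write "
                       "would hang here\n", total);

            close(fds[0]);
            close(fds[1]);
            return 0;
        }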

    The deadlock can be solved in two ways:
    1. Use a queue (and a condition variable for notifications) to pass
       invalidation requests from the poller to the invalidate thread.
       This is a variant of using a non-blocking pipe, but it doesn't
       impose any limit on the amount of queued data (in the worst case
       we run out of memory and error out).

    2. Allow events from sockets immediately after we read one rpc-msg.
       Currently we disallow events until that rpc-msg has been read from
       the socket, processed, and handled by higher layers. That way we
       wouldn't run into this kind of issue. It would also increase
       parallelism in reading from sockets.

    This patch implements solution 1 above.
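
    For illustration, a minimal sketch of the queue + condition variable
    pattern described in solution 1 (names such as invalidate_req,
    enqueue_invalidate and write_to_dev_fuse are hypothetical and do not
    match the actual patch): the poller thread appends a request to an
    unbounded in-memory list and signals the condition variable, and the
    invalidate thread drains the list and writes to /dev/fuse, so the
    poller never blocks on a full pipe.

        #include <pthread.h>
        #include <stdlib.h>

        struct invalidate_req {
            struct invalidate_req *next;
            /* inode/entry details would go here */
        };

        static struct invalidate_req *queue_head, *queue_tail;
        static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t queue_cond = PTHREAD_COND_INITIALIZER;

        /* Called by the poller thread: never blocks on a full pipe, only
         * briefly on the mutex. */
        void
        enqueue_invalidate(struct invalidate_req *req)
        {
            pthread_mutex_lock(&queue_lock);
            req->next = NULL;
            if (queue_tail)
                queue_tail->next = req;
            else
                queue_head = req;
            queue_tail = req;
            pthread_cond_signal(&queue_cond);
            pthread_mutex_unlock(&queue_lock);
        }

        /* Invalidate thread: may block while writing to /dev/fuse
         * without stalling the poller. */
        void *
        invalidate_thread(void *arg)
        {
            (void)arg;
            for (;;) {
                struct invalidate_req *req;

                pthread_mutex_lock(&queue_lock);
                while (queue_head == NULL)
                    pthread_cond_wait(&queue_cond, &queue_lock);
                req = queue_head;
                queue_head = req->next;
                if (queue_head == NULL)
                    queue_tail = NULL;
                pthread_mutex_unlock(&queue_lock);

                /* write_to_dev_fuse(req);  -- hypothetical stand-in for
                 * writing the notification to /dev/fuse */
                free(req);
            }
            return NULL;
        }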

    Change-Id: I8e8199fd7f4da9eab46a719d9292f35c039967e1
    BUG: 1273387
    Signed-off-by: Raghavendra G <rgowdapp at redhat.com>
    Reviewed-on: http://review.gluster.org/12402

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

