[Bugs] [Bug 1664215] Toggling Read ahead translator off causes some clients to umount some of its volumes

bugzilla at redhat.com bugzilla at redhat.com
Tue Jan 8 13:37:01 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1664215

Nithya Balachandran <nbalacha at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |amgad.saleh at nokia.com,
                   |                            |nbalacha at redhat.com
           Assignee|bugs at gluster.org            |atumball at redhat.com
              Flags|                            |needinfo?(amgad.saleh at nokia
                   |                            |.com)



--- Comment #1 from Nithya Balachandran <nbalacha at redhat.com> ---
(In reply to Amgad from comment #0)
> Created attachment 1519119 [details]
> Glusterfs client logs
> 
> Description of problem:
> 
> After turning the Read ahead translator "off", some of the (fuse) clients
> were disconnected (unmounted) from one of the data volumes. Attached are the
> glusterfs logs from the client that experienced the disconnect.
> 
> The following is an excerpt of the messages in the glusterfs/data.log.*
> logfiles:
> ---
> [2019-01-07 07:40:44.625789] I [fuse-bridge.c:4835:fuse_graph_sync] 0-fuse:
> switched to graph 8
> [2019-01-07 07:40:44.629594] I [MSGID: 114021] [client.c:2369:notify]
> 6-el_data-client-0: current graph is no longer active, destroying rpc_client
> [2019-01-07 07:40:44.629651] I [MSGID: 114021] [client.c:2369:notify]
> 6-el_data-client-1: current graph is no longer active, destroying rpc_client
> [2019-01-07 07:40:44.629668] I [MSGID: 114018]
> [client.c:2285:client_rpc_notify] 6-el_data-client-0: disconnected from
> el_data-client-0. Client process will keep trying to connect to glusterd
> until brick's port is available
> [2019-01-07 07:40:44.629724] I [MSGID: 114018]
> [client.c:2285:client_rpc_notify] 6-el_data-client-1: disconnected from
> el_data-client-1. Client process will keep trying to connect to glusterd
> until brick's port is available
> [2019-01-07 07:40:44.629732] E [MSGID: 108006]
> [afr-common.c:5118:__afr_handle_child_down_event] 6-el_data-replicate-0: All
> subvolumes are down. Going offline until atleast one of them comes back up.
> [2019-01-07 07:40:44.869481] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec] 0-mgmt:
> Volume file changed
> [2019-01-07 07:40:44.916540] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec] 0-mgmt:
> Volume file changed
> ----
> 
> Version-Release number of selected component (if applicable):
> 3.12.13
> 
> How reproducible:
> Turn the Read ahead translator "off" on the server side.
> 

Do you mean read-ahead or readdir-ahead? They are two different translators,
and the memory leak was in readdir-ahead.
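For reference, the two translators are toggled by separate volume options, so it
matters which one was changed. A sketch of the two commands (assuming the volume
named el_data seen in the logs):

```shell
# The read-ahead and readdir-ahead translators are controlled independently:
gluster volume set el_data performance.read-ahead off      # read-ahead xlator
gluster volume set el_data performance.readdir-ahead off   # readdir-ahead xlator

# Confirm which options are currently set on the volume:
gluster volume get el_data performance.read-ahead
gluster volume get el_data performance.readdir-ahead
```

Changing either option triggers a volfile regeneration and a client-side graph
switch, which produces the "switched to graph" and "destroying rpc_client"
messages quoted above.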

Do clients lose access to the volume and do you see errors on the mount point?
The graph switch messages in the logs are expected.
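A quick way to check whether the fuse mount is actually broken (the mount path
/mnt/el_data below is an assumption, substitute the real one):

```shell
# If the mount died, stat typically fails with
# "Transport endpoint is not connected":
stat /mnt/el_data >/dev/null && echo "mount OK" || echo "mount broken"

# Also check that the fuse mount is still listed:
grep fuse.glusterfs /proc/mounts
```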


@Amar, please assign the BZ to the appropriate person.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.


More information about the Bugs mailing list