[Gluster-users] Issues with replicated gluster volume

ahemad shaik ahemad_shaik at yahoo.com
Tue Jun 16 08:01:30 UTC 2020


Hi Karthik,

Please find the attached logs.
Kindly suggest how to make the volume highly available.

Thanks,
Ahemad
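In case it helps other readers with the same question: with a replica 3 volume, the FUSE client can be given fallback volfile servers at mount time, so mounting does not depend on a single node being up (once mounted, the client talks to all bricks directly, so the volfile server mainly matters at mount time). A sketch using the hostnames from this thread; the option name is as documented for the GlusterFS FUSE mount helper, so please verify it against your installed version:

```shell
# Mount with fallback volfile servers, so the client can fetch the
# volume configuration from node2 or node3 if node1 is down at mount time:
mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/glustervol /mnt/

# Roughly equivalent persistent entry in /etc/fstab:
# node1:/glustervol /mnt glusterfs defaults,_netdev,backup-volfile-servers=node2:node3 0 0
```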


    On Tuesday, 16 June, 2020, 12:09:10 pm IST, Karthik Subrahmanya <ksubrahm at redhat.com> wrote:  
 
 Hi,
Thanks for the clarification.
In that case, can you attach the complete glusterd, brick, and mount logs from all the nodes covering the time when this happened? Also paste the output that you see when you try to access or do operations on the mount point.

Regards,
Karthik
On Tue, Jun 16, 2020 at 11:55 AM ahemad shaik <ahemad_shaik at yahoo.com> wrote:

Sorry, it was a typo.
The exact command I used is below. The volume is mounted on node4:

"mount -t glusterfs node1:/glustervol /mnt/"


The gluster volume is created from node1, node2 and node3:

"gluster volume create glustervol replica 3 transport tcp node1:/data node2:/data node3:/data force"
I tried rebooting node3 to test high availability.
I hope it is clear now. Please let me know if you have any questions.

Thanks,
Ahemad


    On Tuesday, 16 June, 2020, 11:45:48 am IST, Karthik Subrahmanya <ksubrahm at redhat.com> wrote:  
 
 Hi Ahemad,
A quick question on the mount command that you have used:
"mount -t glusterfs node4:/glustervol /mnt/"
Here you are specifying the hostname as node4 instead of node{1,2,3} which actually host the volume that you intend to mount. Is this a typo or did you paste the same command that you used for mounting?
If it is the actual command that you used, then node4 seems to have some stale volume details that were not cleaned up properly, and those are being used while mounting. According to the peer info you provided, only node1, 2 and 3 are part of the cluster, so node4 should be unaware of the volume that you intend to mount, and this command is mounting a volume which is only visible to node4.
Regards,
Karthik
On Tue, Jun 16, 2020 at 11:11 AM ahemad shaik <ahemad_shaik at yahoo.com> wrote:

 Hi Karthik,

I see errors about being unable to connect to the port, and warnings that the transport endpoint is not connected. Please find the complete logs below.
Kindly suggest.
1. gluster peer status
gluster peer status
Number of Peers: 2

Hostname: node1
Uuid: 0e679115-15ad-4a85-9d0a-9178471ef90
State: Peer in Cluster (Connected)

Hostname: node2
Uuid: 785a7c5b-86d3-45b9-b371-7e66e7fa88e0
State: Peer in Cluster (Connected)

gluster pool list
UUID                                    Hostname                                State
0e679115-15ad-4a85-9d0a-9178471ef90     node1                                   Connected
785a7c5b-86d3-45b9-b371-7e66e7fa88e0    node2                                   Connected
ec137af6-4845-4ebb-955a-fac1df9b7b6c    localhost(node3)                        Connected
2. gluster volume info glustervol
Volume Name: glustervol
Type: Replicate
Volume ID: 5422bb27-1863-47d5-b216-61751a01b759
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node1:/data
Brick2: node2:/data
Brick3: node3:/data
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
3. gluster volume status glustervol
gluster volume status glustervol
Status of volume: glustervol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/data                           49152     0          Y       59739
Brick node2:/data                           49153     0          Y       3498
Brick node3:/data                           49152     0          Y       1880
Self-heal Daemon on localhost               N/A       N/A        Y       1905
Self-heal Daemon on node1                   N/A       N/A        Y       3519
Self-heal Daemon on node2                   N/A       N/A        Y       59760

Task Status of Volume glustervol
------------------------------------------------------------------------------
There are no active volume tasks
4. client log from node4 when you saw unavailability-
Below are the logs from when I rebooted server node3; you can see in the logs that "0-glustervol-client-2: disconnected from glustervol-client-2".
The complete logs below run from the reboot until the server became available again. I am testing high availability by simply rebooting a server. In a real-world scenario the server may be unavailable for hours, so I want to avoid a long downtime.

[2020-06-16 05:14:25.256136] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-0: Connected to glustervol-client-0, attached to remote volume '/data'.
[2020-06-16 05:14:25.256179] I [MSGID: 108005] [afr-common.c:5247:__afr_handle_child_up_event] 0-glustervol-replicate-0: Subvolume 'glustervol-client-0' came back up; going online.
[2020-06-16 05:14:25.257972] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-1: Connected to glustervol-client-1, attached to remote volume '/data'.
[2020-06-16 05:14:25.258014] I [MSGID: 108002] [afr-common.c:5609:afr_notify] 0-glustervol-replicate-0: Client-quorum is met
[2020-06-16 05:14:25.260312] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-2: Connected to glustervol-client-2, attached to remote volume '/data'.
[2020-06-16 05:14:25.261935] I [fuse-bridge.c:5145:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.23
[2020-06-16 05:14:25.261957] I [fuse-bridge.c:5756:fuse_graph_sync] 0-fuse: switched to graph 0
[2020-06-16 05:16:59.729400] I [MSGID: 114018] [client.c:2331:client_rpc_notify] 0-glustervol-client-2: disconnected from glustervol-client-2. Client process will keep trying to connect to glusterd until brick's port is available
[2020-06-16 05:16:59.730053] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:08.175698 (xid=0xae)
[2020-06-16 05:16:59.730089] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-glustervol-client-2: remote operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected]
[2020-06-16 05:16:59.730336] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:10.237849 (xid=0xaf)
[2020-06-16 05:16:59.730540] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:22.694419 (xid=0xb0)
[2020-06-16 05:16:59.731132] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:27.574139 (xid=0xb1)
[2020-06-16 05:16:59.731319] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at 2020-06-16 05:16:34.231433 (xid=0xb2)
[2020-06-16 05:16:59.731352] W [rpc-clnt-ping.c:210:rpc_clnt_ping_cbk] 0-glustervol-client-2: socket disconnected
[2020-06-16 05:16:59.731464] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:41.213884 (xid=0xb3)
[2020-06-16 05:16:59.731650] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:48.756212 (xid=0xb4)
[2020-06-16 05:16:59.731876] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:52.258940 (xid=0xb5)
[2020-06-16 05:16:59.732060] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:54.618301 (xid=0xb6)
[2020-06-16 05:16:59.732246] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:58.288790 (xid=0xb7)
[2020-06-16 05:17:10.245302] I [rpc-clnt.c:2028:rpc_clnt_reconfig] 0-glustervol-client-2: changing port to 49152 (from 0)
[2020-06-16 05:17:10.249896] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-2: Connected to glustervol-client-2, attached to remote volume '/data'
The message "W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-glustervol-client-2: remote operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected]" repeated 8 times between [2020-06-16 05:16:59.730089] and [2020-06-16 05:16:59.732278]
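One observation on the timestamps in the client log: the pending LOOKUPs (xid=0xae through 0xb7, issued from 05:16:08 onward) were only unwound when the brick was declared disconnected at 05:16:59, so the mount hung for tens of seconds after the ungraceful reboot. That window is consistent with GlusterFS's network.ping-timeout, which defaults to 42 seconds. If shorter hangs on node failure matter more to you than resilience to brief network blips, the timeout can be lowered; a sketch, to be weighed against the risk of spurious disconnects under heavy load:

```shell
# Show the current ping timeout for the volume:
gluster volume get glustervol network.ping-timeout

# Lower it so clients give up on an unreachable brick sooner:
gluster volume set glustervol network.ping-timeout 10
```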
Thanks,
Ahemad
    On Tuesday, 16 June, 2020, 10:58:42 am IST, ahemad shaik <ahemad_shaik at yahoo.com> wrote:  
 
  Hi Karthik,
Please find the details below.
Please provide the following info:
1. gluster peer status
gluster peer status
Number of Peers: 2

Hostname: node1
Uuid: 0e679115-15ad-4a85-9d0a-9178471ef90
State: Peer in Cluster (Connected)

Hostname: node2
Uuid: 785a7c5b-86d3-45b9-b371-7e66e7fa88e0
State: Peer in Cluster (Connected)

gluster pool list
UUID                                    Hostname                                State
0e679115-15ad-4a85-9d0a-9178471ef90     node1                                   Connected
785a7c5b-86d3-45b9-b371-7e66e7fa88e0    node2                                   Connected
ec137af6-4845-4ebb-955a-fac1df9b7b6c    localhost(node3)                        Connected
2. gluster volume info glustervol
Volume Name: glustervol
Type: Replicate
Volume ID: 5422bb27-1863-47d5-b216-61751a01b759
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node1:/data
Brick2: node2:/data
Brick3: node3:/data
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
3. gluster volume status glustervol
gluster volume status glustervol
Status of volume: glustervol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/data                           49152     0          Y       59739
Brick node2:/data                           49153     0          Y       3498
Brick node3:/data                           49152     0          Y       1880
Self-heal Daemon on localhost               N/A       N/A        Y       1905
Self-heal Daemon on node1                   N/A       N/A        Y       3519
Self-heal Daemon on node2                   N/A       N/A        Y       59760

Task Status of Volume glustervol
------------------------------------------------------------------------------
There are no active volume tasks
4. client log from node4 when you saw unavailability-
Below are the logs from when I rebooted server node3; you can see in the logs that "0-glustervol-client-2: disconnected from glustervol-client-2".
The complete logs below run from the reboot until the server became available again. I am testing high availability by simply rebooting a server. In a real-world scenario the server may be unavailable for hours, so we want to avoid a long downtime.

[2020-06-16 05:14:25.256136] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-0: Connected to glustervol-client-0, attached to remote volume '/data'.
[2020-06-16 05:14:25.256179] I [MSGID: 108005] [afr-common.c:5247:__afr_handle_child_up_event] 0-glustervol-replicate-0: Subvolume 'glustervol-client-0' came back up; going online.
[2020-06-16 05:14:25.257972] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-1: Connected to glustervol-client-1, attached to remote volume '/data'.
[2020-06-16 05:14:25.258014] I [MSGID: 108002] [afr-common.c:5609:afr_notify] 0-glustervol-replicate-0: Client-quorum is met
[2020-06-16 05:14:25.260312] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-2: Connected to glustervol-client-2, attached to remote volume '/data'.
[2020-06-16 05:14:25.261935] I [fuse-bridge.c:5145:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.23
[2020-06-16 05:14:25.261957] I [fuse-bridge.c:5756:fuse_graph_sync] 0-fuse: switched to graph 0
[2020-06-16 05:16:59.729400] I [MSGID: 114018] [client.c:2331:client_rpc_notify] 0-glustervol-client-2: disconnected from glustervol-client-2. Client process will keep trying to connect to glusterd until brick's port is available
[2020-06-16 05:16:59.730053] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:08.175698 (xid=0xae)
[2020-06-16 05:16:59.730089] W [MSGID: 114031] [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-glustervol-client-2: remote operation failed. Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected]
[2020-06-16 05:16:59.730336] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:10.237849 (xid=0xaf)
[2020-06-16 05:16:59.730540] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:22.694419 (xid=0xb0)
[2020-06-16 05:16:59.731132] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:27.574139 (xid=0xb1)
[2020-06-16 05:16:59.731319] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at 2020-06-16 05:16:34.231433 (xid=0xb2)
[2020-06-16 05:16:59.731352] W [rpc-clnt-ping.c:210:rpc_clnt_ping_cbk] 0-glustervol-client-2: socket disconnected
[2020-06-16 05:16:59.731464] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:41.213884 (xid=0xb3)
[2020-06-16 05:16:59.731650] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:48.756212 (xid=0xb4)
[2020-06-16 05:16:59.731876] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:52.258940 (xid=0xb5)
[2020-06-16 05:16:59.732060] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:54.618301 (xid=0xb6)
[2020-06-16 05:16:59.732246] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a6a41badb] (--> /lib64/libgfrpc.so.0(+0xd7e4)[0x7f4a6a1c27e4] (--> /lib64/libgfrpc.so.0(+0xd8fe)[0x7f4a6a1c28fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x97)[0x7f4a6a1c3987] (--> /lib64/libgfrpc.so.0(+0xf518)[0x7f4a6a1c4518] ))))) 0-glustervol-client-2: forced unwinding frame type(GlusterFS 4.x v1) op(LOOKUP(27)) called at 2020-06-16 05:16:58.288790 (xid=0xb7)
[2020-06-16 05:17:10.245302] I [rpc-clnt.c:2028:rpc_clnt_reconfig] 0-glustervol-client-2: changing port to 49152 (from 0)
[2020-06-16 05:17:10.249896] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-glustervol-client-2: Connected to glustervol-client-2, attached to remote volume '/data'.
Thanks,
Ahemad
    On Tuesday, 16 June, 2020, 10:10:16 am IST, Karthik Subrahmanya <ksubrahm at redhat.com> wrote:  
 
 Hi Ahemad,
Please provide the following info:
1. gluster peer status
2. gluster volume info glustervol
3. gluster volume status glustervol
4. client log from node4 when you saw unavailability
Regards,
Karthik
On Mon, Jun 15, 2020 at 11:07 PM ahemad shaik <ahemad_shaik at yahoo.com> wrote:

Hi There,
I have created a replica 3 gluster volume with 3 bricks from 3 nodes.
"gluster volume create glustervol replica 3 transport tcp node1:/data node2:/data node3:/data force"
mounted on client node using below command.
"mount -t glusterfs node4:/glustervol    /mnt/"
When any of the nodes (node1, node2 or node3) goes down, the gluster mount/volume (/mnt) is not accessible on the client (node4).
The purpose of a replicated volume is high availability, but I am not able to achieve it.
Is it a bug, or am I missing something?
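For anyone reproducing this test: with a healthy replica 3 volume, the mount should remain available while one brick is down, and pending writes are healed when the node returns. A quick way to inspect the settings and heal state involved (standard gluster CLI commands; the volume name is the one from this thread):

```shell
# Inspect the quorum setting that governs availability during a brick outage:
gluster volume get glustervol cluster.quorum-type

# After the rebooted node is back, confirm self-heal has caught up;
# each brick should eventually report "Number of entries: 0":
gluster volume heal glustervol info
```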

Any suggestions would be a great help!
Kindly suggest.

Thanks,
Ahemad
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

    
  
  
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: node1-bricks-logs.txt
URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20200616/48bb483f/attachment-0007.txt>
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: node1-glusterd-logs.txt
URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20200616/48bb483f/attachment-0008.txt>
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: node2-bricks-logs.txt
URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20200616/48bb483f/attachment-0009.txt>
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: node2-glusterd-logs.txt
URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20200616/48bb483f/attachment-0010.txt>
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: node3-bricks-logs.txt
URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20200616/48bb483f/attachment-0011.txt>
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: node3-glusterd-logs.txt
URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20200616/48bb483f/attachment-0012.txt>
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: node4-client-mnt-logs.txt
URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20200616/48bb483f/attachment-0013.txt>


More information about the Gluster-users mailing list