[Gluster-users] gnfs split brain when 1 server in 3x1 down (high load) - help request

Strahil Nikolov hunter86_bg at yahoo.com
Sun Mar 29 21:39:24 UTC 2020


On March 29, 2020 7:10:49 AM GMT+03:00, Erik Jacobson <erik.jacobson at hpe.com> wrote:
>Hello all,
>
>I am getting split-brain errors in the gnfs nfs.log when 1 gluster
>server is down in a 3-brick/3-node gluster volume. It only happens
>under intense load.
>
>I reported this a few months ago but didn't have a repeatable test
>case. Since then, we got reports from the field and I was able to make
>a test case with 3 gluster servers and 76 NFS clients/compute nodes.
>I point all 76 nodes to one gnfs server to make the problem more
>likely to happen with the limited nodes we have in-house.
>
>We are using gluster nfs (ganesha is not yet reliable for our workload)
>to export an NFS filesystem that is used for a read-only root
>filesystem for NFS clients. The largest client count we have is 2592
>across 9 leaders (3 replicated subvolumes) - out in the field. This is
>where the problem was first reported.
>
>In the lab, I have a test case that can repeat the problem on a single
>subvolume cluster.
>
>Please forgive how ugly the test case is. I'm sure an IO test person
>can make it pretty. It basically runs a bunch of cluster-manager
>NFS-intensive operations while also producing other load. If one
>leader is down, nfs.log reports some split-brain errors. For
>real-world customers, the symptom is "some nodes failing to boot" in
>various ways, or "jobs failing to launch due to permissions or file
>read problems (like a library not being readable on one node)". If all
>leaders are up, we see no errors.
>
>As an attachment, I will include volume settings.
>
>Here are example nfs.log errors:
>
>
>[2020-03-29 03:42:52.295532] E [MSGID: 108008]
>[afr-read-txn.c:312:afr_read_txn_refresh_done] 0-cm_shared-replicate-0:
>Failing ACCESS on gfid 8eed77d3-b4fa-4beb-a0e7-e46c2b71ffe1:
>split-brain observed. [Input/output error]
>[2020-03-29 03:42:52.295583] W [MSGID: 112199]
>[nfs3-helpers.c:3308:nfs3_log_common_res] 0-nfs-nfsv3:
><gfid:9e721602-2732-4490-bde3-19cac6e33291>/bin/whoami => (XID:
>19fb1558, ACCESS: NFS: 5(I/O error), POSIX: 5(Input/output error))
>[2020-03-29 03:43:03.600023] E [MSGID: 108008]
>[afr-read-txn.c:312:afr_read_txn_refresh_done] 0-cm_shared-replicate-0:
>Failing ACCESS on gfid 77614c4f-1ac4-448d-8fc2-8aedc9b30868:
>split-brain observed. [Input/output error]
>[2020-03-29 03:43:03.600075] W [MSGID: 112199]
>[nfs3-helpers.c:3308:nfs3_log_common_res] 0-nfs-nfsv3:
><gfid:9e721602-2732-4490-bde3-19cac6e33291>/lib64/perl5/vendor_perl/XML/LibXML/Literal.pm
>=> (XID: 9a851abc, ACCESS: NFS: 5(I/O error), POSIX: 5(Input/output
>error))
>[2020-03-29 03:43:07.681294] E [MSGID: 108008]
>[afr-read-txn.c:312:afr_read_txn_refresh_done] 0-cm_shared-replicate-0:
>Failing READLINK on gfid 36134289-cb2d-43d9-bd50-60e23d7fa69b:
>split-brain observed. [Input/output error]
>[2020-03-29 03:43:07.681339] W [MSGID: 112199]
>[nfs3-helpers.c:3327:nfs3_log_readlink_res] 0-nfs-nfsv3:
><gfid:9e721602-2732-4490-bde3-19cac6e33291>/lib64/.libhogweed.so.4.hmac
>=> (XID: 5c29744f, READLINK: NFS: 5(I/O error), POSIX: 5(Input/output
>error)) target: (null)
>
>
>The brick log isn't very interesting during the failure. There are some
>ACL errors that don't seem to directly relate to the issue at hand.
>(I can attach if requested!)
>
>This is glusterfs 7.2 (although we originally hit it with 4.1.6).
>I'm using RHEL 8 (although field reports are from RHEL 7.6).
>
>If there is anything the community can suggest to help me with this, it
>would really be appreciated. I'm getting unhappy reports from the field
>that the failover doesn't work as expected.
>
>I've tried tweaking several things, from various threading settings to
>enabling md-cache-statfs, to mem-factor, to listen backlogs. I even
>tried adjusting the cluster.read-hash-mode and choose-local settings.
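>
>(For context, those tweaks were all of the usual "gluster volume set"
>form; the lines below are illustrative examples only - the exact
>values were varied during testing:)
>
>```shell
># Illustrative 'gluster volume set' tweaks (volume name taken from the
># logs above); none of these made the split-brain errors go away.
>gluster volume set cm_shared performance.md-cache-statfs on
>gluster volume set cm_shared cluster.read-hash-mode 1
>gluster volume set cm_shared cluster.choose-local off
>```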
>
>"cluster-configuration" in the script initiates a bunch of operations
>on the node that results in reading many files and doing some database
>queries. I used it in my test case as it is a common failure point
>when nodes are booting. This test case, although ugly, fails 100% if
>one server is down and works 100% if all servers are up.
>
>
>#! /bin/bash
>
>#
># Test case:
>#
># in a 1x3 Gluster Replicated setup with the HPCM volume settings..
>#
># On a cluster with 76 nodes (maybe it can be reproduced with fewer;
># we don't know)
>#
># When all the nodes are assigned to one IP alias to get the load in to
># one leader node....
>#
># This test case will produce split-brain errors in the nfs.log file
># when 1 leader is down, but will run clean when all 3 are up.
>#
># It is not necessary to power off the leader you wish to disable.
># Simply running 'systemctl stop glusterd' is sufficient.
>#
># We will use this script to try to resolve the issue with split-brain
># under stress when one leader is down.
>#
>
># (compute group is 76 compute nodes)
>echo "killing any node find or node tar commands..."
>pdsh -f 500 -g compute killall find
>pdsh -f 500 -g compute killall tar
>
># (in this test, leader1 is known to have glusterd stopped for the
># test case)
>echo "stop, start glusterd, drop caches, sleep 15"
>set -x
>pdsh -w leader2,leader3 systemctl stop glusterd
>sleep 3
>pdsh -w leader2,leader3 "echo 3 > /proc/sys/vm/drop_caches"
>pdsh -w leader2,leader3 systemctl start glusterd
>set +x
>sleep 15
>
>echo "drop caches on nodes"
>pdsh -f 500 -g compute "echo 3 > /proc/sys/vm/drop_caches"
>
>echo "----------------------------------------------------------------------"
>echo "test start"
>echo "----------------------------------------------------------------------"
>
>set -x
>
>
>pdsh -f 500 -g compute "tar cf - /usr > /dev/null" &
>pdsh -f 500 -g compute /opt/sgi/lib/cluster-configuration
>pdsh -f 500 -g compute /opt/sgi/lib/cluster-configuration
>pdsh -f 500 -g compute "find /usr > /dev/null" &
>pdsh -f 500 -g compute /opt/sgi/lib/cluster-configuration
>pdsh -f 500 -g compute /opt/sgi/lib/cluster-configuration
>wait

Hey Erik,

That's odd.
As far as I know, the clients access one of the gluster nodes, which serves as the NFS server and then syncs the data across its peers, right?
What happens when the virtual IP(s) are failed over to the other gluster node? Is the issue resolved?

Do you get any split-brain entries via 'gluster volume heal <VOL> info'?
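For example, to summarize the pending entries per brick (the here-doc below is stand-in sample output so the sketch is self-contained; on a live system pipe the real command, e.g. 'gluster volume heal cm_shared info', into the awk instead):

```shell
# Summarize pending-heal entries per brick from 'gluster volume heal <VOL> info'.
# The here-doc is sample output; replace it with the real command on a live system.
awk '/^Brick/ {brick=$2} /^Number of entries:/ {print brick": "$NF}' <<'EOF'
Brick leader1:/data/brick/cm_shared
Number of entries: 0
Brick leader2:/data/brick/cm_shared
Number of entries: 2
EOF
```

'gluster volume heal <VOL> info split-brain' narrows the listing to entries actually flagged as split-brain.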

Also, what kind of load balancing are you using?

Best Regards,
Strahil Nikolov

