[Gluster-users] backupvolfile-server (servers) not working for new mounts?
Joel Patterson
joel_patterson at verizon.net
Thu Apr 4 16:48:44 UTC 2019
I have a Gluster 4.1 setup with three servers running
Docker/Kubernetes. The pods mount their filesystems via Gluster.
10.13.112.31 is the primary server [A]; all mounts name it as the
volfile server, with the two other servers, 10.13.113.116 [B] and
10.13.114.16 [C], listed in backup-volfile-servers.
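For reference, the mounts boil down to something like this (the volume
name and mount point are placeholders here; only the IPs are real):

    mount -t glusterfs \
        -o backup-volfile-servers=10.13.113.116:10.13.114.16 \
        10.13.112.31:/myvol /mnt/myvol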
I'm testing what happens when a server goes down.
If I bring down [B] or [C], no problem, everything restages and works.
But if I bring down [A], any *existing* mount continues to work, while
any new mount fails; the pod logs messages about all subvolumes being
down.
But I've mounted this exact same volume on the same system (before
bringing the server down), and I can access all the data fine.
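I suppose I could test the backup path in isolation by naming [B] as
the primary volfile server in a manual mount, e.g. (placeholders again
for the volume and mount point):

    mount -t glusterfs \
        -o backup-volfile-servers=10.13.114.16 \
        10.13.113.116:/myvol /mnt/test

but my understanding is that backup-volfile-servers is supposed to
cover exactly that case automatically when [A] is unreachable.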
Why the failure for new mounts? I'm on AWS and all servers are in
different availability zones, but I don't see how that would be an issue.
I also tried the singular backupvolfile-server option instead, and that
didn't work either.
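(By "just backupvolfile-server" I mean the older singular spelling,
which as far as I know takes a single server rather than a
colon-separated list:

    mount -t glusterfs \
        -o backupvolfile-server=10.13.113.116 \
        10.13.112.31:/myvol /mnt/myvol
)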