[Gluster-users] One node goes offline, the other node can't see the replicated volume anymore

Greg Scott GregScott at infrasupport.com
Mon Jul 15 20:19:10 UTC 2013


Woops, didn't copy the list on this one.
*****

I have SELinux set to permissive mode, so those SELinux warnings should not be important.  If they were real denials, I would also have trouble mounting by hand, right?
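(For reference, a quick way to double-check this: in permissive mode SELinux still logs AVC denials, it just doesn't enforce them, so the warnings can appear even though nothing is blocked. A sketch of the usual checks, assuming auditd is running:)

```shell
# Confirm the current SELinux mode (should print "Permissive")
getenforce

# List recent AVC denial records from the audit log; in permissive
# mode these are logged but not enforced
ausearch -m avc -ts recent
```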

- Greg

-----Original Message-----
From: Joe Julian [mailto:joe at julianfamily.org] 
Sent: Monday, July 15, 2013 2:37 PM
To: Greg Scott
Subject: Re: [Gluster-users] One node goes offline, the other node can't see the replicated volume anymore

It's a known selinux bug: https://bugzilla.redhat.com/show_bug.cgi?id=984465

Either add your own via audit2allow or wait for a fix. (I'd do the former).
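(The audit2allow route Joe mentions usually looks something like the sketch below; the module name "glusterlocal" here is just an example, not anything official:)

```shell
# Build a local policy module from the logged AVC denials
# (module name "glusterlocal" is arbitrary)
ausearch -m avc -ts recent | audit2allow -M glusterlocal

# Review glusterlocal.te before loading, then install the module
semodule -i glusterlocal.pp
```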

On 07/15/2013 12:28 PM, Greg Scott wrote:
> Maybe I am dealing with a systemd timing glitch because I can do my mount by hand on both nodes.
>
> I do
>
> ls /firewall-scripts, confirm it's empty, then
>
> mount -av, and then another
>
> ls /firewall-scripts and now my files show up.  Both nodes behave identically.
>
> [root at chicago-fw2 rc.d]# nano /var/log/messages
> [root at chicago-fw2 rc.d]# ls /firewall-scripts
> [root at chicago-fw2 rc.d]# mount -av
> /                        : ignored
> /boot                    : already mounted
> /boot/efi                : already mounted
> /gluster-fw2             : already mounted
> swap                     : ignored
> extra arguments at end (ignored)
> /firewall-scripts        : successfully mounted
> [root at chicago-fw2 rc.d]# ls /firewall-scripts
> allow-all           failover-monitor.sh  lost+found       route-monitor.sh
> allow-all-with-nat  fwdate.txt           rc.firewall      start-failover-monitor.sh
> etc                 initial_rc.firewall  rcfirewall.conf  var
> [root at chicago-fw2 rc.d]#
>
> - Greg
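(If it is a systemd ordering problem, a commonly suggested workaround is to mark the GlusterFS mount as a network filesystem in /etc/fstab so systemd defers it until the network, and optionally the local glusterd, is up. A sketch, with the device and mount point assumed from the transcript above:)

```
# /etc/fstab - example entry; hostname and paths are illustrative.
# _netdev tells systemd this mount needs the network first;
# x-systemd.requires adds an explicit dependency on glusterd.
chicago-fw2:/firewall-scripts  /firewall-scripts  glusterfs  defaults,_netdev,x-systemd.requires=glusterd.service  0 0
```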
