[Gluster-users] One node goes offline, the other node can't see the replicated volume anymore

Greg Scott GregScott at infrasupport.com
Tue Jul 16 00:58:20 UTC 2013


Back to the Twilight Zone again.

I removed my rc.local this time and did a reboot, so the fstab mounts should have taken care of it.  But they didn't.  The fstab line now looks like this on both nodes:

localhost:/firewall-scripts /firewall-scripts glusterfs defaults,_netdev 0 0

After logging on, my /firewall-scripts filesystem is not mounted.  A mount -av by hand mounts it right up.
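If this turns out to be a race with glusterd starting after the boot-time mount, one thing I may try is letting systemd mount the volume lazily on first access instead of at boot. This is just an untested sketch using the standard x-systemd.automount fstab option:

```
# Untested sketch: defer the gluster mount until first access, so glusterd
# has time to come up first.  noauto keeps it out of boot-time mount ordering;
# x-systemd.automount is a standard systemd fstab option.
localhost:/firewall-scripts /firewall-scripts glusterfs defaults,_netdev,noauto,x-systemd.automount 0 0
```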

Here is an extract from /var/log/messages.  I noticed a couple of mentions of mounts.
.
.
.
Jul 15 19:39:56 chicago-fw1 network[457]: Bringing up interface enp5s7:  [  OK  ]
Jul 15 19:39:56 chicago-fw1 systemd[1]: Started LSB: Bring up/down networking.
Jul 15 19:39:56 chicago-fw1 systemd[1]: Starting Network.
Jul 15 19:39:56 chicago-fw1 systemd[1]: Reached target Network.
Jul 15 19:39:56 chicago-fw1 systemd[1]: Started Login and scanning of iSCSI devices.
Jul 15 19:39:56 chicago-fw1 systemd[1]: Mounting /firewall-scripts...
Jul 15 19:39:56 chicago-fw1 systemd[1]: Starting Vsftpd ftp daemon...
Jul 15 19:39:56 chicago-fw1 systemd[1]: Starting RPC bind service...
Jul 15 19:39:56 chicago-fw1 systemd[1]: Starting OpenSSH server daemon...
Jul 15 19:39:56 chicago-fw1 systemd[1]: Started RPC bind service.
Jul 15 19:39:56 chicago-fw1 systemd[1]: Started Vsftpd ftp daemon.
Jul 15 19:39:56 chicago-fw1 systemd[1]: Starting GlusterFS an clustered file-system server...
Jul 15 19:39:56 chicago-fw1 systemd[1]: Started OpenSSH server daemon.
Jul 15 19:39:56 chicago-fw1 dbus-daemon[458]: dbus[458]: [system] Activating service name='org.fedoraproject.Setroubleshootd' (using servicehelper)
Jul 15 19:39:56 chicago-fw1 dbus[458]: [system] Activating service name='org.fedoraproject.Setroubleshootd' (using servicehelper)
Jul 15 19:39:56 chicago-fw1 kernel: [   24.267903] fuse init (API version 7.21)
Jul 15 19:39:56 chicago-fw1 systemd[1]: Mounted /firewall-scripts.
Jul 15 19:39:56 chicago-fw1 systemd[1]: Starting Remote File Systems.
Jul 15 19:39:56 chicago-fw1 systemd[1]: Reached target Remote File Systems.
Jul 15 19:39:56 chicago-fw1 systemd[1]: Starting Trigger Flushing of Journal to Persistent Storage...
Jul 15 19:39:56 chicago-fw1 systemd[1]: Mounting FUSE Control File System...
Jul 15 19:39:59 chicago-fw1 systemd[1]: Started Trigger Flushing of Journal to Persistent Storage.
Jul 15 19:39:59 chicago-fw1 systemd[1]: Mounted FUSE Control File System.
Jul 15 19:39:59 chicago-fw1 systemd[1]: Starting Permit User Sessions...
Jul 15 19:39:59 chicago-fw1 systemd[1]: Started Permit User Sessions.
Jul 15 19:39:59 chicago-fw1 systemd[1]: Starting Command Scheduler...
Jul 15 19:39:59 chicago-fw1 systemd[1]: Started Command Scheduler.
Jul 15 19:39:59 chicago-fw1 systemd[1]: Starting Job spooling tools...
Jul 15 19:39:59 chicago-fw1 systemd[1]: Started Job spooling tools.
Jul 15 19:39:59 chicago-fw1 systemd[1]: Starting Terminate Plymouth Boot Screen...
Jul 15 19:39:59 chicago-fw1 systemd[1]: Starting Wait for Plymouth Boot Screen to Quit...
Jul 15 19:39:59 chicago-fw1 systemd[1]: Started Terminate Plymouth Boot Screen.
Jul 15 19:39:59 chicago-fw1 avahi-daemon[445]: Registering new address record for fe80::230:18ff:fea2:a340 on enp5s7.*.
Jul 15 19:39:59 chicago-fw1 dbus[458]: [system] Successfully activated service 'org.fedoraproject.Setroubleshootd'
Jul 15 19:39:59 chicago-fw1 dbus-daemon[458]: dbus[458]: [system] Successfully activated service 'org.fedoraproject.Setroubleshootd'
Jul 15 19:40:02 chicago-fw1 audispd: queue is full - dropping event
Jul 15 19:40:02 chicago-fw1 audispd: queue is full - dropping event
.
.
. zillions more "queue is full" messages
.
.
Jul 15 19:40:04 chicago-fw1 audispd: queue is full - dropping event
Jul 15 19:40:04 chicago-fw1 audispd: queue is full - dropping event
Jul 15 19:40:05 chicago-fw1 systemd[1]: Started GlusterFS an clustered file-system server.
Jul 15 19:40:05 chicago-fw1 systemd[1]: Starting Multi-User System.
Jul 15 19:40:05 chicago-fw1 systemd[1]: Reached target Multi-User System.
Jul 15 19:40:05 chicago-fw1 systemd[1]: Starting Update UTMP about System Runlevel Changes...
Jul 15 19:40:05 chicago-fw1 systemd[1]: Starting Stop Read-Ahead Data Collection 10s After Completed Startup.
Jul 15 19:40:05 chicago-fw1 systemd[1]: Started Stop Read-Ahead Data Collection 10s After Completed Startup.
Jul 15 19:40:05 chicago-fw1 systemd[1]: Started Update UTMP about System Runlevel Changes.
Jul 15 19:40:05 chicago-fw1 systemd[1]: Startup finished in 1.482s (kernel) + 2.210s (initrd) + 29.710s (userspace) = 33.403s.
Jul 15 19:40:06 chicago-fw1 mount[1000]: Mount failed. Please check the log file for more details.
Jul 15 19:40:06 chicago-fw1 systemd[1]: firewall\x2dscripts.mount mount process exited, code=exited status=1
Jul 15 19:40:06 chicago-fw1 systemd[1]: Unit firewall\x2dscripts.mount entered failed state.
Jul 15 19:40:06 chicago-fw1 rpc.statd[1184]: Version 1.2.7 starting
Jul 15 19:40:06 chicago-fw1 sm-notify[1185]: Version 1.2.7 starting
Jul 15 19:40:06 chicago-fw1 setroubleshoot: Plugin Exception catchall_labels
Jul 15 19:40:06 chicago-fw1 setroubleshoot: SELinux is preventing /usr/sbin/glusterfsd from mounton access on the directory /firewall-scripts. For complete SELinux messages. run sealert -l 7fb3c8ad-94f4-4292-b4ee-2495b452ef4b
.
.
.
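That SELinux denial near the end looks like it could be the real blocker, so my next step will probably be to poke at it with the standard SELinux tools. Roughly this, all untested, and the "glustermount" module name is just a placeholder I made up:

```
# Untested sketch of SELinux troubleshooting steps
sealert -l 7fb3c8ad-94f4-4292-b4ee-2495b452ef4b    # full text of this denial
getenforce                                         # confirm we're in Enforcing mode
ausearch -m avc -ts recent | audit2allow -M glustermount   # generate a local policy module
semodule -i glustermount.pp                        # install the generated module
```

If audit2allow suggests something overly broad, setting SELinux to permissive temporarily would at least confirm whether this denial is what's killing the boot-time mount.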


