[Gluster-users] One node goes offline, the other node can't see the replicated volume anymore

Greg Scott GregScott at infrasupport.com
Wed Jul 17 10:59:53 UTC 2013


I just rebooted both fw1 and fw2 again with no custom systemd script and no rc.local, everything virgin.  This time my /firewall-scripts filesystem is mounted on fw1 and not mounted on fw2, which is the exact opposite of what the same reboot test showed yesterday.

I’m going to run out of time very soon to tinker with this – the system it’s replacing is 400 miles away and degrading fast.
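
One thing I haven't tried yet, just a sketch of an idea: if the root cause is that glusterd isn't all the way up when systemd processes fstab, maybe I can tell systemd to automount the volume on first access instead of at boot.  Something like this fstab line (the x-systemd.automount option is my guess at the right knob here, not something I've tested on these boxes):

localhost:/firewall-scripts /firewall-scripts glusterfs defaults,_netdev,x-systemd.automount 0 0

If anybody knows whether _netdev alone should be enough ordering on Fedora, I'm all ears.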


- Greg


From: gluster-users-bounces at gluster.org On Behalf Of Greg Scott
Sent: Tuesday, July 16, 2013 11:58 AM
To: 'Joe Julian'
Cc: 'gluster-users at gluster.org'
Subject: Re: [Gluster-users] One node goes offline, the other node can't see the replicated volume anymore

BTW, I know I posted fw1 and fw2 results in different emails, but I rebooted both at the same time.


- Greg

From: Greg Scott
Sent: Tuesday, July 16, 2013 11:56 AM
To: 'Joe Julian'
Cc: gluster-users at gluster.org
Subject: RE: [Gluster-users] One node goes offline, the other node can't see the replicated volume anymore

Holy moley – but it **IS** mounted on fw2.  Go figure.   Welcome to today’s Twilight Zone episode.

[root at chicago-fw2 systemd]# cd /etc/rc.d
[root at chicago-fw2 rc.d]# mv rc.local greg-rc.local
[root at chicago-fw2 rc.d]# more /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sat Jul  6 05:08:55 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/fedora-root /                       ext4    defaults        1 1
UUID=f0cceb6a-61c4-409b-b882-5d6779a52505 /boot                   ext4    defaults        1 2
UUID=665D-DF0B          /boot/efi               vfat    umask=0077,shortname=winnt 0 0
/dev/mapper/fedora-gluster--fw2 /gluster-fw2            ext4    defaults        1 2
/dev/mapper/fedora-swap swap                    swap    defaults        0 0
# Added gluster stuff Greg Scott
localhost:/firewall-scripts /firewall-scripts glusterfs defaults,_netdev 0 0

[root at chicago-fw2 rc.d]# reboot
login as: root
root at 10.10.10.72's password:
Last login: Tue Jul 16 10:21:56 2013 from tinahp100b.infrasupport.local
[root at chicago-fw2 ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/fedora-root           14G  4.2G  8.4G  34% /
devtmpfs                         990M     0  990M   0% /dev
tmpfs                            996M     0  996M   0% /dev/shm
tmpfs                            996M  888K  996M   1% /run
tmpfs                            996M     0  996M   0% /sys/fs/cgroup
tmpfs                            996M     0  996M   0% /tmp
/dev/sda2                        477M   90M  362M  20% /boot
/dev/sda1                        200M  9.4M  191M   5% /boot/efi
/dev/mapper/fedora-gluster--fw2  7.6G   19M  7.2G   1% /gluster-fw2
localhost:/firewall-scripts      7.6G   19M  7.2G   1% /firewall-scripts
[root at chicago-fw2 ~]#
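
Next time the two nodes come up in different states, I'll also compare what systemd thinks of that mount on each side.  If I understand the fstab generator right, the unit name gets the dash escaped, so something like this (the escaped unit name is my assumption):

systemctl status 'firewall\x2dscripts.mount'
journalctl -b -u glusterd.service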



- Greg

From: gluster-users-bounces at gluster.org On Behalf Of Greg Scott
Sent: Tuesday, July 16, 2013 11:52 AM
To: 'Joe Julian'
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] One node goes offline, the other node can't see the replicated volume anymore

> Get rid of every other mount attempt. No custom systemd script, no rc.local (I know you start
> your own app from there, but let's get one thing working first) and make sure the fstab entry
> still has the _netdev option.

OK, done on both nodes.  fw1 output is pasted in below.  /firewall-scripts is not mounted after the node comes back up.

[root at chicago-fw1 ~]# cd /etc/rc.d
[root at chicago-fw1 rc.d]# mv rc.local greg-rc.local
[root at chicago-fw1 rc.d]# more /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sat Jul  6 04:26:01 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/fedora-root /                       ext4    defaults        1 1
UUID=818c4142-e389-4f28-a28e-6e26df3caa32 /boot                   ext4    defaults        1 2
UUID=C57B-BCF9          /boot/efi               vfat    umask=0077,shortname=winnt 0 0
/dev/mapper/fedora-gluster--fw1 /gluster-fw1            xfs     defaults        1 2
/dev/mapper/fedora-swap swap                    swap    defaults        0 0
# Added gluster stuff Greg Scott
localhost:/firewall-scripts /firewall-scripts glusterfs defaults,_netdev 0 0

[root at chicago-fw1 rc.d]# reboot
login as: root
root at 10.10.10.71's password:
Last login: Tue Jul 16 10:21:33 2013 from tinahp100b.infrasupport.local
[root at chicago-fw1 ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/fedora-root           14G  3.9G  8.7G  31% /
devtmpfs                         990M     0  990M   0% /dev
tmpfs                            996M     0  996M   0% /dev/shm
tmpfs                            996M  888K  996M   1% /run
tmpfs                            996M     0  996M   0% /sys/fs/cgroup
tmpfs                            996M     0  996M   0% /tmp
/dev/sda2                        477M   87M  365M  20% /boot
/dev/sda1                        200M  9.4M  191M   5% /boot/efi
/dev/mapper/fedora-gluster--fw1  7.9G   33M  7.8G   1% /gluster-fw1
[root at chicago-fw1 ~]#
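
When it comes up like this I'll try mounting it by hand and grab the client log.  I'm assuming the log file name tracks the mount point, so something like:

mount /firewall-scripts
tail -n 50 /var/log/glusterfs/firewall-scripts.log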

-Greg


