[Bugs] [Bug 1435170] New: Gluster-client no failover

bugzilla at redhat.com bugzilla at redhat.com
Thu Mar 23 10:20:23 UTC 2017


https://bugzilla.redhat.com/show_bug.cgi?id=1435170

            Bug ID: 1435170
           Summary: Gluster-client no failover
           Product: GlusterFS
           Version: 3.10
         Component: glusterd
          Severity: high
          Assignee: bugs at gluster.org
          Reporter: apinter.it at gmail.com
                CC: bugs at gluster.org



Description of problem:
Running Glusterfs-server on 4 KVM virtual machines with CentOS7.3 Core ,
installed using CentOS Storage SIG packages v3.10 and connecting from Fedora 25
with client version 3.10 as well.

When I shut down the server the client originally connected to, there is no
failover: the volume is unmounted instead.
The log shows name-resolution errors, even though all 4 servers are listed in
the hosts file on the servers and on the Fedora client as well.


Version-Release number of selected component (if applicable):
Installed Gluster packages on CentOS server side:

centos-release-gluster310-1.0-1.el7.centos.noarch
nfs-ganesha-gluster-2.4.3-1.el7.x86_64
glusterfs-api-3.10.0-1.el7.x86_64
glusterfs-cli-3.10.0-1.el7.x86_64
glusterfs-server-3.10.0-1.el7.x86_64
glusterfs-libs-3.10.0-1.el7.x86_64
glusterfs-3.10.0-1.el7.x86_64
glusterfs-fuse-3.10.0-1.el7.x86_64
glusterfs-ganesha-3.10.0-1.el7.x86_64
glusterfs-client-xlators-3.10.0-1.el7.x86_64

Installed glusterfs packages on Fedora side:

glusterfs-3.10.0-1.fc25.x86_64
glusterfs-api-3.10.0-1.fc25.x86_64
glusterfs-libs-3.10.0-1.fc25.x86_64
glusterfs-fuse-3.10.0-1.fc25.x86_64
glusterfs-client-xlators-3.10.0-1.fc25.x86_64


How reproducible: 
100% reproducible, with clients on CentOS, Fedora and Ubuntu (the latter used
as a reference to rule out an issue specific to my F25 installation).


Steps to Reproduce:
1. Install Glusterfs-server 3.10 (and above listed packages) from CentOS SIG
2. Add servers to /etc/hosts (on every server and client)
3. Enable unrestricted communication between the servers (on all servers)
firewall-cmd --permanent --add-source=192.168.0.204
firewall-cmd --permanent --add-source=192.168.0.205
firewall-cmd --permanent --add-source=192.168.0.206
firewall-cmd --permanent --add-source=192.168.0.207
4. Create a trusted pool between the 4 servers
5. Create volume: gluster volume create gfs replica 2 transport tcp
gfs1:/tank/avalon/gfs gfs2:/tank/avalon/gfs gfs3:/tank/avalon/gfs
gfs4:/tank/avalon/gfs
6. Label bricks
semanage fcontext -a -t glusterd_brick_t "/tank/avalon/gfs(/.*)?"
restorecon -Rv /tank/avalon/gfs
7. Connect from F25 with the glusterfs client:
sudo mount -t glusterfs -o backupvolfile-server=volfile_bk,transport=tcp
gfs1:/gfs /mnt/bitWafl/
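The mount in step 7 passes a single backupvolfile-server; the native client
also accepts a colon-separated list of fallback servers via
backup-volfile-servers (see mount.glusterfs(8)), which gives the client more
volfile servers to try at mount time. A minimal sketch, assuming the pool
hostnames gfs2/gfs3/gfs4 from the steps above; gluster_mount_opts is a
hypothetical helper that only builds the -o option string:

```shell
gluster_mount_opts() {
    # Build the -o string for a failover-capable native mount.
    # backup-volfile-servers takes a colon-separated server list
    # (see mount.glusterfs(8)); $1 is that list, e.g. gfs2:gfs3:gfs4.
    echo "backup-volfile-servers=$1,transport=tcp"
}

# Assumed usage (hostnames taken from the trusted pool above):
#   sudo mount -t glusterfs -o "$(gluster_mount_opts gfs2:gfs3:gfs4)" \
#       gfs1:/gfs /mnt/bitWafl/
```

Note that the volfile server only matters while fetching the volfile at mount
time; once mounted, replica failover is handled by the client translators.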


Actual results:
After shutting down the server, the client does not connect to another server.
The mount log is long; available here:
https://paste.fedoraproject.org/paste/OnhVTc-PcNBBDnEvIftddF5M1UNdIGYhyRLivL9gydE=

Command log from the server:
https://paste.fedoraproject.org/paste/bPmz63R3VozHI7XHZE9dnl5M1UNdIGYhyRLivL9gydE=

Expected results:
The client should connect to another server in the pool where the volume is
present, without downtime.
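Whether a given file stays reachable also depends on the layout created in
step 5: with `replica 2` over four bricks, consecutive bricks pair up into
replica sets (a 2 x 2 distributed-replicate volume), so each file has copies
on exactly two of the four servers. A sketch of that pairing rule, assuming
the brick order from step 5 (replica_sets is a hypothetical helper, not a
gluster command):

```shell
replica_sets() {
    # For `replica N`, consecutive bricks in the create command form a
    # replica set; with replica 2 and 4 bricks that is two sets, so each
    # file lives on exactly one pair of servers.
    local n=$1; shift
    local i=0 set="" brick
    for brick in "$@"; do
        set="$set $brick"
        i=$((i + 1))
        if [ "$i" -eq "$n" ]; then
            echo "replica set:$set"
            i=0; set=""
        fi
    done
}

# e.g. replica_sets 2 gfs1:/tank/avalon/gfs gfs2:/tank/avalon/gfs \
#                     gfs3:/tank/avalon/gfs gfs4:/tank/avalon/gfs
```

So shutting down gfs1 should still leave gfs2 serving the same replica set;
losing both members of a set would make those files unavailable, but the
mount itself should survive either way.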

Additional info:
SELinux on the servers is not producing AVC messages and the bricks are
labelled. There was one name_bind AVC message, but after creating a policy
package (pp) for it and rebooting it no longer appears.
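The "creating a pp" step above is typically done with audit2allow. A hedged
sketch, assuming the audit tools are installed; the module name my-glusterd
and the build_selinux_module wrapper are illustrative only (pass `echo` as
the runner to dry-run without root):

```shell
build_selinux_module() {
    # Regenerate a local policy module ("pp") from the logged glusterd
    # name_bind denial, then load it. Module name my-glusterd is an
    # assumption. $1 is an optional runner (e.g. echo for a dry run).
    local run=${1:-}
    $run sh -c "ausearch -m avc -c glusterd | audit2allow -M my-glusterd"
    $run semodule -i my-glusterd.pp
}
```

Usage: `build_selinux_module` on the affected server, or
`build_selinux_module echo` to just print the commands.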

AVC Report
========================================================
# date time comm subj syscall class permission obj event
========================================================
1. 03/22/2017 19:46:07 ? system_u:system_r:init_t:s0 0 (null) (null) (null)
unset 608
2. 03/23/2017 09:40:54 glusterd system_u:system_r:glusterd_t:s0 49 tcp_socket
name_bind system_u:object_r:ephemeral_port_t:s0 denied 1649
3. 03/23/2017 11:50:01 ? system_u:system_r:init_t:s0 0 (null) (null) (null)
unset 2157
4. 03/23/2017 15:15:30 glusterd system_u:system_r:glusterd_t:s0 49 tcp_socket
name_bind system_u:object_r:ephemeral_port_t:s0 denied 83
5. 03/23/2017 15:40:01 ? system_u:system_r:init_t:s0 0 (null) (null) (null)
unset 132

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.


More information about the Bugs mailing list