[Bugs] [Bug 1540249] Gluster is trying to use a port outside documentation and firewalld's glusterfs.xml

bugzilla at redhat.com bugzilla at redhat.com
Tue Jan 30 15:45:21 UTC 2018


https://bugzilla.redhat.com/show_bug.cgi?id=1540249



--- Comment #1 from devianca at gmail.com ---
To remind everyone, I bet this is a firewall issue.

Nevertheless, here is some more info from a situation like this:

node1:
[root@ProdigyX ~]# gluster pool list
UUID                                    Hostname        State
xxx    10.250.1.1      Disconnected
yyy    localhost       Connected
[root@ProdigyX ~]# gluster volume info replica1

Volume Name: replica1
Type: Replicate
Volume ID: zzz
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.250.1.2:/array0/brick1
Brick2: 10.250.1.1:/raid0array1/brick2
Options Reconfigured:
client.bind-insecure: off
performance.client-io-threads: off
auth.allow: 10.250.1.1,10.250.1.2
transport.address-family: inet
nfs.disable: on
server.event-threads: 8
performance.io-thread-count: 64
performance.cache-size: 32MB
performance.write-behind-window-size: 64MB
server.allow-insecure: off
[root@ProdigyX ~]# gluster volume status replica1
Status of volume: replica1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.250.1.2:/array0/brick1             49152     0          Y       1193
Self-heal Daemon on localhost               N/A       N/A        Y       1176

Task Status of Volume replica1
------------------------------------------------------------------------------
There are no active volume tasks

[root@ProdigyX ~]# ps auxwww | grep gluster
root      1164  0.3  0.0 604644 10732 ?        Ssl  14:00   0:00
/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
root      1176  0.0  0.0 593156  7644 ?        Ssl  14:00   0:00
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log
-S /var/run/gluster/6c565af3c4462a80c526b26f74d90dca.socket --xlator-option
*replicate*.node-uuid=yyy
root      1193  0.0  0.0 1612800 16356 ?       Ssl  14:00   0:00
/usr/sbin/glusterfsd -s 10.250.1.2 --volfile-id
replica1.10.250.1.2.array0-brick1 -p
/var/run/gluster/vols/replica1/10.250.1.2-array0-brick1.pid -S
/var/run/gluster/9329e7359a1938faf4767c564e490de5.socket --brick-name
/array0/brick1 -l /var/log/glusterfs/bricks/array0-brick1.log --xlator-option
*-posix.glusterd-uuid=yyy --brick-port 49152 --xlator-option
replica1-server.listen-port=49152
root      2171  0.1  0.0 550584  9496 ?        Ssl  14:01   0:00
/usr/sbin/glusterfs --volfile-server=127.0.0.1 --volfile-id=/replica1 /gluster
root      2343  0.0  0.0 112676   972 pts/0    S+   14:01   0:00 grep
--color=auto gluster
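
Not a fix, but a quick cross-check from node1, assuming the peer's brick really listens on the 49152 reported above and that glusterd uses the usual TCP 24007 (just a sketch using bash's /dev/tcp so no extra tools are needed):

# can node1 reach glusterd on the disconnected peer?
timeout 3 bash -c ': </dev/tcp/10.250.1.1/24007' && echo 24007 open || echo 24007 blocked
# can node1 reach the peer's brick port?
timeout 3 bash -c ': </dev/tcp/10.250.1.1/49152' && echo 49152 open || echo 49152 blocked
# which ports are the local gluster processes actually bound to?
ss -tlnp | grep gluster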

node2:
[root@BUNKER ~]# gluster pool list
UUID                                    Hostname        State
yyy    10.250.1.2      Connected
xxx    localhost       Connected
[root@BUNKER ~]# gluster volume info replica1

Volume Name: replica1
Type: Replicate
Volume ID: zzz
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.250.1.2:/array0/brick1
Brick2: 10.250.1.1:/raid0array1/brick2
Options Reconfigured:
server.allow-insecure: off
performance.write-behind-window-size: 64MB
performance.cache-size: 32MB
performance.io-thread-count: 64
server.event-threads: 8
nfs.disable: on
transport.address-family: inet
auth.allow: 10.250.1.1,10.250.1.2
performance.client-io-threads: off
client.bind-insecure: off
[root@BUNKER ~]# gluster volume status replica1
Status of volume: replica1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.250.1.2:/array0/brick1             49152     0          Y       1193
Brick 10.250.1.1:/raid0array1/brick2        49152     0          Y       1400
Self-heal Daemon on localhost               N/A       N/A        Y       1503
Self-heal Daemon on 10.250.1.2              N/A       N/A        Y       1176

Task Status of Volume replica1
------------------------------------------------------------------------------
There are no active volume tasks

[root@BUNKER ~]# ps auxwww |grep gluster
root       869  0.0  0.2 604652  9560 ?        Ssl  12:35   0:01
/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
root      1400  0.0  0.5 1416192 20312 ?       Ssl  12:35   0:00
/usr/sbin/glusterfsd -s 10.250.1.1 --volfile-id
replica1.10.250.1.1.raid0array1-brick2 -p
/var/run/gluster/vols/replica1/10.250.1.1-raid0array1-brick2.pid -S
/var/run/gluster/b1821a9027697b21620a8c5abc7a8fb9.socket --brick-name
/raid0array1/brick2 -l /var/log/glusterfs/bricks/raid0array1-brick2.log
--xlator-option *-posix.glusterd-uuid=xxx --brick-port 49152 --xlator-option
replica1-server.listen-port=49152
root      1503  0.0  0.4 683312 17008 ?        Ssl  12:39   0:00
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log
-S /var/run/gluster/941dd362146b7df8e34dbccb220962e5.socket --xlator-option
*replicate*.node-uuid=xxx
root      1918  0.0  0.0 112680   984 pts/0    S+   14:04   0:00 grep
--color=auto gluster
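
Since the bug is about a port outside what firewalld's glusterfs.xml allows, a minimal way to compare the two sides (a sketch; it assumes firewalld is running and that the active zone is "public", so adjust the zone name to your setup):

# ports permitted by the shipped service definition (glusterfs.xml)
firewall-cmd --info-service=glusterfs
# what the active zone currently allows
firewall-cmd --zone=public --list-all
# ports the bricks actually report
gluster volume status replica1

If a brick port shows up in the volume status but not in the firewall output, either enabling the glusterfs service or opening the brick port range should work around it (49152 is Gluster's default base port; the upper bound below is only an example, size it to the number of bricks you run):

firewall-cmd --zone=public --add-service=glusterfs --permanent
# or, more narrowly:
firewall-cmd --zone=public --add-port=49152-49251/tcp --permanent
firewall-cmd --reload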
