[Bugs] [Bug 1540249] Gluster is trying to use a port outside documentation and firewalld's glusterfs.xml
bugzilla at redhat.com
Fri Feb 2 22:46:00 UTC 2018
https://bugzilla.redhat.com/show_bug.cgi?id=1540249
--- Comment #10 from devianca at gmail.com ---
What's the progress?
More logs below: uptime, firewalld and glusterd start times, pool list, peer status, volume status, and netstat, for both nodes. A quick sketch of how I compare firewalld's glusterfs service with the ports gluster actually uses comes first.
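Roughly, this is how I compare what firewalld's shipped glusterfs service allows against the ports gluster really uses. The service name and the /usr/lib/firewalld path are the stock locations on my machines, so treat them as assumptions and adjust for your distro:

# Ports the firewalld service definition claims to cover:
firewall-cmd --info-service=glusterfs
cat /usr/lib/firewalld/services/glusterfs.xml
# What the active zone actually permits right now:
firewall-cmd --list-all
# Ports glusterd/glusterfsd are listening on or connecting from:
netstat -ntlp | grep gluster
netstat -nap | grep gluster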
Node1:
[root at ProdigyX ~]# uptime
23:42:48 up 3 days, 6:50, 1 user, load average: 0,01, 0,03, 0,05
[root at ProdigyX ~]# systemctl status firewalld --no-pager --full
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: active (running) since tor 2018-01-30 16:52:39 CET; 3 days ago
Docs: man:firewalld(1)
Main PID: 859 (firewalld)
CGroup: /system.slice/firewalld.service
        └─859 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid --debug=10
jan 30 16:52:38 ProdigyX systemd[1]: Starting firewalld - dynamic firewall daemon...
jan 30 16:52:39 ProdigyX systemd[1]: Started firewalld - dynamic firewall daemon.
jan 30 16:52:40 ProdigyX firewalld[859]: WARNING: ICMP type 'beyond-scope' is not supported by the kernel for ipv6.
jan 30 16:52:40 ProdigyX firewalld[859]: WARNING: beyond-scope: INVALID_ICMPTYPE: No supported ICMP type., ignoring for run-time.
jan 30 16:52:40 ProdigyX firewalld[859]: WARNING: ICMP type 'failed-policy' is not supported by the kernel for ipv6.
jan 30 16:52:40 ProdigyX firewalld[859]: WARNING: failed-policy: INVALID_ICMPTYPE: No supported ICMP type., ignoring for run-time.
jan 30 16:52:40 ProdigyX firewalld[859]: WARNING: ICMP type 'reject-route' is not supported by the kernel for ipv6.
jan 30 16:52:40 ProdigyX firewalld[859]: WARNING: reject-route: INVALID_ICMPTYPE: No supported ICMP type., ignoring for run-time.
[root at ProdigyX ~]# systemctl status glusterd --no-pager --full
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/etc/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/glusterd.service.d
         └─override.conf
Active: active (running) since tor 2018-01-30 16:53:19 CET; 3 days ago
Main PID: 1163 (glusterd)
CGroup: /system.slice/glusterd.service
        ├─1163 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
        ├─1175 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/6c565af3c4462a80c526b26f74d90dca.socket --xlator-option *replicate*.node-uuid=f7976943-b81a-4bb4-a1fb-06253bf064c4
        └─1192 /usr/sbin/glusterfsd -s 10.250.1.2 --volfile-id replica1.10.250.1.2.array0-brick1 -p /var/run/gluster/vols/replica1/10.250.1.2-array0-brick1.pid -S /var/run/gluster/9329e7359a1938faf4767c564e490de5.socket --brick-name /array0/brick1 -l /var/log/glusterfs/bricks/array0-brick1.log --xlator-option *-posix.glusterd-uuid=f7976943-b81a-4bb4-a1fb-06253bf064c4 --brick-port 49152 --xlator-option replica1-server.listen-port=49152
jan 30 16:52:43 ProdigyX systemd[1]: Starting GlusterFS, a clustered file-system server...
jan 30 16:52:48 ProdigyX bash[1174]: Local brick online.
jan 30 16:53:19 ProdigyX systemd[1]: Started GlusterFS, a clustered file-system server.
[root at ProdigyX ~]# gluster pool list
UUID                                    Hostname        State
2f6697f4-2529-4072-910c-8862fdc43562    10.250.1.1      Disconnected
f7976943-b81a-4bb4-a1fb-06253bf064c4    localhost       Connected
[root at ProdigyX ~]# gluster peer status
Number of Peers: 1
Hostname: 10.250.1.1
Uuid: 2f6697f4-2529-4072-910c-8862fdc43562
State: Peer in Cluster (Disconnected)
[root at ProdigyX ~]# gluster volume status
Status of volume: replica1
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.250.1.2:/array0/brick1              49152     0          Y       1192
Self-heal Daemon on localhost                N/A       N/A        Y       1175
Task Status of Volume replica1
------------------------------------------------------------------------------
There are no active volume tasks
[root at ProdigyX ~]# netstat -nap | grep 49151
tcp        0      1 10.250.1.2:49151        10.250.1.1:24007        SYN_SENT    1163/glusterd
tcp        0      0 10.250.1.2:24007        10.250.1.1:49151        ESTABLISHED 1163/glusterd
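This is the interesting bit on Node1: glusterd's outgoing connection to 10.250.1.1:24007 is stuck in SYN_SENT, and the local port it bound is 49151, just below the 49152+ range the docs and firewalld's glusterfs.xml cover. A rough way to keep watching which source ports glusterd picks for the management connection (plain netstat in a loop, nothing gluster-specific, the interval is arbitrary):

# Watch which local ports glusterd uses for connections to the peer's
# management port 24007; run on either node, Ctrl-C to stop.
while sleep 5; do
    date
    netstat -nap | grep ':24007'
done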
Node2:
[root at BUNKER ~]# uptime
23:42:08 up 3 days, 6:54, 1 user, load average: 0,00, 0,01, 0,05
[root at BUNKER ~]# systemctl status firewalld --no-pager --full
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: active (running) since tor 2018-01-30 16:47:48 CET; 3 days ago
Docs: man:firewalld(1)
Main PID: 670 (firewalld)
CGroup: /system.slice/firewalld.service
        └─670 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid
jan 30 16:47:47 BUNKER systemd[1]: Starting firewalld - dynamic firewall daemon...
jan 30 16:47:48 BUNKER systemd[1]: Started firewalld - dynamic firewall daemon.
jan 30 16:47:49 BUNKER firewalld[670]: WARNING: ICMP type 'beyond-scope' is not supported by the kernel for ipv6.
jan 30 16:47:49 BUNKER firewalld[670]: WARNING: beyond-scope: INVALID_ICMPTYPE: No supported ICMP type., ignoring for run-time.
jan 30 16:47:49 BUNKER firewalld[670]: WARNING: ICMP type 'failed-policy' is not supported by the kernel for ipv6.
jan 30 16:47:49 BUNKER firewalld[670]: WARNING: failed-policy: INVALID_ICMPTYPE: No supported ICMP type., ignoring for run-time.
jan 30 16:47:49 BUNKER firewalld[670]: WARNING: ICMP type 'reject-route' is not supported by the kernel for ipv6.
jan 30 16:47:49 BUNKER firewalld[670]: WARNING: reject-route: INVALID_ICMPTYPE: No supported ICMP type., ignoring for run-time.
[root at BUNKER ~]# systemctl status glusterd --no-pager --full
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/etc/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/glusterd.service.d
         └─override.conf
Active: active (running) since tor 2018-01-30 16:47:52 CET; 3 days ago
Main PID: 870 (glusterd)
CGroup: /system.slice/glusterd.service
        ├─ 870 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
        ├─1147 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/941dd362146b7df8e34dbccb220962e5.socket --xlator-option *replicate*.node-uuid=2f6697f4-2529-4072-910c-8862fdc43562
        └─1395 /usr/sbin/glusterfsd -s 10.250.1.1 --volfile-id replica1.10.250.1.1.raid0array1-brick2 -p /var/run/gluster/vols/replica1/10.250.1.1-raid0array1-brick2.pid -S /var/run/gluster/b1821a9027697b21620a8c5abc7a8fb9.socket --brick-name /raid0array1/brick2 -l /var/log/glusterfs/bricks/raid0array1-brick2.log --xlator-option *-posix.glusterd-uuid=2f6697f4-2529-4072-910c-8862fdc43562 --brick-port 49152 --xlator-option replica1-server.listen-port=49152
jan 30 16:47:50 BUNKER systemd[1]: Starting GlusterFS, a clustered file-system server...
jan 30 16:47:52 BUNKER systemd[1]: Started GlusterFS, a clustered file-system server.
[root at BUNKER ~]# gluster pool list
UUID                                    Hostname        State
f7976943-b81a-4bb4-a1fb-06253bf064c4    10.250.1.2      Connected
2f6697f4-2529-4072-910c-8862fdc43562    localhost       Connected
[root at BUNKER ~]# gluster peer status
Number of Peers: 1
Hostname: 10.250.1.2
Uuid: f7976943-b81a-4bb4-a1fb-06253bf064c4
State: Peer in Cluster (Connected)
[root at BUNKER ~]# gluster volume status
Status of volume: replica1
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.250.1.2:/array0/brick1              49152     0          Y       1192
Brick 10.250.1.1:/raid0array1/brick2         49152     0          Y       1395
Self-heal Daemon on localhost                N/A       N/A        Y       1147
Self-heal Daemon on 10.250.1.2               N/A       N/A        Y       1175
Task Status of Volume replica1
------------------------------------------------------------------------------
There are no active volume tasks
[root at BUNKER ~]# netstat -nap | grep 49151
tcp        0      0 10.250.1.1:24007        10.250.1.2:49151        SYN_RECV    -
tcp        0      0 10.250.1.1:49151        10.250.1.2:24007        ESTABLISHED 870/glusterd
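Same picture from Node2's side: the SYN from 10.250.1.2:49151 arrives (SYN_RECV) but the handshake never completes. To see what, if anything, firewalld itself is rejecting while glusterd retries, denied-packet logging can be switched on temporarily on both nodes; I'm assuming the extra kernel-log noise is acceptable for a short test:

# Log every packet firewalld denies, watch the kernel log while glusterd
# retries the peer connection, then switch the logging back off.
firewall-cmd --set-log-denied=all
journalctl -k -f | grep -iE 'reject|drop'
firewall-cmd --set-log-denied=off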