[Gluster-users] Firewall ports with v 3.5.2 grumble time
Todd Stansell
todd at stansell.org
Thu Oct 30 18:21:32 UTC 2014
This is because in 3.4, they changed the brick port range. It's mentioned
on
https://forge.gluster.org/gluster-docs-project/pages/GlusterFS_34_Release_Notes:
"Brick ports will now listen from 49152 onwards (instead of 24009 onwards
as with previous releases). The brick port assignment scheme is now
compliant with IANA guidelines."
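If it helps, a rough iptables sketch (just an illustration, not from the
docs; adjust the upper end of the brick range to however many bricks each
server carries):

  # glusterd management plus portmapper; 24008 is, as far as I know,
  # only needed if you use RDMA
  iptables -A INPUT -p tcp -m multiport --dports 111,24007,24008 -j ACCEPT
  # brick ports now start at 49152, one per brick on that server
  iptables -A INPUT -p tcp --dport 49152:49159 -j ACCEPT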
Sadly, in my experience it is very difficult to find what you need in the
gluster documentation.
Todd
-----Original Message-----
From: gluster-users-bounces at gluster.org
[mailto:gluster-users-bounces at gluster.org] On Behalf Of Osborne, Paul
(paul.osborne at canterbury.ac.uk)
Sent: Thursday, October 30, 2014 6:59 AM
To: gluster-users at gluster.org
Subject: [Gluster-users] Firewall ports with v 3.5.2 grumble time
Hi,
I have a requirement to run my gluster hosts within a firewalled section of
the network, with the consumer hosts in a different segment due to IP
address preservation. Part of our security policy requires that we run
local firewalls on every host, so I have to get the network access locked
down appropriately.
I am running 3.5.2 using the packages provided in the Gluster package
repository as my Linux distribution only includes packages for 3.2 which
seems somewhat ancient.
Following the documentation here:
http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting
I opened up the relevant ports:
34865 - 34867 for gluster
111 for the portmapper
24009 - 24012 as I am using 2 bricks
This, though, contradicts:
http://gluster.org/community/documentation/index.php/Gluster_3.2:_Installing_GlusterFS_on_Red_Hat_Package_Manager_(RPM)_Distributions
Which says:
"Ensure that TCP ports 111, 24007,24008, 24009-(24009 + number of bricks
across all volumes) are open on all Gluster servers. If you will be using
NFS, open additional ports 38465 to 38467"
What has not been helpful is that there was no mention of port 2049 for NFS
over TCP - which would have saved me some time, although that is probably
my own mistake as I should have known.
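For anyone else in the same position, the NFS side looks roughly like this
with iptables (just a sketch; the portmapper is normally reachable over UDP
as well as TCP):

  # portmapper, NFS over TCP, and the gluster NFS helper ports
  iptables -A INPUT -p tcp -m multiport --dports 111,2049,38465:38467 -j ACCEPT
  iptables -A INPUT -p udp --dport 111 -j ACCEPT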
To really confuse matters, I noticed that the bricks were not syncing
anyway, and a look at the logs reveals:
/var/log/glusterfs/glfsheal-www.log:[2014-10-30 07:39:48.428286] I
[client-handshake.c:1462:client_setvolume_cbk] 0-www-client-1: Connected to
111.222.333.444:49154, attached to remote volume '/srv/hod/lampe-www'.
along with other entries which show that I actually need ports 49154 and
49155 open as well.
Even gluster volume status reveals some of the ports:
gluster> volume status
Status of volume: www
Gluster process                             Port    Online  Pid
------------------------------------------------------------------------------
Brick 194.82.210.140:/srv/hod/lampe-www     49154   Y       3035
Brick 194.82.210.130:/srv/hod/lampe-www     49155   Y       16160
NFS Server on localhost                     2049    Y       16062
Self-heal Daemon on localhost               N/A     Y       16072
NFS Server on gfse-isr-01                   2049    Y       3040
Self-heal Daemon on gfse-isr-01             N/A     Y       3045

Task Status of Volume www
------------------------------------------------------------------------------
There are no active volume tasks
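As an aside, something like this at least shows which ports the bricks are
actually listening on, so the firewall rules can be checked against reality
(a rough sketch, assuming the column layout above):

  gluster volume status www | grep '^Brick'
  # or just the port column:
  gluster volume status www | awk '/^Brick/ {print $(NF-2)}'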
So my query here is: if the bricks are actually using 49154 and 49155
(which they appear to be), why is this not mentioned in the documentation,
and are there any other ports that I should be aware of?
Thanks
Paul
--
Paul Osborne
Senior Systems Engineer
Infrastructure Services
IT Department
Canterbury Christ Church University
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users