[Gluster-users] Firewall ports with v 3.5.2 grumble time

Jeremy Young jrm16020 at gmail.com
Thu Oct 30 17:59:40 UTC 2014


Hi Paul,

I will agree from experience that finding accurate, up-to-date
documentation on how to do some basic configuration of a Gluster volume can
be difficult.  However, this blog post mentions the updated firewall ports.

http://www.jamescoyle.net/how-to/457-glusterfs-firewall-rules

Get rid of 24009-24012 in your firewall configuration and replace them with
49152-4915X.  If you don't actually need NFS, you can exclude the 3486X
ports that you've opened as well.
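
If it helps, on an iptables-based setup that change might look roughly like the
following. This is only a sketch: the exact upper bound of the brick range
depends on how many bricks you have, and how you persist the rules depends on
your distribution.

# Remove the old per-brick ACCEPT rule (assuming a rule of this exact form was
# added earlier; otherwise drop whatever rule opened 24009-24012)
iptables -D INPUT -p tcp --dport 24009:24012 -j ACCEPT
# Allow the 3.4+ dynamic brick range instead: 49152 plus one port per brick
# (49152-49154 here covers up to three bricks)
iptables -A INPUT -p tcp --dport 49152:49154 -j ACCEPT
# Management (24007/24008) and the portmapper (111) are unchanged
iptables -A INPUT -p tcp -m multiport --dports 111,24007,24008 -j ACCEPT
iptables -A INPUT -p udp --dport 111 -j ACCEPT
# Persist the rules however your distribution expects, e.g.
# iptables-save > /etc/iptables/rules.v4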

________________________________________
From: gluster-users-bounces at gluster.org on behalf of Osborne, Paul
<paul.osborne at canterbury.ac.uk>
Sent: Thursday, October 30, 2014 8:58 AM
To: gluster-users at gluster.org
Subject: [Gluster-users] Firewall ports with v 3.5.2 grumble time

Hi,

I have a requirement to run my Gluster hosts within a firewalled section of the
network, with the consumer hosts in a different segment due to IP address
preservation. Part of our security policy requires that we run local firewalls
on every host, so I have to get the network access locked down appropriately.

I am running 3.5.2 using the packages provided in the Gluster package
repository, as my Linux distribution only includes packages for 3.2, which
seems somewhat ancient.

Following the documentation here:
http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting

I opened up the relevant ports (sketched below):

34865 – 34867  for gluster
111 for the portmapper
24009 – 24012 as I am using 2 bricks
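
Concretely, those amounted to rules along these lines (just a sketch of the
iptables equivalent; adjust for whatever local firewall tooling you use):

iptables -A INPUT -p tcp -m multiport --dports 111,24009:24012,34865:34867 -j ACCEPT
iptables -A INPUT -p udp --dport 111 -j ACCEPT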

This, though, contradicts:

http://gluster.org/community/documentation/index.php/Gluster_3.2:_Installing_GlusterFS_on_Red_Hat_Package_Manager_(RPM)_Distributions

which says:

"Ensure that TCP ports 111, 24007,24008, 24009-(24009 + number of bricks
across all volumes) are open on all Gluster servers. If you will be using
NFS, open additional ports 38465 to 38467"
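
Taken literally, with my two bricks across one volume that formula works out to
24009-24011 (three ports), rather than the 24009-24012 I had opened.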

What has not been helpful is that there was no mention of port 2049 for NFS
over TCP, which would have been useful to know, though that is probably my own
mistake as I should have known it.
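
For anyone else hitting this, the missing rule is presumably just something
like (a sketch, iptables again):

iptables -A INPUT -p tcp --dport 2049 -j ACCEPT    # NFS over TCP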

To really confuse matters, I noticed that the bricks were not syncing anyway,
and a look at the logs reveals:

/var/log/glusterfs/glfsheal-www.log:[2014-10-30 07:39:48.428286] I
[client-handshake.c:1462:client_setvolume_cbk] 0-www-client-1: Connected to
111.222.333.444:49154, attached to remote volume '/srv/hod/lampe-www'.

along with other entries which show that I actually also need ports 49154 and
49155 open.

Even gluster volume status reveals some of the ports:

gluster> volume status
Status of volume: www
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick 194.82.210.140:/srv/hod/lampe-www                 49154   Y       3035
Brick 194.82.210.130:/srv/hod/lampe-www                 49155   Y       16160
NFS Server on localhost                                 2049    Y       16062
Self-heal Daemon on localhost                           N/A     Y       16072
NFS Server on gfse-isr-01                               2049    Y       3040
Self-heal Daemon on gfse-isr-01                         N/A     Y       3045

Task Status of Volume www
------------------------------------------------------------------------------
There are no active volume tasks
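
For what it is worth, the brick ports can be scraped out of that output if you
want to script the firewall update; a rough sketch, assuming the column layout
above and iptables:

# Print the TCP port of every brick in every volume
gluster volume status | awk '/^Brick/ {print $3}'

# e.g. open each of them
for p in $(gluster volume status | awk '/^Brick/ {print $3}'); do
    iptables -A INPUT -p tcp --dport "$p" -j ACCEPT
done

Although, since these ports are assigned dynamically from 49152 upwards and can
change when a brick is restarted, opening the whole 49152-(49152 + number of
bricks) range is probably sturdier.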


So my query here is: if the bricks are actually using 49154 and 49155 (which
they appear to be), why is this not mentioned in the documentation, and are
there any other ports that I should be aware of?

Thanks

Paul
--

Paul Osborne
Senior Systems Engineer
Infrastructure Services
IT Department
Canterbury Christ Church University
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

-- 
Jeremy Young <jrm16020 at gmail.com>, M.S., RHCSA

