[Gluster-users] Transport endpoint is not connected
Strahil Nikolov
hunter86_bg at yahoo.com
Tue Jun 4 11:48:02 UTC 2019
Hi David,
You can ensure that 49152-49160 are opened in advance... You never know when you will need to deploy another Gluster volume.
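For systems using firewalld (the default on CentOS/RHEL, where Gluster is commonly deployed), opening that port range ahead of time might look like the following sketch. This is an illustrative config fragment, not taken from the thread; adjust ranges to your own deployment.

```shell
# Sketch, assuming firewalld is the active firewall (requires root).
# 49152-49160 covers the first several brick ports Gluster assigns;
# widen the range if you expect more bricks per node.
firewall-cmd --permanent --add-port=49152-49160/tcp
# 24007 (glusterd management) and 24008 (RDMA) are also needed between peers.
firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --reload
```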
Best regards,
Strahil Nikolov
On Monday, 3 June 2019 at 18:16:00 GMT-4, David Cunningham <dcunningham at voisonics.com> wrote:
Hello all,
We confirmed that the network provider blocking port 49152 was the issue. Thanks for all the help.
On Thu, 30 May 2019 at 16:11, Strahil <hunter86_bg at yahoo.com> wrote:
You can try to run ncat from gfs3:
ncat -z -v gfs1 49152
ncat -z -v gfs2 49152
If ncat fails to connect -> it's definitely a firewall.
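If ncat isn't installed on a node, bash's built-in /dev/tcp pseudo-device can perform an equivalent connect test. A rough sketch (gfs1/gfs2 and port 49152 are the hosts and port from this thread; the check_port helper is illustrative, not a standard tool):

```shell
# Hypothetical helper: TCP connect test using bash's /dev/tcp, no ncat needed.
check_port() {
  local host="$1" port="$2"
  # timeout bounds the attempt; exec opens (and immediately closes) fd 3.
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open: ${host}:${port}"
  else
    echo "closed: ${host}:${port}"
  fi
}

check_port gfs1 49152
check_port gfs2 49152
```

Like ncat -z, a "closed" result for a port the brick is listening on points at a firewall between the nodes.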
Best Regards,
Strahil Nikolov
On May 30, 2019 01:33, David Cunningham <dcunningham at voisonics.com> wrote:
Hi Ravi,
I think it probably is a firewall issue with the network provider. I was hoping to see a specific connection failure message we could send to them, but will take it up with them anyway.
Thanks for your help.
On Wed, 29 May 2019 at 23:10, Ravishankar N <ravishankar at redhat.com> wrote:
I don't see a "Connected to gvol0-client-1" in the log. Perhaps a firewall issue like the last time? Even in the earlier add-brick log from the other email thread, connection to the 2nd brick was not established.
-Ravi
On 29/05/19 2:26 PM, David Cunningham wrote:
Hi Ravi and Joe,
The command "gluster volume status gvol0" shows all 3 nodes as being online, even on gfs3 as below. I've attached the glfsheal-gvol0.log, in which I can't see anything like a connection error. Would you have any further suggestions? Thank you.
[root@gfs3 glusterfs]# gluster volume status gvol0
Status of volume: gvol0
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1:/nodirectwritedata/gluster/gvol0  49152     0          Y       7706
Brick gfs2:/nodirectwritedata/gluster/gvol0  49152     0          Y       7625
Brick gfs3:/nodirectwritedata/gluster/gvol0  49152     0          Y       7307
Self-heal Daemon on localhost                N/A       N/A        Y       7316
Self-heal Daemon on gfs1                     N/A       N/A        Y       40591
Self-heal Daemon on gfs2                     N/A       N/A        Y       7634

Task Status of Volume gvol0
------------------------------------------------------------------------------
There are no active volume tasks
On Wed, 29 May 2019 at 16:26, Ravishankar N <ravishankar at redhat.com> wrote:
On 29/05/19 6:21 AM, David Cunningham wrote:
--
David Cunningham, Voisonics Limited
http://voisonics.com/
USA: +1 213 221 1092
New Zealand: +64 (0)28 2558 3782