[Gluster-users] Transport endpoint is not connected

Strahil hunter86_bg at yahoo.com
Thu May 30 04:11:51 UTC 2019


You can try running ncat from gfs3:

ncat -z -v gfs1 49152
ncat -z -v gfs2 49152

If ncat fails to connect -> it's definitely a firewall issue.
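
If it does turn out to be blocked, a rough sketch of opening the relevant ports with firewalld (assuming a CentOS/RHEL host and the default port layout: 24007 for management, 49152 and up for bricks) would be:

# management port, plus the brick port range this volume is using
firewall-cmd --permanent --add-port=24007/tcp
firewall-cmd --permanent --add-port=49152-49156/tcp
firewall-cmd --reload

Adjust the brick range to whatever "gluster volume status" reports on your nodes.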

Best Regards,
Strahil Nikolov

On May 30, 2019 01:33, David Cunningham <dcunningham at voisonics.com> wrote:
>
> Hi Ravi,
>
> I think it probably is a firewall issue with the network provider. I was hoping to see a specific connection failure message we could send to them, but will take it up with them anyway. 
>
> Thanks for your help.
>
>
> On Wed, 29 May 2019 at 23:10, Ravishankar N <ravishankar at redhat.com> wrote:
>>
>> I don't see a "Connected to gvol0-client-1" message in the log. Perhaps a firewall issue, like last time? Even in the earlier add-brick log from the other email thread, the connection to the 2nd brick was not established.
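>>
>> A quick way to check (a rough sketch, assuming the default client log location) is to grep the heal log for connection messages, e.g.:
>>
>> grep -E 'Connected to|disconnected from' /var/log/glusterfs/glfsheal-gvol0.log
>>
>> Each brick the client can reach should show a "Connected to gvol0-client-N" line.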
>>
>> -Ravi
>>
>> On 29/05/19 2:26 PM, David Cunningham wrote:
>>>
>>> Hi Ravi and Joe,
>>>
>>> The command "gluster volume status gvol0" shows all 3 nodes as online, even when run on gfs3, as shown below. I've attached glfsheal-gvol0.log, in which I can't see anything like a connection error. Would you have any further suggestions? Thank you.
>>>
>>> [root at gfs3 glusterfs]# gluster volume status gvol0
>>> Status of volume: gvol0
>>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>>> ------------------------------------------------------------------------------
>>> Brick gfs1:/nodirectwritedata/gluster/gvol0 49152     0          Y       7706 
>>> Brick gfs2:/nodirectwritedata/gluster/gvol0 49152     0          Y       7625 
>>> Brick gfs3:/nodirectwritedata/gluster/gvol0 49152     0          Y       7307 
>>> Self-heal Daemon on localhost               N/A       N/A        Y       7316 
>>> Self-heal Daemon on gfs1                    N/A       N/A        Y       40591
>>> Self-heal Daemon on gfs2                    N/A       N/A        Y       7634 
>>>  
>>> Task Status of Volume gvol0
>>> ------------------------------------------------------------------------------
>>> There are no active volume tasks
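>>>
>>> A possible cross-check from gfs3 (a rough sketch, assuming the bricks stay on port 49152 as shown above) is to see whether TCP sessions to that port are actually established:
>>>
>>> ss -tn | grep ':49152'
>>>
>>> If gfs1 and gfs2 never show up there even though the bricks report online, the traffic is most likely being dropped in between.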
>>>
>>>
>>> On Wed, 29 May 2019 at 16:26, Ravishankar N <ravishankar at redhat.com> wrote:
>>>>
>>>>
>>>> On 29/05/19 6:21 AM, David Cunningham wrote: