[Gluster-users] different brick using the same port?
Atin Mukherjee
amukherj at redhat.com
Mon Jun 19 14:39:23 UTC 2017
On Mon, Jun 19, 2017 at 7:01 PM, Joe Julian <joe at julianfamily.org> wrote:
> Isn't this just brick multiplexing?
>
I initially suspected that too, but with brick multiplexing the bricks would
share the same pid, which is not the case here.
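[For reference: the pid column makes this distinction mechanical to check. The sketch below flags any (host, port) pair that appears under more than one pid; the sample lines are re-typed from the status output quoted further down in this thread, and the awk field positions are an assumption based on that listing. In practice you would pipe `gluster volume status` through the same filter.]

```shell
# Re-typed brick lines from the two volume status listings in this thread.
# Fields: "Brick" host:path tcp-port rdma-port online pid
printf '%s\n' \
  'Brick 10.10.1.100:/glusterfsvolumes/home/home-rabbitmq-qa/brick 49154 0 Y 1538' \
  'Brick 10.10.1.101:/glusterfsvolumes/home/home-rabbitmq-qa/brick 49154 0 Y 1584' \
  'Brick 10.10.1.100:/glusterfsvolumes/paas/paas-ota-qa/brick 49154 0 Y 10320' \
  'Brick 10.10.1.101:/glusterfsvolumes/paas/paas-ota-qa/brick 49154 0 Y 987' |
awk '{
  # key = host:port; the host is everything before the first ":" in field 2
  split($2, a, ":"); key = a[1] ":" $3
  if (key in seen)
    print "COLLISION " key ": pids " seen[key] " and " $6
  else
    seen[key] = $6
}'
```

Two bricks sharing one port with the *same* pid would be multiplexing; the same port with *different* pids (as the collisions printed above show for both nodes) is a port-allocation bug.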
>
>
> On June 19, 2017 5:55:54 AM PDT, Atin Mukherjee <amukherj at redhat.com>
> wrote:
>>
>>
>>
>> On Sun, Jun 18, 2017 at 1:40 PM, Yong Zhang <hiscal at outlook.com> wrote:
>>
>>> Hi, all
>>>
>>>
>>>
>>> I found that two of my bricks, from different volumes, are using the same
>>> port (49154) on the same glusterfs server node. Is this normal?
>>>
>>
>> No, it's not.
>>
>> Can you please help me with the following information:
>>
>> 1. gluster --version
>> 2. glusterd log & cmd_history logs from both the nodes
>> 3. If you are using the latest gluster release (3.11), a glusterd statedump,
>>    generated by executing:
>>    # kill -SIGUSR1 $(pidof glusterd)
>>    The statedump file will be written to /var/run/gluster.
>>
>>
>>>
>>> Status of volume: home-rabbitmq-qa
>>>
>>> Gluster process                                                  TCP Port  RDMA Port  Online  Pid
>>> ------------------------------------------------------------------------------
>>> Brick 10.10.1.100:/glusterfsvolumes/home/home-rabbitmq-qa/brick  49154     0          Y       1538
>>> Brick 10.10.1.101:/glusterfsvolumes/home/home-rabbitmq-qa/brick  49154     0          Y       1584
>>> Self-heal Daemon on localhost                                    N/A       N/A        Y       4624
>>> Self-heal Daemon on devshglus02.acslocal.honeywell.com           N/A       N/A        Y       2218
>>>
>>> Task Status of Volume home-rabbitmq-qa
>>> ------------------------------------------------------------------------------
>>> There are no active volume tasks
>>>
>>> Status of volume: paas-ota-qa
>>>
>>> Gluster process                                                  TCP Port  RDMA Port  Online  Pid
>>> ------------------------------------------------------------------------------
>>> Brick 10.10.1.100:/glusterfsvolumes/paas/paas-ota-qa/brick       49154     0          Y       10320
>>> Brick 10.10.1.101:/glusterfsvolumes/paas/paas-ota-qa/brick       49154     0          Y       987
>>> Self-heal Daemon on localhost                                    N/A       N/A        Y       4624
>>> Self-heal Daemon on devshglus02.acslocal.honeywell.com           N/A       N/A        Y       2218
>>>
>>> Task Status of Volume paas-ota-qa
>>> ------------------------------------------------------------------------------
>>> There are no active volume tasks
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>
>