[Gluster-users] different brick using the same port?

Joe Julian joe at julianfamily.org
Mon Jun 19 13:31:03 UTC 2017


Isn't this just brick multiplexing?
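
If multiplexing is enabled, bricks from different volumes sharing a single
glusterfsd process and port is expected behaviour. A quick way to check,
assuming a release that has the cluster.brick-multiplex option (3.10 or
later):

    # gluster volume get all cluster.brick-multiplex
    # ps -ef | grep glusterfsd

With multiplexing on, one glusterfsd process serves several bricks; with it
off, each brick should get its own process and port.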

On June 19, 2017 5:55:54 AM PDT, Atin Mukherjee <amukherj at redhat.com> wrote:
>On Sun, Jun 18, 2017 at 1:40 PM, Yong Zhang <hiscal at outlook.com> wrote:
>
>> Hi, all
>>
>>
>>
>> I found that two of my bricks from different volumes are using the same
>> port (49154) on the same glusterfs server node. Is this normal?
>>
>
>No, it's not. Without brick multiplexing, each brick runs in its own
>glusterfsd process listening on its own port, so two bricks on the same
>node shouldn't be sharing 49154.
>
>Could you please share the following information:
>
>1. gluster --version
>2. glusterd log & cmd_history logs from both nodes
>3. If you are using the latest gluster release (3.11), the glusterd
>   statedump output, obtained by executing:
>       # kill -SIGUSR1 $(pidof glusterd)
>   The file will be available in /var/run/gluster.
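>
>Additionally, to see which process is actually listening on that port on
>each node (i.e. whether a single glusterfsd owns 49154 or two processes
>are clashing over it), something like this should work:
>
>    # ss -tlnp | grep 49154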
>
>
>>
>> Status of volume: home-rabbitmq-qa
>>
>> Gluster process                                                  TCP Port  RDMA Port  Online  Pid
>> --------------------------------------------------------------------------------------------------
>> Brick 10.10.1.100:/glusterfsvolumes/home/home-rabbitmq-qa/brick  49154     0          Y       1538
>> Brick 10.10.1.101:/glusterfsvolumes/home/home-rabbitmq-qa/brick  49154     0          Y       1584
>> Self-heal Daemon on localhost                                    N/A       N/A        Y       4624
>> Self-heal Daemon on devshglus02.acslocal.honeywell.com           N/A       N/A        Y       2218
>>
>> Task Status of Volume home-rabbitmq-qa
>> --------------------------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>> Status of volume: paas-ota-qa
>>
>> Gluster process                                                  TCP Port  RDMA Port  Online  Pid
>> --------------------------------------------------------------------------------------------------
>> Brick 10.10.1.100:/glusterfsvolumes/paas/paas-ota-qa/brick       49154     0          Y       10320
>> Brick 10.10.1.101:/glusterfsvolumes/paas/paas-ota-qa/brick       49154     0          Y       987
>> Self-heal Daemon on localhost                                    N/A       N/A        Y       4624
>> Self-heal Daemon on devshglus02.acslocal.honeywell.com           N/A       N/A        Y       2218
>>
>> Task Status of Volume paas-ota-qa
>> --------------------------------------------------------------------------------------------------
>> There are no active volume tasks
>>

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.