[Gluster-users] "gluster volume heal <volume_name> info" does not show all bricks

Strahil hunter86_bg at yahoo.com
Sat Mar 23 19:54:16 UTC 2019


Hi Tomasz,

Do you have a firewall in between the nodes?
Can you test with local firewall (on each node) down ?
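A quick way to check this (a sketch only - adjust service names to your distro; glusterd normally listens on TCP 24007, and the bricks in this volume listen on 49153 according to "gluster volume status"):

```shell
# From each node, check whether the gluster ports on the other peers are
# reachable. glusterd uses TCP 24007; the bricks here use 49153.
nc -zv repo03 24007
nc -zv repo03 49153

# Temporarily stop the local firewall to test (firewalld shown here;
# on Ubuntu it would be "ufw disable"). Remember to re-enable it!
systemctl stop firewalld

# If heal info then lists all three bricks, open the ports permanently:
firewall-cmd --permanent --add-service=glusterfs
firewall-cmd --reload
```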

Best Regards,
Strahil Nikolov

On Mar 23, 2019 05:39, Tomasz Chmielewski <mangoo at wpkg.org> wrote:
>
> There are three replicated bricks: repo01, repo02 and repo03. 
>
> All bricks are online and show the same info for commands like "gluster 
> volume info" and "gluster volume status"; "gluster peer status" shows the 
> other bricks as connected. However, "gluster volume heal storage info" only 
> lists the first two bricks - does anyone have an idea why? If it 
> matters, repo03 was added later. Running gluster 5.5. 
>
>
> # gluster volume heal storage info 
> Brick repo01:/gluster/data 
> Status: Connected 
> Number of entries: 0 
>
> Brick repo02:/gluster/data 
> Status: Connected 
> Number of entries: 0 
>
>
>
> Other info: 
>
>
> # gluster volume info 
>
> Volume Name: storage 
> Type: Replicate 
> Volume ID: 8e533781-01fc-4c8a-b220-9691346fbe3c 
> Status: Started 
> Snapshot Count: 0 
> Number of Bricks: 1 x 3 = 3 
> Transport-type: tcp 
> Bricks: 
> Brick1: repo01:/gluster/data 
> Brick2: repo02:/gluster/data 
> Brick3: repo03:/gluster/data 
> Options Reconfigured: 
> transport.address-family: inet 
> performance.readdir-ahead: on 
> nfs.disable: on 
> auth.allow: 127.0.0.1,10.192.0.30,10.192.0.31,10.192.0.32 
>
>
>
> # gluster volume status 
> Status of volume: storage 
> Gluster process                             TCP Port  RDMA Port  Online  Pid 
> ------------------------------------------------------------------------------ 
> Brick repo01:/gluster/data                  49153     0          Y       1829 
> Brick repo02:/gluster/data                  49153     0          Y       81077 
> Brick repo03:/gluster/data                  49153     0          Y       2497 
> Self-heal Daemon on localhost               N/A       N/A        Y       81100 
> Self-heal Daemon on 10.192.0.30             N/A       N/A        Y       1852 
> Self-heal Daemon on repo03                  N/A       N/A        Y       2520 
>
> Task Status of Volume storage 
> ------------------------------------------------------------------------------ 
> There are no active volume tasks 
>
>
> Tomasz Chmielewski 
> https://lxadm.com 
> _______________________________________________ 
> Gluster-users mailing list 
> Gluster-users at gluster.org 
> https://lists.gluster.org/mailman/listinfo/gluster-users 

