[Gluster-users] Why is a brick missing in `sudo gluster volume status`?

Carlos Capriotti capriotti.carlos at gmail.com
Sat Mar 22 19:01:59 UTC 2014


One thing that caught my eye:

auth.allow: 172.17.*.*

Can you remove that, restart glusterd (or the nodes), and try again?
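
A minimal sketch, assuming the volume name is gv as in your output
(`gluster volume reset` clears a reconfigured option; the exact service
command may differ by distro):

    sudo gluster volume reset gv auth.allow
    sudo service glusterd restart   # run on both rigel and betelgeuse
    sudo gluster volume status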

Also, do you have firewall/iptables rules enabled? If so, consider
testing with the firewall disabled.
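
For a quick test, something like this on each node (glusterd listens on
TCP 24007, and bricks use ports 49152 and up on GlusterFS 3.4+, so those
need to be reachable between the nodes; the firewall commands below are
distro-dependent, so adjust for your system):

    sudo iptables -L -n           # inspect current rules on both nodes
    sudo service iptables stop    # RHEL/CentOS; on Ubuntu try `sudo ufw disable`
                                  # re-enable the firewall once the test is done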

On Sat, Mar 22, 2014 at 7:09 PM, Peng Yu <pengyu.ut at gmail.com> wrote:

> Hi,
>
> There should be two bricks in the volume "gv". But `sudo gluster
> volume status` does not show `betelgeuse:/mnt/raid6/glusterfs_export`.
> Does anybody know what is wrong with this? Thanks.
>
> pengy at rigel:~$ sudo gluster volume status
> Status of volume: gv
> Gluster process                            Port     Online   Pid
> ------------------------------------------------------------------------------
> Brick rigel:/mnt/raid6/glusterfs_export    49152    Y        38971
> NFS Server on localhost                    N/A      N        N/A
> Self-heal Daemon on localhost              N/A      N        N/A
>
> There are no active volume tasks
> pengy at rigel:~$ sudo gluster volume info
>
> Volume Name: gv
> Type: Replicate
> Volume ID: 64754d6c-3736-41d8-afb5-d8071a6a6a07
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: rigel:/mnt/raid6/glusterfs_export
> Brick2: betelgeuse:/mnt/raid6/glusterfs_export
> Options Reconfigured:
> auth.allow: 172.17.*.*
>
> --
> Regards,
> Peng