[Gluster-users] Why is a brick missing in `sudo gluster volume status`?
Peng Yu
pengyu.ut at gmail.com
Sat Mar 22 19:14:35 UTC 2014
Hi,
Why should I remove "auth.allow: 172.17.*.*"? (And how do I remove it?)
pengy at rigel:~$ ifconfig |grep -A 7 '^br1'
br1 Link encap:Ethernet HWaddr c8:1f:66:e2:90:45
inet addr:172.17.1.1 Bcast:172.17.255.255 Mask:255.255.0.0
inet6 addr: fe80::ca1f:66ff:fee2:9045/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:312191 errors:0 dropped:0 overruns:0 frame:0
TX packets:210807 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3741197826 (3.7 GB) TX bytes:25954291 (25.9 MB)
pengy at betelgeuse:~$ ifconfig |grep -A 7 '^br1'
br1 Link encap:Ethernet HWaddr c8:1f:66:df:01:0b
inet addr:172.17.2.1 Bcast:172.17.255.255 Mask:255.255.0.0
inet6 addr: fe80::ca1f:66ff:fedf:10b/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:197382 errors:0 dropped:0 overruns:0 frame:0
TX packets:90443 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:11914450 (11.9 MB) TX bytes:10016451 (10.0 MB)
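Both addresses fall inside 172.17.0.0/16, so they should be matched by
"auth.allow: 172.17.*.*". Would it also help to confirm the peer state?
For example:

pengy at rigel:~$ sudo gluster peer status   # betelgeuse should show "Peer in Cluster (Connected)"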
Here is the firewall information. I don't see anything wrong. Do you
see anything wrong? Thanks.
pengy at rigel:~$ sudo ufw app list
Available applications:
OpenSSH
pengy at rigel:~$ sudo ufw status
Status: inactive
pengy at rigel:~$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT udp -- anywhere anywhere udp dpt:domain
ACCEPT tcp -- anywhere anywhere tcp dpt:domain
ACCEPT udp -- anywhere anywhere udp dpt:bootps
ACCEPT tcp -- anywhere anywhere tcp dpt:bootps
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT     tcp  --  anywhere             192.168.122.200      state NEW,RELATED,ESTABLISHED tcp dpt:ssh
ACCEPT     all  --  anywhere             192.168.122.0/24     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  192.168.122.0/24     anywhere
ACCEPT     all  --  anywhere             anywhere
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
pengy at betelgeuse:~$ sudo ufw app list
Available applications:
OpenSSH
pengy at betelgeuse:~$ sudo ufw status
Status: inactive
pengy at betelgeuse:~$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT udp -- anywhere anywhere udp dpt:domain
ACCEPT tcp -- anywhere anywhere tcp dpt:domain
ACCEPT udp -- anywhere anywhere udp dpt:bootps
ACCEPT tcp -- anywhere anywhere tcp dpt:bootps
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT     all  --  anywhere             192.168.122.0/24     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  192.168.122.0/24     anywhere
ACCEPT     all  --  anywhere             anywhere
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
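In case it is still a connectivity issue, I can also test whether the
GlusterFS ports are reachable between the hosts. A quick check,
assuming the default ports (24007 for glusterd, 49152 for the first
brick):

pengy at rigel:~$ nc -zv betelgeuse 24007   # glusterd management port
pengy at rigel:~$ nc -zv betelgeuse 49152   # brick port reported by `gluster volume status`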
On Sat, Mar 22, 2014 at 2:01 PM, Carlos Capriotti
<capriotti.carlos at gmail.com> wrote:
> One thing that caught my eye:
>
> auth.allow: 172.17.*.*
>
> Can you remove that, restart glusterd/the nodes, and try again?
>
> Also, do you have firewall/iptables rules enabled? If so, consider
> testing with iptables/the firewall disabled.
>
>
>
>
> On Sat, Mar 22, 2014 at 7:09 PM, Peng Yu <pengyu.ut at gmail.com> wrote:
>>
>> Hi,
>>
>> There should be two bricks in the volume "gv". But `sudo gluster
>> volume status` does not show `betelgeuse:/mnt/raid6/glusterfs_export`.
>> Does anybody know what is wrong with this? Thanks.
>>
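>> If it narrows things down: is the brick daemon even running on
>> betelgeuse? (A sketch of what I could check; I am assuming the
>> standard glusterfsd process name and the usual brick log naming.)
>>
>> pengy at betelgeuse:~$ ps aux | grep glusterfsd
>> pengy at betelgeuse:~$ sudo tail /var/log/glusterfs/bricks/mnt-raid6-glusterfs_export.log
>>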
>> pengy at rigel:~$ sudo gluster volume status
>> Status of volume: gv
>> Gluster process                            Port   Online  Pid
>> ------------------------------------------------------------------------------
>> Brick rigel:/mnt/raid6/glusterfs_export    49152  Y       38971
>> NFS Server on localhost                    N/A    N       N/A
>> Self-heal Daemon on localhost              N/A    N       N/A
>>
>> There are no active volume tasks
>> pengy at rigel:~$ sudo gluster volume info
>>
>> Volume Name: gv
>> Type: Replicate
>> Volume ID: 64754d6c-3736-41d8-afb5-d8071a6a6a07
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: rigel:/mnt/raid6/glusterfs_export
>> Brick2: betelgeuse:/mnt/raid6/glusterfs_export
>> Options Reconfigured:
>> auth.allow: 172.17.*.*
>>
>> --
>> Regards,
>> Peng
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
--
Regards,
Peng