[Gluster-devel] gluster volume info output and some questions/advice

Gandalf Corvotempesta gandalf.corvotempesta at gmail.com
Sat Mar 11 09:57:42 UTC 2017


Hi to all

let's assume this volume info output:

Volume Name: r2
Type: Distributed-Replicate
Volume ID: 24a0437a-daa0-4044-8acf-7aa82efd76fd
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: Server1:/home/gfs/r2_0
Brick2: Server2:/home/gfs/r2_1
Brick3: Server1:/home/gfs/r2_2
Brick4: Server2:/home/gfs/r2_3

Can someone explain how to read "Number of Bricks"?

Is the first number the number of "replicated bricks" and the second 
the number of replicas?

In this case, are 2 bricks replicated 2 times?

So, a "Number of Bricks: 2 x 3 = 6" means that 2 bricks are replicated 3 
times, right ?
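
(For context, my understanding is that the first factor is the number of 
distribute subvolumes and the second is the replica count, with the total 
being their product. A minimal sketch of the create command that would 
produce the "2 x 2 = 4" layout above, reusing the brick paths from the 
volume info output:

# "replica 2" with 4 bricks -> 2 replica pairs -> "2 x 2 = 4"
gluster volume create r2 replica 2 transport tcp \
    Server1:/home/gfs/r2_0 Server2:/home/gfs/r2_1 \
    Server1:/home/gfs/r2_2 Server2:/home/gfs/r2_3
gluster volume start r2

Please correct me if I'm reading the notation wrong.)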

Would it be possible to add some indentation to this output? Something like 
this would be much easier to read and understand:

Volume Name: r2
Type: Distributed-Replicate
Volume ID: 24a0437a-daa0-4044-8acf-7aa82efd76fd
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
   Brick1: Server1:/home/gfs/r2_0
   Brick2: Server2:/home/gfs/r2_1

   Brick3: Server1:/home/gfs/r2_2
   Brick4: Server2:/home/gfs/r2_3

Or, if you don't want blank lines:

Volume Name: r2
Type: Distributed-Replicate
Volume ID: 24a0437a-daa0-4044-8acf-7aa82efd76fd
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
   Brick1: Server1:/home/gfs/r2_0
    -> Brick2: Server2:/home/gfs/r2_1
    -> Brick3: Server3:/home/gfs/r2_2
   Brick4: Server1:/home/gfs/r2_3
    -> Brick5: Server2:/home/gfs/r2_4
    -> Brick6: Server3:/home/gfs/r2_5

Now some questions:

Is SNMP integration planned? An SMUX peer integrated in Gluster would be 
awesome for monitoring, and monitoring a storage cluster is mandatory :)
Just a single line to add to snmpd.conf and we would be ready to go.

Currently, what monitoring options could we use? We have some Zabbix 
servers here that use SNMP for monitoring. Is there any workaround with 
Gluster?
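
As a stopgap on our side (just a sketch, untested, item key names are made 
up), the Zabbix agent could poll the gluster CLI directly instead of SNMP, 
e.g. via UserParameter entries in zabbix_agentd.conf:

# Sketch only: poll gluster from the Zabbix agent instead of using SNMP.
# Key names are arbitrary examples; adjust the volume name as needed.
UserParameter=gluster.volume_started,gluster volume info r2 | grep -c '^Status: Started'
UserParameter=gluster.peer_count,gluster peer status | awk '/Number of Peers/ {print $4}'

Not as nice as native SNMP, but it works with plain CLI output.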

Probably an easier feature to implement in Gluster would be triggering 
SNMP traps: when certain events occur, Gluster could automatically send a 
trap. I think this would be easier to develop than a whole SNMP SMUX peer, 
and in this case a setting made on a single node could apply cluster-wide.
If you have 50 nodes, you would only need to configure one node to enable 
traps on all 50 nodes automatically.
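
Until something like that exists, a workaround could be a hook script that 
calls Net-SNMP's snmptrap on volume events. Just a sketch; the hook path, 
OIDs and filename below are my assumptions, not an existing Gluster feature:

#!/bin/bash
# Hypothetical post-start hook, e.g. /var/lib/glusterd/hooks/1/start/post/S99-snmptrap
# Sends a trap to a hard-coded manager; the OIDs are only placeholders.
TRAP_SERVER=1.2.3.4
COMMUNITY=public
snmptrap -v 2c -c "$COMMUNITY" "$TRAP_SERVER" '' \
    1.3.6.1.4.1.99999.1.0.1 \
    1.3.6.1.4.1.99999.1.1 s "gluster volume event on $(hostname)"

The drawback is that this has to be deployed on every node by hand, which 
is exactly why a cluster-wide option would be much nicer.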

You could set (from the CLI) the SNMP target host cluster-wide, and then 
all nodes would be able to send traps.

For example:

gluster volume set test-volume snmp-trap-community 'public'
gluster volume set test-volume snmp-trap-server '1.2.3.4'
