[Gluster-users] nagios-gluster plugin

Ramesh Nachimuthu rnachimu at redhat.com
Mon Nov 30 09:37:45 UTC 2015


You are hitting the NRPE payload size issue. Currently NRPE supports a payload of only 1024 bytes, so the volume details returned by the storage node get cut off at that boundary and the discovery script cannot parse the reply. The payload size has to be increased; this is being tracked in the Nagios tracker at http://tracker.nagios.org/view.php?id=564. In the meantime, you can rebuild NRPE with the patch http://tracker.nagios.org/file_download.php?file_id=269&type=bug and try again.
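
To see why discovery.py fails the way it does, here is a minimal standalone sketch (this is not code from the plugin, and the volume data is made up for illustration): feed json.loads() a reply that was cut at the 1024-byte NRPE boundary and it fails much like the traceback below.

import json

# Build a JSON reply larger than 1024 bytes, roughly shaped like the
# per-volume data the storage node returns (contents are invented).
full_reply = json.dumps({
    "vol%d" % i: {"brickpath": "/media/disk%d" % i,
                  "brickaddress": "172.16.5.66"}
    for i in range(40)
})
assert len(full_reply) > 1024

# NRPE hands back only the first 1024 bytes, ending in a newline, so the
# JSON document arrives chopped mid-value.
truncated = full_reply[:1023] + "\n"

json.loads(full_reply)            # the full reply parses fine
try:
    json.loads(truncated)
except ValueError as e:
    print("truncated reply: %s" % e)

Anything whose full reply is longer than 1024 bytes (volume details for a volume with many bricks, for example) hits this, while short replies such as the volume list still fit and work.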

Note: after rebuilding NRPE with the above patch, you have to update the nrpe daemon on the storage nodes and the nrpe plugins (check_nrpe) on the Nagios server side.
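
After both sides are updated, one quick sanity check is to look at the raw reply size coming back from the storage node. Below is a rough sketch that assumes the paths from your mail; "discover_volume_info" is only a placeholder, since the exact NRPE command discovery.py issues for the volume details is not shown in this thread, so substitute the command whose reply was being truncated.

import subprocess

CHECK_NRPE = "/usr/lib/nagios/plugins/check_nrpe"
HOST = "172.16.5.66"
COMMAND = "discover_volume_info"  # placeholder: use the command whose reply was truncated

# Fetch the raw reply and inspect its size. With the stock nrpe the reply
# stops at exactly 1024 bytes; with the patched daemon and check_nrpe the
# complete JSON should come back.
reply = subprocess.check_output([CHECK_NRPE, "-H", HOST, "-c", COMMAND])
print("reply length: %d bytes" % len(reply))
print("tail: %r" % reply[-40:])

If the length still comes out at exactly 1024 bytes, the unpatched nrpe daemon or check_nrpe binary is still the one in use on one of the two sides.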

Regards,
Ramesh

----- Original Message -----
> From: "Amudhan P" <amudhan83 at gmail.com>
> To: gluster-users at gluster.org
> Sent: Monday, November 30, 2015 2:37:40 PM
> Subject: [Gluster-users] nagios-gluster plugin
> 
> Hi,
> 
> I am trying to use the nagios-gluster plugin to monitor my Gluster test setup on an Ubuntu 14.04 server.
> 
> OS : Ubuntu 14.04
> Gluster version : 3.7.6
> Nagios version : core 3.5.1
> 
> My current setup:
> 
> node 1 = Nagios monitoring server
> node 2 = Gluster data node with 10 bricks (172.16.5.66)
> node 3 = Gluster data node with 10 bricks
> 
> 
> A normal Nagios NRPE command works fine:
> 
> root@node1:~$ /usr/lib/nagios/plugins/check_nrpe -H 172.16.5.66 -c check_load
> OK - load average: 0.00, 0.01, 0.05|load1=0.000;15.000;30.000;0; load5=0.010;10.000;25.000;0; load15=0.050;5.000;20.000;
> 
> But when I try to run discovery.py, I get the error below:
> 
> root@node1:~$ /usr/local/lib/nagios/plugins/gluster/discovery.py -c vmgfstst -H 172.16.5.66
> Traceback (most recent call last):
>   File "/usr/local/lib/nagios/plugins/gluster/discovery.py", line 541, in <module>
>     clusterdata = discoverCluster(args.hostip, args.cluster, args.timeout)
>   File "/usr/local/lib/nagios/plugins/gluster/discovery.py", line 90, in discoverCluster
>     componentlist = discoverVolumes(hostip, timeout)
>   File "/usr/local/lib/nagios/plugins/gluster/discovery.py", line 58, in discoverVolumes
>     timeout=timeout)
>   File "/usr/local/lib/nagios/plugins/gluster/server_utils.py", line 118, in execNRPECommand
>     resultDict = json.loads(outputStr)
>   File "/usr/lib/python2.7/json/__init__.py", line 338, in loads
>     return _default_decoder.decode(s)
>   File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
>     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
>   File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode
>     obj, end = self.scan_once(s, idx)
> ValueError: ('Invalid control character at: line 1 column 1024 (char 1023)',
> '{"vmgfsvol1": {"name": "vmgfsvol1", "disperseCount": "10", "bricks":
> [{"brickpath": "/media/disk1", "brickaddress": "172.16.5.66", "hostUuid":
> "9408184a-298e-4b6c-9273-d1a460ca139a"}, {"brickpath": "/media/disk2",
> "brickaddress": "172.16.5.66", "hostUuid":
> "9408184a-298e-4b6c-9273-d1a460ca139a"}, {"brickpath": "/media/disk3",
> "brickaddress": "172.16.5.66", "hostUuid":
> "9408184a-298e-4b6c-9273-d1a460ca139a"}, {"brickpath": "/media/disk4",
> "brickaddress": "172.16.5.66", "hostUuid":
> "9408184a-298e-4b6c-9273-d1a460ca139a"}, {"brickpath": "/media/disk5",
> "brickaddress": "172.16.5.66", "hostUuid":
> "9408184a-298e-4b6c-9273-d1a460ca139a"}, {"brickpath": "/media/disk6",
> "brickaddress": "172.16.5.66", "hostUuid":
> "9408184a-298e-4b6c-9273-d1a460ca139a"}, {"brickpath": "/media/disk7",
> "brickaddress": "172.16.5.66", "hostUuid":
> "9408184a-298e-4b6c-9273-d1a460ca139a"}, {"brickpath": "/media/disk8",
> "brickaddress": "172.16.5.66", "hostUuid":
> "9408184a-298e-4b6c-9273-d1a460ca139a"}, {"brickpath": "/media/disk9",
> "brickaddre\n')
> 
> 
> But when I run the discover volume list command, it works:
> root@node1:~$ /usr/lib/nagios/plugins/check_nrpe -H 172.16.5.66 -c discover_volume_list
> {"vmgfsvol1": {"type": "DISTRIBUTED_DISPERSE", "name": "vmgfsvol1"}}
> 
> 
> I am looking for help to solve this issue.
> 
> 
> regards
> Amudhan P
> 
> 
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users

