[Gluster-users] Monitoring tools for GlusterFS

Artem Russakovskii archon810 at gmail.com
Sat Aug 22 17:21:52 UTC 2020


The output currently has some whitespace issues.

1. The indentation under Cluster is different from the indentation under
Volumes, which makes the output look inconsistent.
2. Can you please fix the tabulation when volume names vary in length? The
output gets shifted and looks messy as a result (one possible fix is
sketched after the sample output below).

Cluster:
         Status: Healthy                 GlusterFS: 7.7
         Nodes: 4/4                      Volumes: 3/3

Volumes:
             XX2                   Replicate          Started (UP) - 4/4 Bricks Up
                                                      Capacity: (54.03% used) 553.00 GiB/1024.00 GiB (used/total)

 XXXXXXXXXXXXX_data3                   Replicate          Started (UP) - 4/4 Bricks Up
                                                      Capacity: (78.41% used) 392.00 GiB/500.00 GiB (used/total)

 XXXXXXXXX_data1                   Replicate          Started (UP) - 4/4 Bricks Up
                                                      Capacity: (94.24% used) 9.00 TiB/10.00 TiB (used/total)
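
For reference, here is a minimal sketch of how the alignment could be
handled (in Python, since gstatus is a Python tool; the data and field
names below are made up for illustration, not gstatus's actual code):

    # Hypothetical volume records; gstatus's real data structures differ.
    volumes = [
        {"name": "XX2", "type": "Replicate",
         "status": "Started (UP) - 4/4 Bricks Up"},
        {"name": "XXXXXXXXXXXXX_data3", "type": "Replicate",
         "status": "Started (UP) - 4/4 Bricks Up"},
        {"name": "XXXXXXXXX_data1", "type": "Replicate",
         "status": "Started (UP) - 4/4 Bricks Up"},
    ]

    # Pad every name to the longest one so the next column always starts
    # at the same offset, regardless of name length.
    width = max(len(v["name"]) for v in volumes)
    for v in volumes:
        print(f'   {v["name"]:<{width}}   {v["type"]:<12} {v["status"]}')

With the width computed up front from the longest name, the Type and Status
columns stay aligned no matter how long the volume names are.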

Sincerely,
Artem

--
Founder, Android Police <http://www.androidpolice.com>, APK Mirror
<http://www.apkmirror.com/>, Illogical Robot LLC
beerpla.net | @ArtemR <http://twitter.com/ArtemR>


On Fri, Aug 21, 2020 at 7:36 AM Sachidananda Urs <sacchi at kadalu.io> wrote:

>
>
> On Fri, Aug 21, 2020 at 4:07 AM Gilberto Nunes <gilberto.nunes32 at gmail.com>
> wrote:
>
>> Hi Sachidananda!
>> I am trying to use the latest release of gstatus, but when I cut off one
>> of the nodes, I get a timeout...
>>
>
> I tried to reproduce this, but couldn't. How did you cut off the node? I
> killed all the gluster processes on one of the nodes, and this is what I
> see: one of the bricks is shown as Offline, and Nodes is 2/3. Can you
> please tell me the steps to reproduce the issue?
>
> root at master-node:/mnt/gluster/movies# gstatus -a
>
> Cluster:
>          Status: Degraded                GlusterFS: 9dev
>          Nodes: 2/3                      Volumes: 1/1
>
> Volumes:
>           snap-1                   Replicate          Started (PARTIAL) - 1/2 Bricks Up
>                                                       Capacity: (12.02% used) 5.00 GiB/40.00 GiB (used/total)
>                                                       Self-Heal:
>                                                          slave-1:/mnt/brick1/snapr1/r11 (7 File(s) to heal).
>                                                       Snapshots: 2
>                                                          Name: snap_1_today_GMT-2020.08.15-15.39.10
>                                                          Status: Started     Created On: 2020-08-15 15:39:10 +0000
>                                                          Name: snap_2_today_GMT-2020.08.15-15.39.20
>                                                          Status: Stopped     Created On: 2020-08-15 15:39:20 +0000
>                                                       Bricks:
>                                                          Distribute Group 1:
>                                                             slave-1:/mnt/brick1/snapr1/r11   (Online)
>                                                             slave-2:/mnt/brick1/snapr2/r22   (Offline)
>                                                       Quota: Off
>                                                       Note: glusterd/glusterfsd is down in one or more nodes.
>                                                             Sizes might not be accurate.
>
> root at master-node:/mnt/gluster/movies#