[Gluster-Maintainers] [Gluster-devel] [Need Feedback] Monitoring

Amar Tumballi atumball at redhat.com
Tue Aug 1 07:34:18 UTC 2017


On Wed, Jun 14, 2017 at 10:38 PM, Amar Tumballi <atumball at redhat.com> wrote:

>
>
> On Wed, Jun 14, 2017 at 11:30 AM, Michael Scherer <mscherer at redhat.com>
> wrote:
>
>> Le mardi 13 juin 2017 à 11:14 -0400, Amar Tumballi a écrit :
>> > All,
>> >
>> > Please update the github issue [1] with the metrics you need to see
>> > periodically. These may be metrics which help you understand the
>> > health of the process, or counters which give insight into potential
>> > bottlenecks.
>> >
>> > I know the 'statedump' feature exists already. It provides some
>> > information, but it also provides more than the required info, like a
>> > dump of all inode table entries, etc. What I am looking for here is
>> > metrics from which we can get a time-based graph.
>> >
>> > A simple example would be the number of malloc/free calls we have done
>> > till now, and the total 'in-use' buffers (like the info available in
>> > mem-pool), so you can see how memory usage varies depending on
>> > workload. A sample implementation I have looks like this [2].
>> >
>> > Feel free to ask questions, add pointers, and make suggestions. This is
>> > not about the tool for plotting the graph, but about what should go in
>> > the graph.
>>
>> So the first question is:
>> - who is gonna consume the stats ?
>>
>>
> Sysadmins
> Developers
> Support personnel
>
>
>> A sysadmin will not want the same stuff as someone focused on having an
>> SLA to fulfill (like "all requests must respond in under X seconds").
>>
>>
> We will differentiate at the display level to define which 'Dashboard' you
> would need. Admins and Devs would choose different profiles.
>
>
>> A team lead or a manager will not care about the same stuff (like the
>> number of clients served, to show $upper_management that the system
>> is used).
>>
>> And a developer will not want the same stuff either, as I am quite sure
>> that they are likely the only ones caring about malloc/free, along with
>> people focused on optimisation.
>>
>>
> As explained earlier, I want to hear from all the different angles, and
> have code to provide all that information. We can't have different builds
> or different commands for different people. It will be differentiated at
> the display layer.
>
> So, please, everyone, add more data at [1].
>
>
All,

It would be a good feature to have for GlusterFS 4.0. Please start making a
list of things you want to see from the components you own (if you are a
developer). I see some efforts along these lines from ndevos/jdarcy on
mem-pools [10], and RaghavendraG on mallinfo [11].

If you are an admin, please give us feedback on what you would want to see
(on a graph).

[10] -
http://lists.gluster.org/pipermail/gluster-devel/2017-July/053348.html
[11] -
http://lists.gluster.org/pipermail/gluster-devel/2017-July/053215.html
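To make the malloc/free counter idea concrete, here is a minimal sketch of the kind of metric discussed above. This is an illustration only, not the actual gluster mem-pool code; the names (counted_malloc, counted_free, in_use) are hypothetical:

```c
#include <stdatomic.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical global counters, sampled periodically to plot a
 * time-based graph of allocation activity. */
static atomic_size_t total_allocs;
static atomic_size_t total_frees;

/* Wrapper that counts every successful allocation. */
static void *counted_malloc(size_t size)
{
    void *ptr = malloc(size);
    if (ptr)
        atomic_fetch_add(&total_allocs, 1);
    return ptr;
}

/* Wrapper that counts every free of a non-NULL pointer. */
static void counted_free(void *ptr)
{
    if (ptr) {
        atomic_fetch_add(&total_frees, 1);
        free(ptr);
    }
}

/* "In-use" buffers: allocations minus frees so far. Graphing this
 * over time shows how memory usage varies with workload. */
static size_t in_use(void)
{
    return atomic_load(&total_allocs) - atomic_load(&total_frees);
}
```

A monitoring thread would read total_allocs, total_frees, and in_use() at a fixed interval and emit them as time-series samples, which is all the plotting side needs.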

It would be great to add your points to the github issue, so we can complete
them before the next major release.

> Regards,
> Amar
>
> [1] - https://github.com/gluster/glusterfs/issues/168
>
>
>
>> --
>> Michael Scherer
>> Sysadmin, Community Infrastructure and Platform, OSAS
>>
>>
>>
>> _______________________________________________
>> Gluster-devel mailing list
>> Gluster-devel at gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>
>
>
> --
> Amar Tumballi (amarts)
>



-- 
Amar Tumballi (amarts)