[Gluster-devel] How long should metrics collection on a cluster take?

Aravinda Vishwanathapura Krishna Murthy avishwan at redhat.com
Thu Jul 26 04:13:43 UTC 2018

On Wed, Jul 25, 2018 at 11:54 PM Yaniv Kaul <ykaul at redhat.com> wrote:

> On Tue, Jul 24, 2018, 7:20 PM Pranith Kumar Karampuri <pkarampu at redhat.com>
> wrote:
>> hi,
>>       Quite a few commands to monitor gluster at the moment take almost a
>> second to give output.
>> Some categories of these commands:
>> 1) Any command that needs to do some sort of mount/glfs_init.
>>      Examples: 1) the heal info family of commands 2) statfs to find
>> space availability, etc. (On my laptop, with a replica 3 volume with all
>> local bricks, glfs_init takes 0.3 seconds on average.)
>> 2) glusterd commands that need to wait for the previous command to
>> unlock. If the previous command is something related to an lvm snapshot,
>> which takes quite a few seconds, it would be even more time consuming.
>> Nowadays container workloads have hundreds of volumes, if not thousands.
>> If we want to serve any monitoring solution at this scale (I have seen
>> customers use up to 600 volumes at a time, and it will only get bigger),
>> and collecting metrics takes 2 seconds per volume (taking the worst
>> case, with all the major features enabled: snapshot/geo-rep/quota etc.),
>> it will take 20 minutes to collect metrics for a cluster with 600
>> volumes. What are the ways in which we can make this number more
>> manageable? I was initially thinking it may be possible to get gd2 to
>> execute commands in parallel on different volumes, so potentially we
>> could get this done in ~2 seconds. But quite a few of the metrics need a
>> mount, or the equivalent of a mount (glfs_init), to collect information
>> such as statfs, the number of pending heals, quota usage, etc. This may
>> lead to high memory usage, as the mounts tend to be large.
>> I wanted to seek suggestions from others on how to come to a conclusion
>> about which path to take and what problems to solve.
> I would imagine that in gd2 world:
> 1. All stats would be in etcd.

Only static state information is stored in etcd by gd2. For real-time
status, gd2 still has to reach the respective nodes to collect the
details. For example, volume utilization is changed by multiple mounts
that are external to gd2, so to track real-time status gd2 has to poll
brick utilization on every node and update etcd.
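A node-local poller of the kind described here can be sketched as a statfs-backed cache: a background loop refreshes per-brick usage, and reads return the last snapshot instead of touching the filesystem on every query. The type and method names below are assumptions for illustration, not gd2 code (and `syscall.Statfs` is Linux-specific).

```go
package main

import (
	"fmt"
	"sync"
	"syscall"
	"time"
)

// brickUsage holds a point-in-time statfs snapshot for one brick path.
type brickUsage struct {
	Path      string
	TotalB    uint64
	FreeB     uint64
	FetchedAt time.Time
}

// usageCache is the node-local cache a poller would keep fresh; readers
// get the last snapshot rather than triggering a statfs per request.
type usageCache struct {
	mu   sync.RWMutex
	data map[string]brickUsage
}

func newUsageCache() *usageCache {
	return &usageCache{data: make(map[string]brickUsage)}
}

// refresh re-runs statfs(2) on one brick path and stores the result.
func (c *usageCache) refresh(path string) error {
	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		return err
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[path] = brickUsage{
		Path:      path,
		TotalB:    st.Blocks * uint64(st.Bsize),
		FreeB:     st.Bavail * uint64(st.Bsize),
		FetchedAt: time.Now(),
	}
	return nil
}

// get returns the cached snapshot, if any, without touching the disk.
func (c *usageCache) get(path string) (brickUsage, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	u, ok := c.data[path]
	return u, ok
}

func main() {
	cache := newUsageCache()
	if err := cache.refresh("/"); err != nil {
		panic(err)
	}
	u, _ := cache.get("/")
	fmt.Printf("%s: %d of %d bytes free\n", u.Path, u.FreeB, u.TotalB)
}
```

In the real system the refresh loop would run per node and push the snapshots into etcd, so the question becomes the poll interval, not the query latency.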

> 2. There will be a single API call for GetALLVolumesStats or something and
> we won't be asking the client to loop, or there will be a similar efficient
> single API to query and deliver stats for some volumes in a batch ('all
> bricks in host X' for example).

A single API is available for volume stats, but that API is expensive
because the real-time stats are not stored in etcd.

> Worth looking at how it's implemented elsewhere in K8S.
> In any case, when asking for metrics I assume the latest
> already-available values would be returned, and we are not going to
> fetch them when queried. Fetching on query is both fragile (imagine an
> entity that doesn't respond well) and adds latency, and the result will
> be inaccurate a split second later anyway.
> Y.
>> I will be happy to raise github issues based on our conclusions on this
>> mail thread.
>> --
>> Pranith
>> _______________________________________________
>> Gluster-devel mailing list
>> Gluster-devel at gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel

Aravinda VK