<div dir="ltr">To add an additional data point... The operator will need to regularly reconcile the true state of the gluster cluster with the desired state stored in kubernetes. This task will be required frequently (i.e., operator-framework defaults to every 5s even if there are no config changes).<div>The actual amount of data we will need to query from the cluster is currently TBD and likely significantly affected by Heketi/GD1 vs. GD2 choice.</div><div><br></div><div>-John</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr">On Wed, Jul 25, 2018 at 5:45 AM Pranith Kumar Karampuri <<a href="mailto:pkarampu@redhat.com">pkarampu@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jul 24, 2018 at 10:10 PM, Sankarshan Mukhopadhyay <span dir="ltr"><<a href="mailto:sankarshan.mukhopadhyay@gmail.com" target="_blank">sankarshan.mukhopadhyay@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span>On Tue, Jul 24, 2018 at 9:48 PM, Pranith Kumar Karampuri<br>
<<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>> wrote:<br>
>>> hi,
>>> Quite a few commands to monitor gluster at the moment take almost a
>>> second to give output.

>> Is this at the (most) minimum recommended cluster size?

> Yes, with a single volume with 3 bricks, i.e. 3 nodes in the cluster.
>
>>> Some categories of these commands:
>>> 1) Any command that needs to do some sort of mount/glfs_init.
>>>    Examples: the heal info family of commands, and statfs to find
>>>    space availability, etc. (On my laptop, on a replica 3 volume with
>>>    all local bricks, glfs_init takes 0.3 seconds on average.)
>>> 2) glusterd commands that need to wait for the previous command to
>>>    release its lock. If the previous command is something related to an
>>>    lvm snapshot, which takes quite a few seconds, it will be even more
>>>    time consuming.
>>>
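To put a number on category 1 across many volumes, a rough measurement can be done from
the CLI (the volume names below are made up); each heal-info call pays the
mount/glfs_init cost before it can report anything:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Hypothetical volume names; in practice this list would come from
	// `gluster volume list` or the management API.
	volumes := []string{"vol1", "vol2", "vol3"}

	for _, vol := range volumes {
		start := time.Now()
		out, err := exec.Command("gluster", "volume", "heal", vol, "info").CombinedOutput()
		elapsed := time.Since(start)
		if err != nil {
			fmt.Printf("%s: error after %v: %v\n", vol, elapsed, err)
			continue
		}
		fmt.Printf("%s: heal info took %v (%d bytes of output)\n", vol, elapsed, len(out))
	}
}

Whatever per-volume latency this reports, multiplied by the volume count, is the serial
collection time discussed next.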
>>> Nowadays container workloads have hundreds of volumes, if not thousands.
>>> If we want to serve any monitoring solution at this scale (I have seen
>>> customers use up to 600 volumes at a time, and it will only get bigger),
>>> and let's say collecting metrics takes 2 seconds per volume (taking the
>>> worst case, with all major features enabled: snapshot/geo-rep/quota,
>>> etc.), that means it will take 20 minutes to collect metrics for a
>>> cluster with 600 volumes. What are the ways in which we can make this
>>> number more manageable? I was initially thinking it may be possible to
>>> get gd2 to execute commands in parallel on different volumes, so
>>> potentially we could get this done in ~2 seconds. But quite a few of the
>>> metrics need a mount, or the equivalent of a mount (glfs_init), to
>>> collect information like statfs, the number of pending heals, quota
>>> usage, etc. This may lead to high memory usage, as the memory footprint
>>> of these mounts tends to be high.
>>>
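As a sketch of that parallel-collection idea (the function names are hypothetical and
the ~2 second per-volume cost is simulated), bounding the concurrency is one way to
trade collection time against how many mount-equivalents are alive at once:

package main

import (
	"fmt"
	"sync"
	"time"
)

// collectVolumeMetrics is a stand-in for the expensive per-volume work
// (statfs, pending heals, quota usage, ...). Here it just simulates the
// ~2s worst case from the discussion above.
func collectVolumeMetrics(vol string) {
	time.Sleep(2 * time.Second)
}

func collectAll(volumes []string, maxParallel int) time.Duration {
	start := time.Now()
	sem := make(chan struct{}, maxParallel) // bounds concurrent mounts/commands
	var wg sync.WaitGroup
	for _, vol := range volumes {
		wg.Add(1)
		sem <- struct{}{}
		go func(v string) {
			defer wg.Done()
			defer func() { <-sem }()
			collectVolumeMetrics(v)
		}(vol)
	}
	wg.Wait()
	return time.Since(start)
}

func main() {
	volumes := make([]string, 600)
	for i := range volumes {
		volumes[i] = fmt.Sprintf("vol%03d", i)
	}
	fmt.Println("took", collectAll(volumes, 50))
}

With 50 collectors in flight, the 600-volume sweep above takes roughly 24 seconds
instead of 20 minutes, at the cost of keeping around 50 mount-equivalents in memory at
a time.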
>> I am not sure if starting from the "worst example" (it certainly is not)
>> is a good place to start from.

> I didn't understand your statement. Are you saying 600 volumes is a worst
> example?

>> That said, for any environment with that number of disposable volumes,
>> what kind of metrics actually make any sense/impact?

> The same metrics you track for long-running volumes. It is just that the way
> the metrics are interpreted will be different. On a long-running volume, you
> would look at the metrics and try to find out why the volume has not been
> performing as expected over the last hour. Whereas here, you would look at
> the metrics to find out why volumes that were created and deleted in the
> last hour didn't perform as expected.
<div class="m_2965230752172933805HOEnZb"><div class="m_2965230752172933805h5"><br>
> I wanted to seek suggestions from others on how to come to a conclusion<br>
> about which path to take and what problems to solve.<br>
><br>
> I will be happy to raise github issues based on our conclusions on this mail<br>
> thread.<br>
><br>
> --<br>
> Pranith<br>
><br>
>> --
>> sankarshan mukhopadhyay
>> <https://about.me/sankarshan.mukhopadhyay>
>
> --
> Pranith
_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel