[Gluster-devel] New project on the Forge - gstatus

Joe Julian joe at julianfamily.org
Sun May 25 19:58:52 UTC 2014


echo -n

On May 25, 2014 12:18:30 PM PDT, Vipul Nayyar <nayyar_vipul at yahoo.com> wrote:
>Hello,
>
>
>Thanks Avati for your suggestion. I tried studying the output generated
>by the meta xlator. One of the problems I'm facing is that in order to
>enable latency measurement with meta, I need to write '1' into the file
>.meta/measure_latency, but for some reason I'm not able to do that. I
>don't get any permission error while writing, yet the text in the file
>always remains the default. So far I've only been able to change the
>value by hard-coding it in the meta source and rebuilding. I ran
>glusterfs under gdb and set a breakpoint on measure_file_write(). The
>redacted gdb output can be found at
>http://fpaste.org/104770/
>
>As you can see, when I try to write 1 (echo 1 > measure_latency) into
>the file, the data received in iov is "1\n", which yields the correct
>value in num and this->ctx->measure_latency. But somehow
>measure_file_write() is called again with iov="\n\n", presumably
>resetting the value just set. Therefore I feel that this write path is
>not doing its job properly. If you think I've made some mistake,
>please do let me know.
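>
>In case it helps, here is a minimal sketch (purely hypothetical, not
>the actual meta code) of a write-handler helper that only acts when
>the buffer actually carries a digit, so a stray "\n\n" write would be
>ignored instead of resetting the flag:
>
>#include <ctype.h>
>#include <stddef.h>
>
>/* hypothetical helper: scan the written buffer for a leading 0/1,
> * skipping whitespace; return -1 for writes like "\n\n" so the
> * caller can leave this->ctx->measure_latency untouched */
>static int
>parse_measure_flag (const char *buf, size_t len, int *value)
>{
>        for (size_t i = 0; i < len; i++) {
>                if (buf[i] == '0' || buf[i] == '1') {
>                        *value = buf[i] - '0';
>                        return 0;
>                }
>                if (!isspace ((unsigned char) buf[i]))
>                        break;
>        }
>        return -1;
>}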
>
>Regards
>Vipul Nayyar 
>
>
>On Saturday, 17 May 2014 6:24 PM, Kaushal M <kshlmster at gmail.com>
>wrote:
> 
>
>
>Avati, that's awesome! I didn't know that the meta xlator could do
>that already.
>
>
>
>On Sat, May 17, 2014 at 9:30 AM, Anand Avati <avati at gluster.org> wrote:
>
>>KP, Vipul,
>>
>>It will be awesome to get io-stats-like instrumentation on the client
>>side. Here are some further thoughts on how to implement that. If you
>>have a recent git HEAD build, I would suggest exploring the latency
>>stats exposed on the client side through meta at
>>$MNT/.meta/graphs/active/$xlator/profile. You can enable latency
>>measurement with "echo 1 > $MNT/.meta/measure_latency". I would
>>suggest extending these stats with the extra ones io-stats has, and
>>making glusterfsiostats expose them.
>>
>>
>>If you compare libglusterfs/src/latency.c:gf_latency_begin(),
>>gf_latency_end() and gf_latency_update() with the io-stats.c macros
>>UPDATE_PROFILE_STATS() and START_FOP_LATENCY(), you will quickly see
>>that a lot of logic is duplicated between io-stats and latency.c. If
>>you can enhance latency.c to capture the remaining stats that
>>io-stats captures (a rough sketch follows the list below), the
>>benefits of this approach would be:
>>
>>
>>- stats are already getting captured at all xlator levels, not just
>>at the position where io-stats is inserted
>>- the file-like interface makes the stats easily inspectable and
>>consumable, and they are updated on the fly
>>- it conforms with the way the rest of the internals are exposed
>>through $MNT/.meta
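>>
>>As a rough sketch of the enhancement (struct and function names here
>>are illustrative, not the actual latency.c code), the per-fop record
>>and its update could grow min/max like this:
>>
>>#include <stdint.h>
>>
>>/* hypothetical extension of latency.c's per-fop record */
>>typedef struct {
>>        double   min;   /* fastest fop seen, usec */
>>        double   max;   /* slowest fop seen, usec */
>>        double   total; /* cumulative time, usec */
>>        uint64_t count;
>>} fop_latency_t;
>>
>>static void
>>latency_update (fop_latency_t *lat, double elapsed_usec)
>>{
>>        if (lat->count == 0 || elapsed_usec < lat->min)
>>                lat->min = elapsed_usec;
>>        if (elapsed_usec > lat->max)
>>                lat->max = elapsed_usec;
>>        lat->total += elapsed_usec;
>>        lat->count++;
>>}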
>>
>>
>>In order to do this, you might want to look into:
>>
>>
>>- latency.c as of today captures fop count, mean latency and total
>>time, whereas io-stats measures these along with min time, max time
>>and a block-size histogram
>>- extend gf_proc_dump_latency_info() to dump the new stats
>>- either prettify that output like the 'volume profile info' output,
>>or JSONify it like xlators/meta/src/frames-file.c (a JSON sketch
>>follows further below)
>>- add support for cumulative vs interval stats (store an extra copy
>>of this->latencies[]; see the sketch right after this list)
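>>
>>For the cumulative vs interval point, one simple scheme (a sketch,
>>assuming a latencies[] array indexed by fop number as in latency.c):
>>keep a snapshot taken at the last interval query and report the
>>difference against the live counters:
>>
>>#include <stdint.h>
>>
>>/* minimal per-fop counters, just for this sketch */
>>typedef struct { uint64_t count; double total; } fop_stats_t;
>>
>>#define FOP_MAX 64  /* illustrative; one slot per fop type */
>>
>>static fop_stats_t live[FOP_MAX]; /* cumulative since process start */
>>static fop_stats_t snap[FOP_MAX]; /* copy taken at last interval query */
>>
>>/* interval = live minus snapshot; then restart the interval */
>>static void
>>interval_stats (int fop, fop_stats_t *out)
>>{
>>        out->count = live[fop].count - snap[fop].count;
>>        out->total = live[fop].total - snap[fop].total;
>>        snap[fop]  = live[fop];
>>}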
>>
>>
>>etc..
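>>
>>And for the JSONify item, a dump in the spirit of frames-file.c might
>>look like this (the format and names are illustrative only):
>>
>>#include <stdio.h>
>>#include <stdint.h>
>>
>>/* same illustrative record shape as the earlier sketch */
>>typedef struct { double min, max, total; uint64_t count; } fop_latency_t;
>>
>>static void
>>dump_latency_json (FILE *fp, const char *fop, const fop_latency_t *lat)
>>{
>>        double mean = lat->count ? lat->total / lat->count : 0.0;
>>
>>        fprintf (fp, "{ \"fop\": \"%s\", \"count\": %llu, "
>>                 "\"mean_us\": %.3f, \"min_us\": %.3f, \"max_us\": %.3f }\n",
>>                 fop, (unsigned long long) lat->count, mean,
>>                 lat->min, lat->max);
>>}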
>>
>>
>>Thanks!
>>
>>
>>
>>
>>On Fri, Apr 25, 2014 at 9:09 PM, Krishnan Parthasarathi
>><kparthas at redhat.com> wrote:
>>
>>>[Resending due to gluster-devel mailing list issue]
>>>
>>>Apologies for the late reply.
>>>
>>>glusterd uses its socket connection with brick processes (where the
>>>io-stats xlator is loaded) to gather information from io-stats via
>>>an RPC request. This facility is restricted to brick processes as it
>>>stands today.
>>>
>>>Some background...
>>>The io-stats xlator is loaded in both GlusterFS mounts and brick
>>>processes, so we have the capability to monitor I/O statistics on
>>>both sides. To collect I/O statistics on the server side, we have
>>>
>>># gluster volume profile VOLNAME [start | info | stop]
>>>AND
>>># gluster volume top VOLNAME info [and other options]
>>>
>>>We don't have a usable way of gathering I/O statistics (as opposed
>>>to monitoring, though the counters could be enhanced) on the client
>>>side, i.e. for a given mount point. This is the gap glusterfsiostat
>>>aims to fill. We need to remember that the machines hosting
>>>GlusterFS mounts may not have glusterd installed on them.
>>>
>>>We are considering rrdtool as a possible statistics database because
>>>it seems like a natural choice for storing time-series data. Over
>>>and above printing running counters periodically, rrdtool can answer
>>>high-level statistical queries on the statistics logged into it by
>>>the io-stats xlator.
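>>>
>>>For instance, creating an RRD for a running read-fop counter from C
>>>via librrd could look like this (just a sketch; the DS/RRA layout
>>>and file name are illustrative, and error handling is elided):
>>>
>>>#include <rrd.h>
>>>#include <time.h>
>>>
>>>int
>>>create_client_stats_rrd (void)
>>>{
>>>        /* one COUNTER data source sampled every 10s; averaged
>>>         * samples kept for 24 hours (8640 x 10s) */
>>>        const char *args[] = {
>>>                "DS:read_fops:COUNTER:20:0:U",
>>>                "RRA:AVERAGE:0.5:1:8640",
>>>        };
>>>
>>>        return rrd_create_r ("client-iostats.rrd", 10,
>>>                             time (NULL) - 10, 2, args);
>>>}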
>>>
>>>Hope this gives some more clarity on what we are thinking.
>>>
>>>thanks,
>>>Krish
>>>----- Original Message -----
>>>
>>>> Probably me not understanding.
>>>
>>>> The comment "iostats making data available to glusterd over RPC"
>>>> is what I latched on to. I wondered whether this meant that a
>>>> socket could be opened that way to get at the io-stats data flow.
>>>
>>>> Cheers,
>>>
>>>> PC
>>>
>>>> ----- Original Message -----
>>>> > From: "Vipul Nayyar" <nayyar_vipul at yahoo.com>
>>>> > To: "Paul Cuzner" <pcuzner at redhat.com>, "Krishnan Parthasarathi" <kparthas at redhat.com>
>>>> > Cc: "Vijay Bellur" <vbellur at redhat.com>, "gluster-devel" <gluster-devel at nongnu.org>
>>>> > Sent: Thursday, 20 February, 2014 5:06:27 AM
>>>> > Subject: Re: [Gluster-devel] New project on the Forge - gstatus
>>>>
>>>> > Hi Paul,
>>>> >
>>>> > I'm really not sure if this can be done in Python (at least
>>>> > comfortably). Maybe we can tread the same path as Justin's
>>>> > glusterflow in Python, but I don't think all the io-stats
>>>> > counters will be available with the way Justin used Jeff
>>>> > Darcy's previous work to build his tool. I could be wrong; my
>>>> > knowledge is a bit incomplete and based on very little
>>>> > experience as a user and an amateur Gluster developer. Please
>>>> > do correct me if I'm wrong.
>>>> >
>>>> > Regards
>>>> > Vipul Nayyar
>
>_______________________________________________
>Gluster-devel mailing list
>Gluster-devel at gluster.org
>http://supercolony.gluster.org/mailman/listinfo/gluster-devel

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

