[Gluster-devel] Proposal for change regarding latency calculation
Krishnan Parthasarathi
kparthas at redhat.com
Wed Jul 16 07:20:36 UTC 2014
Vipul,
----- Original Message -----
> Hello,
> Following is a proposal for modifying the I/O profiling capability of the
> io-stats xlator. I recently sent a patch (review.gluster.org/#/c/8244/)
> for this, which uses the latency-related functions already present in
> io-stats to dump info through meta, and adds some more data containers
> that track additional fop-related info each time a request goes through
> io-stats. Currently, the measure_latency and count_fop_hits options must
> be enabled before io-stats' custom latency functions can run. I propose
> to remove these two options entirely from io-stats.
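For anyone who hasn't looked at io-stats recently, the gating Vipul
describes boils down to the pattern in the standalone sketch below. To be
clear, the identifiers here (ios_conf, ios_latency, fop_wind) are made up
for illustration and this is not the actual io-stats.c code; the real
xlator keeps one such latency record per GF_FOP_* value.

/*
 * Toy sketch of option-gated latency accounting, in the spirit of
 * io-stats. Identifiers are illustrative, not the real ones.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

struct ios_latency {
        double   total_us;       /* accumulated latency in microseconds */
        double   min_us;
        double   max_us;
        uint64_t count;          /* fop hit counter */
};

struct ios_conf {
        int                measure_latency;  /* volume option */
        int                count_fop_hits;   /* volume option */
        struct ios_latency lat;              /* one per fop in reality */
};

static double
now_us (void)
{
        struct timespec ts;

        clock_gettime (CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
}

/* Stand-in for a request passing through io-stats. */
static void
fop_wind (struct ios_conf *conf)
{
        double begin = 0.0, elapsed;

        /* Nothing is recorded unless the options are on. */
        if (conf->measure_latency)
                begin = now_us ();

        usleep (1000);           /* pretend to perform the actual fop */

        if (conf->count_fop_hits)
                conf->lat.count++;

        if (conf->measure_latency) {
                elapsed = now_us () - begin;
                conf->lat.total_us += elapsed;
                if (conf->lat.min_us == 0.0 || elapsed < conf->lat.min_us)
                        conf->lat.min_us = elapsed;
                if (elapsed > conf->lat.max_us)
                        conf->lat.max_us = elapsed;
        }
}

int
main (void)
{
        struct ios_conf conf = { .measure_latency = 1, .count_fop_hits = 1 };
        int             i;

        for (i = 0; i < 10; i++)
                fop_wind (&conf);

        printf ("hits=%" PRIu64 " avg=%.1f min=%.1f max=%.1f (us)\n",
                conf.lat.count, conf.lat.total_us / conf.lat.count,
                conf.lat.min_us, conf.lat.max_us);
        return 0;
}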
> In order to track I/O performance, these options should either be enabled
> all the time or removed entirely, so that a record of I/O requests is
> kept from mount time onwards. Enabling them only when required cannot
> give you average statistics over the whole period since the start. This
> follows the methodology of the Linux kernel itself, which maintains its
> I/O statistics data structures internally at all times and presents them
> via the /proc filesystem whenever required: no option needs to be
> enabled, and the data available represents statistics since boot time.
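The /proc precedent is worth spelling out: the block-layer counters in
/proc/diskstats, for instance, are maintained unconditionally, and reading
the file is non-destructive, so any number of readers can compute deltas
at their own pace. A toy reader to illustrate:

#include <stdio.h>

int
main (void)
{
        char  line[512];
        FILE *fp = fopen ("/proc/diskstats", "r");

        if (!fp) {
                perror ("fopen");
                return 1;
        }

        /* Reading resets nothing: run this twice and the counters
         * only grow. Nothing had to be enabled beforehand. */
        while (fgets (line, sizeof (line), fp))
                fputs (line, stdout);

        fclose (fp);
        return 0;
}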
> I would like to hear views on this: would having io-stats profiling info
> available all the time be a good thing?
Could you run the following experiment to measure the effect of having
profiling always enabled?
- Fix the I/O workload to be run.
- Setup 1 (control group): run the fixed workload on a volume with both
  profiling options NOT set.
- Setup 2: run the same fixed workload on the same volume with the
  profiling options set.
- In both setups, measure the latencies observed by the workload. You
  could use the time(1) command for a crude measurement.
This should allow us to make an informed decision on whether there is any performance effect
when profiling is enabled on a volume by default.
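Alongside the end-to-end time(1) numbers, it would also help to know the
fixed per-fop cost of the timestamping itself. Below is a rough standalone
sketch; I am assuming a gettimeofday() pair taken around each fop, so
adjust if the implementation differs:

#include <stdio.h>
#include <sys/time.h>

#define ITERATIONS 1000000

int
main (void)
{
        struct timeval begin, end, t;
        long           elapsed_us;
        int            i;

        gettimeofday (&begin, NULL);
        for (i = 0; i < ITERATIONS; i++) {
                /* the timestamp pair profiling adds around one fop */
                gettimeofday (&t, NULL);
                gettimeofday (&t, NULL);
        }
        gettimeofday (&end, NULL);

        elapsed_us = (end.tv_sec - begin.tv_sec) * 1000000L +
                     (end.tv_usec - begin.tv_usec);
        printf ("%.3f us of timestamping overhead per fop\n",
                (double) elapsed_us / ITERATIONS);
        return 0;
}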
> Apart from this, I was going over latency.c in libglusterfs, which does
> a fine job of maintaining latency info for every xlator, and encountered
> an anomaly which I thought should be dealt with. The function
> gf_proc_dump_latency_info, which dumps the latency array for the
> specified xlator, ends by flushing this array through memset after every
> dump. That means you get different latency info every time you read the
> profile file in meta. I think flushing the data structure after every
> dump is wrong: you don't get overall stats since the option was enabled
> at the top of meta, and, more importantly, multiple applications reading
> this file can get wrong info, since the data is cleared after a single
> read.
Clearing the statistics on every read sounds incorrect to me. Could you
please send a patch to fix this?
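To make the failure mode concrete for the archives, here is a toy
standalone program (not gluster code) showing what clearing on read does
to a second reader:

#include <stdio.h>
#include <string.h>

static unsigned long latencies[3] = { 120, 45, 980 }; /* pretend stats */

static void
dump_latency_info (void)
{
        int i;

        for (i = 0; i < 3; i++)
                printf ("fop%d=%lu us  ", i, latencies[i]);
        printf ("\n");

        /* what gf_proc_dump_latency_info does today at the end of
         * every dump -- the line the patch should drop */
        memset (latencies, 0, sizeof (latencies));
}

int
main (void)
{
        printf ("reader A: ");
        dump_latency_info ();   /* sees real numbers, then clears them */
        printf ("reader B: ");
        dump_latency_info ();   /* sees only zeros */
        return 0;
}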
thanks,
Krish
> If my reasoning seems apt to you, I'll send a patch over for evaluation.
> Regards
> Vipul Nayyar