[Gluster-devel] [Questions] How does profile affect glusterfs performance?
Pranith Kumar Karampuri
pkarampu at redhat.com
Mon Jun 5 02:50:20 UTC 2017
I think there have been improvements here to use special instructions to do
the increments instead of taking spin-locks around them. So maybe it doesn't
affect performance as much anymore. I think if you don't see a difference,
then the enhancements are doing a good job :-).
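Roughly, the improvement is the difference between the two counter updates
below. This is only a minimal sketch to show the idea (it assumes GCC/Clang
__atomic builtins plus pthreads); the structure and function names are
illustrative, not the actual io-stats code.

#include <pthread.h>
#include <stdint.h>

struct fop_counter {
        pthread_spinlock_t lock;
        uint64_t           hits;
};

/* Old style: every FOP takes a spin-lock just to bump the counter,
 * so all threads hitting the brick serialize on the same lock. */
void
count_fop_locked(struct fop_counter *c)
{
        pthread_spin_lock(&c->lock);
        c->hits++;
        pthread_spin_unlock(&c->lock);
}

/* Newer style: a single lock-free fetch-and-add; there is no lock
 * acquisition or release around the increment. */
void
count_fop_atomic(struct fop_counter *c)
{
        __atomic_fetch_add(&c->hits, 1, __ATOMIC_RELAXED);
}

Under contention the atomic version avoids the lock acquire/release on every
FOP, so the per-FOP accounting cost is much smaller.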
Which version of gluster are you using?
On Mon, Jun 5, 2017 at 8:09 AM, Xie Changlong <xiechanglong.d at gmail.com>
wrote:
> Hi all
>
> The documentation[1] says that profiling is based on io-stats, and that
> enabling this feature can affect system performance while the profile
> information is being collected.
>
> I ran some tests on two Linux VMware virtual machines with a replica volume
> (I lack the resources for a larger setup). The results show no difference to
> me; the test cases are as follows:
> #dd if=/dev/zero of=test bs=4k count=524288
> #fio --filename=test --iodepth=64 --ioengine=libaio --direct=1 --rw=read --bs=1m --size=2g --numjobs=4 --runtime=10 --group_reporting --name=test-read
> #fio --filename=test --iodepth=64 --ioengine=libaio --direct=1 --rw=write --bs=1m --size=2g --numjobs=4 --runtime=20 --group_reporting --name=test-write
> #fio --filename=test --iodepth=64 --ioengine=libaio --direct=1 --rw=randread --bs=4k --size=2g --numjobs=64 --runtime=20 --group_reporting --name=test-rand-read
> #fio --filename=test --iodepth=64 --ioengine=libaio --direct=1 --rw=randwrite --bs=4k --size=2g --numjobs=64 --runtime=20 --group_reporting --name=test-rand-write
> It's said that fio is only meaningful for large files, and I also suspect
> that my test infrastructure is too small. So the question is: do you have
> detailed data on how profiling affects performance?
>
> Furthermore, we want to obtain detailed read/write IOPS and bandwidth data
> for each brick. It seems that only profile provides the related data from
> which these can be calculated; if I'm wrong, please correct me.
>
> If profile really affects performance that much, would you mind a new command
> such as "gluster volume io [nfs]" to acquire per-brick read/write FOP and data
> statistics? Or would you help us review such a change?
>
> [1] https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/administration_guide/chap-monitoring_red_hat_storage_workload#sect-Running_the_Volume_Profile_Command
> --
> Thanks
> -Xie
--
Pranith