[Gluster-devel] Issues with Random read/write

Benjamin Turner bennyturns at gmail.com
Fri Jul 24 14:40:16 UTC 2015


Pre-3.7 glusterfs had a single-threaded event listener that would peg a
CPU at 100%, causing it to become CPU bound.  With the 3.7 release we
changed to a multi-threaded event listener, which lets the CPU load be
spread across multiple threads / cores.  In my experience I still see
workloads becoming CPU bound with the default of 2 event threads, so I
monitor things with top -H while running my tests to look for hot threads
(threads sitting at 100% CPU).  In my testing I find that event-threads = 4
works best for me, but each environment is different, so it is worth doing
some tuning to see what works best for you.
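
For reference, a minimal sketch of what that monitoring and tuning looks like
(the volume name "myvol" is a placeholder, not taken from this thread):

  # watch per-thread CPU usage on clients and servers during a test run
  top -H -p $(pgrep -d, gluster)

  # raise the event-thread counts from the default of 2
  gluster volume set myvol client.event-threads 4
  gluster volume set myvol server.event-threads 4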

HTH

-b

On Fri, Jul 24, 2015 at 9:44 AM, Subrata Ghosh <subrata.ghosh at ericsson.com>
wrote:

>  Hi All,
>
> Thank you very much for the help. We will experiment and check. Your
> last two suggestions might help provide better clarity.
>
>
>
> We are using gluster 3.3.2. Right now we have some limitations in the code
> base that prevent an upgrade. We plan to upgrade to the latest release later.
>
>
>
> Regards,
>
> Subrata
>
>
>
> *From:* Benjamin Turner [mailto:bennyturns at gmail.com]
> *Sent:* Thursday, July 23, 2015 11:22 PM
> *To:* Susant Palai
> *Cc:* Subrata Ghosh; Gluster Devel
> *Subject:* Re: [Gluster-devel] Issues with Random read/write
>
>
>
> I run a lot of random I/O tests with gluster and it has really come a long
> way in the 3.7 release.  What version are you running?  I have a couple
> of suggestions:
>
>
>
> -Run on 3.7 if you are not already.
>
> -Run the IOzone test you are using directly on the back end, without gluster,
> to verify that your HW meets your perf needs (a rough command sketch follows
> after this list).  SSDs really scream with random I/O if your current HW
> won't meet your needs.
>
> -When you are running tests, watch top -H on both clients and servers and
> look for any threads hitting 100% CPU
>
> -If you see hot threads, bump up server.event-threads and/or
> client.event-threads from the default of 2
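>
> As a rough illustration of the back-end baseline test mentioned above (the
> brick path and job name are placeholders, not from this thread), a random
> write run with fio directly on the brick filesystem, bypassing gluster,
> could look like:
>
>   fio --name=brick-randwrite --directory=/bricks/brick1 \
>       --rw=randwrite --bs=4k --size=2G --direct=1
>
> If the back end alone cannot hit your throughput target at 4k random I/O,
> no amount of gluster tuning will get you there.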
>
>
>
> HTH!
>
>
>
> -b
>
>
>
>
>
> On Thu, Jul 23, 2015 at 3:04 AM, Susant Palai <spalai at redhat.com> wrote:
>
> ++CCing gluster-devel to have more eyes on this problem.
>
> Susant
>
> ----- Original Message -----
> > From: "Subrata Ghosh" <subrata.ghosh at ericsson.com>
> > To: "Susant Palai <spalai at redhat.com> (spalai at redhat.com)" <
> spalai at redhat.com>, "Vijay Bellur <vbellur at redhat.com>
> > (vbellur at redhat.com)" <vbellur at redhat.com>
> > Cc: "Subrata Ghosh" <subrata.ghosh at ericsson.com>
> > Sent: Sunday, 19 July, 2015 7:57:28 PM
> > Subject: Issues with Random read/write
> >
> > Hi Vijay/Prashant,
> >
> > How are you :).
> >
> > We need your immediate help / suggestions to meet our random I/O
> > performance metrics.
> > Currently we have performance issues with random read/write - our basic
> > requirement is 20 MB/sec for random I/O.
> >
> > We tried both "iozone" and "fio" and received almost the same (random I/O)
> > performance, which does not meet our fundamental I/O requirements.
> >
> > Our use case is as below.
> >
> > "Applications running on different cards write/read (random) continuous
> > files to a volume comprising storage that belongs to different cards in
> > the distributed system, where replicas are present across cards and
> > applications use non-local storage."
> > We have verified that the bottleneck is mostly on the Gluster client side
> > inside the application; gluster server-to-server I/O speed looks good
> > enough. Performance tuning on the gluster server side would not be
> > expected to help.
> >
> > We also cross-checked using the NFS client and get far better
> > performance, but we cannot use the NFS client / libgfapi because of
> > use-case limitations (brick failure cases, etc.).
> >
> > Please throw some light or thoughts on improving the gluster client to
> > achieve > 20 MB/sec.
> >
> > Observations:
> >
> > Fio:
> >
> >
> > Please find the test results for random write & read in the 2-APP scenarios.
> >
> > Scenario       APP_1       APP_2       File size   No. of AMCs
> > Random-Write   3.06 MB/s   3.02 MB/s   100 MB      4
> > Random-Read    8.1 MB/s    8.4 MB/s    100 MB      4
> >
> >
> >
> > Iozone:
> >
> > ./iozone -R -l 1 -u 1 -r 4k -s 2G -F /home/cdr/f1 | tee -a
> > /tmp/iozone_results.txt &
> >
> >
> > Results (Kbytes/sec)    APP 1       APP 2
> >
> > File size               2 GB        2 GB
> > Record size             4 Kbytes    4 Kbytes
> >
> > Initial write           41061.78    41167.36
> > Rewrite                 40395.64    40810.41
> > Read                   262685.69   269644.62
> > Re-read                263751.66   270760.62
> > Reverse Read            27715.72    28604.22
> > Stride read             83776.44    84347.88
> > Random read             16239.74 (15.8 MB/s)   15815.94 (15.4 MB/s)
> > Mixed workload          16260.95    15787.55
> > Random write             3356.57 (3.3 MB/s)     3365.17 (3.3 MB/s)
> > Pwrite                  40914.55    40692.34
> > Pread                  260613.83   269850.59
> > Fwrite                  40412.40    40369.78
> > Fread                  261506.61   267142.41
> >
> >
> > Some of the info on performance testing is at
> > http://www.gluster.org/community/documentation/index.php/Performance_Testing
> > Also please check the iozone limitations listed there.
> >
> > "WARNING: random I/O testing in iozone is very restricted by iozone
> > constraint that it must randomly read then randomly write the entire
> file!
> > This is not what we want - instead it should randomly read/write for some
> > fraction of file size or time duration, allowing us to spread out more on
> > the disk while not waiting too long for test to finish. This is why fio
> > (below) is the preferred test tool for random I/O workloads."
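> >
> > As a rough illustration of that recommendation, a time-bounded fio run
> > touches the file for a fixed duration instead of rewriting all of it (the
> > mount path and job name below are placeholders, not from this thread):
> >
> >   fio --name=gluster-randrw --directory=/mnt/glustervol --rw=randrw \
> >       --rwmixread=70 --bs=4k --size=2G --runtime=60 --time_based --direct=1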
> >
> >