[Gluster-users] kernel parameters for improving gluster writes on millions of small writes (long)
Harry Mangalam
hjmangalam at gmail.com
Thu Jul 26 14:59:45 UTC 2012
Hi Bryan,
thanks for the suggestion. In fact, we're using XFS for the
underlying filesystem (under 3ware controllers) and it was tuned (or
at least I thought it was) for large files. We do get decent perf on
large file reads and writes as long as the writes are fairly large.
I'll post my controller and XFS settings to see if they seem odd.
I was experimenting more last night after the latest revelations and
discovered some more things that may be illuminating.
I wrote 2 tiny perl scripts: one that wrote ~400MB in short writes
(burp), and another (bigburp) that built the same-sized string in
memory and then wrote it out in a single write. The burp script took
about 4 times as long to write to a file on a gluster fs (and sync) as
the bigburp script did.
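For reference, here's a rough Python equivalent of those two perl
scripts (scaled way down from ~400MB so it runs in a blink; the names
and sizes are illustrative, not the original scripts):

```python
import os
import tempfile

PAYLOAD = b"x" * 80   # one ~80-byte record, like a single short-read line
COUNT = 5000          # scaled down from the ~400MB of the original test

def burp(path):
    """Many tiny write() calls - one syscall per record when unbuffered."""
    with open(path, "wb", buffering=0) as f:
        for _ in range(COUNT):
            f.write(PAYLOAD)

def bigburp(path):
    """Build the same data in memory, then emit it in a single write()."""
    data = PAYLOAD * COUNT
    with open(path, "wb", buffering=0) as f:
        f.write(data)

tmp = tempfile.mkdtemp()
burp(os.path.join(tmp, "burp.out"))
bigburp(os.path.join(tmp, "bigburp.out"))
```

Pointing `tmp` at a gluster mount and timing the two calls should, if
the effect is as described above, show the ~4x gap; on a local disk
the gap is much smaller.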
So, if the writes are as I described previously (the output of
individual writes of <100 bytes), the performance is very poor and the
gluster process's CPU use is driven very high (>100% for several
seconds, I'm assuming due to queued instructions). If the same amount
of data is written in a single write, the performance is pretty good,
and while the gluster process's CPU use goes high, it doesn't exceed
about 60% and lasts only a few seconds.
Why should this be? Why should Linux file caching care whether the
data to be written comes from a single write or from many small
writes? Other than the function-call overhead - would that explain it?
I can test that with oprofile, but it doesn't explain why the gluster
process takes so much longer to process one than the other. From its
POV, it should just be data, regardless of where it came from. Or am I
missing some critical point?
If it's not just the size of the files but the way they are written
that has a large effect on gluster write performance, then gluster (or
at least the native gluster client) will not be appropriate for a lot
of bioinformatics apps, many of which have this kind of write profile.
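If that's the case, one app-side workaround (when you control the
code, which we often don't) is to interpose a big userspace buffer so
the tiny writes are coalesced before they ever reach the gluster
client. A minimal Python sketch; the 4 MB buffer size is a guess to be
tuned, not a recommendation:

```python
import io
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "reads.out")

# Unbuffered file plus a large BufferedWriter: thousands of tiny
# application-level writes become a handful of multi-MB write() syscalls.
raw = open(path, "wb", buffering=0)
out = io.BufferedWriter(raw, buffer_size=4 * 1024 * 1024)

for i in range(100_000):
    out.write(b"ACGT" * 20 + b"\n")   # an 81-byte fake 'short read' record

out.close()   # flushes the final partial buffer and closes the raw file
```

This only helps if the app's writes can be routed through such a
wrapper, which is exactly what the internally-opened output files
below prevent.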
hjm
On Thu, Jul 26, 2012 at 6:23 AM, Washer, Bryan <bwasher at netsuite.com> wrote:
>
>
> Harry,
>
> Just a question but what file system are you using under the gluster system?
> You may need to tune that before you continue trying to tune the output
> side. I found that by using the xfs file system and tuning it for
> very large files I was able to improve my performance quite a bit. In this
> case, though, I was working with a lot of big files, so my tuning would not
> help you - but I just wanted to make sure you had looked at this detail in
> your setup.
>
> Bryan
>
>
> -----Original Message-----
> From: gluster-users-bounces at gluster.org
> [mailto:gluster-users-bounces at gluster.org] On Behalf Of Harry Mangalam
> Sent: Wednesday, July 25, 2012 8:02 PM
> To: gluster-users
> Subject: [Gluster-users] kernel parameters for improving gluster writes on
> millions of small writes (long)
>
> This is a continuation of my previous posts about improving write perf
> when trapping millions of small writes to a gluster filesystem.
> I was able to improve write perf by ~30x by running STDOUT thru gzip
> to consolidate and reduce the output stream.
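(For reference, the gzip trick amounts to something like this;
'generate_reads' is a stand-in for the real app, faking 1000 short
reads on STDOUT:)

```shell
# 'generate_reads' stands in for any app that emits millions of small
# records on STDOUT; here it just fakes 1000 short reads.
generate_reads() { for i in $(seq 1 1000); do echo "read_$i"; done; }

# gzip buffers internally, so the filesystem sees a few large writes
# instead of one tiny write per record.
generate_reads | gzip -1 > reads.txt.gz

# downstream tools can read the stream back transparently:
gunzip -c reads.txt.gz | head -3
```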
>
> Today, another similar problem, having to do with yet another
> bioinformatics program (which these days typically handle the 'short
> reads' that come out of the majority of sequencing hardware, each read
> being 30-150 characters, with some metadata typically in an ASCII file
> containing millions of such entries). Reading them doesn't seem to be
> a problem (at least on our systems), but writing them is quite awful.
>
> The program is called 'art_illumina' from the Broad Inst's 'ALLPATHS'
> suite and it generates an artificial Illumina data set from an input
> genome. In this case about 5GB of the type of data described above.
> Like before, the gluster process goes to >100% and the program itself
> slows to ~20-30% of a CPU. In this case, the app's output cannot be
> externally trapped by redirecting thru gzip, since the output flag
> specifies the base filename for 2 files that are created internally
> and then written directly. This prevents even setting up a named pipe
> to trap and process the output.
>
> Since this gluster storage was set up specifically for bioinformatics,
> this is a repeating problem and while some of the issues can be dealt
> with by trapping and converting output, it would be VERY NICE if we
> could deal with it at the OS level.
>
> The gluster volume is running over IPoIB on QDR IB and looks like this:
> Volume Name: gl
> Type: Distribute
> Volume ID: 21f480f7-fc5a-4fd8-a084-3964634a9332
> Status: Started
> Number of Bricks: 8
> Transport-type: tcp,rdma
> Bricks:
> Brick1: bs2:/raid1
> Brick2: bs2:/raid2
> Brick3: bs3:/raid1
> Brick4: bs3:/raid2
> Brick5: bs4:/raid1
> Brick6: bs4:/raid2
> Brick7: bs1:/raid1
> Brick8: bs1:/raid2
> Options Reconfigured:
> performance.write-behind-window-size: 1024MB
> performance.flush-behind: on
> performance.cache-size: 268435456
> nfs.disable: on
> performance.io-cache: on
> performance.quick-read: on
> performance.io-thread-count: 64
> auth.allow: 10.2.*.*,10.1.*.*
>
> I've tried to increase every caching option that might improve this
> kind of performance, but it doesn't seem to help. At this point, I'm
> wondering whether changing the client (or server) kernel parameters
> will help.
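(One kernel-side experiment - with purely speculative values, and it's
an open question how much of this the FUSE client path honors - would
be raising the client's VM dirty-page thresholds so writeback batches
more data per flush:)

```shell
# Illustrative values only, run as root. These control when the kernel
# starts writing dirty pages back; raising them lets more small writes
# accumulate in the page cache before each writeback pass.
sysctl -w vm.dirty_background_ratio=10
sysctl -w vm.dirty_ratio=40
sysctl -w vm.dirty_expire_centisecs=3000
```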
>
> The client's meminfo is:
> cat /proc/meminfo
> MemTotal: 529425924 kB
> MemFree: 241833188 kB
> Buffers: 355248 kB
> Cached: 279699444 kB
> SwapCached: 0 kB
> Active: 2241580 kB
> Inactive: 278287248 kB
> Active(anon): 190988 kB
> Inactive(anon): 287952 kB
> Active(file): 2050592 kB
> Inactive(file): 277999296 kB
> Unevictable: 16856 kB
> Mlocked: 16856 kB
> SwapTotal: 563198732 kB
> SwapFree: 563198732 kB
> Dirty: 1656 kB
> Writeback: 0 kB
> AnonPages: 486876 kB
> Mapped: 19808 kB
> Shmem: 164 kB
> Slab: 1475476 kB
> SReclaimable: 1205944 kB
> SUnreclaim: 269532 kB
> KernelStack: 5928 kB
> PageTables: 27312 kB
> NFS_Unstable: 0 kB
> Bounce: 0 kB
> WritebackTmp: 0 kB
> CommitLimit: 827911692 kB
> Committed_AS: 536852 kB
> VmallocTotal: 34359738367 kB
> VmallocUsed: 1227732 kB
> VmallocChunk: 33888774404 kB
> HardwareCorrupted: 0 kB
> AnonHugePages: 376832 kB
> HugePages_Total: 0
> HugePages_Free: 0
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
> DirectMap4k: 201088 kB
> DirectMap2M: 15509504 kB
> DirectMap1G: 521142272 kB
>
> and the server's meminfo is:
>
> $ cat /proc/meminfo
> MemTotal: 32861400 kB
> MemFree: 1232172 kB
> Buffers: 29116 kB
> Cached: 30017272 kB
> SwapCached: 44 kB
> Active: 18840852 kB
> Inactive: 11772428 kB
> Active(anon): 492928 kB
> Inactive(anon): 75264 kB
> Active(file): 18347924 kB
> Inactive(file): 11697164 kB
> Unevictable: 0 kB
> Mlocked: 0 kB
> SwapTotal: 16382900 kB
> SwapFree: 16382680 kB
> Dirty: 8 kB
> Writeback: 0 kB
> AnonPages: 566876 kB
> Mapped: 14212 kB
> Shmem: 1276 kB
> Slab: 429164 kB
> SReclaimable: 324752 kB
> SUnreclaim: 104412 kB
> KernelStack: 3528 kB
> PageTables: 16956 kB
> NFS_Unstable: 0 kB
> Bounce: 0 kB
> WritebackTmp: 0 kB
> CommitLimit: 32813600 kB
> Committed_AS: 3053096 kB
> VmallocTotal: 34359738367 kB
> VmallocUsed: 340196 kB
> VmallocChunk: 34342345980 kB
> HardwareCorrupted: 0 kB
> AnonHugePages: 200704 kB
> HugePages_Total: 0
> HugePages_Free: 0
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
> DirectMap4k: 6656 kB
> DirectMap2M: 2072576 kB
> DirectMap1G: 31457280 kB
>
> Does this suggest any approach? Is there a doc that suggests optimal
> kernel parameters for gluster?
>
> I guess the only other option is to use the glusterfs as an NFS mount
> and use the NFS client's caching...? That will help a single
> process but decrease the overall cluster bandwidth considerably.
>
> --
> Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
> [m/c 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
> 415 South Circle View Dr, Irvine, CA, 92697 [shipping]
> MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
--
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[m/c 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
415 South Circle View Dr, Irvine, CA, 92697 [shipping]
MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)