[Gluster-users] How to evaluate the glusterfs performance with small file workload?

nlxswig nlxswig at 126.com
Wed Mar 20 02:08:24 UTC 2013




Hi Torbjørn,
Thanks for your reply.
Regarding the GlusterFS write cache: the "performance in gluster" document released by Gluster clearly states that, in order to avoid memory-cache coherency problems, no write cache is used on the client side.
Another thing that confuses me is the difference between O_SYNC and O_DIRECT. As far as I understand, O_DIRECT does not use any cache, while O_SYNC means the read/write call is blocked until the requested operation completes. Does O_SYNC still use the cache or not? For small-file operations, which access mode should we use to avoid the influence of the memory cache: O_SYNC or O_DIRECT?
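
To make the question concrete, here is a minimal sketch of how the two flags would be passed to open(2) on Linux; the file names and buffer sizes are just placeholders. As far as I understand, O_DIRECT bypasses the page cache and requires aligned buffers, while O_SYNC still goes through the page cache but makes write() wait until the data has reached stable storage; please correct me if that is wrong:

    /* Sketch: O_DIRECT vs. O_SYNC. Paths and sizes are placeholders. */
    #define _GNU_SOURCE           /* needed for O_DIRECT on Linux */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* O_DIRECT: bypass the page cache; buffer, offset and length must be
         * aligned (typically 512 bytes or the filesystem block size), hence
         * posix_memalign() below. */
        int fd_direct = open("test_direct.img", O_WRONLY | O_CREAT | O_DIRECT, 0644);

        /* O_SYNC: writes still pass through the page cache, but write() does
         * not return until the data has reached stable storage. */
        int fd_sync = open("test_sync.img", O_WRONLY | O_CREAT | O_SYNC, 0644);

        if (fd_direct < 0 || fd_sync < 0)
            return 1;

        void *buf;
        if (posix_memalign(&buf, 512, 4096) != 0)
            return 1;
        memset(buf, 0, 4096);

        if (write(fd_direct, buf, 4096) < 0)   /* unbuffered, aligned write */
            return 1;
        if (write(fd_sync, buf, 4096) < 0)     /* buffered, but synchronous */
            return 1;

        free(buf);
        close(fd_direct);
        close(fd_sync);
        return 0;
    }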


       Lixin Niu


At 2013-03-18 18:59:03,"Torbjørn Thorsen" <torbjorn at trollweb.no> wrote:
>On Mon, Mar 18, 2013 at 11:27 AM, nlxswig <nlxswig at 126.com> wrote:
>> Hi guys
>>     1: What kind of benchmark should I use to test small-file operations?
>
>I've been wondering a bit about the same thing.
>I was thinking it would be nice to have something that could record and
>synthesize IO patterns.
>One could record a process which does a lot of handling of small
>files, for example Dovecot,
>and be able to replay those IO patterns on top of any filesystem.
>
>A quick look around revealed ioreplay[1].
>It seems to work by replaying strace output, which is a cool idea.
>I haven't tried it, but it looks to be a nice testing tool.
>
>[1]: https://code.google.com/p/ioapps/wiki/ioreplay
>
>>     4: From the glusterfs document, I gather that in order to avoid cache
>> coherency problems there is no write cache feature.
>>
>>         Does that mean that there is no influence of the memory cache on
>> small-file write performance in glusterfs?
>>
>>         So, when we test glusterfs with:
>>
>>         "dd if=/dev/zero of=test.img bs=10k count=1 oflag=direct" and
>>
>>         "dd if=/dev/zero of=test.img bs=10k count=1"
>>
>>         These two commands should get the same write performance.
>>
>>         However, when I do this, the results of the two commands are not the
>> same, and the gap is big.
>>
>>         How can this be explained?
>
>My impression is that there are write caching features,
>but Gluster tries hard to maintain coherency and correctness regarding writes.
>For one type of cache, see the write-behind translator that is enabled
>by default.
>
>AFAIK, the difference between the two dd invocations is that the first one
>disables all caches, while the second one doesn't even wait for a sync
>before finishing.
>My understanding leads me to say that the first one can't use cache at all,
>while the second one uses all the cache there is.
>
>Try to run the last one with "conv=fsync".
>This will sync the file at the end of writing, ensuring that when dd
>returns the data should be on disk. This will probably even out the
>run time for the two invocations.
>
>
>
>--
>Kind regards
>Torbjørn Thorsen
>Developer / operations engineer
>
>Trollweb Solutions AS
>- Professional Magento Partner
>www.trollweb.no
>
>Daytime phone: +47 51215300
>Evening/weekend phone: for customers with a service agreement
>
>Visiting address: Luramyrveien 40, 4313 Sandnes
>Postal address: Maurholen 57, 4316 Sandnes
>
>Note that all our standard terms and conditions always apply
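
To illustrate the conv=fsync suggestion quoted above: a plain buffered write can complete from the page cache, so the measured time mostly reflects memory copies, while adding a final fsync() (which is roughly what conv=fsync does) makes the timing include the flush to the backing storage. A minimal sketch, using a placeholder file name and the same 10k size as the dd examples:

    /* Sketch: time a buffered write with and without a trailing fsync(),
     * roughly what "dd ..." vs "dd ... conv=fsync" end up measuring.
     * File name and size are placeholders. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    static double elapsed(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    }

    int main(void)
    {
        char buf[10 * 1024];                    /* 10k, like bs=10k count=1 */
        memset(buf, 0, sizeof(buf));

        int fd = open("test.img", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return 1;

        struct timespec t0, t1, t2;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        if (write(fd, buf, sizeof(buf)) < 0)    /* may complete from the page cache */
            return 1;
        clock_gettime(CLOCK_MONOTONIC, &t1);

        fsync(fd);                              /* what conv=fsync adds: wait for the data to be flushed */
        clock_gettime(CLOCK_MONOTONIC, &t2);

        printf("write only:  %.6f s\n", elapsed(t0, t1));
        printf("write+fsync: %.6f s\n", elapsed(t0, t2));

        close(fd);
        return 0;
    }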

