[Gluster-users] How to evaluate the glusterfs performance with small file workload?

nlxswig nlxswig at 126.com
Mon Mar 18 10:27:25 UTC 2013

Hi guys

    I have run into some trouble while trying to evaluate GlusterFS performance with a small-file workload.

    1: What kind of benchmark should I use to test small-file operations?

        As we all know, iozone works well for testing large-file operations, but because of the

        memory cache, small-file results from iozone are not accurate. So what benchmark

        should I use instead?

        How about "dd oflag=direct"?
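A dd loop with direct I/O can give a rough first number for small-file writes. This is only a sketch: the test directory below is a placeholder for a directory on the GlusterFS mount, and a dedicated small-file benchmark tool is better for serious runs.

```shell
#!/bin/sh
# Sketch: rough small-file write test with dd. DIR is a placeholder;
# point it at a directory on the GlusterFS mount for a real run.
DIR=${DIR:-./smallfile-test}
mkdir -p "$DIR"

# Write 100 files of 10 KB each. oflag=direct bypasses the client page
# cache; where O_DIRECT is unsupported (e.g. on tmpfs) fall back to
# conv=fsync so the data still reaches the backing store.
for i in $(seq 1 100); do
    dd if=/dev/zero of="$DIR/file$i" bs=10k count=1 oflag=direct 2>/dev/null \
      || dd if=/dev/zero of="$DIR/file$i" bs=10k count=1 conv=fsync 2>/dev/null
done
echo "wrote $(ls "$DIR" | wc -l) files"
```

Timing the whole loop (for example with `time sh script.sh`) and dividing by the file count gives a rough files-per-second figure.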

   2: How can I simulate large-scale concurrent access from many clients?

       When we use iozone, its cluster mode helps with multi-client testing. But if the number of

       clients is in the hundreds, it is difficult to deploy that many machines at the same time. Could we

       run multiple processes on one client at the same time to simulate multiple concurrent clients?
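One way to approximate this is to fork several writer processes on a single client, as in the sketch below (directory and process count are placeholders). Note the approximation is imperfect: processes on one machine share the same page cache, FUSE mount, and network link, whereas real clients do not.

```shell
#!/bin/sh
# Sketch: simulate N "clients" with N concurrent writer processes on one
# machine. DIR and NPROC are illustrative defaults.
DIR=${DIR:-./concurrency-test}
NPROC=${NPROC:-8}
mkdir -p "$DIR"

for p in $(seq 1 "$NPROC"); do
    (
        # Each background subshell writes its own batch of small files.
        for i in $(seq 1 50); do
            dd if=/dev/zero of="$DIR/proc$p-file$i" bs=10k count=1 2>/dev/null
        done
    ) &
done
wait    # block until every writer process has finished
echo "wrote $(ls "$DIR" | wc -l) files from $NPROC processes"
```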

    3: For small-file operations, how can I increase the workload on a single client?

    4: From the GlusterFS documentation, I gather that, to avoid cache-coherency problems, there is no write-cache feature.

        Does that mean the memory cache has no influence on GlusterFS small-file write performance?

        If so, when testing GlusterFS with:

        "dd if=/dev/zero of=test.img bs=10k count=1 oflag=direct" and

        "dd if=/dev/zero of=test.img bs=10k count=1"

        these two commands should show the same write performance.

        However, when I run them, the results are not the same, and the gap is large.

        How can this be explained?
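One likely explanation: even if the servers have no write cache, the buffered dd still completes into the client kernel's page cache (and into GlusterFS's write-behind translator, where enabled) before the data reaches the bricks, so it reports memory-like speed, while the direct dd must wait for each request. A local illustration of the same effect, using a placeholder file name:

```shell
#!/bin/sh
# Compare buffered vs direct writes. The buffered dd returns once data
# is in the page cache; the direct dd pushes every request to storage.
F=./test.img

# Buffered write: typically reports a memory-like rate.
dd if=/dev/zero of="$F" bs=10k count=100 2>&1 | tail -n 1

# Direct write: each 10 KB request hits the backing store, so the
# reported rate is far lower (O_DIRECT may be unsupported on e.g. tmpfs).
dd if=/dev/zero of="$F" bs=10k count=100 oflag=direct 2>&1 | tail -n 1

rm -f "$F"
```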

    5: How should I tune GlusterFS for small-file operations?
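As a starting point, some of the client-side performance translators can be toggled per volume; the sketch below uses option names from the GlusterFS 3.x CLI, with "myvol" as a placeholder volume name. Treat these as candidates to measure, not a recipe, since the best settings are workload-dependent.

```shell
# Configuration sketch only (placeholder volume name "myvol").

# quick-read and stat-prefetch (md-cache) mainly help small-file reads
# and metadata-heavy workloads:
gluster volume set myvol performance.quick-read on
gluster volume set myvol performance.stat-prefetch on

# write-behind aggregates small writes on the client before sending
# them to the bricks:
gluster volume set myvol performance.write-behind on

# more io-threads on the bricks can help many concurrent small requests:
gluster volume set myvol performance.io-thread-count 32
```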


        If you have any advice, please let me know. Many thanks.


        Best Regards

        Lixin Niu


