[Gluster-users] Performance

Mohit Anchlia mohitanchlia at gmail.com
Wed Apr 20 19:26:34 UTC 2011


Thanks! Looks like 20 MB/s is miserable in any case for 4 x 10K SAS
drives. What are your recommendations in that regard? Should I try
software RAID0 instead, something along the lines sketched below?
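
For instance, I assume it would be something roughly like this (device
names, filesystem, and mount point are just placeholders for my setup):

        # 4-disk stripe, then a filesystem on top of it
        mdadm --create /dev/md0 --level=0 --raid-devices=4 \
              /dev/sdb /dev/sdc /dev/sdd /dev/sde
        mkfs.xfs /dev/md0
        mount /dev/md0 /mnt/brick1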

How can I tell whether it's the controller or the disks themselves that
are the problem?
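
For example, would reading each raw device on its own, something like
the loop below (device names are placeholders for my drives), tell them
apart? My thinking is that if a single drive is slow by itself the drive
is suspect, while if each drive is fine alone but they don't scale when
run together, the controller looks like the bottleneck.

        # sequential read of 4 GB straight off each raw device,
        # bypassing the page cache
        for d in sdb sdc sdd sde; do
            echo $d
            dd if=/dev/$d of=/dev/null bs=1M count=4096 iflag=direct
        done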

Thanks for your help!

On Wed, Apr 20, 2011 at 12:19 PM, Joe Landman
<landman at scalableinformatics.com> wrote:
> On 04/20/2011 03:05 PM, Mohit Anchlia wrote:
>>
>> As I mentioned several times, my tests are running concurrent threads
>> and doing concurrent writes, so the overall throughput per second
>> should be at least 20 x 3 = 60 MB/s.
>
> Get a copy of fio installed (yum install fio), and use the following as an
> input file to it.  Call it sw_.fio
>
> [sw]
> rw=write
> size=10g
> directory=/data/mnt-stress
> iodepth=32
> direct=0
> blocksize=512k
> numjobs=12
> nrfiles=1
> ioengine=vsync
> loops=1
> group_reporting
> create_on_open=1
> create_serialize=0
>
>
> run this as
>
>        fio sw_.fio
>
> then use the following as sr_.fio
>
> [sr]
> rw=read
> size=10g
> directory=/data/mnt-stress
> iodepth=32
> direct=0
> blocksize=512k
> numjobs=12
> nrfiles=1
> ioengine=vsync
> loops=1
> group_reporting
> create_on_open=1
> create_serialize=0
>
> run this as
>        echo 3 > /proc/sys/vm/drop_caches # note the space after "3"
>        fio sr_.fio
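>
> (If you aren't running this as root, note that the redirect itself
> needs a root shell, e.g. sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'.)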
>
> This will run 12 simultaneous IOs, and theoretically distribute across most
> of your nodes (with some oversubscription).  Please report back the WRITE:
> and READ: portions.
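>
> For example, you could capture just those summary lines with something
> like this (the output file name is arbitrary):
>
>        fio sw_.fio | tee sw_out.txt
>        grep -E '(WRITE|READ):' sw_out.txt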
>
> Run status group 0 (all jobs):
>  WRITE: io=122694MB, aggrb=2219.5MB/s, minb=2272.8MB/s, maxb=2272.8MB/s,
> mint=55281msec, maxt=55281msec
>
> Run status group 0 (all jobs):
>   READ: io=122694MB, aggrb=1231.4MB/s, minb=1260.9MB/s, maxb=1260.9MB/s,
> mint=99645msec, maxt=99645msec
>
> fio is one of the best load generators out there, and I'd strongly urge you
> to leverage it for your tests.
>
>
>
> --
> Joseph Landman, Ph.D
> Founder and CEO
> Scalable Informatics Inc.
> email: landman at scalableinformatics.com
> web  : http://scalableinformatics.com
>       http://scalableinformatics.com/sicluster
> phone: +1 734 786 8423 x121
> fax  : +1 866 888 3112
> cell : +1 734 612 4615
>


