[Gluster-users] Performance
paul simpson
paul at realisestudio.com
Wed Apr 20 21:43:32 UTC 2011
many thanks for sharing, guys. an informative read indeed!

i've 4x dells - each running 12 drives on a PERC 600. was disappointed to
hear they're so bad! we never got round to testing in this much depth.
12x 2T WD RE4 (sata) drives are giving me ~600Mb/s write on the bare
filesystem. joe, does that tally with your expectations for 12 SATA drives
running RAID6? (i'd put more faith in your gut reaction than our last
tests...) ;)
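
(for reference, a number like that would come from a simple streaming dd
on the mounted filesystem - roughly the below, with the path purely
illustrative:

  # ~10 GiB sequential write, bypassing the page cache
  dd if=/dev/zero of=/mnt/raid6/test.file bs=128k count=80k oflag=direct
  # and the read back
  dd of=/dev/null if=/mnt/raid6/test.file bs=128k count=80k iflag=direct
)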
-p
On 20 April 2011 21:02, Mohit Anchlia <mohitanchlia at gmail.com> wrote:
> Thanks a lot for taking the time and effort. I will try the raw
> performance test first, though that will only be going to one disk
> instead of 4. But I think it definitely makes sense as the first step.
>
> On Wed, Apr 20, 2011 at 12:53 PM, Joe Landman
> <landman at scalableinformatics.com> wrote:
> > On 04/20/2011 03:43 PM, Mohit Anchlia wrote:
> >>
> >> Thanks! Is there any recommended configuration you want me to use when
> >> using mdadm?
> >>
> >> I got this link:
> >>
> >> http://tldp.org/HOWTO/Software-RAID-HOWTO-5.html#ss5.1
> >
> > First things first, break the RAID0, and then let's measure performance
> > per disk, to make sure nothing else bad is going on.
> >
> > dd if=/dev/zero of=/dev/DISK bs=128k count=80k oflag=direct
> > dd of=/dev/null if=/dev/DISK bs=128k count=80k iflag=direct
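> >
> > (A quick aside: bs=128k count=80k is 80k x 128 KiB = 10 GiB per pass,
> > and oflag=direct/iflag=direct bypass the page cache, so you measure the
> > disks rather than RAM. To cover all four drives in one go, a loop along
> > these lines works - sdb..sde are placeholders for your actual devices:
> >
> > for d in sdb sdc sdd sde; do
> >   echo "== /dev/$d =="
> >   # NB: the write pass destroys any data on /dev/$d
> >   dd if=/dev/zero of=/dev/$d bs=128k count=80k oflag=direct 2>&1 | tail -1
> >   dd of=/dev/null if=/dev/$d bs=128k count=80k iflag=direct 2>&1 | tail -1
> > done
> > )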
> >
> > with /dev/DISK being one of the drives in your existing RAID0. Once we
> > know the raw performance, I'd suggest something like this:
> >
> > mdadm --create /dev/md0 --level=0 --metadata=1.2 --chunk=512 \
> >       --raid-devices=4 /dev/DISK1 /dev/DISK2 \
> >       /dev/DISK3 /dev/DISK4
> > mdadm --examine --scan | grep "md\/0" >> /etc/mdadm.conf
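> >
> > (Assuming the create succeeds, a quick sanity check is worth doing
> > before moving on:
> >
> > cat /proc/mdstat          # md0 should show up as an active raid0
> > mdadm --detail /dev/md0   # confirms chunk size and member disks
> > )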
> >
> > then
> >
> > dd if=/dev/zero of=/dev/md0 bs=128k count=80k oflag=direct
> > dd of=/dev/null if=/dev/md0 bs=128k count=80k iflag=direct
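> >
> > (Rough expectation: a 4-drive RAID0 streaming large sequential blocks
> > should land near 4x the single-disk number from the first test - e.g.
> > if each disk manages ~100 MB/s raw, md0 ought to be in the ~400 MB/s
> > ballpark. Coming in well under that points at a controller, driver, or
> > chunk-size problem.)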
> >
> > and let's see how it behaves. If these are good, then
> >
> > mkfs.xfs -l version=2 -d su=512k,sw=4,agcount=32 /dev/md0
> >
> > (yeah, I know, gluster folk have a preference for ext* ... we generally
> > don't recommend ext* for anything other than OS drives ... you might
> > need to install xfsprogs and the xfs kernel module ... which kernel are
> > you using, BTW?)
> >
> > then
> >
> > mount -o logbufs=4,logbsize=64k /dev/md0 /data
> > mkdir /data/stress
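> >
> > (To confirm the stripe geometry carried through to the filesystem,
> > xfs_info on the mount point should report sunit/swidth consistent with
> > su=512k,sw=4 - i.e. sunit=128, swidth=512, counted in 4 KiB filesystem
> > blocks:
> >
> > xfs_info /data
> > )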
> >
> >
> > dd if=/dev/zero of=/data/big.file bs=128k count=80k oflag=direct
> > dd of=/dev/null if=/data/big.file bs=128k count=80k iflag=direct
> >
> > and see how it handles things.
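> >
> > One note on reading the results: these file-level numbers should come
> > in close to the raw /dev/md0 numbers. A big gap between the two points
> > at the filesystem setup rather than the disks.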
> >
> > When btrfs finally stabilizes enough to be used, it should be a
> > reasonable replacement for xfs, but that is likely a few years away.
> >
> > --
> > Joseph Landman, Ph.D
> > Founder and CEO
> > Scalable Informatics Inc.
> > email: landman at scalableinformatics.com
> > web : http://scalableinformatics.com
> > http://scalableinformatics.com/sicluster
> > phone: +1 734 786 8423 x121
> > fax : +1 866 888 3112
> > cell : +1 734 612 4615
> >