[Gluster-users] Performance
    Joe Landman
    landman at scalableinformatics.com
    Wed Apr 20 19:53:21 UTC 2011

On 04/20/2011 03:43 PM, Mohit Anchlia wrote:
> Thanks! Is there any recommended configuration you want me to use when
> using mdadm?
>
> I got this link:
>
> http://tldp.org/HOWTO/Software-RAID-HOWTO-5.html#ss5.1
First things first: break the RAID0, and then let's measure performance 
per disk, to make sure nothing else bad is going on.
	dd if=/dev/zero of=/dev/DISK bs=128k count=80k oflag=direct
	dd of=/dev/null if=/dev/DISK bs=128k count=80k iflag=direct
for /dev/DISK being one of the drives in your existing RAID0.
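If you'd rather hit all four members in one pass, a quick loop along 
these lines does it (DISK1 through DISK4 stand in for your actual device 
names, and the write pass destroys whatever is on them):
	for d in DISK1 DISK2 DISK3 DISK4; do
		# 10 GiB streaming write, then read, bypassing the page cache
		dd if=/dev/zero of=/dev/$d bs=128k count=80k oflag=direct
		dd of=/dev/null if=/dev/$d bs=128k count=80k iflag=direct
	done
Once we know the raw per-disk performance, I'd suggest something like this: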
	mdadm --create /dev/md0 --level=0 --metadata=1.2 --chunk=512 \
		--raid-devices=4 /dev/DISK1 /dev/DISK2 	   \
				 /dev/DISK3 /dev/DISK4
	mdadm --examine --scan | grep "md/0" >> /etc/mdadm.conf
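Worth a quick sanity check that the array came up the way we asked 
(standard md status queries, nothing exotic):
	cat /proc/mdstat
	mdadm --detail /dev/md0
Chunk size and the four member disks should match what we passed to 
--create.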
then
	dd if=/dev/zero of=/dev/md0 bs=128k count=80k oflag=direct
	dd of=/dev/null if=/dev/md0 bs=128k count=80k iflag=direct
and let's see how it behaves.  If these are good, then
	mkfs.xfs -l version=2 -d su=512k,sw=4,agcount=32 /dev/md0
(yeah, I know, gluster folk have a preference for ext* ... we generally 
don't recommend ext* for anything other than OS drives ... you might 
need to install xfsprogs and the xfs kernel module ... which kernel are 
you using BTW?)
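If the XFS bits aren't installed, on a RHEL/CentOS-flavored box it's 
usually something like this (package names vary by distro and kernel, 
so treat it as a sketch):
	yum install xfsprogs kmod-xfs
	modprobe xfs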
then
	mount -o logbufs=4,logbsize=64k /dev/md0 /data
	mkdir /data/stress
	dd if=/dev/zero of=/data/big.file bs=128k count=80k oflag=direct
	dd of=/dev/null if=/data/big.file bs=128k count=80k iflag=direct
and see how it handles things.
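That stress directory is there for scratch files; once the 
single-stream numbers look sane, a few concurrent writers will tell you 
whether the array holds up under parallel load (4 streams of 2.5 GiB 
each is just a starting point):
	for i in 1 2 3 4; do
		dd if=/dev/zero of=/data/stress/f.$i bs=128k count=20k oflag=direct &
	done
	wait
Add up the per-stream MB/s that dd reports; the aggregate is the number 
to compare against the single-stream run.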
When btrfs finally stabilizes enough to be used, it should be a 
reasonable replacement for xfs, but that's likely a few years off.
-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman at scalableinformatics.com
web  : http://scalableinformatics.com
        http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
    
    