[Gluster-users] Performance
Joe Landman
landman at scalableinformatics.com
Wed Apr 20 17:49:37 UTC 2011
On 04/20/2011 01:42 PM, Mohit Anchlia wrote:
> mount
> /dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
> proc on /proc type proc (rw)
> sysfs on /sys type sysfs (rw)
> devpts on /dev/pts type devpts (rw,gid=5,mode=620)
> /dev/sdb1 on /boot type ext3 (rw)
> tmpfs on /dev/shm type tmpfs (rw)
> none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
> sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
> /dev/sda1 on /data type ext3 (rw)
> glusterfs#dsdb1:/stress-volume on /data/mnt-stress type fuse (rw,allow_other,default_permissions,max_read=131072)
ok ...
so the gluster volume is mounted at /data/mnt-stress, and the
underlying /data filesystem is ext3 on /dev/sda1.
Could you run this:
dd if=/dev/zero of=/data/big.file bs=128k count=80k
echo 3 > /proc/sys/vm/drop_caches
dd of=/dev/null if=/data/big.file bs=128k
so we can see the write and then read performance using 128k blocks?
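For reference, that write is about 10 GB (80k blocks of 128 KB each), so
make sure /data has the space. Here is a slightly expanded sketch of the
same test, assuming you run it as root (the cache drop needs it); the
conv=fdatasync flag just makes the write timing include the flush to disk:

    # write ~10 GB in 128 KB blocks; fdatasync so the timing includes the flush
    dd if=/dev/zero of=/data/big.file bs=128k count=80k conv=fdatasync

    # drop page/dentry/inode caches so the read comes from disk, not RAM
    sync
    echo 3 > /proc/sys/vm/drop_caches

    # read it back in 128 KB blocks
    dd of=/dev/null if=/data/big.file bs=128k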
Also, since you are using the gluster native client, you don't get the
nice caching behavior of the NFS client. The Gluster native client is
somewhat slower than the NFS client.
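If you want a direct comparison later, and assuming the volume's built-in
NFS server is enabled, you could also mount the same volume over NFS on a
spare mount point (the /mnt/nfs-stress path below is just an example) and
rerun the same dd pair there:

    # comparison mount over NFS v3/tcp against the same volume
    mkdir -p /mnt/nfs-stress
    mount -t nfs -o vers=3,tcp dsdb1:/stress-volume /mnt/nfs-stress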
So let's start with the write/read speed of the system before we deal
with the gluster side of things.
>
>
> On Wed, Apr 20, 2011 at 10:39 AM, Joe Landman
> <landman at scalableinformatics.com> wrote:
>> On 04/20/2011 01:35 PM, Mohit Anchlia wrote:
>>>
>>> Should that command be there by default? I couldn't find lsscsi
>>
>> How about
>>
>> mount
>>
>> output?
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman at scalableinformatics.com
web : http://scalableinformatics.com
http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax : +1 866 888 3112
cell : +1 734 612 4615