[Gluster-users] Gluster-users Digest, Vol 9, Issue 66

Raghavendra G raghavendra at zresearch.com
Fri Jan 23 10:17:46 UTC 2009


Put write-behind above afr, with afr as a subvolume of write-behind.
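For reference, a minimal client-side volfile sketch of that ordering. The volume names (localBrick, remoteBrick) and the option values are illustrative, not taken from your pastebin spec; the option names are from the write-behind translator of this era, so check them against your installed version:

```
# afr replicates across the two bricks
volume afr
  type cluster/afr
  subvolumes localBrick remoteBrick   # your existing brick volumes
end-volume

# write-behind loaded on top, with afr as its subvolume
volume writebehind
  type performance/write-behind       # buffers writes before they reach afr
  option aggregate-size 1MB           # batch small writes (tune as needed)
  option flush-behind on              # let close() return before the flush completes
  subvolumes afr
end-volume
```

The mount point should then use writebehind (the topmost volume) rather than afr directly.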

On Fri, Jan 23, 2009 at 12:59 AM, Evan <_Gluster at devnada.com> wrote:

> Where should I put the write-behind translator?
> Just above afr with afr as a subvolume? Or should I put it just above my
> localBrick volume and below afr?
>
>
> Here is the output using /dev/zero:
> # time dd if=/dev/zero of=/mnt/gluster/disktest count=1024 bs=1024
> 1024+0 records in
> 1024+0 records out
> 1048576 bytes (1.0 MB) copied, 1.90119 s, 552 kB/s
>
> real    0m2.098s
> user    0m0.000s
> sys     0m0.016s
>
> # time dd if=/dev/zero of=/tmp/disktest count=1024 bs=1024
> 1024+0 records in
> 1024+0 records out
> 1048576 bytes (1.0 MB) copied, 0.0195388 s, 53.7 MB/s
>
> real    0m0.026s
> user    0m0.000s
> sys     0m0.028s
>
>
> Thanks
>
>
> On Thu, Jan 22, 2009 at 12:52 PM, Anand Avati <avati at zresearch.com> wrote:
>
>> Do you have write-behind loaded on the client side? For IO testing,
>> use /dev/zero instead of /dev/urandom.
>>
>> avati
>>
>> On Fri, Jan 23, 2009 at 2:14 AM, Evan <_Gluster at devnada.com> wrote:
>> > I have a 2-node, single-process AFR setup with 1.544 Mbps of bandwidth
>> > between the 2 nodes. When I write a 1 MB file to the gluster share, it
>> > seems to AFR to the other node in real time, killing my disk IO speeds
>> > on the gluster mount point. Is there any way to fix this? Ideally I
>> > would like to see near-local disk IO speeds from/to the gluster mount
>> > point and let the afr play catch-up in the background as bandwidth
>> > becomes available.
>> >
>> > Gluster Spec File (same on both nodes) http://pastebin.com/m58dc49d4
>> > IO speed tests:
>> > # time dd if=/dev/urandom of=/mnt/gluster/disktest count=1024 bs=1024
>> > 1024+0 records in
>> > 1024+0 records out
>> > 1048576 bytes (1.0 MB) copied, 8.34701 s, 126 kB/s
>> >
>> > real    0m8.547s
>> > user    0m0.000s
>> > sys     0m0.372s
>> >
>> > # time dd if=/dev/urandom of=/tmp/disktest count=1024 bs=1024
>> > 1024+0 records in
>> > 1024+0 records out
>> > 1048576 bytes (1.0 MB) copied, 0.253865 s, 4.1 MB/s
>> >
>> > real    0m0.259s
>> > user    0m0.000s
>> > sys     0m0.284s
>> >
>> >
>> > Thanks
>> >
>> > _______________________________________________
>> > Gluster-users mailing list
>> > Gluster-users at gluster.org
>> > http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>> >
>> >
>>
>
>
>
>


-- 
Raghavendra G

