[Gluster-devel] error durring write for stripe & unify translator

mic at digitaltadpole.com
Sun Mar 4 04:46:14 UTC 2007


That did the trick! It's already working across 6 storage nodes.
Thanks for such a quick turnaround; do you ever get to sleep? ;-)
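
In case it helps anyone following along, the six-node setup is just the
client spec quoted below extended to six protocol/client volumes and
unified with the rr scheduler (hosts are from my lab, adjust as needed):

volume client0
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.1.201
  option remote-port 6996
  option remote-subvolume testgl
end-volume

# client1 through client5 are identical except for remote-host

volume bricks
  type cluster/unify
  subvolumes client0 client1 client2 client3 client4 client5
  option scheduler rr
end-volume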

Thanks!
-Mic

Quoting "Amar S. Tumballi" <amar at zresearch.com>:

> Mic,
>  Thanks for letting us know about the issue. We found that the
> last-minute changes I made to the rr scheduler were sending the
> glusterfs client into an infinite loop :( That error is fixed now
> (check the same ftp dir; you will find a pre2.2 tarball there). Also,
> during our testing we found that stripe was not complete. Even that's
> fixed now. You can try glusterfs with the same config file.
>
> Regards,
> Amar
> (bulde on #gluster)
>
> On Sat, Mar 03, 2007 at 06:51:23PM -0500, mic at digitaltadpole.com wrote:
>> Before I ask for help, let me just say... wow! What an amazing product!
>> This has the potential to shake the SAN market profoundly. I was
>> disappointed in Lustre because it made itself sound like it didn't
>> require a SAN, but you folks are straightforward and to the point. Kudos!
>>
>> Now on to the problem:
>> The glusterfs client process spikes to 100% CPU usage and stops
>> responding (I have to kill -9 it) whenever I add a stripe or unify
>> translator to the client volume spec.
>>
>> There isn't anything in the client log, but the server logs show:
>> [Mar 03 19:11:01] [ERROR/common-utils.c:52/full_rw()]
>> libglusterfs:full_rw: 0 bytes r/w instead of 113
>>
>> This only occurs on file writes. I can touch and read files just fine.
>> None of these problems appear when I just mount a remote volume
>> without the translator.
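>>
>> My read of that full_rw log line, in case it helps with debugging:
>> full_rw() looks like the usual helper that loops on read()/write()
>> until the requested byte count has moved, and a return of 0 from
>> read() means the peer closed the socket, which would fit a client
>> stuck spinning and no longer servicing its connection. A sketch of
>> the pattern only, not the actual GlusterFS source:
>>
>> #include <unistd.h>
>> #include <errno.h>
>>
>> /* Read exactly count bytes, retrying short reads; returns 0 on
>>  * success, -1 on error or when the peer closes the connection
>>  * (the "0 bytes r/w" case in the log above). */
>> static int full_read(int fd, char *buf, size_t count)
>> {
>>     while (count > 0) {
>>         ssize_t n = read(fd, buf, count);
>>         if (n == 0)
>>             return -1;      /* EOF: remote end went away */
>>         if (n < 0) {
>>             if (errno == EINTR)
>>                 continue;   /* interrupted by a signal, retry */
>>             return -1;      /* real I/O error */
>>         }
>>         buf += n;
>>         count -= (size_t)n;
>>     }
>>     return 0;
>> }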
>>
>> I'm using the latest glusterfs-1.3.0-pre2 code on CentOS with
>> fuse-2.6.3 (I had to apply a patch before the fuse module would load).
>>
>>
>> My client volspec is below:
>>
>> volume client0
>>   type protocol/client
>>   option transport-type tcp/client
>>   option remote-host 192.168.1.201
>>   option remote-port 6996
>>   option remote-subvolume testgl
>> end-volume
>>
>> volume client1
>>   type protocol/client
>>   option transport-type tcp/client
>>   option remote-host 192.168.1.202
>>   option remote-port 6996
>>   option remote-subvolume testgl
>> end-volume
>>
>> volume stripe
>>    type cluster/stripe
>>    subvolumes client1 client0
>>    option stripe-size 131072 # 128k
>> end-volume
>>
>> #volume bricks
>> #  type cluster/unify
>> #  subvolumes client1 client0
>> #  option scheduler rr
>> #end-volume
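>>
>> Two notes in case they are useful: with stripe-size 131072, blocks
>> should be dealt round-robin across the subvolumes, so if I read the
>> option right a 300 KB file lands as bytes 0-128K on client1, the next
>> 128K on client0, and the final ~44K back on client1.
>>
>> For reference, each node's server spec exports testgl roughly like
>> this (the export directory is from my setup; adjust to match yours):
>>
>> volume testgl
>>   type storage/posix
>>   option directory /export/testgl
>> end-volume
>>
>> volume server
>>   type protocol/server
>>   option transport-type tcp/server
>>   option listen-port 6996
>>   subvolumes testgl
>>   option auth.ip.testgl.allow 192.168.1.*
>> end-volume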
>>
>> _______________________________________________
>> Gluster-devel mailing list
>> Gluster-devel at nongnu.org
>> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>>