[Gluster-users] AFR not working
Raghavendra G
raghavendra.hg at gmail.com
Thu Dec 18 05:09:06 UTC 2008
Hi,
I've been trying to reproduce your problem. Some observations:
* I've run into 'No such file or directory' errors myself, but they were due to
dd not creating the file because the block size was 0 (bs=0).
* As per the dd error messages, there is a space between '/' and 'mnt' (/ mnt):
dd: opening `/ mnt/glusterfs/24427/30087.20476 ': No such file or directory
Please make sure that the file is getting created in the first place (it may
not be created due to invalid parameters to dd, like bs=0 in the above case);
see the sketch below for one way to guard against that.
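For instance, a minimal guard for the test loop (a sketch based on the loop
quoted below; PLIK, KAT and the dd parameters are taken from your script):

BS=$RANDOM
# $RANDOM can be 0; dd rejects bs=0 and never creates the file,
# so retry until we get a non-zero block size.
while [ "$BS" -eq 0 ]; do BS=$RANDOM; done
dd if=/dev/urandom of=/mnt/glusterfs/$KAT/$PLIK bs=$BS count=1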
* Since I am not able to reproduce it on my setup, is it possible for you to
try out the test both ways:
- with afr self-heal turned on
- with afr self-heal turned off
- The following options control afr self-heal (see the sketch below for where
they go):
option data-self-heal off
option metadata-self-heal off
option entry-self-heal off
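For example, to run with self-heal turned off, the options would go in the afr
volume of your client spec (a sketch based on the config you posted; for the
self-heal-on run, remove the options or set them to on, since as far as I know
they default to on):

volume afr
type cluster/afr
option data-self-heal off
option metadata-self-heal off
option entry-self-heal off
subvolumes client1 client2
end-volume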
Regards,
On Wed, Dec 10, 2008 at 7:51 PM, <a_pirania at poczta.onet.pl> wrote:
> I have a problem. I am running two servers and two clients. On both clients,
> the following loop runs in the background:
>
> for ((j=0; j< $RANDOM; j++)) {
> PLIK=$RANDOM.$RANDOM
> dd if=/dev/urandom of=/mnt/glusterfs/$KAT/$PLIK bs=$RANDOM count=1
> dd if=/mnt/glusterfs/$KAT/$PLIK of=/dev/null
> rm -f /mnt/glusterfs/$KAT/$PLIK
> }
>
>
>
> If both servers are connected, everything is fine. But if one server stops
> working and comes back after several minutes, on the client I get:
>
> dd: opening `/ mnt/glusterfs/24427/30087.20476 ': No such file or directory
> dd: opening `/ mnt/glusterfs/24427/30087.20476 ': No such file or directory
> dd: opening `/ mnt/glusterfs/24427/18649.25895 ': No such file or directory
>
>
> After a few seconds, everything works again.
>
> I think the client is trying to read the file from the server that just came
> back. Shouldn't this work?
>
>
> client:
>
> volume client1
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.0.1.130
> option remote-port 6996
> option remote-subvolume posix1
> end-volume
>
> volume client2
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.0.1.131
> option remote-port 6996
> option remote-subvolume posix2
> end-volume
>
> volume afr
> type cluster/afr
> subvolumes client1 client2
> end-volume
>
> volume rh
> type performance/read-ahead
> option page-size 100KB
> option page-count 3
> subvolumes afr
> end-volume
>
> volume wh
> type performance/write-behind
> option aggregate-size 1MB
> option flush-behind on
> subvolumes rh
> end-volume
>
>
> server:
>
> volume posix1
> type storage/posix
> option directory /var/storage/glusterfs
> option debug on
> end-volume
>
> volume posix-locks
> type features/posix-locks
> option mandatory on
> subvolumes posix1
> end-volume
>
> volume io-thr
> type performance/io-threads
> option thread-count 2
> option cache-size 64MB
> subvolumes posix-locks
> end-volume
>
> volume server
> type protocol/server
> option transport-type tcp/server
> option listen-port 6996
> subvolumes io-thr
> option auth.ip.posix1.allow 10.*.*.*
> end-volume
>
>
>
--
Raghavendra G