[Gluster-devel] error on server: [protocol.c:259:gf_block_unserialize_transport] server: EOF from peer

Kevan Benson kbenson at a-1networks.com
Thu May 15 16:34:13 UTC 2008


Alexander Titov wrote:
> I have two storage machines (glusterfs server) with hardware raid 1+0.
> Server configuration here:
> volume brick
>   type storage/posix
>   option directory /gfs_local/export
> end-volume
> 
> volume brick-ns
>   type storage/posix
>   option directory /gfs_local/export_ns
> end-volume
> 
> volume server
>   type protocol/server
>   subvolumes brick brick-ns
>   option transport-type tcp/server
>   option auth.ip.brick.allow 10.10.*.*
>   option auth.ip.brick-ns.allow 10.10.*.*
> end-volume
> 
> and 10 machines use glusterfs client (include servers) with this
> configuration:
> volume storage1
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 10.10.0.12
>   option remote-subvolume brick
> end-volume
> 
> volume storage1-ns
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 10.10.0.12
>   option remote-subvolume brick-ns
> end-volume
> 
> volume storage2
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 10.10.0.13
>   option remote-subvolume brick
> end-volume
> 
> volume storage2-ns
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 10.10.0.13
>   option remote-subvolume brick-ns
> end-volume
> 
> volume mirror1
>   type cluster/afr
>   subvolumes storage1 storage2
> end-volume
> 
> volume ns1
>   type cluster/afr
>   subvolumes storage1-ns storage2-ns
> end-volume
> 
> volume gfs
>   type cluster/unify
>   subvolumes mirror1
>   option namespace ns1
>   option scheduler rr
>   option rr.limits.min-free-disk 5%
>   option rr.refresh-interval 10
> end-volume
> 
> An "Input/output error" occurs when listing a directory after a file
> has been removed from it on one of the clients, and the error is
> reproduced on the spot.

I wouldn't expect it to cause an error, but your unify translator 
here isn't doing anything for you.  It's meant to let two or more 
disparate shares present a single set of files.  For example, 
storing half the files on one server and half on the other, while a 
listing from the client shows all of them.  You only have one 
subvolume specified for the unify, though, so it's basically just 
passing all requests through to that subvolume (which happens to be an 
AFR in this case).
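For comparison, a unify that actually distributes files would list two or 
more data subvolumes; a sketch along the lines of your existing config 
(the second mirror, "mirror2", is hypothetical and would need its own 
pair of client volumes) might look like:

volume gfs
  type cluster/unify
  subvolumes mirror1 mirror2
  option namespace ns1
  option scheduler rr
  option rr.limits.min-free-disk 5%
  option rr.refresh-interval 10
end-volume

With two subvolumes, the rr scheduler would round-robin new files between 
the mirrors, and the namespace volume would keep the unified directory 
listing consistent.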

Functionally, your configuration should work the same without that unify 
xlator, except that the unify scheduler limits will (theoretically) 
prevent you from filling the disks on the servers (a single large file 
write started while more than 5% is still free might still fill them 
up, if I understand correctly).
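Concretely, the simplified client config without unify would just make 
the AFR volume the top of the graph (a sketch based on your posted 
config; the -ns volumes and the gfs/ns1 volumes are dropped since the 
namespace is only used by unify):

volume storage1
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.10.0.12
  option remote-subvolume brick
end-volume

volume storage2
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.10.0.13
  option remote-subvolume brick
end-volume

volume mirror1
  type cluster/afr
  subvolumes storage1 storage2
end-volume

Fewer translators in the stack also means fewer places for the EOF/IO 
errors to originate, which may help narrow down where the problem is.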

-- 

-Kevan Benson
-A-1 Networks
