[Gluster-users] Can a distributed-only volume be HA for writes?

Ian Macintosh ian.macintosh at igmac.co.uk
Wed May 8 13:40:53 UTC 2013


No, it must be replicated or distributed-replicated for that. With a pure
distribute volume each file lives on exactly one brick, so there is no
second copy to write to when that brick's node is down.
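
As a rough sketch, your same four bricks could be laid out as a 2x2
distributed-replicated volume instead (brick paths taken from your volume
info below; you would have to recreate the volume, so migrate the data
off it first):

    # gluster volume create testvol replica 2 \
        192.168.254.50:/mnt/brick1 192.168.254.51:/mnt/brick1 \
        192.168.254.50:/mnt/brick2 192.168.254.51:/mnt/brick2
    # gluster volume start testvol

With "replica 2" the bricks are paired in the order they are listed, so
pairing each brick path across the two servers keeps every file available
when either node goes down. The trade-off is that usable capacity is
halved.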

On 7 May 2013 18:04, "Andrew Denton" <andrewd at sterling.net> wrote:
>
> I'm testing out Gluster for storing backup images. I don't have
> any data redundancy requirements beyond RAID; I just want the volume to
> still be writable when one (or more?) nodes are down.
>
> I tried it, but I'm getting a "transport endpoint is not connected"
> error when I try to write to a volume where not all the servers are
> reachable. Some writes fail and some succeed. I'm assuming this is
> because the path hash points to the missing server. Is there some way to
> get the client to try the write on another server? Currently I'm testing
> 3.4.0-0.3.alpha3 on CentOS 6.4 (i686; my vintage test servers don't do
> long mode, unfortunately).
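
Your assumption is correct: DHT hashes each file name to exactly one
brick, and with no replica there is nowhere else to send the write, so
creates that hash to a brick on the dead node fail with "transport
endpoint is not connected". As far as I know there is no client-side
option to retry the create on another brick. If you want to see which
brick an existing file landed on, something like this on the FUSE mount
should show the backing brick path (exact output varies a bit by
version, so treat it as illustrative):

    # getfattr -n trusted.glusterfs.pathinfo /mnt/gluster-test/foo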
>
> If I missed some documentation that talks about this, please point me
> to it!
>
> Here are some specifics of what I tried:
>
> [root at 192.168.254.50 ~]# gluster volume info
>
> Volume Name: testvol
> Type: Distribute
> Volume ID: 45db51e0-18ed-4180-882e-f208ffa01452
> Status: Started
> Number of Bricks: 4
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.254.50:/mnt/brick1
> Brick2: 192.168.254.51:/mnt/brick1
> Brick3: 192.168.254.50:/mnt/brick2
> Brick4: 192.168.254.51:/mnt/brick2
>
> My client has testvol mounted:
> 192.168.254.50:/testvol on /mnt/gluster-test type fuse.glusterfs
> (rw,default_permissions,allow_other,max_read=131072)
>
> I crudely killed all gluster services on 192.168.254.51 with "pkill
> -KILL gluster".
> From the other node:
> [root at 192.168.254.50 ~]# gluster peer status
> Number of Peers: 1
>
> Hostname: 192.168.254.51
> Uuid: a9793617-3813-4721-a827-475790685f2c
> State: Peer in Cluster (Disconnected)
>
> Try to write to the volume from a client:
>
> [root at client testvol]# dd if=/dev/zero of=foo bs=1M count=100
> 100+0 records in
> 100+0 records out
> 104857600 bytes (105 MB) copied, 1.30653 s, 80.3 MB/s
>
> [root at client testvol]# dd if=/dev/zero of=bar bs=1M count=100
> dd: opening `bar': Transport endpoint is not connected
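
That pattern is what you'd expect: files whose names hash to a brick on
the dead server can't be created, while everything else keeps working.
From a surviving node you can check which bricks the cluster still sees
as up; bricks on the downed node should show N in the Online column:

    # gluster volume status testvol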
>
> My client logs show entries like this:
> [2013-05-07 16:47:14.776711] W [common-utils.c:2330:gf_ports_reserved]
> 0-glusterfs-socket:  is not a valid port identifier
> [2013-05-07 16:47:14.776948] W [socket.c:514:__socket_rwv]
> 0-testvol-client-1: readv failed (No data available)
> [2013-05-07 16:47:14.779218] W [common-utils.c:2330:gf_ports_reserved]
> 0-glusterfs-socket:  is not a valid port identifier
> [2013-05-07 16:47:14.779380] W [socket.c:514:__socket_rwv]
> 0-testvol-client-3: readv failed (No data available)
> [2013-05-07 16:47:16.732909] W
> [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-testvol-client-1: remote
> operation failed: Transport endpoint is not connected. Path: /
> (00000000-0000-0000-0000-000000000001)
>
>
>
> - Andrew
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users