[Gluster-users] Distributed volume going to Read only mode if any of the Brick is not available

M S Vishwanath Bhat msvbhat at gmail.com
Wed May 20 11:59:21 UTC 2015


On 20 May 2015 at 17:18, Varadharajan S <rajanvaradhu at gmail.com> wrote:

> Hi Team,
> Anyone can suggest my below query, so that I can get clear idea
>
> Regards,
> Varad
> On 19 May 2015 20:28, "Varadharajan S" <rajanvaradhu at gmail.com> wrote:
>
>> FYI
>> On 19 May 2015 20:25, "Varadharajan S" <rajanvaradhu at gmail.com> wrote:
>>
>>> Hi,
>>> Replication means I won't get the full usable space. Distribution is not
>>> like striping, right? If one brick in the volume is not available, can the
>>> other bricks distribute the data among themselves? Is there any tuning that
>>> would solve this?
>>>
>> In a pure distribute volume, there is no duplicate of a file. So when the
brick/server containing a file goes down, you lose that data.

And about newly created data: *IF* a file gets hashed to the brick/server
which is down, you get errors. If it gets hashed to a brick/server which is
online, it works just fine.

NOTE: If you create new directories, they get created on the bricks which
are up. Meaning in your case, new files get distributed among the 3 bricks
which are up.
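As an aside, you can check which brick a given file actually hashed to by
querying the DHT pathinfo virtual xattr from a client mount (a sketch;
the file name is hypothetical, and getfattr comes from the "attr" package):

```shell
# Run on any client where the volume is mounted at /data.
# The reply names the brick (host:/path) that holds the file.
getfattr -n trusted.glusterfs.pathinfo /data/somefile.txt
```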


So if you want redundancy but do not want the space disadvantage of pure
replication, why not try a disperse volume?
http://www.gluster.org/community/documentation/index.php/Features/disperse
But for that you will have to upgrade to the latest glusterfs-3.7 release.
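For example, a disperse volume over the same four servers could look like
this (a sketch, assuming glusterfs-3.7 and fresh, empty brick directories;
the brick paths are hypothetical):

```shell
# 4 bricks with redundancy 1: usable capacity is roughly 3 bricks' worth,
# and the volume stays readable AND writable with any single brick offline.
gluster volume create dispvol disperse 4 redundancy 1 \
    rep1:/pool/gluster-ec rep2:/pool/gluster-ec \
    rep3:/pool/gluster-ec st1:/pool/gluster-ec
gluster volume start dispvol
```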

HTH

Best Regards,
Vishwanath

 On 19 May 2015 20:02, "Atin Mukherjee" <atin.mukherjee83 at gmail.com> wrote:
>>>
>>>>
>>>> On 19 May 2015 17:10, "Varadharajan S" <rajanvaradhu at gmail.com> wrote:
>>>> >
>>>> > Hi,
>>>> >
>>>> > We are using Ubuntu 14.04 server, and for storage we configured
>>>> gluster 3.5 as a distributed volume. Find the details below:
>>>> >
>>>> > 1). 4 servers - Ubuntu 14.04 Server, and each server's free disks
>>>> are configured as a ZFS raidz2 pool
>>>> >
>>>> > 2). Each server has a /pool/gluster ZFS volume, with capacities of 5 TB, 8
>>>> TB, 6 TB and 10 TB
>>>> >
>>>> > 3). Bricks are rep1, rep2, rep3 and st1, and all the bricks are
>>>> combined into a distributed volume and mounted on each system as:
>>>> >
>>>> >   For e.g. on rep1 -> mount -t glusterfs rep1:/glustervol /data
>>>> >               rep2 -> mount -t glusterfs rep2:/glustervol /data
>>>> >               rep3 -> mount -t glusterfs rep3:/glustervol /data
>>>> >               st1  -> mount -t glusterfs st1:/glustervol /data
>>>> >
>>>> > So /data has around 29 TB in total, and all our application data is
>>>> stored under the /data mount point.
>>>> >
>>>> > Details about volume:
>>>> >
>>>> > volume glustervol-client-0
>>>> >     type protocol/client
>>>> >     option send-gids true
>>>> >     option password b217da9d1d8b-bb55
>>>> >     option username 9d76-4553-8c75
>>>> >     option transport-type tcp
>>>> >     option remote-subvolume /pool/gluster
>>>> >     option remote-host rep1
>>>> >     option ping-timeout 42
>>>> > end-volume
>>>> >
>>>> > volume glustervol-client-1
>>>> >     type protocol/client
>>>> >     option send-gids true
>>>> >     option password b217da9d1d8b-bb55
>>>> >     option username jkd76-4553-5347
>>>> >     option transport-type tcp
>>>> >     option remote-subvolume /pool/gluster
>>>> >     option remote-host rep2
>>>> >     option ping-timeout 42
>>>> > end-volume
>>>> >
>>>> > volume glustervol-client-2
>>>> >     type protocol/client
>>>> >     option send-gids true
>>>> >     option password b217da9d1d8b-bb55
>>>> >     option username 19d7-5a190c2
>>>> >     option transport-type tcp
>>>> >     option remote-subvolume /pool/gluster
>>>> >     option remote-host rep3
>>>> >     option ping-timeout 42
>>>> > end-volume
>>>> >
>>>> > volume glustervol-client-3
>>>> >     type protocol/client
>>>> >     option send-gids true
>>>> >     option password b217da9d1d8b-bb55
>>>> >     option username c75-5436b5a168347
>>>> >     option transport-type tcp
>>>> >     option remote-subvolume /pool/gluster
>>>> >     option remote-host st1
>>>> >
>>>> >     option ping-timeout 42
>>>> > end-volume
>>>> >
>>>> > volume glustervol-dht
>>>> >     type cluster/distribute
>>>> >     subvolumes glustervol-client-0 glustervol-client-1
>>>> glustervol-client-2 glustervol-client-3
>>>> > end-volume
>>>> >
>>>> > volume glustervol-write-behind
>>>> >     type performance/write-behind
>>>> >     subvolumes glustervol-dht
>>>> > end-volume
>>>> >
>>>> > volume glustervol-read-ahead
>>>> >     type performance/read-ahead
>>>> >     subvolumes glustervol-write-behind
>>>> > end-volume
>>>> >
>>>> > volume glustervol-io-cache
>>>> >     type performance/io-cache
>>>> >     subvolumes glustervol-read-ahead
>>>> > end-volume
>>>> >
>>>> > volume glustervol-quick-read
>>>> >     type performance/quick-read
>>>> >     subvolumes glustervol-io-cache
>>>> > end-volume
>>>> >
>>>> > volume glustervol-open-behind
>>>> >     type performance/open-behind
>>>> >     subvolumes glustervol-quick-read
>>>> > end-volume
>>>> >
>>>> > volume glustervol-md-cache
>>>> >     type performance/md-cache
>>>> >     subvolumes glustervol-open-behind
>>>> > end-volume
>>>> >
>>>> > volume glustervol
>>>> >     type debug/io-stats
>>>> >     option count-fop-hits off
>>>> >     option latency-measurement off
>>>> >     subvolumes glustervol-md-cache
>>>> > end-volume
>>>> >
>>>> >
>>>> > ap at rep3:~$ sudo gluster volume info
>>>> >
>>>> > Volume Name: glustervol
>>>> > Type: Distribute
>>>> > Volume ID: 165b-XXXXX
>>>> > Status: Started
>>>> > Number of Bricks: 4
>>>> > Transport-type: tcp
>>>> > Bricks:
>>>> > Brick1: rep1:/pool/gluster
>>>> > Brick2: rep2:/pool/gluster
>>>> > Brick3: rep3:/pool/gluster
>>>> > Brick4: st1:/pool/gluster
>>>> >
>>>> > Problem:
>>>> >
>>>> > If we shut down any of the bricks, the volume size is reduced (this
>>>> is OK), but from the other servers I can still see my mount point /data; it
>>>> only lists contents, and I can't write to or edit any files/folders.
>>>> >
>>>> > Solution Required:
>>>> >
If any one brick is not available, the other servers should still allow
write and edit operations.
>>>> This is expected, since you are using a distributed volume. You wouldn't
>>>> be able to write/edit files belonging to the brick which is down. The
>>>> solution would be to migrate to a distributed-replicate volume.
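One migration path is to raise the replica count while adding new bricks,
so each existing brick gets a replica partner (a sketch; the new1..new4
hostnames and brick paths are hypothetical, and this assumes four extra
bricks of matching size are available):

```shell
# Pair each existing brick with a new one: replica pairs are formed from
# consecutive bricks, so order matters (rep1<->new1, rep2<->new2, ...).
gluster volume add-brick glustervol replica 2 \
    new1:/pool/gluster new2:/pool/gluster \
    new3:/pool/gluster new4:/pool/gluster
# Trigger self-heal to copy existing data onto the new replica bricks.
gluster volume heal glustervol full
```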
>>>> >
>>>> > Please let us know, what can i try further ?
>>>> >
>>>> > Regards,
>>>> > Varad
>>>> >
>>>> >
>>>> > _______________________________________________
>>>> > Gluster-users mailing list
>>>> > Gluster-users at gluster.org
>>>> > http://www.gluster.org/mailman/listinfo/gluster-users
>>>>
>>>

