[Gluster-users] Distributed volume going to Read only mode if any of the Brick is not available
Atin Mukherjee
amukherj at redhat.com
Wed May 20 11:52:08 UTC 2015
On 05/20/2015 05:18 PM, Varadharajan S wrote:
> Hi Team,
> Can anyone comment on my query below, so that I can get a clear picture?
>
> Regards,
> Varad
> On 19 May 2015 20:28, "Varadharajan S" <rajanvaradhu at gmail.com> wrote:
>
>> FYI
>> On 19 May 2015 20:25, "Varadharajan S" <rajanvaradhu at gmail.com> wrote:
>>
>>> Hi,
>>> With replication I won't get the full capacity, and distribution is not
>>> like striping, right? If one brick in the volume is unavailable, can the
>>> other bricks distribute data among themselves? Is there any tuning that
>>> would solve this?
A distributed volume means that your files don't all end up on a single
brick; the load is spread across the bricks. It works by applying a hash
to each file name (DHT) to pick the brick that will hold that file.
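For example, a quick way to see the distribution in action (a minimal
sketch assuming the client mount and brick paths from your setup; which
brick each file lands on depends on the hash of its name):

  # on any client: create a few files; DHT hashes each name to a brick
  for i in 1 2 3 4 5; do touch /data/file$i; done

  # on each server: list the brick directory to see which files it received
  ls /pool/gluster

  # the per-directory hash ranges live in a brick-side xattr (run as root)
  getfattr -n trusted.glusterfs.dht -e hex /pool/gluster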
HTH,
Atin
>>> On 19 May 2015 20:02, "Atin Mukherjee" <atin.mukherjee83 at gmail.com>
>>> wrote:
>>>
>>>>
>>>> On 19 May 2015 17:10, "Varadharajan S" <rajanvaradhu at gmail.com> wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> We are using Ubuntu 14.04 Server, and for storage we configured
>>>>> Gluster 3.5 as a distributed volume. Details below:
>>>>>
>>>>> 1). 4 servers running Ubuntu 14.04 Server; on each server the free
>>>>> disk space is configured as a ZFS raidz2 volume
>>>>>
>>>>> 2). Each server has a /pool/gluster ZFS volume, with capacities of
>>>>> 5 TB, 8 TB, 6 TB and 10 TB
>>>>>
>>>>> 3). The bricks are rep1, rep2, rep3 and st1, and all four are combined
>>>>> into a distributed volume, mounted on each system as follows:
>>>>>
>>>>> E.g. on rep1 -> mount -t glusterfs rep1:/glustervol /data
>>>>> rep2 -> mount -t glusterfs rep2:/glustervol /data
>>>>> rep3 -> mount -t glusterfs rep3:/glustervol /data
>>>>> st1 -> mount -t glusterfs st1:/glustervol /data
>>>>>
>>>>> So /data comes to around 29 TB in total, and all our application data
>>>>> is stored under the /data mount point.
>>>>>
>>>>> Details about the volume:
>>>>>
>>>>> volume glustervol-client-0
>>>>> type protocol/client
>>>>> option send-gids true
>>>>> option password b217da9d1d8b-bb55
>>>>> option username 9d76-4553-8c75
>>>>> option transport-type tcp
>>>>> option remote-subvolume /pool/gluster
>>>>> option remote-host rep1
>>>>> option ping-timeout 42
>>>>> end-volume
>>>>>
>>>>> volume glustervol-client-1
>>>>> type protocol/client
>>>>> option send-gids true
>>>>> option password b217da9d1d8b-bb55
>>>>> option username jkd76-4553-5347
>>>>> option transport-type tcp
>>>>> option remote-subvolume /pool/gluster
>>>>> option remote-host rep2
>>>>> option ping-timeout 42
>>>>> end-volume
>>>>>
>>>>> volume glustervol-client-2
>>>>> type protocol/client
>>>>> option send-gids true
>>>>> option password b217da9d1d8b-bb55
>>>>> option username 19d7-5a190c2
>>>>> option transport-type tcp
>>>>> option remote-subvolume /pool/gluster
>>>>> option remote-host rep3
>>>>> option ping-timeout 42
>>>>> end-volume
>>>>>
>>>>> volume glustervol-client-3
>>>>> type protocol/client
>>>>> option send-gids true
>>>>> option password b217da9d1d8b-bb55
>>>>> option username c75-5436b5a168347
>>>>> option transport-type tcp
>>>>> option remote-subvolume /pool/gluster
>>>>> option remote-host st1
>>>>> option ping-timeout 42
>>>>> end-volume
>>>>>
>>>>> volume glustervol-dht
>>>>> type cluster/distribute
>>>>> subvolumes glustervol-client-0 glustervol-client-1 glustervol-client-2 glustervol-client-3
>>>>> end-volume
>>>>>
>>>>> volume glustervol-write-behind
>>>>> type performance/write-behind
>>>>> subvolumes glustervol-dht
>>>>> end-volume
>>>>>
>>>>> volume glustervol-read-ahead
>>>>> type performance/read-ahead
>>>>> subvolumes glustervol-write-behind
>>>>> end-volume
>>>>>
>>>>> volume glustervol-io-cache
>>>>> type performance/io-cache
>>>>> subvolumes glustervol-read-ahead
>>>>> end-volume
>>>>>
>>>>> volume glustervol-quick-read
>>>>> type performance/quick-read
>>>>> subvolumes glustervol-io-cache
>>>>> end-volume
>>>>>
>>>>> volume glustervol-open-behind
>>>>> type performance/open-behind
>>>>> subvolumes glustervol-quick-read
>>>>> end-volume
>>>>>
>>>>> volume glustervol-md-cache
>>>>> type performance/md-cache
>>>>> subvolumes glustervol-open-behind
>>>>> end-volume
>>>>>
>>>>> volume glustervol
>>>>> type debug/io-stats
>>>>> option count-fop-hits off
>>>>> option latency-measurement off
>>>>> subvolumes glustervol-md-cache
>>>>> end-volume
>>>>>
>>>>>
>>>>> ap at rep3:~$ sudo gluster volume info
>>>>>
>>>>> Volume Name: glustervol
>>>>> Type: Distribute
>>>>> Volume ID: 165b-XXXXX
>>>>> Status: Started
>>>>> Number of Bricks: 4
>>>>> Transport-type: tcp
>>>>> Bricks:
>>>>> Brick1: rep1:/pool/gluster
>>>>> Brick2: rep2:/pool/gluster
>>>>> Brick3: rep3:/pool/gluster
>>>>> Brick4: st1:/pool/gluster
>>>>>
>>>>> Problem:
>>>>>
>>>>> If we shut down any of the bricks, the volume size is reduced (this is
>>>>> expected), but from the other servers I can still see the /data mount
>>>>> point; it only lists the contents, and I can't write to or edit any
>>>>> files/folders.
>>>>>
>>>>> Solution Required:
>>>>>
>>>>> If any one brick is not available, the other servers should still allow
>>>>> write and edit operations.
>>>> This is expected since you are using a distributed volume: writes and
>>>> edits to files whose hash maps to the brick that is down will fail. The
>>>> solution would be to migrate to a distributed-replicate volume.
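>>>> As a sketch, assuming you can add four more bricks of matching size
>>>> (the rep1b/rep2b/rep3b/st1b hostnames below are hypothetical), you
>>>> could raise the replica count to 2 with add-brick and then trigger a
>>>> full self-heal to copy existing data onto the new bricks:
>>>>
>>>>   gluster volume add-brick glustervol replica 2 \
>>>>       rep1b:/pool/gluster rep2b:/pool/gluster \
>>>>       rep3b:/pool/gluster st1b:/pool/gluster
>>>>   gluster volume heal glustervol full
>>>>
>>>> Keep in mind replica 2 halves the usable capacity (your ~29 TB becomes
>>>> ~14.5 TB), but the volume then stays writable when one brick of a pair
>>>> goes down.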
>>>>>
>>>>> Please let us know what else I can try.
>>>>>
>>>>> Regards,
>>>>> Varad
>>>>>
>>>>>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
--
~Atin