[Gluster-users] Creating cluster replica on 2 nodes 2 bricks each.

Nithya Balachandran nbalacha at redhat.com
Thu Jan 11 04:02:07 UTC 2018


Hi Jose,

Gluster is working as expected. The Distributed-Replicate type just means
that there are now 2 replica sets and files will be distributed across
them.

A volume of type Replicate (1 x n, where n is the number of bricks in the
replica set) indicates there is no distribution: every file on the
volume is present on all the bricks in the volume.
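
For comparison, if the goal were for every brick to hold every file (pure
Replicate across all 4 bricks), the add-brick would need to raise the replica
count instead. A rough sketch, reusing your brick paths, would be something
like:

    # raises the replica count from 2 to 4; usable capacity stays at 14TB
    gluster volume add-brick scratch replica 4 \
        gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch

With that layout the volume keeps the type Replicate (1 x 4 = 4), but the
usable space does not grow.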


A volume of type Distributed-Replicate indicates the volume is both
distributed (each file is created on only one of the replica sets) and
replicated. So in your volume, a given file will exist on either
Brick1 and Brick2, or on Brick3 and Brick4.
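
To illustrate the distribution from a client (the mount point and file names
below are just assumptions for the sketch):

    mount -t glusterfs gluster01ib:/scratch /mnt/scratch
    touch /mnt/scratch/file1 /mnt/scratch/file2
    # Each file name hashes to exactly one replica pair, so file1 might be
    # stored on Brick1 and Brick2 while file2 lands on Brick3 and Brick4.
    # Clients see the full namespace regardless of which pair holds a file.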


After the add-brick, the volume has a total capacity of 28TB and
stores 2 copies of every file. Let me know if that is not what you are
looking for.
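
One general follow-up (standard practice after an add-brick, not something
specific to your output): run a rebalance so that existing data and the
directory layout are spread across the new replica set as well, for example:

    gluster volume rebalance scratch start
    gluster volume rebalance scratch status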


Regards,
Nithya


On 10 January 2018 at 20:40, Jose Sanchez <josesanc at carc.unm.edu> wrote:

>
>
> Hi Nithya
>
> This is what I have so far. I peered both cluster nodes together and
> created the replica from the first pair of bricks on nodes 1 and 2. Now,
> when I try to add the second pair of bricks, I get an error that they are
> already part of a volume, and when I run gluster volume info, I see that
> the volume has switched to Distributed-Replicate.
>
> Thanks
>
> Jose
>
>
>
>
>
> [root@gluster01 ~]# gluster volume status
> Status of volume: scratch
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick gluster01ib:/gdata/brick1/scratch     49152     49153      Y       3140
> Brick gluster02ib:/gdata/brick1/scratch     49153     49154      Y       2634
> Self-heal Daemon on localhost               N/A       N/A        Y       3132
> Self-heal Daemon on gluster02ib             N/A       N/A        Y       2626
>
>
> Task Status of Volume scratch
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
> [root@gluster01 ~]#
>
> [root@gluster01 ~]# gluster volume info
>
>
> Volume Name: scratch
> Type: *Replicate*
> Volume ID: a6e20f7d-13ed-4293-ab8b-d783d1748246
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp,rdma
> Bricks:
> Brick1: gluster01ib:/gdata/brick1/scratch
> Brick2: gluster02ib:/gdata/brick1/scratch
> Options Reconfigured:
> performance.readdir-ahead: on
> nfs.disable: on
> [root@gluster01 ~]#
>
>
> -------------------------------------
>
> [root@gluster01 ~]# gluster volume add-brick scratch replica 2
> gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
> volume add-brick: failed: /gdata/brick2/scratch is already part of a volume
>
>
> [root@gluster01 ~]# gluster volume status
> Status of volume: scratch
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick gluster01ib:/gdata/brick1/scratch     49152     49153      Y       3140
> Brick gluster02ib:/gdata/brick1/scratch     49153     49154      Y       2634
> Self-heal Daemon on gluster02ib             N/A       N/A        Y       2626
> Self-heal Daemon on localhost               N/A       N/A        Y       3132
>
>
> Task Status of Volume scratch
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
> [root@gluster01 ~]# gluster volume info
>
>
> Volume Name: scratch
> Type: *Distributed-Replicate*
> Volume ID: a6e20f7d-13ed-4293-ab8b-d783d1748246
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp,rdma
> Bricks:
> Brick1: gluster01ib:/gdata/brick1/scratch
> Brick2: gluster02ib:/gdata/brick1/scratch
> Brick3: gluster01ib:/gdata/brick2/scratch
> Brick4: gluster02ib:/gdata/brick2/scratch
> Options Reconfigured:
> performance.readdir-ahead: on
> nfs.disable: on
> [root@gluster01 ~]#
>
>
>
> --------------------------------
> Jose Sanchez
> Center of Advanced Research Computing
> Albuquerque, NM 87131-0001
> carc.unm.edu
>
>
> On Jan 9, 2018, at 9:04 PM, Nithya Balachandran <nbalacha at redhat.com>
> wrote:
>
> Hi,
>
> Please let us know what commands you ran so far and the output of the *gluster
> volume info* command.
>
> Thanks,
> Nithya
>
> On 9 January 2018 at 23:06, Jose Sanchez <josesanc at carc.unm.edu> wrote:
>
>> Hello
>>
>> We are trying to set up Gluster for our project/scratch storage HPC
>> machine, using replicated mode with 2 nodes and 2 bricks each (14TB each).
>>
>> Our goal is to have a replicated system between nodes 1 and 2 (the A
>> bricks) and then add an additional 2 bricks (the B bricks) from the 2
>> nodes, so we can have a total of 28TB in replicated mode.
>>
>> Node 1 [ (Brick A) (Brick B) ]
>> Node 2 [ (Brick A) (Brick B) ]
>> --------------------------------------------
>> 14TB + 14TB = 28TB
>>
>> At this point I was able to create the replica between nodes 1 and 2
>> (brick A), but I have not been able to add the second pair of bricks to
>> the same replica; Gluster switches to distributed-replica when I add
>> them with only 14TB.
>>
>> Any help will be appreciated.
>>
>> Thanks
>>
>> Jose
>>
>> ---------------------------------
>> Jose Sanchez
>> Center of Advanced Research Computing
>> Albuquerque, NM 87131
>> carc.unm.edu
>>
>>
>
>
>