[Gluster-users] Creating a cluster replica on 2 nodes with 2 bricks each

Jose Sanchez josesanc at carc.unm.edu
Wed Jan 10 15:10:13 UTC 2018



Hi Nithya

This is what I have so far: I have peered both cluster nodes together and created the replica volume from bricks 1A and 1B. Now, when I try to add the second pair of bricks, I get an error that the brick is already part of a volume, and when I run gluster volume info I see that the volume has switched to Distributed-Replicate.

Thanks

Jose





[root@gluster01 ~]# gluster volume status
Status of volume: scratch
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01ib:/gdata/brick1/scratch     49152     49153      Y       3140 
Brick gluster02ib:/gdata/brick1/scratch     49153     49154      Y       2634 
Self-heal Daemon on localhost               N/A       N/A        Y       3132 
Self-heal Daemon on gluster02ib             N/A       N/A        Y       2626 
 
Task Status of Volume scratch
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@gluster01 ~]#

[root@gluster01 ~]# gluster volume info
 
Volume Name: scratch
Type: Replicate
Volume ID: a6e20f7d-13ed-4293-ab8b-d783d1748246
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp,rdma
Bricks:
Brick1: gluster01ib:/gdata/brick1/scratch
Brick2: gluster02ib:/gdata/brick1/scratch
Options Reconfigured:
performance.readdir-ahead: on
nfs.disable: on
[root@gluster01 ~]#
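
For reference, the existing 1 x 2 replica above would have been created with something along these lines. The create command is not shown in this thread, so this is only a sketch reconstructed from the brick paths and transport listed in the volume info:

[root@gluster01 ~]# gluster volume create scratch replica 2 transport tcp,rdma gluster01ib:/gdata/brick1/scratch gluster02ib:/gdata/brick1/scratch
[root@gluster01 ~]# gluster volume start scratch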


-------------------------------------

[root@gluster01 ~]# gluster volume add-brick scratch replica 2 gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
volume add-brick: failed: /gdata/brick2/scratch is already part of a volume
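
The "already part of a volume" error means the brick directory already carries Gluster's brick metadata (the trusted.glusterfs.volume-id extended attribute and/or a .glusterfs directory), either because the brick already belongs to this volume or because it was used by a volume previously. A quick way to check, as a sketch (run on each node against the brick2 path):

[root@gluster01 ~]# getfattr -d -m . -e hex /gdata/brick2/scratch

If the directory is not actually part of any volume and holds no data that matters, the leftover metadata can be cleared before retrying the add-brick; only do this on a brick that is genuinely unused:

[root@gluster01 ~]# setfattr -x trusted.glusterfs.volume-id /gdata/brick2/scratch
[root@gluster01 ~]# setfattr -x trusted.gfid /gdata/brick2/scratch
[root@gluster01 ~]# rm -rf /gdata/brick2/scratch/.glusterfs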


[root@gluster01 ~]# gluster volume status
Status of volume: scratch
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01ib:/gdata/brick1/scratch     49152     49153      Y       3140 
Brick gluster02ib:/gdata/brick1/scratch     49153     49154      Y       2634 
Self-heal Daemon on gluster02ib             N/A       N/A        Y       2626 
Self-heal Daemon on localhost               N/A       N/A        Y       3132 
 
Task Status of Volume scratch
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@gluster01 ~]# gluster volume info
 
Volume Name: scratch
Type: Distributed-Replicate
Volume ID: a6e20f7d-13ed-4293-ab8b-d783d1748246
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp,rdma
Bricks:
Brick1: gluster01ib:/gdata/brick1/scratch
Brick2: gluster02ib:/gdata/brick1/scratch
Brick3: gluster01ib:/gdata/brick2/scratch
Brick4: gluster02ib:/gdata/brick2/scratch
Options Reconfigured:
performance.readdir-ahead: on
nfs.disable: on
[root@gluster01 ~]#
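
As the volume info above shows, with "replica 2" Gluster pairs the bricks in the order they were given: Brick1 and Brick2 form one mirrored pair, Brick3 and Brick4 the other, and files are distributed across the two pairs. Every file is still stored on both nodes, so a 2 x 2 Distributed-Replicate layout with 14 TB bricks gives roughly 28 TB usable. After the distribute layer is expanded, the usual follow-up (assuming existing data should be spread onto the new brick pair) is a rebalance, for example:

[root@gluster01 ~]# gluster volume rebalance scratch start
[root@gluster01 ~]# gluster volume rebalance scratch status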



--------------------------------
Jose Sanchez
Center of Advanced Research Computing
Albuquerque, NM 87131-0001
carc.unm.edu


> On Jan 9, 2018, at 9:04 PM, Nithya Balachandran <nbalacha at redhat.com> wrote:
> 
> Hi,
> 
> Please let us know what commands you ran so far and the output of the gluster volume info command.
> 
> Thanks,
> Nithya
> 
> On 9 January 2018 at 23:06, Jose Sanchez <josesanc at carc.unm.edu> wrote:
> Hello
> 
> We are trying to set up Gluster for our project/scratch storage on an HPC machine, using replicated mode with 2 nodes and 2 bricks each (14 TB each).
> 
> Our goal is to have a replicated system between nodes 1 and 2 (the A bricks) and then add the additional 2 bricks (the B bricks) from the 2 nodes, so we can have a total of 28 TB in replicated mode.
> 
> Node 1 [ (Brick A) (Brick B) ]
> Node 2 [ (Brick A) (Brick B) ]
> --------------------------------------------
> 		14 TB + 14 TB = 28 TB
> 
> At this point I was able to create the replica between nodes 1 and 2 (the A bricks), but I have not been able to add the second pair of bricks to the replica; Gluster switches to distributed-replicate when I add them, with only 14 TB.
> 
> Any help will be appreciated.
> 
> Thanks
> 
> Jose
> 
> ---------------------------------
> Jose Sanchez
> Center of Advanced Research Computing
> Albuquerque, NM 87131
> carc.unm.edu
> 
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
> 
