[Gluster-users] glusterfs replication issue

Curro Rodriguez curro at tyba.com
Tue Jun 30 10:08:10 UTC 2015


I am using gluster 3.7.2 on docker 1.6.2 on centos 7.1.
I had a two-brick replicated Gluster volume, but I lost one of the bricks, so I
added a new brick and removed the old one from the replica set.
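For reference, the add/remove steps were along these lines (the old brick's hostname, gfs-1, is a placeholder for the lost node; gluster 3.7 can also do this in a single step with replace-brick):

```shell
# Drop the lost brick, shrinking to a single replica (gfs-1 is a placeholder):
gluster volume remove-brick datastore replica 1 gfs-1.xx.xx.xx:/raw force
# Add the new brick, restoring replica 2:
gluster volume add-brick datastore replica 2 gfs-3.xx.xx.xx:/raw

# Equivalent one-step alternative in 3.7:
gluster volume replace-brick datastore gfs-1.xx.xx.xx:/raw \
    gfs-3.xx.xx.xx:/raw commit force
```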

When I check the volume info, it shows this:

gluster volume info

Volume Name: datastore
Type: Replicate
Volume ID: 4dd0a307-e44c-4655-9216-8d470f3a0d33
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: gfs-2.xx.xx.xx:/raw
Brick2: gfs-3.xx.xx.xx:/raw
Options Reconfigured:
cluster.self-heal-daemon: enable

When I mount the volume with the client from other servers and add data, it
works fine, and the new data is replicated without problems.

The issue is that the old data from gfs-2 is not present on gfs-3. Both peers
are probed and appear in the pool list; they are connected and the UUIDs are fine.
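The peer checks mentioned above were:

```shell
# Verify that both nodes see each other and agree on UUIDs:
gluster peer status
# List every node in the trusted storage pool:
gluster pool list
```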

When I check the volume status I get this:

gluster volume status
Status of volume: datastore
Gluster process                             TCP Port  RDMA Port  Online  Pid
Brick gfs-2.xx.xx.xx:/raw                49152     0          Y       116
Brick gfs-3.xx.xx.xx:/raw                49153     0          Y       132
NFS Server on localhost                     N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       110
NFS Server on gfs-3.xx.xx.cc             N/A       N/A        N       N/A
Self-heal Daemon on gfs-3.xx.xx.cc       N/A       N/A        Y       122

Task Status of Volume datastore
There are no active volume tasks

I don't know how to repair this. I tried gluster volume heal datastore on
gfs-3, but it produces a huge list of files. What should I do? Continue with
the heal on gfs-3?
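The commands in question, for reference (heal ... full forces a full crawl of the brick instead of relying only on the pending-heal index, which matters when the new brick started out empty):

```shell
# Force a full self-heal crawl so pre-existing data is copied to the new brick:
gluster volume heal datastore full
# Watch the list of entries still pending heal; it should shrink over time:
gluster volume heal datastore info
```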

Thank you in advance.

Kind regards.