[Gluster-users] restoring brick on new server fails on glusterfs

Merlin Morgenstern merlin.morgenstern at gmail.com
Thu Nov 19 20:00:01 UTC 2015


I am trying to attach a brick from another server to a local gluster
development server. To do this, I did a dd from a snapshot on production
and a dd onto the LVM volume on development. Then I deleted the .glusterfs
folder in the brick root.

Unfortunately, creating a new volume with that brick failed nevertheless,
with the message that the brick is already part of a volume. (How does
gluster know that?!)
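
If I understand correctly, gluster recognises a brick through extended
attributes stored on the brick root (plus the .glusterfs directory inside
it); these can be listed with getfattr, for example:

sudo getfattr -d -m . -e hex /bricks/staging/brick1/
# should list trusted.gfid and trusted.glusterfs.volume-id among others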

I then issued the following:

sudo setfattr -x trusted.gfid /bricks/staging/brick1/
sudo setfattr -x trusted.glusterfs.volume-id /bricks/staging/brick1/
sudo /etc/init.d/glusterfs-server restart
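
As a sanity check that the cleanup actually worked, I assume both the
attributes and the .glusterfs directory should now be gone, e.g.:

sudo getfattr -d -m . -e hex /bricks/staging/brick1/   # trusted.glusterfs.volume-id should be gone
ls -a /bricks/staging/brick1/                          # .glusterfs should be gone as well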

Magically, gluster still seems to know that this brick is from another
server, as it knows the peered gluster nodes, which are apparently
different on the dev server:

sudo gluster volume create staging node1:/bricks/staging/brick1

volume create: staging: failed: Staging failed on gs3. Error: Host node1 is
not in 'Peer in Cluster' state

Staging failed on gs2. Error: Host node1 is not in 'Peer in Cluster' state
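
I assume glusterd keeps its own peer and volume state under
/var/lib/glusterd on each node, so the peers it complains about on the
dev server should be visible with:

sudo gluster peer status            # peers as seen by the local glusterd
sudo ls /var/lib/glusterd/peers/    # on-disk peer records, one file per peer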

Is there a way to restore that brick on a new server? Thank you for any
help on this.