[Gluster-users] Hosed installation

Ryan Nix ryan.nix at gmail.com
Fri Oct 10 12:53:09 UTC 2014


So I had to force the volume to stop.  It seems the replace-brick operation
was hung, and no matter what I did (restarting the gluster daemon, etc.), it
wouldn't work.  I also did a yum erase gluster*, removed the glusterd
directory in /var/lib, and reinstalled.  Once I did that, I followed Joe's
instructions
http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
and
was able to recreate the volume.
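For anyone who finds this thread later: the cleanup in Joe's post boils down
to clearing the markers gluster leaves on the brick before you recreate the
volume.  Roughly this, with /data/brick1 standing in for the actual brick
path:

    # remove the xattrs that mark the directory as part of a volume
    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    # remove gluster's internal metadata directory
    rm -rf /data/brick1/.glusterfs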

When you delete a volume in Gluster, is the .glusterfs directory supposed
to be automatically removed?  If not, will future versions of Gluster do
that?  Seems kind of silly that you have to go through Joe's instructions,
which are 2.5 years old now.

On Thu, Oct 9, 2014 at 11:11 AM, Ted Miller <tmiller at hcjb.org> wrote:

>  On 10/7/2014 1:56 PM, Ryan Nix wrote:
>
> Hello,
>
>  I seem to have hosed my installation while trying to replace a failed
> brick.  The instructions for replacing the brick with a different host
> name/IP on the Gluster site are no longer available so I used the
> instructions from the Red Hat Storage class that I attended last week, which
> assumed the replacement had the same host name.
>
>
> http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/
>
>  It seems the working server (I had two servers with simple replication
> only) will not release the DNS entry of the failed server.
>
>  Is there any way to simply reset Gluster completely?
>
> The simple way to "reset gluster completely" would be to delete the volume
> and start over.  Sometimes this is the quickest way, especially if you only
> have one or two volumes.
>
> If nothing has changed, deleting the volume will not affect the data on
> the brick.
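>
> For reference, the delete itself is just two commands (the volume name
> here is a placeholder):
>
>     gluster volume stop myvol
>     gluster volume delete myvol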
>
> You can either:
>
> 1) Find and follow the instructions to delete the "markers" that glusterfs
> puts on the brick, in which case the create process should be the same as
> any new volume creation; or
>
> 2) skip that cleanup, in which case the "volume create..." step will give
> you an error, something like 'brick already in use'.  You used to be able
> to override that by appending force to the command (see the sketch below).
> (Have not needed it lately, so don't know if it still works.)
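>
> A rough example of option 2, assuming force is still accepted, with
> placeholder volume name, host names, and brick paths:
>
>     # recreate a two-way replica on the same bricks; the trailing
>     # "force" keyword overrides the 'already in use' check
>     gluster volume create myvol replica 2 \
>         server1:/data/brick1 server2:/data/brick1 force
>     gluster volume start myvol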
>
> Hope this helps
> Ted Miller
> Elkhart, IN
>
>
>
>  Just to confirm: if I delete the volume so I can start over, the data
> will not be deleted.  Is this correct?  Finally, once the volume is
> deleted, do I have to do what Joe Julian recommended here?
> http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
>
>  Thanks for any insights.
>
>  - Ryan
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
>

