[Gluster-users] Hosed installation

Ryan Nix ryan.nix at gmail.com
Fri Oct 10 17:52:23 UTC 2014


Maybe we could have a script or something bundled with Gluster to perform
this operation, although hopefully it doesn't have to be run very often.
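
A rough sketch of what such a helper could look like (hypothetical, not
shipped with Gluster; it just wraps the manual cleanup steps from Joe's
blog post, with the brick path taken as an argument):

    #!/bin/sh
    # gluster-wipe-brick (hypothetical): clear the markers glusterd leaves
    # on a brick so the path can be reused in a new "volume create".
    # Usage: gluster-wipe-brick /path/to/brick
    brick="$1"
    [ -d "$brick" ] || { echo "no such brick directory: $brick" >&2; exit 1; }
    # Remove the extended attributes that mark the path as part of a volume.
    setfattr -x trusted.glusterfs.volume-id "$brick"
    setfattr -x trusted.gfid "$brick"
    # Remove the metadata tree GlusterFS keeps inside the brick.
    rm -rf "$brick/.glusterfs"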

On Fri, Oct 10, 2014 at 9:33 AM, Joe Julian <joe at julianfamily.org> wrote:

>  The developers are of the mind not to delete anything whose removal might
> cause data loss, so they leave that up to us to clean up manually.
>
>
> On 10/10/2014 5:53 AM, Ryan Nix wrote:
>
> So I had to force the volume to stop.  It seems the replace-brick
> operation was hung, and no matter what I did (restarting the gluster
> daemon, etc.), it wouldn't recover.  I also did a yum erase gluster*,
> removed the glusterd directory in /var/lib, and reinstalled.  Once I did
> that, I followed Joe's instructions at
> http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
> and was able to recreate the volume.
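>
> For reference, the recovery boiled down to something like this (volume
> name illustrative; the rm wipes all of glusterd's state):
>
>     gluster volume stop myvol force   # force-stop the hung volume
>     yum erase 'gluster*'              # remove the packages
>     rm -rf /var/lib/glusterd          # remove glusterd's state directory
>     yum install glusterfs-server      # reinstall
>     service glusterd start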
>
>  When you delete a volume in Gluster, is the .glusterfs directory
> supposed to be removed automatically?  If not, will future versions of
> Gluster do that?  It seems kind of silly that you have to go through
> Joe's instructions, which are 2.5 years old now.
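>
> A quick check on the brick after a "volume delete" shows it is left
> behind (brick path illustrative):
>
>     ls -ld /data/brick1/.glusterfs   # still present after the volume is gone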
>
> On Thu, Oct 9, 2014 at 11:11 AM, Ted Miller <tmiller at hcjb.org> wrote:
>
>>  On 10/7/2014 1:56 PM, Ryan Nix wrote:
>>
>> Hello,
>>
>>  I seem to have hosed my installation while trying to replace a failed
>> brick.  The instructions for replacing a brick with a different host
>> name/IP on the Gluster site are no longer available, so I used the
>> instructions from the Red Hat Storage class I attended last week, which
>> assumed the replacement had the same host name.
>>
>>
>> http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/
>>
>>  It seems the working server (I had two servers with simple replication
>> only) will not release the DNS entry of the failed one.
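>>
>>  For what it's worth, the stale entry shows up in "gluster peer status",
>> and something like this may drop it (hostname illustrative; I'm not sure
>> it succeeds while a volume still references the dead server):
>>
>>     gluster peer status                    # failed server still listed
>>     gluster peer detach failed-host force  # force-remove the dead peer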
>>
>>  Is there any way to simply reset Gluster completely?
>>
>>  The simple way to "reset gluster completely" would be to delete the
>> volume and start over.  Sometimes this is the quickest way, especially if
>> you only have one or two volumes.
>>
>> If nothing has changed, deleting the volume will not affect the data on
>> the brick.
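>>
>> In other words, something like this (volume name and brick path
>> illustrative) leaves the files in place:
>>
>>     gluster volume stop myvol
>>     gluster volume delete myvol
>>     ls /data/brick1   # files written through the volume are still here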
>>
>> You can either:
>> Find and follow the instructions to delete the "markers" (extended
>> attributes) that glusterfs puts on the brick, in which case the create
>> process is the same as any new volume creation.
>> Otherwise, when you do the "volume create ..." step, it will give you an
>> error, something like 'brick already in use'.  You used to be able to
>> override that by appending "force" to the command.  (I have not needed
>> it lately, so I don't know if it still works.)
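>>
>> To check whether a brick still carries those markers, and what the
>> forced create looks like (names illustrative; recent gluster versions
>> take a trailing "force" keyword rather than a --force flag):
>>
>>     getfattr -d -m . -e hex /data/brick1   # trusted.glusterfs.volume-id etc.
>>     gluster volume create myvol replica 2 \
>>         server1:/data/brick1 server2:/data/brick1 force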
>>
>> Hope this helps
>> Ted Miller
>> Elkhart, IN
>>
>>
>>
>>  Just to confirm: if I delete the volume so I can start over, the data
>> will not be deleted.  Is this correct?  Finally, once the volume is
>> deleted, do I have to do what Joe Julian recommended here?
>> http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
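>>
>>  For reference, the cleanup in that post amounts to clearing the markers
>> on each brick (brick path illustrative):
>>
>>     setfattr -x trusted.glusterfs.volume-id /data/brick1
>>     setfattr -x trusted.gfid /data/brick1
>>     rm -rf /data/brick1/.glusterfs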
>>
>>  Thanks for any insights.
>>
>>
>

