[heketi-devel] Start Heketi setup afresh

Gaurav Chhabra varuag.chhabra at gmail.com
Fri Oct 6 04:46:26 UTC 2017


Thanks, Talur, for sharing the commands. Once I start with the setup
(hopefully next week), I will try these and let you know if I need further
help on this.


Regards,
Gaurav


On Thu, Oct 5, 2017 at 5:07 PM, Raghavendra Talur <rtalur at redhat.com> wrote:

> On Fri, Sep 29, 2017 at 6:28 PM, Raghavendra Talur <rtalur at redhat.com>
> wrote:
> > On Thu, Sep 28, 2017 at 7:42 PM, Gaurav Chhabra
> > <varuag.chhabra at gmail.com> wrote:
> >> Thanks for the response, Talur. I would really appreciate it if you
> >> already have the steps for cleaning up the setup so I could start
> >> afresh.
> >>
> >
> > I don't have the steps handy. Will work on it and send a PR on Monday.
>
> I got delayed due to other issues.
>
> Here is a sample set of steps:
>
> I have
>
> [root at dhcp42-96 ~]# heketi-cli volume list
> Id:0b83f800ef785ba6e889091f40f3d0d2 Cluster:4151837e55c2c943e457e9c2b84ae1c7 Name:vol_0b83f800ef785ba6e889091f40f3d0d2
> Id:0cb426c7677a10d96b0d50b58ddbedd2 Cluster:4151837e55c2c943e457e9c2b84ae1c7 Name:vol_0cb426c7677a10d96b0d50b58ddbedd2
> Id:17c98a4da7ef81fae4f364219920a1cb Cluster:4151837e55c2c943e457e9c2b84ae1c7 Name:vol_17c98a4da7ef81fae4f364219920a1cb
> Id:23a4cfd7263155059d1d8c431840df7e Cluster:4151837e55c2c943e457e9c2b84ae1c7 Name:heketidbstorage
> Id:41ddd6957a762f5f174bc06bea2ffe2a Cluster:4151837e55c2c943e457e9c2b84ae1c7 Name:vol_41ddd6957a762f5f174bc06bea2ffe2a
> Id:a7589a26144910f91ec7d1032c1e0e76 Cluster:4151837e55c2c943e457e9c2b84ae1c7 Name:vol_a7589a26144910f91ec7d1032c1e0e76
>
> [root at dhcp42-96 ~]# heketi-cli topology info | grep -e "Used"
>                 Id:9a6eb125daece741716034c4684a3687   Name:/dev/vdd
>         State:online    Size (GiB):499     Used (GiB):6       Free (GiB):493
>                 Id:d3ca1aff8d9f210b2d5bdb09525cf1fe   Name:/dev/vdb
>         State:online    Size (GiB):499     Used (GiB):8       Free (GiB):491
>                 Id:486b9e044b3caaa5f47aba35de22cd9e   Name:/dev/vdb
>         State:online    Size (GiB):499     Used (GiB):7       Free (GiB):492
>                 Id:f7a58f8760eab2a7d5917baf6b367a97   Name:/dev/vdd
>         State:online    Size (GiB):499     Used (GiB):7       Free (GiB):492
>                 Id:2812a51a72ed6735995d1368c36a4899   Name:/dev/vdd
>         State:online    Size (GiB):499     Used (GiB):5       Free (GiB):494
>                 Id:c2128fe17a620ce70821aaf7587dc48f   Name:/dev/vdb
>         State:online    Size (GiB):499     Used (GiB):9       Free (GiB):490
>
> 6 volumes and 6 disks.
>
> Then we delete the volumes, except the *heketidbstorage* volume:
> [root at dhcp42-96 ~]# heketi-cli volume delete 0b83f800ef785ba6e889091f40f3d0d2
> [root at dhcp42-96 ~]# heketi-cli volume delete 0cb426c7677a10d96b0d50b58ddbedd2
> [root at dhcp42-96 ~]# heketi-cli volume delete 17c98a4da7ef81fae4f364219920a1cb
> [root at dhcp42-96 ~]# heketi-cli volume delete 41ddd6957a762f5f174bc06bea2ffe2a
> [root at dhcp42-96 ~]# heketi-cli volume delete a7589a26144910f91ec7d1032c1e0e76
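The per-volume deletes above can also be scripted. The sketch below is not from the original mail: it parses `heketi-cli volume list` output, keeps every Id except the one whose Name is heketidbstorage, and prints the delete commands. A sample listing stands in for a live cluster, and the `echo` guard is left in so nothing is deleted until you remove it.

```shell
# Sample `heketi-cli volume list` output; on a live cluster you would use:
#   list=$(heketi-cli volume list)
list='Id:0b83f800ef785ba6e889091f40f3d0d2 Cluster:4151837e55c2c943e457e9c2b84ae1c7 Name:vol_0b83f800ef785ba6e889091f40f3d0d2
Id:23a4cfd7263155059d1d8c431840df7e Cluster:4151837e55c2c943e457e9c2b84ae1c7 Name:heketidbstorage'

# Collect the Id of every volume whose Name is not heketidbstorage.
ids=$(printf '%s\n' "$list" | awk '!/Name:heketidbstorage/ { sub(/^Id:/, "", $1); print $1 }')

for id in $ids; do
    echo heketi-cli volume delete "$id"   # drop the echo to actually delete
done
```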
>
> Now the state is
> [root at dhcp42-96 ~]# heketi-cli topology info | grep -e "Used"
>                 Id:9a6eb125daece741716034c4684a3687   Name:/dev/vdd
>         State:online    Size (GiB):499     Used (GiB):0       Free (GiB):499
>                 Id:d3ca1aff8d9f210b2d5bdb09525cf1fe   Name:/dev/vdb
>         State:online    Size (GiB):499     Used (GiB):2       Free (GiB):497
>                 Id:486b9e044b3caaa5f47aba35de22cd9e   Name:/dev/vdb
>         State:online    Size (GiB):499     Used (GiB):2       Free (GiB):497
>                 Id:f7a58f8760eab2a7d5917baf6b367a97   Name:/dev/vdd
>         State:online    Size (GiB):499     Used (GiB):0       Free (GiB):499
>                 Id:2812a51a72ed6735995d1368c36a4899   Name:/dev/vdd
>         State:online    Size (GiB):499     Used (GiB):0       Free (GiB):499
>                 Id:c2128fe17a620ce70821aaf7587dc48f   Name:/dev/vdb
>         State:online    Size (GiB):499     Used (GiB):2       Free (GiB):497
>
> Then I was able to delete the 3 empty devices above
> [root at dhcp42-96 ~]# heketi-cli device delete 2812a51a72ed6735995d1368c36a4899
> [root at dhcp42-96 ~]# heketi-cli device delete f7a58f8760eab2a7d5917baf6b367a97
> [root at dhcp42-96 ~]# heketi-cli device delete 9a6eb125daece741716034c4684a3687
>
>
> [root at dhcp42-96 ~]# heketi-cli topology info | grep -e "Used"
>                 Id:d3ca1aff8d9f210b2d5bdb09525cf1fe   Name:/dev/vdb
>         State:online    Size (GiB):499     Used (GiB):2       Free (GiB):497
>                 Id:486b9e044b3caaa5f47aba35de22cd9e   Name:/dev/vdb
>         State:online    Size (GiB):499     Used (GiB):2       Free (GiB):497
>                 Id:c2128fe17a620ce70821aaf7587dc48f   Name:/dev/vdb
>         State:online    Size (GiB):499     Used (GiB):2       Free (GiB):497
>
>
> The above procedure works if you are OK with deleting the volumes.
> In scenarios where you can't, follow the steps to disable the device,
> remove the device, and then delete the device.
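For a device that still holds bricks, that disable/remove/delete sequence looks roughly like the sketch below. DEVICE_ID is a placeholder taken from the topology output earlier in the thread, and the commands are built with `echo` rather than executed, so the sketch is safe to run as-is.

```shell
DEVICE_ID=d3ca1aff8d9f210b2d5bdb09525cf1fe   # placeholder: one of the device ids from topology info

# disable: stop new bricks from being placed on the device
# remove:  migrate the existing bricks off the device
# delete:  drop the (now empty) device from the topology
cmds=$(for action in disable remove delete; do
    echo heketi-cli device "$action" "$DEVICE_ID"
done)
printf '%s\n' "$cmds"   # remove the echo above to actually run the commands
```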
>
> If you have a more specific scenario, please let us know.
>
> Talur
>
>
>
>
> >
> >>
> >> Regards,
> >> Gaurav
> >>
> >>
> >> On Thu, Sep 28, 2017 at 5:36 PM, Raghavendra Talur <rtalur at redhat.com>
> >> wrote:
> >>>
> >>> On Wed, Sep 27, 2017 at 11:27 PM, Gaurav Chhabra
> >>> <varuag.chhabra at gmail.com> wrote:
> >>> > Hi,
> >>> >
> >>> >
> >>> > I tried setting up a Gluster cluster on Google Cloud, and it went
> >>> > fine. I always stop the test instances when not in use. One day,
> >>> > when I started my Gluster instances, a few bricks went offline and
> >>> > I could not bring them back up. I then tried removing the volumes,
> >>> > but I got errors about "device in use", and I think I also saw
> >>> > "bricks in use" messages. I tried removing bricks but couldn't do
> >>> > that either. I finally thought of removing the extra raw disks I
> >>> > had initially added to all the nodes, but I guess Heketi was
> >>> > unable to forget its past. In the end I had to discard all the
> >>> > machines and create a fresh three-node cluster. :( All I wanted
> >>> > was a clean slate to start with. In the next two to three days I
> >>> > will be starting the actual setup on the live environment and
> >>> > using it for managing Kubernetes. In that case I will not be
> >>> > shutting down the cluster when not in use :) but I am wondering if
> >>> > there is a way of resetting everything (bricks, volumes, devices,
> >>> > nodes) in Gluster and Heketi, should such a case arise, so I could
> >>> > start afresh with the Heketi setup. Data backup is not a concern
> >>> > here as I am considering this scenario only during the initial
> >>> > setup phase.
> >>> >
> >>> >
> >>> > Regards,
> >>> > Gaurav
> >>> >
> >>>
> >>> Heketi does provide a mechanism to disable and remove devices/nodes.
> >>> Once that is performed you can delete them. Yes, it is an iterative
> >>> process, at the end of which you will be left with nothing in the
> >>> topology.
> >>>
> >>> One thing to note is that you need to clean the disks of partition
> >>> info for heketi to reuse them.
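Cleaning a disk on each node might look like the sketch below. This is not from the original mail: the device path is an assumption based on the topology output in the thread, heketi places an LVM physical volume (and a vg_<deviceid> volume group) on each disk it manages, and the commands are echoed rather than run so nothing is wiped until you remove the guard. If logical volumes or volume groups remain, remove those first (lvremove/vgremove), and double-check the device path before running anything for real.

```shell
DISK=/dev/vdb   # assumed device path, taken from the topology output above

# Remove the LVM metadata heketi left behind, then every remaining
# filesystem/partition signature, so heketi will accept the disk again.
cleanup=$(
    echo pvremove -ff -y "$DISK"
    echo wipefs --all "$DISK"
)
printf '%s\n' "$cleanup"   # remove the echo guards to actually wipe
```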
> >>>
> >>> Also, everything that heketi knows is in the db. If you delete the db
> >>> file you have essentially created a fresh heketi instance. You will
> >>> still have to clean the disks and gluster information though.
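Resetting heketi itself by deleting the db could then look like this sketch. The db path is an assumption (a common default; confirm the "db" setting in your heketi.json), the service name assumes a systemd-managed heketi, and the commands are echoed rather than executed.

```shell
DB_PATH=/var/lib/heketi/heketi.db   # assumed default; check "db" in your heketi.json

steps=$(
    echo systemctl stop heketi      # stop the service before touching the db
    echo rm -f "$DB_PATH"           # heketi starts with a fresh db if the file is gone
    echo systemctl start heketi
)
printf '%s\n' "$steps"   # remove the echo guards to actually reset
```

Remember that, as noted above, this only resets heketi's view of the world; the disks and the gluster-side state still have to be cleaned separately.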
> >>>
> >>> Let us know if you are looking for a step-by-step guide to get this
> >>> done.
> >>>
> >>> Talur
> >>>
> >>> > _______________________________________________
> >>> > heketi-devel mailing list
> >>> > heketi-devel at gluster.org
> >>> > http://lists.gluster.org/mailman/listinfo/heketi-devel
> >>> >
> >>
> >>
>