[heketi-devel] heketi-cli kubernetes gluster pvc endpoint issue

Jose A. Rivera jarrpa at redhat.com
Wed Feb 7 16:21:00 UTC 2018


What's the output of the topology load command?

Can you verify that glusterd is running and healthy on all the nodes?
I'm not sure about Ubuntu, but on Fedora we do "systemctl status
glusterd", so something like that.

Does running "gluster pool list" show all the nodes? This only needs
to be run on one of the nodes.
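
On a healthy three-node pool it should list all peers (including localhost) as
Connected, roughly like this (UUIDs elided):

UUID      Hostname              State
<uuid>    sec.ostechnix.lan     Connected
<uuid>    third.ostechnix.lan   Connected
<uuid>    localhost             Connected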

Finally, do you have a firewall up and does it have the requisite
ports open for GlusterFS?
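
On Ubuntu with ufw, opening the usual GlusterFS ports would be something along
these lines: 24007-24008 for glusterd management and 49152 and up for the brick
processes (one port per brick, so adjust the upper bound to your brick count):

ufw allow 24007:24008/tcp
ufw allow 49152:49251/tcp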

--Jose

On Wed, Feb 7, 2018 at 9:20 AM, Ercan Aydoğan <ercan.aydogan at gmail.com> wrote:
> Yes, I delete heketi.db before every try.
>
>
>> On 7 Feb 2018, at 18:18, Jose A. Rivera <jarrpa at redhat.com> wrote:
>>
>> When you deleted the cluster, did you also delete the heketi database?
>>
>> --Jose
>>
>> On Wed, Feb 7, 2018 at 3:30 AM, Ercan Aydoğan <ercan.aydogan at gmail.com> wrote:
>>> The Gluster cluster is on Ubuntu 16.04, and I remove it with these commands:
>>>
>>> apt-get purge glusterfs-server -y  --allow-change-held-packages
>>> rm -rf /var/lib/glusterd
>>> rm -rf /var/log/glusterfs/
>>> wipefs -a --force /dev/nbd1
>>> wipefs -a --force /dev/nbd2
>>> wipefs -a --force /dev/nbd3
>>>
>>> After a reboot, I reinstall with:
>>>
>>> apt-get install -y software-properties-common
>>> add-apt-repository ppa:gluster/glusterfs-3.11
>>> apt-get update
>>> apt-get install -y glusterfs-server
>>>
>>> After this I'm using:
>>>
>>>> /heketi-cli   topology  load --json=topology.json
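>>>
>>> (For reference, a fuller invocation with explicit server and auth flags would be
>>> something like the following; the server URL and admin key are placeholders for
>>> my actual values:)
>>>
>>> ./heketi-cli --server http://<heketi-host>:8080 --user admin \
>>>     --secret <admin-key> topology load --json=topology.json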
>>>
>>>
>>> but I can't create any volume with the gluster command line or heketi-cli.
>>> Maybe this is a hostname or /etc/hostname issue.
>>>
>>> My current /etc/hosts is:
>>>
>>> Node 1
>>>
>>>
>>> root@pri:/var/log/glusterfs# cat /etc/hosts
>>> #127.0.0.1       localhost
>>> 127.0.0.1        pri.ostechnix.lan     pri
>>> ::1             localhost ip6-localhost ip6-loopback
>>> ff02::1         ip6-allnodes
>>> ff02::2         ip6-allrouters
>>>
>>> 51.15.90.60      sec.ostechnix.lan     sec
>>> 163.172.151.120  third.ostechnix.lan   third
>>> root@pri:/var/log/glusterfs#
>>>
>>> On every node I set 127.0.0.1 to the node's own hostname.
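>>>
>>> (If the 127.0.0.1 mapping turns out to be the problem, the alternative would be
>>> to point each node's own hostname at its public address instead, e.g. on pri,
>>> assuming its address is 51.15.77.14 as in topology.json:
>>>
>>> 127.0.0.1        localhost
>>> 51.15.77.14      pri.ostechnix.lan     pri
>>> 51.15.90.60      sec.ostechnix.lan     sec
>>> 163.172.151.120  third.ostechnix.lan   third
>>> )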
>>>
>>>
>>>
>>>
>>>
>>>
>>> On 7 Feb 2018, at 11:51, Humble Chirammal <hchiramm at redhat.com> wrote:
>>>
>>> True, "storage" should be an IP address. However, AFAICT it failed at "Peer in
>>> Cluster" because the gluster cluster was formed with a different IP/hostname and
>>> that is stored in its metadata. If you delete the cluster and recreate it with
>>> "storage" as an IP, it should work, I believe.
>>>
>>> On Wed, Feb 7, 2018 at 2:16 PM, Ercan Aydoğan <ercan.aydogan at gmail.com>
>>> wrote:
>>>>
>>>> Hello,
>>>>
>>>> I have 3 dedicated GlusterFS 3.11.3 nodes. I can create volumes with both
>>>> gluster's own command line utility and heketi-cli; that works fine.
>>>>
>>>> If I use an FQDN as the storage hostname, I can create the cluster with
>>>>
>>>> /heketi-cli   topology  load --json=topology.json
>>>>
>>>> After StorageClass, Secret, and PVC creation I got the error below (a sketch of
>>>> the StorageClass itself follows the error output).
>>>>
>>>> kubectl describe pvc claim1 returns:
>>>>
>>>> root@kubemaster ~ # kubectl describe pvc claim1
>>>> Name:          claim1
>>>> Namespace:     default
>>>> StorageClass:  fast
>>>> Status:        Pending
>>>> Volume:
>>>> Labels:        <none>
>>>> Annotations:
>>>> volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/glusterfs
>>>> Finalizers:    []
>>>> Capacity:
>>>> Access Modes:
>>>> Events:
>>>>  Type     Reason              Age   From                         Message
>>>>  ----     ------              ----  ----                         -------
>>>>  Warning  ProvisioningFailed  21s   persistentvolume-controller  Failed
>>>> to provision volume with StorageClass "fast": create volume error: failed to
>>>> create endpoint/service error creating endpoint: Endpoints
>>>> "glusterfs-dynamic-claim1" is invalid: [subsets[0].addresses[0].ip: Invalid
>>>> value: "pri.ostechnix.lan": must be a valid IP address, (e.g. 10.9.8.7),
>>>> subsets[0].addresses[1].ip: Invalid value: "third.ostechnix.lan": must be a
>>>> valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[2].ip: Invalid
>>>> value: "sec.ostechnix.lan": must be a valid IP address, (e.g. 10.9.8.7)]
>>>>
>>>>
>>>> My topology.json content is:
>>>>
>>>> {
>>>>  "clusters": [
>>>>    {
>>>>      "nodes": [
>>>>        {
>>>>          "node": {
>>>>            "hostnames": {
>>>>              "manage": [
>>>>                "51.15.77.14"
>>>>              ],
>>>>              "storage": [
>>>>                "pri.ostechnix.lan"
>>>>              ]
>>>>            },
>>>>            "zone": 1
>>>>          },
>>>>          "devices": [
>>>>            "/dev/nbd1",
>>>>            "/dev/nbd2",
>>>>            "/dev/nbd3"
>>>>          ]
>>>>        },
>>>>        {
>>>>          "node": {
>>>>            "hostnames": {
>>>>              "manage": [
>>>>                "51.15.90.60"
>>>>              ],
>>>>              "storage": [
>>>>                "sec.ostechnix.lan"
>>>>              ]
>>>>            },
>>>>            "zone": 1
>>>>          },
>>>>          "devices": [
>>>>            "/dev/nbd1",
>>>>            "/dev/nbd2",
>>>>            "/dev/nbd3"
>>>>          ]
>>>>        },
>>>>        {
>>>>          "node": {
>>>>            "hostnames": {
>>>>              "manage": [
>>>>                "163.172.151.120"
>>>>              ],
>>>>              "storage": [
>>>>                "third.ostechnix.lan"
>>>>              ]
>>>>            },
>>>>            "zone": 1
>>>>          },
>>>>          "devices": [
>>>>            "/dev/nbd1",
>>>>            "/dev/nbd2",
>>>>            "/dev/nbd3"
>>>>          ]
>>>>        }
>>>>
>>>>
>>>>      ]
>>>>    }
>>>>  ]
>>>> }
>>>>
>>>>
>>>> Yes, it says storage must be an IP for endpoint creation. But if I change
>>>>
>>>> manage: hostname
>>>> storage: IP address
>>>>
>>>> {
>>>>  "clusters": [
>>>>    {
>>>>      "nodes": [
>>>>        {
>>>>          "node": {
>>>>            "hostnames": {
>>>>              "manage": [
>>>>                "pri.ostechnix.lan"
>>>>              ],
>>>>              "storage": [
>>>>                "51.15.77.14"
>>>>              ]
>>>>            },
>>>>            "zone": 1
>>>>          },
>>>>          "devices": [
>>>>            "/dev/nbd1",
>>>>            "/dev/nbd2",
>>>>            "/dev/nbd3"
>>>>          ]
>>>>        },
>>>>        {
>>>>          "node": {
>>>>            "hostnames": {
>>>>              "manage": [
>>>>                "sec.ostechnix.lan"
>>>>              ],
>>>>              "storage": [
>>>>                "51.15.90.60"
>>>>              ]
>>>>            },
>>>>            "zone": 1
>>>>          },
>>>>          "devices": [
>>>>            "/dev/nbd1",
>>>>            "/dev/nbd2",
>>>>            "/dev/nbd3"
>>>>          ]
>>>>        },
>>>>        {
>>>>          "node": {
>>>>            "hostnames": {
>>>>              "manage": [
>>>>                "third.ostechnix.lan"
>>>>              ],
>>>>              "storage": [
>>>>                "163.172.151.120"
>>>>              ]
>>>>            },
>>>>            "zone": 1
>>>>          },
>>>>          "devices": [
>>>>            "/dev/nbd1",
>>>>            "/dev/nbd2",
>>>>            "/dev/nbd3"
>>>>          ]
>>>>        }
>>>>
>>>>
>>>>      ]
>>>>    }
>>>>  ]
>>>> }
>>>>
>>>> I cannot create a volume with heketi-cli.
>>>>
>>>> It says:
>>>>
>>>> root@kubemaster ~/heketi # ./heketi-cli volume create --size=3 --replica=3
>>>> Error: volume create: vol_207bbf81f28b959c51448b919be3bb59: failed: Host
>>>> 51.15.90.60 is not in 'Peer in Cluster' state
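>>>>
>>>> (For comparison, on a healthy cluster "gluster peer status" on each node should
>>>> list the other two peers as "Peer in Cluster (Connected)", roughly:
>>>>
>>>> Number of Peers: 2
>>>>
>>>> Hostname: 51.15.90.60
>>>> Uuid: <uuid>
>>>> State: Peer in Cluster (Connected)
>>>>
>>>> Hostname: 163.172.151.120
>>>> Uuid: <uuid>
>>>> State: Peer in Cluster (Connected)
>>>> )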
>>>>
>>>> I need advice on how to fix this issue.
>>>>
>>>>
>>>> _______________________________________________
>>>> heketi-devel mailing list
>>>> heketi-devel at gluster.org
>>>> http://lists.gluster.org/mailman/listinfo/heketi-devel
>>>>
>>>
>>>
>>>
>>> --
>>> Cheers,
>>> Humble
>>>
>>> Red Hat Storage Engineering
>>> Mastering KVM Virtualization: http://amzn.to/2vFTXaW
>>> Website: http://humblec.com
>>>
>>>
>>>
>>> _______________________________________________
>>> heketi-devel mailing list
>>> heketi-devel at gluster.org
>>> http://lists.gluster.org/mailman/listinfo/heketi-devel
>>>
>

