[heketi-devel] heketi-cli kubernetes gluster pvc endpoint issue

Ercan Aydoğan ercan.aydogan at gmail.com
Wed Feb 7 17:02:51 UTC 2018


If Kubernetes GlusterFS PVCs require IP addresses, then I need an IP-based Gluster peer setup, but that is not working, and I want to make it work. I am wondering how others use Gluster PVCs.

I cannot create a volume when the storage address is set to an IP.

Error: volume create: vol_207bbf81f28b959c51448b919be3bb59: failed: Host 51.15.90.60 is not in 'Peer in Cluster' state

I want to know how this is possible. I have been working on this for the last 2-3 days with no progress.
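
If re-forming the trusted pool with IP addresses is the fix (as Humble suggested below), I assume the steps are roughly this sketch (glusterfs-server is the unit name the Ubuntu packages install, I believe):

# on every node: stop glusterd and clear the old peer metadata
systemctl stop glusterfs-server
rm -rf /var/lib/glusterd
systemctl start glusterfs-server

# then, from one node only, probe the other two by IP
gluster peer probe 51.15.90.60
gluster peer probe 163.172.151.120
gluster peer status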


> On 7 Feb 2018, at 19:49, Jose A. Rivera <jarrpa at redhat.com> wrote:
> 
> What is the output of all those things when you use the "proper"
> configuration of manage and storage addresses?
> 
> On Wed, Feb 7, 2018 at 10:46 AM, Ercan Aydoğan <ercan.aydogan at gmail.com> wrote:
>> When I change
>>
>> manage to the IP address and
>> storage to the FQDN / hostname, I can create volumes, but the Kubernetes
>> storage class does not accept FQDNs; it requires IP addresses.
>> 
>> I can create volumes with this topology.json:
>> 
>> {
>>  "clusters": [
>>    {
>>      "nodes": [
>>        {
>>          "node": {
>>            "hostnames": {
>>              "manage": [
>>                "51.15.77.14"
>>              ],
>>              "storage": [
>>                "pri.ostechnix.lan"
>>              ]
>>            },
>>            "zone": 1
>>          },
>>          "devices": [
>>            "/dev/nbd1",
>>            "/dev/nbd2",
>>            "/dev/nbd3"
>>          ]
>>        },
>>        {
>>          "node": {
>>            "hostnames": {
>>              "manage": [
>>                "51.15.90.60"
>>              ],
>>              "storage": [
>>                "sec.ostechnix.lan"
>>              ]
>>            },
>>            "zone": 1
>>          },
>>          "devices": [
>>            "/dev/nbd1",
>>            "/dev/nbd2",
>>            "/dev/nbd3"
>>          ]
>>        },
>>        {
>>          "node": {
>>            "hostnames": {
>>              "manage": [
>>                "163.172.151.120"
>>              ],
>>              "storage": [
>>                "third.ostechnix.lan"
>>              ]
>>            },
>>            "zone": 1
>>          },
>>          "devices": [
>>            "/dev/nbd1",
>>            "/dev/nbd2",
>>            "/dev/nbd3"
>>          ]
>>        }
>>      ]
>>    }
>>  ]
>> }
>> 
>> 
>> But with this one I can't:
>> 
>> {
>>  "clusters": [
>>    {
>>      "nodes": [
>>        {
>>          "node": {
>>            "hostnames": {
>>              "manage": [
>>                "pri.ostechnix.lan"
>>              ],
>>              "storage": [
>>                "51.15.77.14"
>>              ]
>>            },
>>            "zone": 1
>>          },
>>          "devices": [
>>            "/dev/nbd1",
>>            "/dev/nbd2",
>>            "/dev/nbd3"
>>          ]
>>        },
>>        {
>>          "node": {
>>            "hostnames": {
>>              "manage": [
>>                "sec.ostechnix.lan"
>>              ],
>>              "storage": [
>>                "51.15.90.60"
>>              ]
>>            },
>>            "zone": 1
>>          },
>>          "devices": [
>>            "/dev/nbd1",
>>            "/dev/nbd2",
>>            "/dev/nbd3"
>>          ]
>>        },
>>        {
>>          "node": {
>>            "hostnames": {
>>              "manage": [
>>                "third.ostechnix.lan"
>>              ],
>>              "storage": [
>>                "163.172.151.120"
>>              ]
>>            },
>>            "zone": 1
>>          },
>>          "devices": [
>>            "/dev/nbd1",
>>            "/dev/nbd2",
>>            "/dev/nbd3"
>>          ]
>>        }
>>      ]
>>    }
>>  ]
>> }
>> 
>> 
>> root at kubemaster ~/heketi # ./heketi-cli topology load --json=topology_with_ip.json
>> Creating cluster ... ID: 522adced1b7033646f0196d538b1f093
>> Creating node 51.15.77.148 ... ID: 1c52608dd3f624ad32cb4d1d074613d7
>> Adding device /dev/nbd1 ... OK
>> Adding device /dev/nbd2 ... OK
>> Adding device /dev/nbd3 ... OK
>> Creating node 51.15.90.10 ... ID: 36eec4fa09cf572b2a0a11f65c43b706
>> Adding device /dev/nbd1 ... OK
>> Adding device /dev/nbd2 ... OK
>> Adding device /dev/nbd3 ... OK
>> Creating node 163.172.151.170 ... ID: da1bea3e71629b4f6f8ed1f1584f521c
>> Adding device /dev/nbd1 ... OK
>> Adding device /dev/nbd2 ... OK
>> 
>> 
>> root at pri:~# gluster peer status
>> Number of Peers: 2
>> 
>> Hostname: sec.ostechnix.lan
>> Uuid: 2bfc4a96-66f5-4ff9-8ee4-5e382a711c3a
>> State: Peer in Cluster (Connected)
>> 
>> Hostname: third.ostechnix.lan
>> Uuid: c3ae3a1e-d9d0-4675-bf0d-0f6cf7267b30
>> State: Peer in Cluster (Connected)
>> 
>> 
>> 
>> root at kubemaster ~/heketi # ./heketi-cli volume create --size=3 --replica=3
>> Name: vol_a6a21750e64c5317e1f949baaac25372
>> Size: 3
>> Volume Id: a6a21750e64c5317e1f949baaac25372
>> Cluster Id: 522adced1b7033646f0196d538b1f093
>> Mount: pri.ostechnix.lan:vol_a6a21750e64c5317e1f949baaac25372
>> Mount Options: backup-volfile-servers=sec.ostechnix.lan,third.ostechnix.lan
>> Durability Type: replicate
>> Distributed+Replica: 3
>> root at kubemaster ~/heketi #
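>>
>> (For reference, mounting that volume from a client by hand would look
>> something like this sketch; the mount point /mnt/test is just an example:)
>>
>> mount -t glusterfs \
>>   -o backup-volfile-servers=sec.ostechnix.lan,third.ostechnix.lan \
>>   pri.ostechnix.lan:vol_a6a21750e64c5317e1f949baaac25372 /mnt/test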
>> 
>> root at kubemaster ~/heketi # ./heketi-cli   topology info
>> 
>> Cluster Id: 522adced1b7033646f0196d538b1f093
>> 
>>    Volumes:
>> 
>> Name: vol_a6a21750e64c5317e1f949baaac25372
>> Size: 3
>> Id: a6a21750e64c5317e1f949baaac25372
>> Cluster Id: 522adced1b7033646f0196d538b1f093
>> Mount: pri.ostechnix.lan:vol_a6a21750e64c5317e1f949baaac25372
>> Mount Options: backup-volfile-servers=sec.ostechnix.lan,third.ostechnix.lan
>> Durability Type: replicate
>> Replica: 3
>> Snapshot: Disabled
>> 
>> Bricks:
>> Id: 05936fa978e7d9fb534c04b4e993fefb
>> Path:
>> /var/lib/heketi/mounts/vg_057848a621c23d381b086cf7898e58cc/brick_05936fa978e7d9fb534c04b4e993fefb/brick
>> Size (GiB): 3
>> Node: 1c52608dd3f624ad32cb4d1d074613d7
>> Device: 057848a621c23d381b086cf7898e58cc
>> 
>> Id: 624b9813a4d17bbe34565bb95d9fe2b3
>> Path:
>> /var/lib/heketi/mounts/vg_d9ce655c3d31fa92f6486abc19e155d5/brick_624b9813a4d17bbe34565bb95d9fe2b3/brick
>> Size (GiB): 3
>> Node: 36eec4fa09cf572b2a0a11f65c43b706
>> Device: d9ce655c3d31fa92f6486abc19e155d5
>> 
>> Id: a779eb5de0131ab97d8d17e8ddad4a3e
>> Path:
>> /var/lib/heketi/mounts/vg_2cb5b83b84bfd1c0e06ac99779f413d7/brick_a779eb5de0131ab97d8d17e8ddad4a3e/brick
>> Size (GiB): 3
>> Node: da1bea3e71629b4f6f8ed1f1584f521c
>> Device: 2cb5b83b84bfd1c0e06ac99779f413d7
>> 
>> 
>>    Nodes:
>> 
>> Node Id: 1c52608dd3f624ad32cb4d1d074613d7
>> State: online
>> Cluster Id: 522adced1b7033646f0196d538b1f093
>> Zone: 1
>> Management Hostname: 51.15.77.14
>> Storage Hostname: pri.ostechnix.lan
>> Devices:
>> Id:057848a621c23d381b086cf7898e58cc   Name:/dev/nbd3           State:online
>> Size (GiB):139     Used (GiB):3       Free (GiB):136
>> Bricks:
>> Id:05936fa978e7d9fb534c04b4e993fefb   Size (GiB):3       Path:
>> /var/lib/heketi/mounts/vg_057848a621c23d381b086cf7898e58cc/brick_05936fa978e7d9fb534c04b4e993fefb/brick
>> Id:24f557371d0eaf2c72065f4220113988   Name:/dev/nbd1           State:online
>> Size (GiB):46      Used (GiB):0       Free (GiB):46
>> Bricks:
>> Id:670546b880260bb0240b9e0ac51bb82c   Name:/dev/nbd2           State:online
>> Size (GiB):46      Used (GiB):0       Free (GiB):46
>> Bricks:
>> 
>> Node Id: 36eec4fa09cf572b2a0a11f65c43b706
>> State: online
>> Cluster Id: 522adced1b7033646f0196d538b1f093
>> Zone: 1
>> Management Hostname: 51.15.90.60
>> Storage Hostname: sec.ostechnix.lan
>> Devices:
>> Id:74320e3bd92b7fdffa06499b17fb3c8f   Name:/dev/nbd1           State:online
>> Size (GiB):46      Used (GiB):0       Free (GiB):46
>> Bricks:
>> Id:923f2d04b03145600e5fc2035b5699c0   Name:/dev/nbd2           State:online
>> Size (GiB):46      Used (GiB):0       Free (GiB):46
>> Bricks:
>> Id:d9ce655c3d31fa92f6486abc19e155d5   Name:/dev/nbd3           State:online
>> Size (GiB):139     Used (GiB):3       Free (GiB):136
>> Bricks:
>> Id:624b9813a4d17bbe34565bb95d9fe2b3   Size (GiB):3       Path:
>> /var/lib/heketi/mounts/vg_d9ce655c3d31fa92f6486abc19e155d5/brick_624b9813a4d17bbe34565bb95d9fe2b3/brick
>> 
>> Node Id: da1bea3e71629b4f6f8ed1f1584f521c
>> State: online
>> Cluster Id: 522adced1b7033646f0196d538b1f093
>> Zone: 1
>> Management Hostname: 163.172.151.120
>> Storage Hostname: third.ostechnix.lan
>> Devices:
>> Id:23621c56af1237380bac9bb482d13859   Name:/dev/nbd2           State:online
>> Size (GiB):46      Used (GiB):0       Free (GiB):46
>> Bricks:
>> Id:2cb5b83b84bfd1c0e06ac99779f413d7   Name:/dev/nbd3           State:online
>> Size (GiB):139     Used (GiB):3       Free (GiB):136
>> Bricks:
>> Id:a779eb5de0131ab97d8d17e8ddad4a3e   Size (GiB):3       Path:
>> /var/lib/heketi/mounts/vg_2cb5b83b84bfd1c0e06ac99779f413d7/brick_a779eb5de0131ab97d8d17e8ddad4a3e/brick
>> Id:89648509136a29dea7712733f1b91733   Name:/dev/nbd1           State:online
>> Size (GiB):46      Used (GiB):0       Free (GiB):46
>> Bricks:
>> 
>> 
>> This works.
>>
>> But it does not work with the Kubernetes storage class, because Kubernetes
>> needs an IP address on the storage hostname side.
>> 
>> Warning  ProvisioningFailed  21s   persistentvolume-controller  Failed
>> to provision volume with StorageClass "fast": create volume error: failed to
>> create endpoint/service error creating endpoint: Endpoints
>> "glusterfs-dynamic-claim1" is invalid: [subsets[0].addresses[0].ip: Invalid
>> value: "pri.ostechnix.lan": must be a valid IP address, (e.g. 10.9.8.7),
>> subsets[0].addresses[1].ip: Invalid value: "third.ostechnix.lan": must be a
>> valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[2].ip: Invalid
>> value: "sec.ostechnix.lan": must be a valid IP address, (e.g. 10.9.8.7)]
>>
>> On 7 Feb 2018, at 19:21, Jose A. Rivera <jarrpa at redhat.com> wrote:
>> 
>> What's the output of the topology load command?
>> 
>> Can you verify that glusterd is running and healthy on all the nodes?
>> I'm not sure about Ubuntu, but on Fedora we do "systemctl status
>> glusterd", so something like that.
>> 
>> Does running "gluster pool list" show all the nodes? This only needs
>> to be run on one of the nodes.
>> 
>> Finally, do you have a firewall up and does it have the requisite
>> ports open for GlusterFS?
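>>
>> On Ubuntu with ufw, opening the usual GlusterFS ports would look something
>> like this (a sketch; 24007-24008 for the management daemon, and one port
>> per brick starting at 49152):
>>
>> ufw allow 24007:24008/tcp
>> ufw allow 49152:49251/tcp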
>> 
>> --Jose
>> 
>> On Wed, Feb 7, 2018 at 9:20 AM, Ercan Aydoğan <ercan.aydogan at gmail.com>
>> wrote:
>> 
>> Yes, I delete heketi.db before every try.
>> 
>> 
>> On 7 Feb 2018, at 18:18, Jose A. Rivera <jarrpa at redhat.com> wrote:
>> 
>> When you deleted the cluster, did you also delete the heketi database?
>> 
>> --Jose
>> 
>> On Wed, Feb 7, 2018 at 3:30 AM, Ercan Aydoğan <ercan.aydogan at gmail.com>
>> wrote:
>> 
>> The Gluster cluster is on Ubuntu 16.04, and I remove it with these commands:
>> 
>> apt-get purge glusterfs-server -y  --allow-change-held-packages
>> rm -rf /var/lib/glusterd
>> rm -rf /var/log/glusterfs/
>> wipefs -a --force /dev/nbd1
>> wipefs -a --force /dev/nbd2
>> wipefs -a --force /dev/nbd3
>> 
>> After a reboot I reinstall with:
>> 
>> apt-get install -y software-properties-common
>> add-apt-repository ppa:gluster/glusterfs-3.11
>> apt-get update
>> apt-get install -y glusterfs-server
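>>
>> Then I make sure the daemon is running on each node (glusterfs-server is
>> the unit name the Ubuntu packages use, I believe):
>>
>> systemctl enable glusterfs-server
>> systemctl status glusterfs-server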
>> 
>> After this I run:
>>
>> ./heketi-cli topology load --json=topology.json
>> 
>> 
>> 
>> But I cannot create any volume with either the gluster command line or
>> heketi-cli; maybe this is a hostname or /etc/hostname issue.
>> 
>> My current /etc/hosts is:
>>
>> Node 1:
>> 
>> 
>> root at pri:/var/log/glusterfs# cat /etc/hosts
>> #127.0.0.1       localhost
>> 127.0.0.1        pri.ostechnix.lan     pri
>> ::1             localhost ip6-localhost ip6-loopback
>> ff02::1         ip6-allnodes
>> ff02::2         ip6-allrouters
>> 
>> 51.15.90.60      sec.ostechnix.lan     sec
>> 163.172.151.120  third.ostechnix.lan   third
>> root at pri:/var/log/glusterfs#
>> 
>> On every node I map 127.0.0.1 to that node's own hostname.
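>>
>> If that loopback mapping is the problem, I suppose each node's /etc/hosts
>> should map its own FQDN to its public address instead, e.g. on node 1
>> (a sketch using the addresses above):
>>
>> 127.0.0.1        localhost
>> 51.15.77.14      pri.ostechnix.lan     pri
>> 51.15.90.60      sec.ostechnix.lan     sec
>> 163.172.151.120  third.ostechnix.lan   third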
>>
>> On 7 Feb 2018, at 11:51, Humble Chirammal <hchiramm at redhat.com> wrote:
>> 
>> True, storage should be an IP address. However, AFAICT it failed with "Peer
>> in Cluster" because the gluster trusted pool was formed with a different
>> IP/hostname, and that is stored in its metadata. If you can delete the
>> cluster and recreate it with "storage" as an IP, it should work, I believe.
>> 
>> On Wed, Feb 7, 2018 at 2:16 PM, Ercan Aydoğan <ercan.aydogan at gmail.com>
>> wrote:
>> 
>> 
>> Hello,
>> 
>> I have a dedicated 3-node GlusterFS 3.11.3 cluster. I can create volumes
>> with both Gluster's own command line utility and heketi-cli; that works fine.
>> 
>> If I use an FQDN as the storage hostname, I can create the cluster with:
>>
>> ./heketi-cli topology load --json=topology.json
>> 
>> After StorageClass, secret, and PVC creation I get this error.
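>>
>> (For reference, my StorageClass is roughly the sketch below; the resturl is
>> a placeholder for my heketi endpoint:)
>>
>> kubectl apply -f - <<EOF
>> apiVersion: storage.k8s.io/v1
>> kind: StorageClass
>> metadata:
>>   name: fast
>> provisioner: kubernetes.io/glusterfs
>> parameters:
>>   resturl: "http://HEKETI-ADDRESS:8080"
>> EOF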
>> 
>> kubectl describe pvc claim1 returns:
>> 
>> root at kubemaster ~ # kubectl describe pvc claim1
>> Name:          claim1
>> Namespace:     default
>> StorageClass:  fast
>> Status:        Pending
>> Volume:
>> Labels:        <none>
>> Annotations:
>> volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/glusterfs
>> Finalizers:    []
>> Capacity:
>> Access Modes:
>> Events:
>> Type     Reason              Age   From                         Message
>> ----     ------              ----  ----                         -------
>> Warning  ProvisioningFailed  21s   persistentvolume-controller  Failed
>> to provision volume with StorageClass "fast": create volume error: failed to
>> create endpoint/service error creating endpoint: Endpoints
>> "glusterfs-dynamic-claim1" is invalid: [subsets[0].addresses[0].ip: Invalid
>> value: "pri.ostechnix.lan": must be a valid IP address, (e.g. 10.9.8.7),
>> subsets[0].addresses[1].ip: Invalid value: "third.ostechnix.lan": must be a
>> valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[2].ip: Invalid
>> value: "sec.ostechnix.lan": must be a valid IP address, (e.g. 10.9.8.7)]
>> 
>> 
>> My topology.json content is:
>> 
>> {
>> "clusters": [
>>  {
>>    "nodes": [
>>      {
>>        "node": {
>>          "hostnames": {
>>            "manage": [
>>              "51.15.77.14"
>>            ],
>>            "storage": [
>>              "pri.ostechnix.lan"
>>            ]
>>          },
>>          "zone": 1
>>        },
>>        "devices": [
>>          "/dev/nbd1",
>>          "/dev/nbd2",
>>          "/dev/nbd3"
>>        ]
>>      },
>>      {
>>        "node": {
>>          "hostnames": {
>>            "manage": [
>>              "51.15.90.60"
>>            ],
>>            "storage": [
>>              "sec.ostechnix.lan"
>>            ]
>>          },
>>          "zone": 1
>>        },
>>        "devices": [
>>          "/dev/nbd1",
>>          "/dev/nbd2",
>>          "/dev/nbd3"
>>        ]
>>      },
>>      {
>>        "node": {
>>          "hostnames": {
>>            "manage": [
>>              "163.172.151.120"
>>            ],
>>            "storage": [
>>              "third.ostechnix.lan"
>>            ]
>>          },
>>          "zone": 1
>>        },
>>        "devices": [
>>          "/dev/nbd1",
>>          "/dev/nbd2",
>>          "/dev/nbd3"
>>        ]
>>      }
>>    ]
>>  }
>> ]
>> }
>> 
>> 
>> Yes, it says storage must be an IP for endpoint creation. But if I change
>>
>> manage to the hostname and
>> storage to the IP address:
>> 
>> {
>> "clusters": [
>>  {
>>    "nodes": [
>>      {
>>        "node": {
>>          "hostnames": {
>>            "manage": [
>>              "pri.ostechnix.lan"
>>            ],
>>            "storage": [
>>              "51.15.77.14"
>>            ]
>>          },
>>          "zone": 1
>>        },
>>        "devices": [
>>          "/dev/nbd1",
>>          "/dev/nbd2",
>>          "/dev/nbd3"
>>        ]
>>      },
>>      {
>>        "node": {
>>          "hostnames": {
>>            "manage": [
>>              "sec.ostechnix.lan"
>>            ],
>>            "storage": [
>>              "51.15.90.60"
>>            ]
>>          },
>>          "zone": 1
>>        },
>>        "devices": [
>>          "/dev/nbd1",
>>          "/dev/nbd2",
>>          "/dev/nbd3"
>>        ]
>>      },
>>      {
>>        "node": {
>>          "hostnames": {
>>            "manage": [
>>              "third.ostechnix.lan"
>>            ],
>>            "storage": [
>>              "163.172.151.120"
>>            ]
>>          },
>>          "zone": 1
>>        },
>>        "devices": [
>>          "/dev/nbd1",
>>          "/dev/nbd2",
>>          "/dev/nbd3"
>>        ]
>>      }
>>    ]
>>  }
>> ]
>> }
>> 
>> I cannot create a volume with heketi-cli.
>>
>> It says:
>> 
>> root at kubemaster ~/heketi # ./heketi-cli volume create --size=3 --replica=3
>> Error: volume create: vol_207bbf81f28b959c51448b919be3bb59: failed: Host
>> 51.15.90.60 is not in 'Peer in Cluster' state
>> 
>> I need advice on how to fix this issue.
>> 
>> 
>> _______________________________________________
>> heketi-devel mailing list
>> heketi-devel at gluster.org
>> http://lists.gluster.org/mailman/listinfo/heketi-devel
>> 
>> 
>> 
>> 
>> --
>> Cheers,
>> Humble
>> 
>> Red Hat Storage Engineering
>> Mastering KVM Virtualization: http://amzn.to/2vFTXaW
>> Website: http://humblec.com
>> 
>> 
>> 
>> _______________________________________________
>> heketi-devel mailing list
>> heketi-devel at gluster.org
>> http://lists.gluster.org/mailman/listinfo/heketi-devel
>> 
>> 
>> 
