[heketi-devel] heketi-cli kubernetes gluster pvc endpoint issue
Ercan Aydoğan
ercan.aydogan at gmail.com
Wed Feb 7 18:31:47 UTC 2018
The previous mail was about 500 KB, so I am attaching the output as a log file instead. output.log is the heketi server log.
> On 7 Feb 2018, at 21:22, Ercan Aydoğan <ercan.aydogan at gmail.com> wrote:
>
> Here is the ip based topology load and all outputs.
>
>
> root@kubemaster ~/heketi # ./heketi-cli topology load --json=topology_with_ip.json
> Creating cluster ... ID: 3109379364d9f90f6c52fd5210b7b69d
> Creating node pri.ostechnix.lan ... ID: 863262f436c8daf2f1526f449111c5a0
> Adding device /dev/nbd1 ... OK
> Adding device /dev/nbd2 ... OK
> Adding device /dev/nbd3 ... OK
> Creating node sec.ostechnix.lan ... ID: 139c65b477131ca4a5cefec7246e46b3
> Adding device /dev/nbd1 ... OK
> Adding device /dev/nbd2 ... OK
> Adding device /dev/nbd3 ... OK
> Creating node third.ostechnix.lan ... ID: 89d89069e54b4257b817f22bf45b5538
> Adding device /dev/nbd1 ... OK
> Adding device /dev/nbd2 ... OK
> Adding device /dev/nbd3 ... OK
> root@kubemaster ~/heketi # ./heketi-cli volume create --size=3 --replica=3
> Error: volume create: vol_8397c8adb21679e81b87d7e6cd517129: failed: Host 51.15.90.60 is not in 'Peer in Cluster' state
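The error reads as though heketi handed glusterd the literal storage address from the topology and glusterd on the executing node did not recognize that address as a peer. A quick check is to grep each storage address out of `gluster pool list` on every node. The sketch below runs against a pasted copy of node 1's pool list (shown further down in this mail) rather than a live cluster:

```shell
# Pool list as printed on node 1 (pasted; on a real node this would be
# "$(gluster pool list)").
pool_list='887c5074-ab28-4642-846f-fa6c87430987 sec.ostechnix.lan Connected
ffd9ff21-c18c-4095-8f05-acc5bb567ef8 163.172.151.120 Connected
5417f7f0-37c6-4776-bdd1-0a29f45fab89 localhost Connected'

# Check each storage address from topology_with_ip.json against the list.
for host in 51.15.77.14 51.15.90.60 163.172.151.120; do
  if printf '%s\n' "$pool_list" | grep -Fqw "$host"; then
    echo "$host: known by this address"
  else
    echo "$host: unknown by this address"
  fi
done
```

Against node 1's list only 163.172.151.120 matches; 51.15.90.60 and 51.15.77.14 are known there as sec.ostechnix.lan and localhost instead, which lines up with the error above.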
>
>
> root@kubemaster ~/heketi # cat topology_with_ip.json
> {
>   "clusters": [
>     {
>       "nodes": [
>         {
>           "node": {
>             "hostnames": {
>               "manage": [
>                 "pri.ostechnix.lan"
>               ],
>               "storage": [
>                 "51.15.77.14"
>               ]
>             },
>             "zone": 1
>           },
>           "devices": [
>             "/dev/nbd1",
>             "/dev/nbd2",
>             "/dev/nbd3"
>           ]
>         },
>         {
>           "node": {
>             "hostnames": {
>               "manage": [
>                 "sec.ostechnix.lan"
>               ],
>               "storage": [
>                 "51.15.90.60"
>               ]
>             },
>             "zone": 1
>           },
>           "devices": [
>             "/dev/nbd1",
>             "/dev/nbd2",
>             "/dev/nbd3"
>           ]
>         },
>         {
>           "node": {
>             "hostnames": {
>               "manage": [
>                 "third.ostechnix.lan"
>               ],
>               "storage": [
>                 "163.172.151.120"
>               ]
>             },
>             "zone": 1
>           },
>           "devices": [
>             "/dev/nbd1",
>             "/dev/nbd2",
>             "/dev/nbd3"
>           ]
>         }
>       ]
>     }
>   ]
> }
>
>
>
> root@kubemaster ~/heketi # ./heketi-cli topology info
>
> Cluster Id: 3109379364d9f90f6c52fd5210b7b69d
>
> Volumes:
>
> Nodes:
>
> Node Id: 139c65b477131ca4a5cefec7246e46b3
> State: online
> Cluster Id: 3109379364d9f90f6c52fd5210b7b69d
> Zone: 1
> Management Hostname: sec.ostechnix.lan
> Storage Hostname: 51.15.90.60
> Devices:
> Id:23abf78e40a19e12bd593cab96f3239f Name:/dev/nbd3 State:online Size (GiB):139 Used (GiB):0 Free (GiB):139
> Bricks:
> Id:506a4b79ecb6e364441f733b87e191c0 Name:/dev/nbd1 State:online Size (GiB):46 Used (GiB):0 Free (GiB):46
> Bricks:
> Id:970b5dbda46a9e3d94c60130d18c1220 Name:/dev/nbd2 State:online Size (GiB):46 Used (GiB):0 Free (GiB):46
> Bricks:
>
> Node Id: 863262f436c8daf2f1526f449111c5a0
> State: online
> Cluster Id: 3109379364d9f90f6c52fd5210b7b69d
> Zone: 1
> Management Hostname: pri.ostechnix.lan
> Storage Hostname: 51.15.77.14
> Devices:
> Id:85f10821543a2bc2e64af08c07e76e29 Name:/dev/nbd1 State:online Size (GiB):46 Used (GiB):0 Free (GiB):46
> Bricks:
> Id:f4560d692ed58efd0ef49a219d9b6692 Name:/dev/nbd2 State:online Size (GiB):46 Used (GiB):0 Free (GiB):46
> Bricks:
> Id:f619f5344b93c3f2fabd666f424b1938 Name:/dev/nbd3 State:online Size (GiB):139 Used (GiB):0 Free (GiB):139
> Bricks:
>
> Node Id: 89d89069e54b4257b817f22bf45b5538
> State: online
> Cluster Id: 3109379364d9f90f6c52fd5210b7b69d
> Zone: 1
> Management Hostname: third.ostechnix.lan
> Storage Hostname: 163.172.151.120
> Devices:
> Id:33fdb01b4a2ea60d0d40fd4d328f8214 Name:/dev/nbd1 State:online Size (GiB):46 Used (GiB):0 Free (GiB):46
> Bricks:
> Id:7ada758aa7da70e7719ca277f93cb4f9 Name:/dev/nbd2 State:online Size (GiB):46 Used (GiB):0 Free (GiB):46
> Bricks:
> Id:83ba86d13242a1484eb8f4ba691c6327 Name:/dev/nbd3 State:online Size (GiB):139 Used (GiB):0 Free (GiB):139
> Bricks:
>
>
>
>
> on node 1
>
> root@pri:~# gluster peer status
> Number of Peers: 2
>
> Hostname: sec.ostechnix.lan
> Uuid: 887c5074-ab28-4642-846f-fa6c87430987
> State: Peer in Cluster (Connected)
> Other names:
> sec.ostechnix.lan
>
> Hostname: 163.172.151.120
> Uuid: ffd9ff21-c18c-4095-8f05-acc5bb567ef8
> State: Peer in Cluster (Connected)
>
>
> on node 2
>
> root@sec:~# gluster peer status
> Number of Peers: 2
>
> Hostname: pri.ostechnix.lan
> Uuid: 5417f7f0-37c6-4776-bdd1-0a29f45fab89
> State: Peer in Cluster (Connected)
>
> Hostname: 163.172.151.120
> Uuid: ffd9ff21-c18c-4095-8f05-acc5bb567ef8
> State: Peer in Cluster (Connected)
>
>
> on node 3
>
> root@third:/var/log/glusterfs# gluster peer status
> Number of Peers: 2
>
> Hostname: 51.15.90.60
> Uuid: 887c5074-ab28-4642-846f-fa6c87430987
> State: Peer in Cluster (Connected)
> Other names:
> 51.15.90.60
>
> Hostname: pri.ostechnix.lan
> Uuid: 5417f7f0-37c6-4776-bdd1-0a29f45fab89
> State: Peer in Cluster (Connected)
>
>
>
> node 1
>
> root@pri:/var/log/glusterfs# gluster pool list
> UUID Hostname State
> 887c5074-ab28-4642-846f-fa6c87430987 sec.ostechnix.lan Connected
> ffd9ff21-c18c-4095-8f05-acc5bb567ef8 163.172.151.120 Connected
> 5417f7f0-37c6-4776-bdd1-0a29f45fab89 localhost Connected
>
>
> /etc/hosts
>
> root@pri:/var/log/glusterfs# cat /etc/hosts
> 127.0.0.1 localhost
> 127.0.0.1 pri.ostechnix.lan pri
> ::1 localhost ip6-localhost ip6-loopback
> ff02::1 ip6-allnodes
> ff02::2 ip6-allrouters
> 51.15.90.60 sec.ostechnix.lan sec
> 163.172.151.120 third.ostechnix.lan third
>
> node 2
>
> root@sec:/var/log/glusterfs# gluster pool list
> UUID Hostname State
> 5417f7f0-37c6-4776-bdd1-0a29f45fab89 pri.ostechnix.lan Connected
> ffd9ff21-c18c-4095-8f05-acc5bb567ef8 163.172.151.120 Connected
> 887c5074-ab28-4642-846f-fa6c87430987 localhost Connected
> root@sec:/var/log/glusterfs#
>
>
> /etc/hosts
>
> root@sec:/var/log/glusterfs# cat /etc/hosts
> 127.0.0.1 localhost
> 127.0.0.1 sec.ostechnix.lan sec
> ::1 localhost ip6-localhost ip6-loopback
> ff02::1 ip6-allnodes
> ff02::2 ip6-allrouters
> 51.15.77.14 pri.ostechnix.lan pri
> 163.172.151.120 third.ostechnix.lan third
>
>
> node 3
>
> root@third:/var/log/glusterfs# gluster pool list
> UUID Hostname State
> 887c5074-ab28-4642-846f-fa6c87430987 51.15.90.60 Connected
> 5417f7f0-37c6-4776-bdd1-0a29f45fab89 pri.ostechnix.lan Connected
> ffd9ff21-c18c-4095-8f05-acc5bb567ef8 localhost Connected
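Side by side, the three pool lists show the same peer under different names on different nodes: UUID 887c5074... is sec.ostechnix.lan on pri but 51.15.90.60 on third, for example. A throwaway sketch (triples copied by hand from the outputs above) that sorts the entries by UUID so the naming differences line up:

```shell
# (node, uuid-prefix, peer-name) triples copied from the three pool
# lists; sorting on the UUID column groups the names each node uses
# for the same peer.
triples='pri 887c5074 sec.ostechnix.lan
third 887c5074 51.15.90.60
pri ffd9ff21 163.172.151.120
sec ffd9ff21 163.172.151.120
sec 5417f7f0 pri.ostechnix.lan
third 5417f7f0 pri.ostechnix.lan'

printf '%s\n' "$triples" | sort -k2,2 -k1,1
```

Heketi was given 51.15.90.60 as sec's storage address, but pri only knows that peer as sec.ostechnix.lan, which may be why the address lookup fails on some nodes.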
>
>
> root@third:/var/log/glusterfs# cat /etc/hosts
> 127.0.0.1 localhost
> 127.0.0.1 third.ostechnix.lan third
> ::1 localhost ip6-localhost ip6-loopback
> ff02::1 ip6-allnodes
> ff02::2 ip6-allrouters
> 51.15.77.14 pri.ostechnix.lan pri
> 51.15.90.60 sec.ostechnix.lan sec
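One pattern shared by all three /etc/hosts files above: each node maps its own FQDN to 127.0.0.1 while the other two nodes map to public IPs, so the same name resolves differently depending on which node asks, and gluster can end up knowing peers by inconsistent names. Whether that is the root cause here I cannot prove from these outputs, but a symmetric variant avoids the ambiguity. A hedged sketch (written to /tmp so nothing real is overwritten; the addresses are the ones from this thread):

```shell
# Hypothetical symmetric hosts file: every node, including the local
# one, resolves to its public address on all three machines.
cat > /tmp/hosts.symmetric <<'EOF'
127.0.0.1        localhost
51.15.77.14      pri.ostechnix.lan pri
51.15.90.60      sec.ostechnix.lan sec
163.172.151.120  third.ostechnix.lan third
EOF
```

With identical entries on all three nodes, a peer probed by name and a peer addressed by IP would at least resolve consistently everywhere.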
>
>
>
>
>> On 7 Feb 2018, at 20:46, Jose A. Rivera <jarrpa at redhat.com <mailto:jarrpa at redhat.com>> wrote:
>>
>> Okay, I'm trying to help you figure out why it's not working. :)
>> Please deploy heketi/gluster with the "proper" configuration (which
>> does not allow you to create PVCs) and show me the output for the
>> things I requested.
>>>
>>>
>>>
>>>
>>> --
>>> Cheers,
>>> Humble
>>>
>>> Red Hat Storage Engineering
>>> Mastering KVM Virtualization: http://amzn.to/2vFTXaW
>>> Website: http://humblec.com
>>>
>>>
>>>
>>> _______________________________________________
>>> heketi-devel mailing list
>>> heketi-devel at gluster.org
>>> http://lists.gluster.org/mailman/listinfo/heketi-devel
-------------- next part --------------
A non-text attachment was scrubbed...
Name: output.log
Type: application/octet-stream
Size: 38511 bytes
Desc: not available
URL: <http://lists.gluster.org/pipermail/heketi-devel/attachments/20180207/1bdc7920/attachment-0002.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: all_nodes_glusterd.log
Type: application/octet-stream
Size: 38204 bytes
Desc: not available
URL: <http://lists.gluster.org/pipermail/heketi-devel/attachments/20180207/1bdc7920/attachment-0003.obj>