[heketi-devel] heketi-cli kubernetes gluster pvc endpoint issue

Ercan Aydoğan <ercan.aydogan@gmail.com>
Wed Feb 7 18:22:31 UTC 2018


Here is the IP-based topology load and all of the outputs.


root@kubemaster ~/heketi # ./heketi-cli topology load --json=topology_with_ip.json
Creating cluster ... ID: 3109379364d9f90f6c52fd5210b7b69d
	Creating node pri.ostechnix.lan ... ID: 863262f436c8daf2f1526f449111c5a0
		Adding device /dev/nbd1 ... OK
		Adding device /dev/nbd2 ... OK
		Adding device /dev/nbd3 ... OK
	Creating node sec.ostechnix.lan ... ID: 139c65b477131ca4a5cefec7246e46b3
		Adding device /dev/nbd1 ... OK
		Adding device /dev/nbd2 ... OK
		Adding device /dev/nbd3 ... OK
	Creating node third.ostechnix.lan ... ID: 89d89069e54b4257b817f22bf45b5538
		Adding device /dev/nbd1 ... OK
		Adding device /dev/nbd2 ... OK
		Adding device /dev/nbd3 ... OK
root@kubemaster ~/heketi # ./heketi-cli volume create --size=3 --replica=3
Error: volume create: vol_8397c8adb21679e81b87d7e6cd517129: failed: Host 51.15.90.60 is not in 'Peer in Cluster' state
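
(For reference: the exact gluster command heketi runs for this step appears in the sshexec debug log further down, and it can be replayed by hand on sec.ostechnix.lan with the same result. The brick paths are the ones heketi generated for this attempt; its rollback removes them again, so they would have to be recreated before replaying.)

sudo gluster --mode=script volume create vol_8397c8adb21679e81b87d7e6cd517129 replica 3 \
    51.15.90.60:/var/lib/heketi/mounts/vg_23abf78e40a19e12bd593cab96f3239f/brick_3c9be1bfe1964d0543ed3bf135ca1a15/brick \
    163.172.151.120:/var/lib/heketi/mounts/vg_83ba86d13242a1484eb8f4ba691c6327/brick_1e57c3a35dc12c9f8c4385f7ad1ad969/brick \
    51.15.77.14:/var/lib/heketi/mounts/vg_f4560d692ed58efd0ef49a219d9b6692/brick_2229f09158081dda34feedab092f1ca6/brick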


root@kubemaster ~/heketi # cat topology_with_ip.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "pri.ostechnix.lan"
              ],
              "storage": [
                "51.15.77.14"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/nbd1",
            "/dev/nbd2",
	    "/dev/nbd3"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "sec.ostechnix.lan"
              ],
              "storage": [
                "51.15.90.60"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/nbd1",
            "/dev/nbd2",
            "/dev/nbd3"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "third.ostechnix.lan"
              ],
              "storage": [
                "163.172.151.120"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/nbd1",
            "/dev/nbd2",
             "/dev/nbd3"
          ]
        }
      ]
    }
  ]
}
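
One variant I may try next (my guess only, not verified): keep "manage" and "storage" consistent, e.g. the same FQDN for both, so every peer probe heketi issues uses one name per node. One node shown:

{
  "node": {
    "hostnames": {
      "manage": [ "sec.ostechnix.lan" ],
      "storage": [ "sec.ostechnix.lan" ]
    },
    "zone": 1
  },
  "devices": [ "/dev/nbd1", "/dev/nbd2", "/dev/nbd3" ]
}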



root@kubemaster ~/heketi # ./heketi-cli topology info

Cluster Id: 3109379364d9f90f6c52fd5210b7b69d

    Volumes:

    Nodes:

	Node Id: 139c65b477131ca4a5cefec7246e46b3
	State: online
	Cluster Id: 3109379364d9f90f6c52fd5210b7b69d
	Zone: 1
	Management Hostname: sec.ostechnix.lan
	Storage Hostname: 51.15.90.60
	Devices:
		Id:23abf78e40a19e12bd593cab96f3239f   Name:/dev/nbd3           State:online    Size (GiB):139     Used (GiB):0       Free (GiB):139     
			Bricks:
		Id:506a4b79ecb6e364441f733b87e191c0   Name:/dev/nbd1           State:online    Size (GiB):46      Used (GiB):0       Free (GiB):46      
			Bricks:
		Id:970b5dbda46a9e3d94c60130d18c1220   Name:/dev/nbd2           State:online    Size (GiB):46      Used (GiB):0       Free (GiB):46      
			Bricks:

	Node Id: 863262f436c8daf2f1526f449111c5a0
	State: online
	Cluster Id: 3109379364d9f90f6c52fd5210b7b69d
	Zone: 1
	Management Hostname: pri.ostechnix.lan
	Storage Hostname: 51.15.77.14
	Devices:
		Id:85f10821543a2bc2e64af08c07e76e29   Name:/dev/nbd1           State:online    Size (GiB):46      Used (GiB):0       Free (GiB):46      
			Bricks:
		Id:f4560d692ed58efd0ef49a219d9b6692   Name:/dev/nbd2           State:online    Size (GiB):46      Used (GiB):0       Free (GiB):46      
			Bricks:
		Id:f619f5344b93c3f2fabd666f424b1938   Name:/dev/nbd3           State:online    Size (GiB):139     Used (GiB):0       Free (GiB):139     
			Bricks:

	Node Id: 89d89069e54b4257b817f22bf45b5538
	State: online
	Cluster Id: 3109379364d9f90f6c52fd5210b7b69d
	Zone: 1
	Management Hostname: third.ostechnix.lan
	Storage Hostname: 163.172.151.120
	Devices:
		Id:33fdb01b4a2ea60d0d40fd4d328f8214   Name:/dev/nbd1           State:online    Size (GiB):46      Used (GiB):0       Free (GiB):46      
			Bricks:
		Id:7ada758aa7da70e7719ca277f93cb4f9   Name:/dev/nbd2           State:online    Size (GiB):46      Used (GiB):0       Free (GiB):46      
			Bricks:
		Id:83ba86d13242a1484eb8f4ba691c6327   Name:/dev/nbd3           State:online    Size (GiB):139     Used (GiB):0       Free (GiB):139     
			Bricks:
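
For reference, the same objects can be queried individually with standard heketi-cli commands (IDs taken from the output above):

./heketi-cli cluster info 3109379364d9f90f6c52fd5210b7b69d
./heketi-cli node info 139c65b477131ca4a5cefec7246e46b3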



root@kubemaster ~/heketi # ./heketi --config=heketi.json
Heketi v5.0.1
[heketi] INFO 2018/02/07 18:55:02 Loaded ssh executor
[heketi] INFO 2018/02/07 18:55:02 Loaded simple allocator
[heketi] INFO 2018/02/07 18:55:02 GlusterFS Application Loaded
Listening on port 8080
[negroni] Started GET /clusters
[negroni] Completed 200 OK in 90.27µs
[negroni] Started POST /clusters
[negroni] Completed 201 Created in 29.012005ms
[negroni] Started POST /nodes
[heketi] INFO 2018/02/07 18:56:16 Adding node pri.ostechnix.lan
[negroni] Completed 202 Accepted in 334.322µs
[asynchttp] INFO 2018/02/07 18:56:16 asynchttp.go:125: Started job d94ff71c8353a851b95cf267aa7e4d8a
[negroni] Started GET /queue/d94ff71c8353a851b95cf267aa7e4d8a
[negroni] Completed 200 OK in 19.548µs
[heketi] INFO 2018/02/07 18:56:16 Added node 863262f436c8daf2f1526f449111c5a0
[asynchttp] INFO 2018/02/07 18:56:16 asynchttp.go:129: Completed job d94ff71c8353a851b95cf267aa7e4d8a in 839.531µs
[negroni] Started GET /queue/d94ff71c8353a851b95cf267aa7e4d8a
[negroni] Completed 303 See Other in 32.028µs
[negroni] Started GET /nodes/863262f436c8daf2f1526f449111c5a0
[negroni] Completed 200 OK in 224.291µs
[negroni] Started POST /devices
[heketi] INFO 2018/02/07 18:56:16 Adding device /dev/nbd1 to node 863262f436c8daf2f1526f449111c5a0
[negroni] Completed 202 Accepted in 390.409µs
[asynchttp] INFO 2018/02/07 18:56:16 asynchttp.go:125: Started job fd0f3cd3fdc8442454a6c164b908031c
[negroni] Started GET /queue/fd0f3cd3fdc8442454a6c164b908031c
[negroni] Completed 200 OK in 15.974µs
[sshexec] DEBUG 2018/02/07 18:56:17 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: pri.ostechnix.lan:22 Command: sudo /bin/bash -c 'pvcreate --metadatasize=128M --dataalignment=256K '/dev/nbd1''
Result:   Physical volume "/dev/nbd1" successfully created
[sshexec] DEBUG 2018/02/07 18:56:17 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: pri.ostechnix.lan:22 Command: sudo /bin/bash -c 'vgcreate vg_85f10821543a2bc2e64af08c07e76e29 /dev/nbd1'
Result:   Volume group "vg_85f10821543a2bc2e64af08c07e76e29" successfully created
[negroni] Started GET /queue/fd0f3cd3fdc8442454a6c164b908031c
[negroni] Completed 200 OK in 22.604µs
[sshexec] DEBUG 2018/02/07 18:56:18 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: pri.ostechnix.lan:22 Command: sudo /bin/bash -c 'vgdisplay -c vg_85f10821543a2bc2e64af08c07e76e29'
Result:   vg_85f10821543a2bc2e64af08c07e76e29:r/w:772:-1:0:0:0:-1:0:1:1:48693248:4096:11888:0:11888:QThniW-Ywgn-77M0-oc3b-ccBP-Rm0S-cODL7s
[sshexec] DEBUG 2018/02/07 18:56:18 /src/github.com/heketi/heketi/executors/sshexec/device.go:137: Size of /dev/nbd1 in pri.ostechnix.lan is 48693248
[heketi] INFO 2018/02/07 18:56:18 Added device /dev/nbd1
[asynchttp] INFO 2018/02/07 18:56:18 asynchttp.go:129: Completed job fd0f3cd3fdc8442454a6c164b908031c in 1.187876309s
[negroni] Started GET /queue/fd0f3cd3fdc8442454a6c164b908031c
[negroni] Completed 204 No Content in 26.216µs
[negroni] Started POST /devices
[heketi] INFO 2018/02/07 18:56:18 Adding device /dev/nbd2 to node 863262f436c8daf2f1526f449111c5a0
[negroni] Completed 202 Accepted in 355.112µs
[asynchttp] INFO 2018/02/07 18:56:18 asynchttp.go:125: Started job 37ed903d27fe3f63a69e181d1543679f
[negroni] Started GET /queue/37ed903d27fe3f63a69e181d1543679f
[negroni] Completed 200 OK in 15.069µs
[sshexec] DEBUG 2018/02/07 18:56:19 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: pri.ostechnix.lan:22 Command: sudo /bin/bash -c 'pvcreate --metadatasize=128M --dataalignment=256K '/dev/nbd2''
Result:   Physical volume "/dev/nbd2" successfully created
[sshexec] DEBUG 2018/02/07 18:56:19 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: pri.ostechnix.lan:22 Command: sudo /bin/bash -c 'vgcreate vg_f4560d692ed58efd0ef49a219d9b6692 /dev/nbd2'
Result:   Volume group "vg_f4560d692ed58efd0ef49a219d9b6692" successfully created
[negroni] Started GET /queue/37ed903d27fe3f63a69e181d1543679f
[negroni] Completed 200 OK in 24.158µs
[sshexec] DEBUG 2018/02/07 18:56:20 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: pri.ostechnix.lan:22 Command: sudo /bin/bash -c 'vgdisplay -c vg_f4560d692ed58efd0ef49a219d9b6692'
Result:   vg_f4560d692ed58efd0ef49a219d9b6692:r/w:772:-1:0:0:0:-1:0:1:1:48693248:4096:11888:0:11888:Nbytxj-mViq-PdGu-8mzK-XLde-Su7V-Mvaxqb
[sshexec] DEBUG 2018/02/07 18:56:20 /src/github.com/heketi/heketi/executors/sshexec/device.go:137: Size of /dev/nbd2 in pri.ostechnix.lan is 48693248
[heketi] INFO 2018/02/07 18:56:20 Added device /dev/nbd2
[asynchttp] INFO 2018/02/07 18:56:20 asynchttp.go:129: Completed job 37ed903d27fe3f63a69e181d1543679f in 1.166524295s
[negroni] Started GET /queue/37ed903d27fe3f63a69e181d1543679f
[negroni] Completed 204 No Content in 25.602µs
[negroni] Started POST /devices
[heketi] INFO 2018/02/07 18:56:21 Adding device /dev/nbd3 to node 863262f436c8daf2f1526f449111c5a0
[negroni] Completed 202 Accepted in 13.774963ms
[asynchttp] INFO 2018/02/07 18:56:21 asynchttp.go:125: Started job 421b4f94fb977945441aa1c5f105a64d
[negroni] Started GET /queue/421b4f94fb977945441aa1c5f105a64d
[negroni] Completed 200 OK in 19.513µs
[sshexec] DEBUG 2018/02/07 18:56:21 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: pri.ostechnix.lan:22 Command: sudo /bin/bash -c 'pvcreate --metadatasize=128M --dataalignment=256K '/dev/nbd3''
Result:   Physical volume "/dev/nbd3" successfully created
[sshexec] DEBUG 2018/02/07 18:56:21 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: pri.ostechnix.lan:22 Command: sudo /bin/bash -c 'vgcreate vg_f619f5344b93c3f2fabd666f424b1938 /dev/nbd3'
Result:   Volume group "vg_f619f5344b93c3f2fabd666f424b1938" successfully created
[negroni] Started GET /queue/421b4f94fb977945441aa1c5f105a64d
[negroni] Completed 200 OK in 24.016µs
[sshexec] DEBUG 2018/02/07 18:56:22 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: pri.ostechnix.lan:22 Command: sudo /bin/bash -c 'vgdisplay -c vg_f619f5344b93c3f2fabd666f424b1938'
Result:   vg_f619f5344b93c3f2fabd666f424b1938:r/w:772:-1:0:0:0:-1:0:1:1:146350080:4096:35730:0:35730:VuCdjy-E7iR-chBl-P21P-ZFSk-NpDR-bkAAdh
[sshexec] DEBUG 2018/02/07 18:56:22 /src/github.com/heketi/heketi/executors/sshexec/device.go:137: Size of /dev/nbd3 in pri.ostechnix.lan is 146350080
[heketi] INFO 2018/02/07 18:56:22 Added device /dev/nbd3
[asynchttp] INFO 2018/02/07 18:56:22 asynchttp.go:129: Completed job 421b4f94fb977945441aa1c5f105a64d in 1.407534583s
[negroni] Started GET /queue/421b4f94fb977945441aa1c5f105a64d
[negroni] Completed 204 No Content in 21.67µs
[negroni] Started POST /nodes
[heketi] INFO 2018/02/07 18:56:23 Adding node sec.ostechnix.lan
[negroni] Completed 202 Accepted in 14.154286ms
[asynchttp] INFO 2018/02/07 18:56:23 asynchttp.go:125: Started job 35ccaa34869f2045dcea4d9f36c59e8d
[sshexec] INFO 2018/02/07 18:56:23 Probing: pri.ostechnix.lan -> 51.15.90.60
[negroni] Started GET /queue/35ccaa34869f2045dcea4d9f36c59e8d
[negroni] Completed 200 OK in 18.912µs
[negroni] Started GET /queue/35ccaa34869f2045dcea4d9f36c59e8d
[negroni] Completed 200 OK in 16.187µs
[negroni] Started GET /queue/35ccaa34869f2045dcea4d9f36c59e8d
[negroni] Completed 200 OK in 24.958µs
[sshexec] DEBUG 2018/02/07 18:56:23 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: pri.ostechnix.lan:22 Command: sudo /bin/bash -c 'gluster peer probe 51.15.90.60'
Result: peer probe: success. 
[heketi] INFO 2018/02/07 18:56:23 Added node 139c65b477131ca4a5cefec7246e46b3
[asynchttp] INFO 2018/02/07 18:56:23 asynchttp.go:129: Completed job 35ccaa34869f2045dcea4d9f36c59e8d in 710.531502ms
[negroni] Started GET /queue/35ccaa34869f2045dcea4d9f36c59e8d
[negroni] Completed 303 See Other in 38.854µs
[negroni] Started GET /nodes/139c65b477131ca4a5cefec7246e46b3
[negroni] Completed 200 OK in 176µs
[negroni] Started POST /devices
[heketi] INFO 2018/02/07 18:56:23 Adding device /dev/nbd1 to node 139c65b477131ca4a5cefec7246e46b3
[negroni] Completed 202 Accepted in 753.629µs
[asynchttp] INFO 2018/02/07 18:56:23 asynchttp.go:125: Started job 661eeeebd1b571e4b0b7686de31ce6c5
[negroni] Started GET /queue/661eeeebd1b571e4b0b7686de31ce6c5
[negroni] Completed 200 OK in 26.054µs
[sshexec] DEBUG 2018/02/07 18:56:24 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: sec.ostechnix.lan:22 Command: sudo /bin/bash -c 'pvcreate --metadatasize=128M --dataalignment=256K '/dev/nbd1''
Result:   Physical volume "/dev/nbd1" successfully created
[sshexec] DEBUG 2018/02/07 18:56:24 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: sec.ostechnix.lan:22 Command: sudo /bin/bash -c 'vgcreate vg_506a4b79ecb6e364441f733b87e191c0 /dev/nbd1'
Result:   Volume group "vg_506a4b79ecb6e364441f733b87e191c0" successfully created
[negroni] Started GET /queue/661eeeebd1b571e4b0b7686de31ce6c5
[negroni] Completed 200 OK in 26.656µs
[sshexec] DEBUG 2018/02/07 18:56:24 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: sec.ostechnix.lan:22 Command: sudo /bin/bash -c 'vgdisplay -c vg_506a4b79ecb6e364441f733b87e191c0'
Result:   vg_506a4b79ecb6e364441f733b87e191c0:r/w:772:-1:0:0:0:-1:0:1:1:48693248:4096:11888:0:11888:4LiCsV-cHxs-wDCe-Cf5L-6Obm-AAGP-u9Y81a
[sshexec] DEBUG 2018/02/07 18:56:24 /src/github.com/heketi/heketi/executors/sshexec/device.go:137: Size of /dev/nbd1 in sec.ostechnix.lan is 48693248
[heketi] INFO 2018/02/07 18:56:24 Added device /dev/nbd1
[asynchttp] INFO 2018/02/07 18:56:24 asynchttp.go:129: Completed job 661eeeebd1b571e4b0b7686de31ce6c5 in 1.21167918s
[negroni] Started GET /queue/661eeeebd1b571e4b0b7686de31ce6c5
[negroni] Completed 204 No Content in 32.949µs
[negroni] Started POST /devices
[heketi] INFO 2018/02/07 18:56:25 Adding device /dev/nbd2 to node 139c65b477131ca4a5cefec7246e46b3
[negroni] Completed 202 Accepted in 522.891µs
[asynchttp] INFO 2018/02/07 18:56:25 asynchttp.go:125: Started job 3c8fb3ae9408eeb8f12c9a2c2457cb42
[negroni] Started GET /queue/3c8fb3ae9408eeb8f12c9a2c2457cb42
[negroni] Completed 200 OK in 16.614µs
[sshexec] DEBUG 2018/02/07 18:56:26 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: sec.ostechnix.lan:22 Command: sudo /bin/bash -c 'pvcreate --metadatasize=128M --dataalignment=256K '/dev/nbd2''
Result:   Physical volume "/dev/nbd2" successfully created
[sshexec] DEBUG 2018/02/07 18:56:26 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: sec.ostechnix.lan:22 Command: sudo /bin/bash -c 'vgcreate vg_970b5dbda46a9e3d94c60130d18c1220 /dev/nbd2'
Result:   Volume group "vg_970b5dbda46a9e3d94c60130d18c1220" successfully created
[negroni] Started GET /queue/3c8fb3ae9408eeb8f12c9a2c2457cb42
[negroni] Completed 200 OK in 24.293µs
[sshexec] DEBUG 2018/02/07 18:56:26 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: sec.ostechnix.lan:22 Command: sudo /bin/bash -c 'vgdisplay -c vg_970b5dbda46a9e3d94c60130d18c1220'
Result:   vg_970b5dbda46a9e3d94c60130d18c1220:r/w:772:-1:0:0:0:-1:0:1:1:48693248:4096:11888:0:11888:TcVQKL-VBuK-La67-N35z-w0hQ-iW2h-7wqt4M
[sshexec] DEBUG 2018/02/07 18:56:26 /src/github.com/heketi/heketi/executors/sshexec/device.go:137: Size of /dev/nbd2 in sec.ostechnix.lan is 48693248
[heketi] INFO 2018/02/07 18:56:26 Added device /dev/nbd2
[asynchttp] INFO 2018/02/07 18:56:26 asynchttp.go:129: Completed job 3c8fb3ae9408eeb8f12c9a2c2457cb42 in 1.109311953s
[negroni] Started GET /queue/3c8fb3ae9408eeb8f12c9a2c2457cb42
[negroni] Completed 204 No Content in 28.681µs
[negroni] Started POST /devices
[heketi] INFO 2018/02/07 18:56:27 Adding device /dev/nbd3 to node 139c65b477131ca4a5cefec7246e46b3
[negroni] Completed 202 Accepted in 448.948µs
[asynchttp] INFO 2018/02/07 18:56:27 asynchttp.go:125: Started job 38ba96d72428f72aded2fddded703411
[negroni] Started GET /queue/38ba96d72428f72aded2fddded703411
[negroni] Completed 200 OK in 15.829µs
[sshexec] DEBUG 2018/02/07 18:56:28 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: sec.ostechnix.lan:22 Command: sudo /bin/bash -c 'pvcreate --metadatasize=128M --dataalignment=256K '/dev/nbd3''
Result:   Physical volume "/dev/nbd3" successfully created
[sshexec] DEBUG 2018/02/07 18:56:28 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: sec.ostechnix.lan:22 Command: sudo /bin/bash -c 'vgcreate vg_23abf78e40a19e12bd593cab96f3239f /dev/nbd3'
Result:   Volume group "vg_23abf78e40a19e12bd593cab96f3239f" successfully created
[negroni] Started GET /queue/38ba96d72428f72aded2fddded703411
[negroni] Completed 200 OK in 22.494µs
[sshexec] DEBUG 2018/02/07 18:56:28 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: sec.ostechnix.lan:22 Command: sudo /bin/bash -c 'vgdisplay -c vg_23abf78e40a19e12bd593cab96f3239f'
Result:   vg_23abf78e40a19e12bd593cab96f3239f:r/w:772:-1:0:0:0:-1:0:1:1:146350080:4096:35730:0:35730:gxy0JR-1aaO-GFat-dHPS-czfO-zpBF-wCL5NU
[sshexec] DEBUG 2018/02/07 18:56:28 /src/github.com/heketi/heketi/executors/sshexec/device.go:137: Size of /dev/nbd3 in sec.ostechnix.lan is 146350080
[heketi] INFO 2018/02/07 18:56:28 Added device /dev/nbd3
[asynchttp] INFO 2018/02/07 18:56:28 asynchttp.go:129: Completed job 38ba96d72428f72aded2fddded703411 in 1.152064036s
[negroni] Started GET /queue/38ba96d72428f72aded2fddded703411
[negroni] Completed 204 No Content in 23.658µs
[negroni] Started POST /nodes
[heketi] INFO 2018/02/07 18:56:29 Adding node third.ostechnix.lan
[negroni] Completed 202 Accepted in 528.166µs
[asynchttp] INFO 2018/02/07 18:56:29 asynchttp.go:125: Started job 28f81de286f82b1eccbc7bd66d659506
[sshexec] INFO 2018/02/07 18:56:29 Probing: sec.ostechnix.lan -> 163.172.151.120
[negroni] Started GET /queue/28f81de286f82b1eccbc7bd66d659506
[negroni] Completed 200 OK in 27.06µs
[negroni] Started GET /queue/28f81de286f82b1eccbc7bd66d659506
[negroni] Completed 200 OK in 32.057µs
[negroni] Started GET /queue/28f81de286f82b1eccbc7bd66d659506
[negroni] Completed 200 OK in 18.696µs
[negroni] Started GET /queue/28f81de286f82b1eccbc7bd66d659506
[negroni] Completed 200 OK in 25.91µs
[sshexec] DEBUG 2018/02/07 18:56:30 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: sec.ostechnix.lan:22 Command: sudo /bin/bash -c 'gluster peer probe 163.172.151.120'
Result: peer probe: success. 
[heketi] INFO 2018/02/07 18:56:30 Added node 89d89069e54b4257b817f22bf45b5538
[asynchttp] INFO 2018/02/07 18:56:30 asynchttp.go:129: Completed job 28f81de286f82b1eccbc7bd66d659506 in 787.911461ms
[negroni] Started GET /queue/28f81de286f82b1eccbc7bd66d659506
[negroni] Completed 303 See Other in 25.193µs
[negroni] Started GET /nodes/89d89069e54b4257b817f22bf45b5538
[negroni] Completed 200 OK in 106.407µs
[negroni] Started POST /devices
[heketi] INFO 2018/02/07 18:56:30 Adding device /dev/nbd1 to node 89d89069e54b4257b817f22bf45b5538
[negroni] Completed 202 Accepted in 378.779µs
[asynchttp] INFO 2018/02/07 18:56:30 asynchttp.go:125: Started job 2077d7429942893bdccf1e33643887af
[negroni] Started GET /queue/2077d7429942893bdccf1e33643887af
[negroni] Completed 200 OK in 13.69µs
[sshexec] DEBUG 2018/02/07 18:56:31 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: third.ostechnix.lan:22 Command: sudo /bin/bash -c 'pvcreate --metadatasize=128M --dataalignment=256K '/dev/nbd1''
Result:   Physical volume "/dev/nbd1" successfully created
[sshexec] DEBUG 2018/02/07 18:56:31 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: third.ostechnix.lan:22 Command: sudo /bin/bash -c 'vgcreate vg_33fdb01b4a2ea60d0d40fd4d328f8214 /dev/nbd1'
Result:   Volume group "vg_33fdb01b4a2ea60d0d40fd4d328f8214" successfully created
[negroni] Started GET /queue/2077d7429942893bdccf1e33643887af
[negroni] Completed 200 OK in 23.349µs
[sshexec] DEBUG 2018/02/07 18:56:31 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: third.ostechnix.lan:22 Command: sudo /bin/bash -c 'vgdisplay -c vg_33fdb01b4a2ea60d0d40fd4d328f8214'
Result:   vg_33fdb01b4a2ea60d0d40fd4d328f8214:r/w:772:-1:0:0:0:-1:0:1:1:48693248:4096:11888:0:11888:G2qSRy-e8f8-paVv-ORtW-XFW5-MdcP-6ti9gd
[sshexec] DEBUG 2018/02/07 18:56:31 /src/github.com/heketi/heketi/executors/sshexec/device.go:137: Size of /dev/nbd1 in third.ostechnix.lan is 48693248
[heketi] INFO 2018/02/07 18:56:31 Added device /dev/nbd1
[asynchttp] INFO 2018/02/07 18:56:31 asynchttp.go:129: Completed job 2077d7429942893bdccf1e33643887af in 1.151549038s
[negroni] Started GET /queue/2077d7429942893bdccf1e33643887af
[negroni] Completed 204 No Content in 23.975µs
[negroni] Started POST /devices
[heketi] INFO 2018/02/07 18:56:32 Adding device /dev/nbd2 to node 89d89069e54b4257b817f22bf45b5538
[negroni] Completed 202 Accepted in 14.248638ms
[asynchttp] INFO 2018/02/07 18:56:32 asynchttp.go:125: Started job 1aa232a006f5b96267e65e504f823841
[negroni] Started GET /queue/1aa232a006f5b96267e65e504f823841
[negroni] Completed 200 OK in 15.567µs
[sshexec] DEBUG 2018/02/07 18:56:33 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: third.ostechnix.lan:22 Command: sudo /bin/bash -c 'pvcreate --metadatasize=128M --dataalignment=256K '/dev/nbd2''
Result:   Physical volume "/dev/nbd2" successfully created
[sshexec] DEBUG 2018/02/07 18:56:33 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: third.ostechnix.lan:22 Command: sudo /bin/bash -c 'vgcreate vg_7ada758aa7da70e7719ca277f93cb4f9 /dev/nbd2'
Result:   Volume group "vg_7ada758aa7da70e7719ca277f93cb4f9" successfully created
[negroni] Started GET /queue/1aa232a006f5b96267e65e504f823841
[negroni] Completed 200 OK in 23.191µs
[sshexec] DEBUG 2018/02/07 18:56:33 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: third.ostechnix.lan:22 Command: sudo /bin/bash -c 'vgdisplay -c vg_7ada758aa7da70e7719ca277f93cb4f9'
Result:   vg_7ada758aa7da70e7719ca277f93cb4f9:r/w:772:-1:0:0:0:-1:0:1:1:48693248:4096:11888:0:11888:xkI3Si-P8EE-6aeE-ffXK-0fTF-c2E7-CTh78n
[sshexec] DEBUG 2018/02/07 18:56:33 /src/github.com/heketi/heketi/executors/sshexec/device.go:137: Size of /dev/nbd2 in third.ostechnix.lan is 48693248
[heketi] INFO 2018/02/07 18:56:33 Added device /dev/nbd2
[asynchttp] INFO 2018/02/07 18:56:33 asynchttp.go:129: Completed job 1aa232a006f5b96267e65e504f823841 in 1.096424617s
[negroni] Started GET /queue/1aa232a006f5b96267e65e504f823841
[negroni] Completed 204 No Content in 32.949µs
[negroni] Started POST /devices
[heketi] INFO 2018/02/07 18:56:34 Adding device /dev/nbd3 to node 89d89069e54b4257b817f22bf45b5538
[negroni] Completed 202 Accepted in 14.748901ms
[asynchttp] INFO 2018/02/07 18:56:34 asynchttp.go:125: Started job 19e9554907bfd61c1beb8194dbb5b406
[negroni] Started GET /queue/19e9554907bfd61c1beb8194dbb5b406
[negroni] Completed 200 OK in 25.782µs
[sshexec] DEBUG 2018/02/07 18:56:35 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: third.ostechnix.lan:22 Command: sudo /bin/bash -c 'pvcreate --metadatasize=128M --dataalignment=256K '/dev/nbd3''
Result:   Physical volume "/dev/nbd3" successfully created
[sshexec] DEBUG 2018/02/07 18:56:35 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: third.ostechnix.lan:22 Command: sudo /bin/bash -c 'vgcreate vg_83ba86d13242a1484eb8f4ba691c6327 /dev/nbd3'
Result:   Volume group "vg_83ba86d13242a1484eb8f4ba691c6327" successfully created
[negroni] Started GET /queue/19e9554907bfd61c1beb8194dbb5b406
[negroni] Completed 200 OK in 36.773µs
[sshexec] DEBUG 2018/02/07 18:56:35 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: third.ostechnix.lan:22 Command: sudo /bin/bash -c 'vgdisplay -c vg_83ba86d13242a1484eb8f4ba691c6327'
Result:   vg_83ba86d13242a1484eb8f4ba691c6327:r/w:772:-1:0:0:0:-1:0:1:1:146350080:4096:35730:0:35730:dDQSms-W2Rd-WKdu-xcFP-2d8g-1la8-82amTr
[sshexec] DEBUG 2018/02/07 18:56:35 /src/github.com/heketi/heketi/executors/sshexec/device.go:137: Size of /dev/nbd3 in third.ostechnix.lan is 146350080
[heketi] INFO 2018/02/07 18:56:35 Added device /dev/nbd3
[asynchttp] INFO 2018/02/07 18:56:35 asynchttp.go:129: Completed job 19e9554907bfd61c1beb8194dbb5b406 in 1.108107811s
[negroni] Started GET /queue/19e9554907bfd61c1beb8194dbb5b406
[negroni] Completed 204 No Content in 22.843µs
[negroni] Started GET /clusters/3109379364d9f90f6c52fd5210b7b69d
[negroni] Completed 200 OK in 109.93µs
[negroni] Started POST /volumes
[heketi] DEBUG 2018/02/07 18:56:58 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:82: [8397c8adb21679e81b87d7e6cd517129] Replica 3
[negroni] Completed 202 Accepted in 210.355µs
[asynchttp] INFO 2018/02/07 18:56:58 asynchttp.go:125: Started job efaf72b4736238b5bbf2b91d062e940d
[heketi] INFO 2018/02/07 18:56:58 Creating volume 8397c8adb21679e81b87d7e6cd517129
[heketi] DEBUG 2018/02/07 18:56:58 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:253: Using the following clusters: [3109379364d9f90f6c52fd5210b7b69d]
[heketi] DEBUG 2018/02/07 18:56:58 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry_allocate.go:44: brick_size = 3145728
[heketi] DEBUG 2018/02/07 18:56:58 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry_allocate.go:45: sets = 1
[heketi] DEBUG 2018/02/07 18:56:58 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry_allocate.go:46: num_bricks = 3
[heketi] INFO 2018/02/07 18:56:58 brick_num: 0
[heketi] DEBUG 2018/02/07 18:56:58 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry_allocate.go:398: 0 / 3
[heketi] DEBUG 2018/02/07 18:56:58 /src/github.com/heketi/heketi/apps/glusterfs/device_entry.go:411: device 23abf78e40a19e12bd593cab96f3239f[146350080] > required size [3162112] ?
[negroni] Started GET /queue/efaf72b4736238b5bbf2b91d062e940d
[negroni] Completed 200 OK in 16.785µs
[heketi] DEBUG 2018/02/07 18:56:58 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry_allocate.go:398: 1 / 3
[heketi] DEBUG 2018/02/07 18:56:58 /src/github.com/heketi/heketi/apps/glusterfs/device_entry.go:411: device 83ba86d13242a1484eb8f4ba691c6327[146350080] > required size [3162112] ?
[heketi] DEBUG 2018/02/07 18:56:58 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry_allocate.go:398: 2 / 3
[heketi] DEBUG 2018/02/07 18:56:58 /src/github.com/heketi/heketi/apps/glusterfs/device_entry.go:411: device f4560d692ed58efd0ef49a219d9b6692[48693248] > required size [3162112] ?
[heketi] DEBUG 2018/02/07 18:56:58 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:301: Volume to be created on cluster 3109379364d9f90f6c52fd5210b7b69d
[heketi] INFO 2018/02/07 18:56:58 Creating brick 3c9be1bfe1964d0543ed3bf135ca1a15
[heketi] INFO 2018/02/07 18:56:58 Creating brick 1e57c3a35dc12c9f8c4385f7ad1ad969
[heketi] INFO 2018/02/07 18:56:58 Creating brick 2229f09158081dda34feedab092f1ca6
[sshexec] DEBUG 2018/02/07 18:56:58 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: third.ostechnix.lan:22 Command: sudo /bin/bash -c 'mkdir -p /var/lib/heketi/mounts/vg_83ba86d13242a1484eb8f4ba691c6327/brick_1e57c3a35dc12c9f8c4385f7ad1ad969'
Result: 
[sshexec] DEBUG 2018/02/07 18:56:58 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: sec.ostechnix.lan:22 Command: sudo /bin/bash -c 'mkdir -p /var/lib/heketi/mounts/vg_23abf78e40a19e12bd593cab96f3239f/brick_3c9be1bfe1964d0543ed3bf135ca1a15'
Result: 
[sshexec] DEBUG 2018/02/07 18:56:58 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: pri.ostechnix.lan:22 Command: sudo /bin/bash -c 'mkdir -p /var/lib/heketi/mounts/vg_f4560d692ed58efd0ef49a219d9b6692/brick_2229f09158081dda34feedab092f1ca6'
Result: 
[negroni] Started GET /queue/efaf72b4736238b5bbf2b91d062e940d
[negroni] Completed 200 OK in 27.206µs
[sshexec] DEBUG 2018/02/07 18:56:59 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: third.ostechnix.lan:22 Command: sudo /bin/bash -c 'lvcreate --poolmetadatasize 16384K -c 256K -L 3145728K -T vg_83ba86d13242a1484eb8f4ba691c6327/tp_1e57c3a35dc12c9f8c4385f7ad1ad969 -V 3145728K -n brick_1e57c3a35dc12c9f8c4385f7ad1ad969'
Result:   Logical volume "brick_1e57c3a35dc12c9f8c4385f7ad1ad969" created.
[sshexec] DEBUG 2018/02/07 18:56:59 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: sec.ostechnix.lan:22 Command: sudo /bin/bash -c 'lvcreate --poolmetadatasize 16384K -c 256K -L 3145728K -T vg_23abf78e40a19e12bd593cab96f3239f/tp_3c9be1bfe1964d0543ed3bf135ca1a15 -V 3145728K -n brick_3c9be1bfe1964d0543ed3bf135ca1a15'
Result:   Logical volume "brick_3c9be1bfe1964d0543ed3bf135ca1a15" created.
[sshexec] DEBUG 2018/02/07 18:56:59 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: pri.ostechnix.lan:22 Command: sudo /bin/bash -c 'lvcreate --poolmetadatasize 16384K -c 256K -L 3145728K -T vg_f4560d692ed58efd0ef49a219d9b6692/tp_2229f09158081dda34feedab092f1ca6 -V 3145728K -n brick_2229f09158081dda34feedab092f1ca6'
Result:   Logical volume "brick_2229f09158081dda34feedab092f1ca6" created.
[negroni] Started GET /queue/efaf72b4736238b5bbf2b91d062e940d
[negroni] Completed 200 OK in 22.162µs
[negroni] Started GET /queue/efaf72b4736238b5bbf2b91d062e940d
[negroni] Completed 200 OK in 25.719µs
[sshexec] DEBUG 2018/02/07 18:57:01 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: third.ostechnix.lan:22 Command: sudo /bin/bash -c 'mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_83ba86d13242a1484eb8f4ba691c6327-brick_1e57c3a35dc12c9f8c4385f7ad1ad969'
Result: meta-data=/dev/mapper/vg_83ba86d13242a1484eb8f4ba691c6327-brick_1e57c3a35dc12c9f8c4385f7ad1ad969 isize=512    agcount=8, agsize=98272 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0
data     =                       bsize=4096   blocks=786176, imaxpct=25
         =                       sunit=32     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=32 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[sshexec] DEBUG 2018/02/07 18:57:01 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: third.ostechnix.lan:22 Command: sudo /bin/bash -c 'echo "/dev/mapper/vg_83ba86d13242a1484eb8f4ba691c6327-brick_1e57c3a35dc12c9f8c4385f7ad1ad969 /var/lib/heketi/mounts/vg_83ba86d13242a1484eb8f4ba691c6327/brick_1e57c3a35dc12c9f8c4385f7ad1ad969 xfs rw,inode64,noatime,nouuid 1 2" | tee -a /etc/fstab > /dev/null '
Result: 
[sshexec] DEBUG 2018/02/07 18:57:01 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: sec.ostechnix.lan:22 Command: sudo /bin/bash -c 'mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_23abf78e40a19e12bd593cab96f3239f-brick_3c9be1bfe1964d0543ed3bf135ca1a15'
Result: meta-data=/dev/mapper/vg_23abf78e40a19e12bd593cab96f3239f-brick_3c9be1bfe1964d0543ed3bf135ca1a15 isize=512    agcount=8, agsize=98272 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0
data     =                       bsize=4096   blocks=786176, imaxpct=25
         =                       sunit=32     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=32 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[sshexec] DEBUG 2018/02/07 18:57:01 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: third.ostechnix.lan:22 Command: sudo /bin/bash -c 'mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_83ba86d13242a1484eb8f4ba691c6327-brick_1e57c3a35dc12c9f8c4385f7ad1ad969 /var/lib/heketi/mounts/vg_83ba86d13242a1484eb8f4ba691c6327/brick_1e57c3a35dc12c9f8c4385f7ad1ad969'
Result: 
[negroni] Started GET /queue/efaf72b4736238b5bbf2b91d062e940d
[negroni] Completed 200 OK in 18.015µs
[sshexec] DEBUG 2018/02/07 18:57:02 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: third.ostechnix.lan:22 Command: sudo /bin/bash -c 'mkdir /var/lib/heketi/mounts/vg_83ba86d13242a1484eb8f4ba691c6327/brick_1e57c3a35dc12c9f8c4385f7ad1ad969/brick'
Result: 
[sshexec] DEBUG 2018/02/07 18:57:02 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: sec.ostechnix.lan:22 Command: sudo /bin/bash -c 'echo "/dev/mapper/vg_23abf78e40a19e12bd593cab96f3239f-brick_3c9be1bfe1964d0543ed3bf135ca1a15 /var/lib/heketi/mounts/vg_23abf78e40a19e12bd593cab96f3239f/brick_3c9be1bfe1964d0543ed3bf135ca1a15 xfs rw,inode64,noatime,nouuid 1 2" | tee -a /etc/fstab > /dev/null '
Result: 
[sshexec] DEBUG 2018/02/07 18:57:02 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: sec.ostechnix.lan:22 Command: sudo /bin/bash -c 'mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_23abf78e40a19e12bd593cab96f3239f-brick_3c9be1bfe1964d0543ed3bf135ca1a15 /var/lib/heketi/mounts/vg_23abf78e40a19e12bd593cab96f3239f/brick_3c9be1bfe1964d0543ed3bf135ca1a15'
Result: 
[sshexec] DEBUG 2018/02/07 18:57:02 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: sec.ostechnix.lan:22 Command: sudo /bin/bash -c 'mkdir /var/lib/heketi/mounts/vg_23abf78e40a19e12bd593cab96f3239f/brick_3c9be1bfe1964d0543ed3bf135ca1a15/brick'
Result: 
[sshexec] DEBUG 2018/02/07 18:57:02 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: pri.ostechnix.lan:22 Command: sudo /bin/bash -c 'mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_f4560d692ed58efd0ef49a219d9b6692-brick_2229f09158081dda34feedab092f1ca6'
Result: meta-data=/dev/mapper/vg_f4560d692ed58efd0ef49a219d9b6692-brick_2229f09158081dda34feedab092f1ca6 isize=512    agcount=8, agsize=98272 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0
data     =                       bsize=4096   blocks=786176, imaxpct=25
         =                       sunit=32     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=32 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[sshexec] DEBUG 2018/02/07 18:57:02 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: pri.ostechnix.lan:22 Command: sudo /bin/bash -c 'echo "/dev/mapper/vg_f4560d692ed58efd0ef49a219d9b6692-brick_2229f09158081dda34feedab092f1ca6 /var/lib/heketi/mounts/vg_f4560d692ed58efd0ef49a219d9b6692/brick_2229f09158081dda34feedab092f1ca6 xfs rw,inode64,noatime,nouuid 1 2" | tee -a /etc/fstab > /dev/null '
Result: 
[negroni] Started GET /queue/efaf72b4736238b5bbf2b91d062e940d
[negroni] Completed 200 OK in 20.333µs
[sshexec] DEBUG 2018/02/07 18:57:03 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: pri.ostechnix.lan:22 Command: sudo /bin/bash -c 'mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_f4560d692ed58efd0ef49a219d9b6692-brick_2229f09158081dda34feedab092f1ca6 /var/lib/heketi/mounts/vg_f4560d692ed58efd0ef49a219d9b6692/brick_2229f09158081dda34feedab092f1ca6'
Result: 
[sshexec] DEBUG 2018/02/07 18:57:03 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: pri.ostechnix.lan:22 Command: sudo /bin/bash -c 'mkdir /var/lib/heketi/mounts/vg_f4560d692ed58efd0ef49a219d9b6692/brick_2229f09158081dda34feedab092f1ca6/brick'
Result: 
[sshexec] INFO 2018/02/07 18:57:03 Creating volume vol_8397c8adb21679e81b87d7e6cd517129 replica 3
[sshexec] ERROR 2018/02/07 18:57:03 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:172: Failed to run command [sudo /bin/bash -c 'gluster --mode=script volume create vol_8397c8adb21679e81b87d7e6cd517129 replica 3 51.15.90.60:/var/lib/heketi/mounts/vg_23abf78e40a19e12bd593cab96f3239f/brick_3c9be1bfe1964d0543ed3bf135ca1a15/brick 163.172.151.120:/var/lib/heketi/mounts/vg_83ba86d13242a1484eb8f4ba691c6327/brick_1e57c3a35dc12c9f8c4385f7ad1ad969/brick 51.15.77.14:/var/lib/heketi/mounts/vg_f4560d692ed58efd0ef49a219d9b6692/brick_2229f09158081dda34feedab092f1ca6/brick '] on sec.ostechnix.lan:22: Err[Process exited with status 1]: Stdout []: Stderr [volume create: vol_8397c8adb21679e81b87d7e6cd517129: failed: Host 51.15.90.60 is not in 'Peer in Cluster' state
]
[negroni] Started GET /queue/efaf72b4736238b5bbf2b91d062e940d
[negroni] Completed 200 OK in 16.937µs
[sshexec] ERROR 2018/02/07 18:57:04 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:172: Failed to run command [sudo /bin/bash -c 'gluster --mode=script volume stop vol_8397c8adb21679e81b87d7e6cd517129 force'] on sec.ostechnix.lan:22: Err[Process exited with status 1]: Stdout []: Stderr [volume stop: vol_8397c8adb21679e81b87d7e6cd517129: failed: Volume vol_8397c8adb21679e81b87d7e6cd517129 does not exist
]
[sshexec] ERROR 2018/02/07 18:57:04 /src/github.com/heketi/heketi/executors/sshexec/volume.go:132: Unable to stop volume vol_8397c8adb21679e81b87d7e6cd517129: volume stop: vol_8397c8adb21679e81b87d7e6cd517129: failed: Volume vol_8397c8adb21679e81b87d7e6cd517129 does not exist
[negroni] Started GET /queue/efaf72b4736238b5bbf2b91d062e940d
[negroni] Completed 200 OK in 23.225µs
[sshexec] ERROR 2018/02/07 18:57:05 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:172: Failed to run command [sudo /bin/bash -c 'gluster --mode=script volume delete vol_8397c8adb21679e81b87d7e6cd517129'] on sec.ostechnix.lan:22: Err[Process exited with status 1]: Stdout []: Stderr [volume delete: vol_8397c8adb21679e81b87d7e6cd517129: failed: Volume vol_8397c8adb21679e81b87d7e6cd517129 does not exist
]
[sshexec] ERROR 2018/02/07 18:57:05 /src/github.com/heketi/heketi/executors/sshexec/volume.go:141: Unable to delete volume vol_8397c8adb21679e81b87d7e6cd517129: volume delete: vol_8397c8adb21679e81b87d7e6cd517129: failed: Volume vol_8397c8adb21679e81b87d7e6cd517129 does not exist
[heketi] INFO 2018/02/07 18:57:05 Deleting brick 3c9be1bfe1964d0543ed3bf135ca1a15
[heketi] INFO 2018/02/07 18:57:05 Deleting brick 1e57c3a35dc12c9f8c4385f7ad1ad969
[heketi] INFO 2018/02/07 18:57:05 Deleting brick 2229f09158081dda34feedab092f1ca6
[sshexec] DEBUG 2018/02/07 18:57:05 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: third.ostechnix.lan:22 Command: sudo /bin/bash -c 'umount /var/lib/heketi/mounts/vg_83ba86d13242a1484eb8f4ba691c6327/brick_1e57c3a35dc12c9f8c4385f7ad1ad969'
Result: 
[sshexec] DEBUG 2018/02/07 18:57:05 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: sec.ostechnix.lan:22 Command: sudo /bin/bash -c 'umount /var/lib/heketi/mounts/vg_23abf78e40a19e12bd593cab96f3239f/brick_3c9be1bfe1964d0543ed3bf135ca1a15'
Result: 
[negroni] Started GET /queue/efaf72b4736238b5bbf2b91d062e940d
[negroni] Completed 200 OK in 19.529µs
[sshexec] DEBUG 2018/02/07 18:57:06 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: pri.ostechnix.lan:22 Command: sudo /bin/bash -c 'umount /var/lib/heketi/mounts/vg_f4560d692ed58efd0ef49a219d9b6692/brick_2229f09158081dda34feedab092f1ca6'
Result: 
[negroni] Started GET /queue/efaf72b4736238b5bbf2b91d062e940d
[negroni] Completed 200 OK in 25.964µs
[sshexec] DEBUG 2018/02/07 18:57:07 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: sec.ostechnix.lan:22 Command: sudo /bin/bash -c 'lvremove -f vg_23abf78e40a19e12bd593cab96f3239f/tp_3c9be1bfe1964d0543ed3bf135ca1a15'
Result:   Logical volume "brick_3c9be1bfe1964d0543ed3bf135ca1a15" successfully removed
  Logical volume "tp_3c9be1bfe1964d0543ed3bf135ca1a15" successfully removed
[sshexec] DEBUG 2018/02/07 18:57:07 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: third.ostechnix.lan:22 Command: sudo /bin/bash -c 'lvremove -f vg_83ba86d13242a1484eb8f4ba691c6327/tp_1e57c3a35dc12c9f8c4385f7ad1ad969'
Result:   Logical volume "brick_1e57c3a35dc12c9f8c4385f7ad1ad969" successfully removed
  Logical volume "tp_1e57c3a35dc12c9f8c4385f7ad1ad969" successfully removed
[sshexec] DEBUG 2018/02/07 18:57:07 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: pri.ostechnix.lan:22 Command: sudo /bin/bash -c 'lvremove -f vg_f4560d692ed58efd0ef49a219d9b6692/tp_2229f09158081dda34feedab092f1ca6'
Result:   Logical volume "brick_2229f09158081dda34feedab092f1ca6" successfully removed
  Logical volume "tp_2229f09158081dda34feedab092f1ca6" successfully removed
[sshexec] DEBUG 2018/02/07 18:57:07 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: sec.ostechnix.lan:22 Command: sudo /bin/bash -c 'rmdir /var/lib/heketi/mounts/vg_23abf78e40a19e12bd593cab96f3239f/brick_3c9be1bfe1964d0543ed3bf135ca1a15'
Result: 
[sshexec] DEBUG 2018/02/07 18:57:07 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: third.ostechnix.lan:22 Command: sudo /bin/bash -c 'rmdir /var/lib/heketi/mounts/vg_83ba86d13242a1484eb8f4ba691c6327/brick_1e57c3a35dc12c9f8c4385f7ad1ad969'
Result: 
[sshexec] DEBUG 2018/02/07 18:57:07 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: pri.ostechnix.lan:22 Command: sudo /bin/bash -c 'rmdir /var/lib/heketi/mounts/vg_f4560d692ed58efd0ef49a219d9b6692/brick_2229f09158081dda34feedab092f1ca6'
Result: 
[negroni] Started GET /queue/efaf72b4736238b5bbf2b91d062e940d
[negroni] Completed 200 OK in 28.023µs
[sshexec] DEBUG 2018/02/07 18:57:08 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: sec.ostechnix.lan:22 Command: sudo /bin/bash -c 'sed -i.save "/brick_3c9be1bfe1964d0543ed3bf135ca1a15/d" /etc/fstab'
Result: 
[sshexec] DEBUG 2018/02/07 18:57:08 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: third.ostechnix.lan:22 Command: sudo /bin/bash -c 'sed -i.save "/brick_1e57c3a35dc12c9f8c4385f7ad1ad969/d" /etc/fstab'
Result: 
[sshexec] DEBUG 2018/02/07 18:57:08 /src/github.com/heketi/heketi/pkg/utils/ssh/ssh.go:176: Host: pri.ostechnix.lan:22 Command: sudo /bin/bash -c 'sed -i.save "/brick_2229f09158081dda34feedab092f1ca6/d" /etc/fstab'
Result: 
[heketi] ERROR 2018/02/07 18:57:08 /src/github.com/heketi/heketi/apps/glusterfs/app_volume.go:155: Failed to create volume: volume create: vol_8397c8adb21679e81b87d7e6cd517129: failed: Host 51.15.90.60 is not in 'Peer in Cluster' state
[asynchttp] INFO 2018/02/07 18:57:08 asynchttp.go:129: Completed job efaf72b4736238b5bbf2b91d062e940d in 10.507322606s
[negroni] Started GET /queue/efaf72b4736238b5bbf2b91d062e940d
[negroni] Completed 500 Internal Server Error in 25.64µs
[negroni] Started GET /clusters
[negroni] Completed 200 OK in 56.493µs
[negroni] Started GET /clusters/3109379364d9f90f6c52fd5210b7b69d
[negroni] Completed 200 OK in 213.278µs
[negroni] Started GET /nodes/139c65b477131ca4a5cefec7246e46b3
[negroni] Completed 200 OK in 267.732µs
[negroni] Started GET /nodes/863262f436c8daf2f1526f449111c5a0
[negroni] Completed 200 OK in 240.626µs
[negroni] Started GET /nodes/89d89069e54b4257b817f22bf45b5538
[negroni] Completed 200 OK in 209.605µs



on node 1:

root@pri:~# gluster peer status
Number of Peers: 2

Hostname: sec.ostechnix.lan
Uuid: 887c5074-ab28-4642-846f-fa6c87430987
State: Peer in Cluster (Connected)
Other names:
sec.ostechnix.lan

Hostname: 163.172.151.120
Uuid: ffd9ff21-c18c-4095-8f05-acc5bb567ef8
State: Peer in Cluster (Connected)


on node 2:

root@sec:~# gluster peer status
Number of Peers: 2

Hostname: pri.ostechnix.lan
Uuid: 5417f7f0-37c6-4776-bdd1-0a29f45fab89
State: Peer in Cluster (Connected)

Hostname: 163.172.151.120
Uuid: ffd9ff21-c18c-4095-8f05-acc5bb567ef8
State: Peer in Cluster (Connected)


on node 3:

root@third:/var/log/glusterfs# gluster peer status
Number of Peers: 2

Hostname: 51.15.90.60
Uuid: 887c5074-ab28-4642-846f-fa6c87430987
State: Peer in Cluster (Connected)
Other names:
51.15.90.60

Hostname: pri.ostechnix.lan
Uuid: 5417f7f0-37c6-4776-bdd1-0a29f45fab89
State: Peer in Cluster (Connected)
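
What stands out to me when comparing the three outputs: the same peer is known by different names on different nodes. sec shows up as sec.ostechnix.lan on node 1 but only as 51.15.90.60 on node 3, and third shows up only by its IP everywhere, since heketi probed each node from a different peer. Notably, the rejected host in the error, 51.15.90.60, is sec itself, i.e. the very node the volume create command was executed on. A quick consistency check over all three nodes (assuming the same passwordless root ssh that heketi's sshexec uses):

for n in pri sec third; do
    echo "== ${n} =="
    ssh root@${n}.ostechnix.lan 'gluster peer status; getent hosts pri.ostechnix.lan sec.ostechnix.lan third.ostechnix.lan'
done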


node 1 glusterd.log:

[2018-02-07 17:51:15.273687] I [MSGID: 100030] [glusterfsd.c:2556:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.13.2 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2018-02-07 17:51:15.282394] I [MSGID: 106478] [glusterd.c:1423:init] 0-management: Maximum allowed open file descriptors set to 65536
[2018-02-07 17:51:15.282440] I [MSGID: 106479] [glusterd.c:1481:init] 0-management: Using /var/lib/glusterd as working directory
[2018-02-07 17:51:15.282463] I [MSGID: 106479] [glusterd.c:1486:init] 0-management: Using /var/run/gluster as pid file working directory
[2018-02-07 17:51:15.287695] W [MSGID: 103071] [rdma.c:4631:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
[2018-02-07 17:51:15.287746] W [MSGID: 103055] [rdma.c:4940:init] 0-rdma.management: Failed to initialize IB Device
[2018-02-07 17:51:15.287767] W [rpc-transport.c:350:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2018-02-07 17:51:15.287851] W [rpcsvc.c:1770:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2018-02-07 17:51:15.287875] E [MSGID: 106243] [glusterd.c:1769:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2018-02-07 17:51:18.538307] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2018-02-07 17:51:18.538390] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2018-02-07 17:51:18.538393] I [MSGID: 106514] [glusterd-store.c:2263:glusterd_restore_op_version] 0-management: Detected new install. Setting op-version to maximum : 31302
[2018-02-07 17:51:18.538452] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/options. [No such file or directory]
[2018-02-07 17:51:18.543044] I [MSGID: 106194] [glusterd-store.c:3831:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list.
Final graph:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option rpc-auth.auth-glusterfs on
  4:     option rpc-auth.auth-unix on
  5:     option rpc-auth.auth-null on
  6:     option rpc-auth-allow-insecure on
  7:     option transport.listen-backlog 10
  8:     option event-threads 1
  9:     option ping-timeout 0
 10:     option transport.socket.read-fail-log off
 11:     option transport.socket.keepalive-interval 2
 12:     option transport.socket.keepalive-time 10
 13:     option transport-type rdma
 14:     option working-directory /var/lib/glusterd
 15: end-volume
 16:  
+------------------------------------------------------------------------------+
[2018-02-07 17:51:18.543581] I [MSGID: 101190] [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2018-02-07 17:51:41.606378] W [glusterfsd.c:1393:cleanup_and_exit] (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7f1c817c26ba] -->/usr/sbin/glusterd(glusterfs_sigwaiter+0xe5) [0x56547f4ebe65] -->/usr/sbin/glusterd(cleanup_and_exit+0x54) [0x56547f4ebc84] ) 0-: received signum (15), shutting down
[2018-02-07 17:52:46.932390] I [MSGID: 100030] [glusterfsd.c:2556:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.13.2 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2018-02-07 17:52:46.964662] I [MSGID: 106478] [glusterd.c:1423:init] 0-management: Maximum allowed open file descriptors set to 65536
[2018-02-07 17:52:46.964769] I [MSGID: 106479] [glusterd.c:1481:init] 0-management: Using /var/lib/glusterd as working directory
[2018-02-07 17:52:46.964793] I [MSGID: 106479] [glusterd.c:1486:init] 0-management: Using /var/run/gluster as pid file working directory
[2018-02-07 17:52:46.977217] W [MSGID: 103071] [rdma.c:4631:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
[2018-02-07 17:52:46.977377] W [MSGID: 103055] [rdma.c:4940:init] 0-rdma.management: Failed to initialize IB Device
[2018-02-07 17:52:46.977399] W [rpc-transport.c:350:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2018-02-07 17:52:46.977488] W [rpcsvc.c:1770:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2018-02-07 17:52:46.977519] E [MSGID: 106243] [glusterd.c:1769:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2018-02-07 17:52:50.277283] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2018-02-07 17:52:50.277367] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2018-02-07 17:52:50.277370] I [MSGID: 106514] [glusterd-store.c:2263:glusterd_restore_op_version] 0-management: Detected new install. Setting op-version to maximum : 31302
[2018-02-07 17:52:50.280400] I [MSGID: 106194] [glusterd-store.c:3831:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list.
Final graph:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option rpc-auth.auth-glusterfs on
  4:     option rpc-auth.auth-unix on
  5:     option rpc-auth.auth-null on
  6:     option rpc-auth-allow-insecure on
  7:     option transport.listen-backlog 10
  8:     option event-threads 1
  9:     option ping-timeout 0
 10:     option transport.socket.read-fail-log off
 11:     option transport.socket.keepalive-interval 2
 12:     option transport.socket.keepalive-time 10
 13:     option transport-type rdma
 14:     option working-directory /var/lib/glusterd
 15: end-volume
 16:  
+------------------------------------------------------------------------------+
[2018-02-07 17:52:50.282250] I [MSGID: 101190] [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2018-02-07 17:56:23.632800] I [MSGID: 106487] [glusterd-handler.c:1243:__glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req 51.15.90.60 24007
[2018-02-07 17:56:23.633440] I [MSGID: 106129] [glusterd-handler.c:3624:glusterd_probe_begin] 0-glusterd: Unable to find peerinfo for host: 51.15.90.60 (24007)
[2018-02-07 17:56:23.638142] W [MSGID: 106062] [glusterd-handler.c:3400:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2018-02-07 17:56:23.638187] I [rpc-clnt.c:1044:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2018-02-07 17:56:23.638356] W [MSGID: 101002] [options.c:995:xl_opt_validate] 0-management: option 'address-family' is deprecated, preferred is 'transport.address-family', continuing with correction
[2018-02-07 17:56:23.638984] W [socket.c:3216:socket_connect] 0-management: Error disabling sockopt IPV6_V6ONLY: "Protocol not available"
[2018-02-07 17:56:23.639090] I [MSGID: 106498] [glusterd-handler.c:3550:glusterd_friend_add] 0-management: connect returned 0
[2018-02-07 17:56:23.652988] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2018-02-07 17:56:23.653116] I [MSGID: 106477] [glusterd.c:190:glusterd_uuid_generate_save] 0-management: generated UUID: 5417f7f0-37c6-4776-bdd1-0a29f45fab89
[2018-02-07 17:56:23.690680] I [MSGID: 106511] [glusterd-rpc-ops.c:262:__glusterd_probe_cbk] 0-management: Received probe resp from uuid: 887c5074-ab28-4642-846f-fa6c87430987, host: 51.15.90.60
[2018-02-07 17:56:23.690731] I [MSGID: 106511] [glusterd-rpc-ops.c:422:__glusterd_probe_cbk] 0-glusterd: Received resp to probe req
[2018-02-07 17:56:23.701458] I [MSGID: 106493] [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 887c5074-ab28-4642-846f-fa6c87430987, host: 51.15.90.60, port: 0
[2018-02-07 17:56:23.715399] I [MSGID: 106163] [glusterd-handshake.c:1361:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 31302
[2018-02-07 17:56:23.728754] I [MSGID: 106490] [glusterd-handler.c:2891:__glusterd_handle_probe_query] 0-glusterd: Received probe from uuid: 887c5074-ab28-4642-846f-fa6c87430987
[2018-02-07 17:56:23.728941] I [MSGID: 106493] [glusterd-handler.c:2954:__glusterd_handle_probe_query] 0-glusterd: Responded to sec.ostechnix.lan, op_ret: 0, op_errno: 0, ret: 0
[2018-02-07 17:56:23.736766] I [MSGID: 106490] [glusterd-handler.c:2540:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 887c5074-ab28-4642-846f-fa6c87430987
[2018-02-07 17:56:23.740324] I [MSGID: 106493] [glusterd-handler.c:3800:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 51.15.90.60 (0), ret: 0, op_ret: 0
[2018-02-07 17:56:23.747713] I [MSGID: 106492] [glusterd-handler.c:2718:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 887c5074-ab28-4642-846f-fa6c87430987
[2018-02-07 17:56:23.747785] I [MSGID: 106502] [glusterd-handler.c:2763:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2018-02-07 17:56:23.750683] I [MSGID: 106493] [glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 887c5074-ab28-4642-846f-fa6c87430987
[2018-02-07 17:56:30.611370] I [MSGID: 106492] [glusterd-handler.c:2718:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 887c5074-ab28-4642-846f-fa6c87430987
[2018-02-07 17:56:30.611448] I [MSGID: 106502] [glusterd-handler.c:2763:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2018-02-07 17:56:30.615375] I [rpc-clnt.c:1044:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2018-02-07 17:56:30.615836] W [socket.c:3216:socket_connect] 0-management: Error disabling sockopt IPV6_V6ONLY: "Protocol not available"
[2018-02-07 17:56:30.615543] W [MSGID: 101002] [options.c:995:xl_opt_validate] 0-management: option 'address-family' is deprecated, preferred is 'transport.address-family', continuing with correction
[2018-02-07 17:56:30.615929] I [MSGID: 106498] [glusterd-handler.c:3603:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2018-02-07 17:56:30.684564] I [MSGID: 106163] [glusterd-handshake.c:1361:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 31302
[2018-02-07 17:56:30.705120] I [MSGID: 106490] [glusterd-handler.c:2540:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: ffd9ff21-c18c-4095-8f05-acc5bb567ef8
[2018-02-07 17:56:30.707489] I [MSGID: 106493] [glusterd-handler.c:3800:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 163.172.151.120 (0), ret: 0, op_ret: 0
[2018-02-07 17:56:30.714046] I [MSGID: 106493] [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: ffd9ff21-c18c-4095-8f05-acc5bb567ef8, host: 163.172.151.120, port: 0
[2018-02-07 17:56:30.717253] I [MSGID: 106492] [glusterd-handler.c:2718:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ffd9ff21-c18c-4095-8f05-acc5bb567ef8
The message "I [MSGID: 106502] [glusterd-handler.c:2763:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend" repeated 2 times between [2018-02-07 17:56:30.611448] and [2018-02-07 17:56:30.728557]
[2018-02-07 17:56:30.731490] I [MSGID: 106493] [glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ffd9ff21-c18c-4095-8f05-acc5bb567ef8
[2018-02-07 17:56:30.615360] W [MSGID: 106062] [glusterd-handler.c:3400:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2018-02-07 17:56:30.725752] I [MSGID: 106492] [glusterd-handler.c:2718:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ffd9ff21-c18c-4095-8f05-acc5bb567ef8
[2018-02-07 17:56:30.735785] I [MSGID: 106493] [glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ffd9ff21-c18c-4095-8f05-acc5bb567ef8
[2018-02-07 18:03:41.641674] I [MSGID: 106487] [glusterd-handler.c:1485:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
[2018-02-07 18:04:45.987668] I [MSGID: 106487] [glusterd-handler.c:1485:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req




node 2 glusterd.log:

[2018-02-07 17:51:13.767399] I [MSGID: 100030] [glusterfsd.c:2556:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.13.2 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2018-02-07 17:51:13.776240] I [MSGID: 106478] [glusterd.c:1423:init] 0-management: Maximum allowed open file descriptors set to 65536
[2018-02-07 17:51:13.776289] I [MSGID: 106479] [glusterd.c:1481:init] 0-management: Using /var/lib/glusterd as working directory
[2018-02-07 17:51:13.776312] I [MSGID: 106479] [glusterd.c:1486:init] 0-management: Using /var/run/gluster as pid file working directory
[2018-02-07 17:51:13.781592] W [MSGID: 103071] [rdma.c:4631:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
[2018-02-07 17:51:13.781642] W [MSGID: 103055] [rdma.c:4940:init] 0-rdma.management: Failed to initialize IB Device
[2018-02-07 17:51:13.781664] W [rpc-transport.c:350:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2018-02-07 17:51:13.781745] W [rpcsvc.c:1770:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2018-02-07 17:51:13.781769] E [MSGID: 106243] [glusterd.c:1769:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2018-02-07 17:51:17.082491] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2018-02-07 17:51:17.082575] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2018-02-07 17:51:17.082578] I [MSGID: 106514] [glusterd-store.c:2263:glusterd_restore_op_version] 0-management: Detected new install. Setting op-version to maximum : 31302
[2018-02-07 17:51:17.082639] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/options. [No such file or directory]
[2018-02-07 17:51:17.087442] I [MSGID: 106194] [glusterd-store.c:3831:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list.
Final graph:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option rpc-auth.auth-glusterfs on
  4:     option rpc-auth.auth-unix on
  5:     option rpc-auth.auth-null on
  6:     option rpc-auth-allow-insecure on
  7:     option transport.listen-backlog 10
  8:     option event-threads 1
  9:     option ping-timeout 0
 10:     option transport.socket.read-fail-log off
 11:     option transport.socket.keepalive-interval 2
 12:     option transport.socket.keepalive-time 10
 13:     option transport-type rdma
 14:     option working-directory /var/lib/glusterd
 15: end-volume
 16:  
+------------------------------------------------------------------------------+
[2018-02-07 17:51:17.088368] I [MSGID: 101190] [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2018-02-07 17:52:01.683189] W [glusterfsd.c:1393:cleanup_and_exit] (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7fc91dfd06ba] -->/usr/sbin/glusterd(glusterfs_sigwaiter+0xe5) [0x55d8c8744e65] -->/usr/sbin/glusterd(cleanup_and_exit+0x54) [0x55d8c8744c84] ) 0-: received signum (15), shutting down
[2018-02-07 17:53:26.993559] I [MSGID: 100030] [glusterfsd.c:2556:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.13.2 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2018-02-07 17:53:27.045746] I [MSGID: 106478] [glusterd.c:1423:init] 0-management: Maximum allowed open file descriptors set to 65536
[2018-02-07 17:53:27.045804] I [MSGID: 106479] [glusterd.c:1481:init] 0-management: Using /var/lib/glusterd as working directory
[2018-02-07 17:53:27.045828] I [MSGID: 106479] [glusterd.c:1486:init] 0-management: Using /var/run/gluster as pid file working directory
[2018-02-07 17:53:27.059202] W [MSGID: 103071] [rdma.c:4631:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
[2018-02-07 17:53:27.059311] W [MSGID: 103055] [rdma.c:4940:init] 0-rdma.management: Failed to initialize IB Device
[2018-02-07 17:53:27.059334] W [rpc-transport.c:350:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2018-02-07 17:53:27.059414] W [rpcsvc.c:1770:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2018-02-07 17:53:27.059446] E [MSGID: 106243] [glusterd.c:1769:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2018-02-07 17:53:30.403842] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2018-02-07 17:53:30.403925] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2018-02-07 17:53:30.403929] I [MSGID: 106514] [glusterd-store.c:2263:glusterd_restore_op_version] 0-management: Detected new install. Setting op-version to maximum : 31302
[2018-02-07 17:53:30.407235] I [MSGID: 106194] [glusterd-store.c:3831:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list.
Final graph:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option rpc-auth.auth-glusterfs on
  4:     option rpc-auth.auth-unix on
  5:     option rpc-auth.auth-null on
  6:     option rpc-auth-allow-insecure on
  7:     option transport.listen-backlog 10
  8:     option event-threads 1
  9:     option ping-timeout 0
 10:     option transport.socket.read-fail-log off
 11:     option transport.socket.keepalive-interval 2
 12:     option transport.socket.keepalive-time 10
 13:     option transport-type rdma
 14:     option working-directory /var/lib/glusterd
 15: end-volume
 16:  
+------------------------------------------------------------------------------+
[2018-02-07 17:53:30.409887] I [MSGID: 101190] [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2018-02-07 17:56:23.669001] I [MSGID: 106163] [glusterd-handshake.c:1361:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 31302
[2018-02-07 17:56:23.669079] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2018-02-07 17:56:23.669210] I [MSGID: 106477] [glusterd.c:190:glusterd_uuid_generate_save] 0-management: generated UUID: 887c5074-ab28-4642-846f-fa6c87430987
[2018-02-07 17:56:23.683247] I [MSGID: 106490] [glusterd-handler.c:2891:__glusterd_handle_probe_query] 0-glusterd: Received probe from uuid: 5417f7f0-37c6-4776-bdd1-0a29f45fab89
[2018-02-07 17:56:23.683731] I [MSGID: 106129] [glusterd-handler.c:2926:__glusterd_handle_probe_query] 0-glusterd: Unable to find peerinfo for host: pri.ostechnix.lan (24007)
[2018-02-07 17:56:23.687744] W [MSGID: 106062] [glusterd-handler.c:3400:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2018-02-07 17:56:23.687796] I [rpc-clnt.c:1044:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2018-02-07 17:56:23.687955] W [MSGID: 101002] [options.c:995:xl_opt_validate] 0-management: option 'address-family' is deprecated, preferred is 'transport.address-family', continuing with correction
[2018-02-07 17:56:23.688448] W [socket.c:3216:socket_connect] 0-management: Error disabling sockopt IPV6_V6ONLY: "Protocol not available"
[2018-02-07 17:56:23.688549] I [MSGID: 106498] [glusterd-handler.c:3550:glusterd_friend_add] 0-management: connect returned 0
[2018-02-07 17:56:23.688614] I [MSGID: 106493] [glusterd-handler.c:2954:__glusterd_handle_probe_query] 0-glusterd: Responded to pri.ostechnix.lan, op_ret: 0, op_errno: 0, ret: 0
[2018-02-07 17:56:23.695533] I [MSGID: 106490] [glusterd-handler.c:2540:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 5417f7f0-37c6-4776-bdd1-0a29f45fab89
[2018-02-07 17:56:23.699387] I [MSGID: 106493] [glusterd-handler.c:3800:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to pri.ostechnix.lan (0), ret: 0, op_ret: 0
[2018-02-07 17:56:23.734523] I [MSGID: 106511] [glusterd-rpc-ops.c:262:__glusterd_probe_cbk] 0-management: Received probe resp from uuid: 5417f7f0-37c6-4776-bdd1-0a29f45fab89, host: pri.ostechnix.lan
[2018-02-07 17:56:23.734569] I [MSGID: 106511] [glusterd-rpc-ops.c:422:__glusterd_probe_cbk] 0-glusterd: Received resp to probe req
[2018-02-07 17:56:23.745496] I [MSGID: 106493] [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 5417f7f0-37c6-4776-bdd1-0a29f45fab89, host: pri.ostechnix.lan, port: 0
[2018-02-07 17:56:23.748477] I [MSGID: 106492] [glusterd-handler.c:2718:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 5417f7f0-37c6-4776-bdd1-0a29f45fab89
[2018-02-07 17:56:23.748538] I [MSGID: 106502] [glusterd-handler.c:2763:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2018-02-07 17:56:23.753532] I [MSGID: 106493] [glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 5417f7f0-37c6-4776-bdd1-0a29f45fab89
[2018-02-07 17:56:30.408410] I [MSGID: 106487] [glusterd-handler.c:1243:__glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req 163.172.151.120 24007
[2018-02-07 17:56:30.409306] I [MSGID: 106129] [glusterd-handler.c:3624:glusterd_probe_begin] 0-glusterd: Unable to find peerinfo for host: 163.172.151.120 (24007)
[2018-02-07 17:56:30.414124] W [MSGID: 106062] [glusterd-handler.c:3400:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2018-02-07 17:56:30.414174] I [rpc-clnt.c:1044:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2018-02-07 17:56:30.414342] W [MSGID: 101002] [options.c:995:xl_opt_validate] 0-management: option 'address-family' is deprecated, preferred is 'transport.address-family', continuing with correction
[2018-02-07 17:56:30.414655] W [socket.c:3216:socket_connect] 0-management: Error disabling sockopt IPV6_V6ONLY: "Protocol not available"
[2018-02-07 17:56:30.414745] I [MSGID: 106498] [glusterd-handler.c:3550:glusterd_friend_add] 0-management: connect returned 0
[2018-02-07 17:56:30.515097] I [MSGID: 106511] [glusterd-rpc-ops.c:262:__glusterd_probe_cbk] 0-management: Received probe resp from uuid: ffd9ff21-c18c-4095-8f05-acc5bb567ef8, host: 163.172.151.120
[2018-02-07 17:56:30.515148] I [MSGID: 106511] [glusterd-rpc-ops.c:422:__glusterd_probe_cbk] 0-glusterd: Received resp to probe req
[2018-02-07 17:56:30.534397] I [MSGID: 106493] [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: ffd9ff21-c18c-4095-8f05-acc5bb567ef8, host: 163.172.151.120, port: 0
[2018-02-07 17:56:30.564668] I [MSGID: 106163] [glusterd-handshake.c:1361:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 31302
[2018-02-07 17:56:30.587516] I [MSGID: 106490] [glusterd-handler.c:2891:__glusterd_handle_probe_query] 0-glusterd: Received probe from uuid: ffd9ff21-c18c-4095-8f05-acc5bb567ef8
[2018-02-07 17:56:30.587689] I [MSGID: 106493] [glusterd-handler.c:2954:__glusterd_handle_probe_query] 0-glusterd: Responded to third.ostechnix.lan, op_ret: 0, op_errno: 0, ret: 0
[2018-02-07 17:56:30.603123] I [MSGID: 106490] [glusterd-handler.c:2540:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: ffd9ff21-c18c-4095-8f05-acc5bb567ef8
[2018-02-07 17:56:30.605882] I [MSGID: 106493] [glusterd-handler.c:3800:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 163.172.151.120 (0), ret: 0, op_ret: 0
[2018-02-07 17:56:30.617952] I [MSGID: 106493] [glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 5417f7f0-37c6-4776-bdd1-0a29f45fab89
[2018-02-07 17:56:30.621365] I [MSGID: 106492] [glusterd-handler.c:2718:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: ffd9ff21-c18c-4095-8f05-acc5bb567ef8
[2018-02-07 17:56:30.621410] I [MSGID: 106502] [glusterd-handler.c:2763:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2018-02-07 17:56:30.626898] I [MSGID: 106493] [glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: ffd9ff21-c18c-4095-8f05-acc5bb567ef8
[2018-02-07 17:57:03.710418] E [MSGID: 106429] [glusterd-utils.c:1268:glusterd_brickinfo_new_from_brick] 0-management: Failed to convert hostname 51.15.90.60 to uuid
[2018-02-07 17:57:03.710477] E [MSGID: 106301] [glusterd-syncop.c:1352:gd_stage_op_phase] 0-management: Staging of operation 'Volume Create' failed on localhost : Host 51.15.90.60 is not in 'Peer in Cluster' state
[2018-02-07 17:57:04.598365] E [MSGID: 106525] [glusterd-op-sm.c:4347:glusterd_dict_set_volid] 0-management: Volume vol_8397c8adb21679e81b87d7e6cd517129 does not exist
[2018-02-07 17:57:04.598406] E [MSGID: 106289] [glusterd-syncop.c:1967:gd_sync_task_begin] 0-management: Failed to build payload for operation 'Volume Stop'
[2018-02-07 17:57:05.255604] E [MSGID: 106525] [glusterd-op-sm.c:4347:glusterd_dict_set_volid] 0-management: Volume vol_8397c8adb21679e81b87d7e6cd517129 does not exist
[2018-02-07 17:57:05.255646] E [MSGID: 106289] [glusterd-syncop.c:1967:gd_sync_task_begin] 0-management: Failed to build payload for operation 'Volume Delete'
[2018-02-07 18:06:24.870818] I [MSGID: 106487] [glusterd-handler.c:1485:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
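
The two errors at 17:57:03 above are the heart of the failure: when heketi stages the volume create, glusterd on this node can map the brick host 51.15.90.60 neither to its own UUID nor to a peer UUID, so 'Volume Create' fails during staging. A quick way to see how each node identifies itself and its peers (a sketch; run on each node, assuming glusterd is up):

# How this node sees its peers; the brick host heketi passes in must
# match one of these addresses, or be recognized as a local address.
gluster pool list
gluster peer status

# Is 51.15.90.60 actually bound on this node's interfaces, or NAT'd?
ip -4 addr show

# On recent glusterd (3.9 and later), probing an already-known peer by a
# second address should attach that address to the existing peer entry
# (worth verifying on 3.13.x); e.g. from node 1:
gluster peer probe 51.15.90.60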


node 3 glusterd.log


root at third:/var/log/glusterfs# cat glusterd.log 
[2018-02-07 17:51:22.666349] I [MSGID: 100030] [glusterfsd.c:2556:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.13.2 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2018-02-07 17:51:22.675840] I [MSGID: 106478] [glusterd.c:1423:init] 0-management: Maximum allowed open file descriptors set to 65536
[2018-02-07 17:51:22.675890] I [MSGID: 106479] [glusterd.c:1481:init] 0-management: Using /var/lib/glusterd as working directory
[2018-02-07 17:51:22.675915] I [MSGID: 106479] [glusterd.c:1486:init] 0-management: Using /var/run/gluster as pid file working directory
[2018-02-07 17:51:22.681481] W [MSGID: 103071] [rdma.c:4631:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
[2018-02-07 17:51:22.681533] W [MSGID: 103055] [rdma.c:4940:init] 0-rdma.management: Failed to initialize IB Device
[2018-02-07 17:51:22.681557] W [rpc-transport.c:350:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2018-02-07 17:51:22.681649] W [rpcsvc.c:1770:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2018-02-07 17:51:22.681674] E [MSGID: 106243] [glusterd.c:1769:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2018-02-07 17:51:26.166183] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2018-02-07 17:51:26.166272] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2018-02-07 17:51:26.166276] I [MSGID: 106514] [glusterd-store.c:2263:glusterd_restore_op_version] 0-management: Detected new install. Setting op-version to maximum : 31302
[2018-02-07 17:51:26.166334] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/options. [No such file or directory]
[2018-02-07 17:51:26.171702] I [MSGID: 106194] [glusterd-store.c:3831:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list.
Final graph:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option rpc-auth.auth-glusterfs on
  4:     option rpc-auth.auth-unix on
  5:     option rpc-auth.auth-null on
  6:     option rpc-auth-allow-insecure on
  7:     option transport.listen-backlog 10
  8:     option event-threads 1
  9:     option ping-timeout 0
 10:     option transport.socket.read-fail-log off
 11:     option transport.socket.keepalive-interval 2
 12:     option transport.socket.keepalive-time 10
 13:     option transport-type rdma
 14:     option working-directory /var/lib/glusterd
 15: end-volume
 16:  
+------------------------------------------------------------------------------+
[2018-02-07 17:51:26.172365] I [MSGID: 101190] [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2018-02-07 17:51:55.205005] W [glusterfsd.c:1393:cleanup_and_exit] (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7f2b191ca6ba] -->/usr/sbin/glusterd(glusterfs_sigwaiter+0xe5) [0x55a8d71d3e65] -->/usr/sbin/glusterd(cleanup_and_exit+0x54) [0x55a8d71d3c84] ) 0-: received signum (15), shutting down
[2018-02-07 17:53:09.573189] I [MSGID: 100030] [glusterfsd.c:2556:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.13.2 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2018-02-07 17:53:09.616929] I [MSGID: 106478] [glusterd.c:1423:init] 0-management: Maximum allowed open file descriptors set to 65536
[2018-02-07 17:53:09.617011] I [MSGID: 106479] [glusterd.c:1481:init] 0-management: Using /var/lib/glusterd as working directory
[2018-02-07 17:53:09.617037] I [MSGID: 106479] [glusterd.c:1486:init] 0-management: Using /var/run/gluster as pid file working directory
[2018-02-07 17:53:09.630202] W [MSGID: 103071] [rdma.c:4631:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
[2018-02-07 17:53:09.630307] W [MSGID: 103055] [rdma.c:4940:init] 0-rdma.management: Failed to initialize IB Device
[2018-02-07 17:53:09.630331] W [rpc-transport.c:350:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2018-02-07 17:53:09.630424] W [rpcsvc.c:1770:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2018-02-07 17:53:09.630456] E [MSGID: 106243] [glusterd.c:1769:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2018-02-07 17:53:13.173017] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2018-02-07 17:53:13.173108] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2018-02-07 17:53:13.173112] I [MSGID: 106514] [glusterd-store.c:2263:glusterd_restore_op_version] 0-management: Detected new install. Setting op-version to maximum : 31302
[2018-02-07 17:53:13.176760] I [MSGID: 106194] [glusterd-store.c:3831:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list.
Final graph:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option rpc-auth.auth-glusterfs on
  4:     option rpc-auth.auth-unix on
  5:     option rpc-auth.auth-null on
  6:     option rpc-auth-allow-insecure on
  7:     option transport.listen-backlog 10
  8:     option event-threads 1
  9:     option ping-timeout 0
 10:     option transport.socket.read-fail-log off
 11:     option transport.socket.keepalive-interval 2
 12:     option transport.socket.keepalive-time 10
 13:     option transport-type rdma
 14:     option working-directory /var/lib/glusterd
 15: end-volume
 16:  
+------------------------------------------------------------------------------+
[2018-02-07 17:53:13.178907] I [MSGID: 101190] [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2018-02-07 17:56:30.454549] I [MSGID: 106163] [glusterd-handshake.c:1361:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 31302
[2018-02-07 17:56:30.454621] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2018-02-07 17:56:30.454762] I [MSGID: 106477] [glusterd.c:190:glusterd_uuid_generate_save] 0-management: generated UUID: ffd9ff21-c18c-4095-8f05-acc5bb567ef8
[2018-02-07 17:56:30.506082] I [MSGID: 106490] [glusterd-handler.c:2891:__glusterd_handle_probe_query] 0-glusterd: Received probe from uuid: 887c5074-ab28-4642-846f-fa6c87430987
[2018-02-07 17:56:30.506632] I [MSGID: 106129] [glusterd-handler.c:2926:__glusterd_handle_probe_query] 0-glusterd: Unable to find peerinfo for host: sec.ostechnix.lan (24007)
[2018-02-07 17:56:30.510537] W [MSGID: 106062] [glusterd-handler.c:3400:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2018-02-07 17:56:30.510586] I [rpc-clnt.c:1044:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2018-02-07 17:56:30.510745] W [MSGID: 101002] [options.c:995:xl_opt_validate] 0-management: option 'address-family' is deprecated, preferred is 'transport.address-family', continuing with correction
[2018-02-07 17:56:30.511290] W [socket.c:3216:socket_connect] 0-management: Error disabling sockopt IPV6_V6ONLY: "Protocol not available"
[2018-02-07 17:56:30.511414] I [MSGID: 106498] [glusterd-handler.c:3550:glusterd_friend_add] 0-management: connect returned 0
[2018-02-07 17:56:30.511481] I [MSGID: 106493] [glusterd-handler.c:2954:__glusterd_handle_probe_query] 0-glusterd: Responded to sec.ostechnix.lan, op_ret: 0, op_errno: 0, ret: 0
[2018-02-07 17:56:30.522702] I [MSGID: 106490] [glusterd-handler.c:2540:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 887c5074-ab28-4642-846f-fa6c87430987
[2018-02-07 17:56:30.530740] I [MSGID: 106493] [glusterd-handler.c:3800:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to sec.ostechnix.lan (0), ret: 0, op_ret: 0
[2018-02-07 17:56:30.596883] I [MSGID: 106511] [glusterd-rpc-ops.c:262:__glusterd_probe_cbk] 0-management: Received probe resp from uuid: 887c5074-ab28-4642-846f-fa6c87430987, host: sec.ostechnix.lan
[2018-02-07 17:56:30.596930] I [MSGID: 106511] [glusterd-rpc-ops.c:422:__glusterd_probe_cbk] 0-glusterd: Received resp to probe req
[2018-02-07 17:56:30.615130] I [MSGID: 106493] [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 887c5074-ab28-4642-846f-fa6c87430987, host: sec.ostechnix.lan, port: 0
[2018-02-07 17:56:30.618794] I [MSGID: 106492] [glusterd-handler.c:2718:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 887c5074-ab28-4642-846f-fa6c87430987
[2018-02-07 17:56:30.622550] W [MSGID: 106062] [glusterd-handler.c:3400:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout
[2018-02-07 17:56:30.622595] I [rpc-clnt.c:1044:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2018-02-07 17:56:30.622754] W [MSGID: 101002] [options.c:995:xl_opt_validate] 0-management: option 'address-family' is deprecated, preferred is 'transport.address-family', continuing with correction
[2018-02-07 17:56:30.623113] W [socket.c:3216:socket_connect] 0-management: Error disabling sockopt IPV6_V6ONLY: "Protocol not available"
[2018-02-07 17:56:30.623223] I [MSGID: 106498] [glusterd-handler.c:3603:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2018-02-07 17:56:30.623255] I [MSGID: 106502] [glusterd-handler.c:2763:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2018-02-07 17:56:30.630699] I [MSGID: 106493] [glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 887c5074-ab28-4642-846f-fa6c87430987
[2018-02-07 17:56:30.680466] I [MSGID: 106163] [glusterd-handshake.c:1361:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 31302
[2018-02-07 17:56:30.703060] I [MSGID: 106490] [glusterd-handler.c:2540:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 5417f7f0-37c6-4776-bdd1-0a29f45fab89
[2018-02-07 17:56:30.705628] I [MSGID: 106493] [glusterd-handler.c:3800:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to pri.ostechnix.lan (0), ret: 0, op_ret: 0
[2018-02-07 17:56:30.720125] I [MSGID: 106493] [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 5417f7f0-37c6-4776-bdd1-0a29f45fab89, host: pri.ostechnix.lan, port: 0
[2018-02-07 17:56:30.723297] I [MSGID: 106492] [glusterd-handler.c:2718:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 5417f7f0-37c6-4776-bdd1-0a29f45fab89
[2018-02-07 17:56:30.725928] I [MSGID: 106502] [glusterd-handler.c:2763:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2018-02-07 17:56:30.727396] I [MSGID: 106492] [glusterd-handler.c:2718:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 5417f7f0-37c6-4776-bdd1-0a29f45fab89
[2018-02-07 17:56:30.730235] I [MSGID: 106502] [glusterd-handler.c:2763:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2018-02-07 17:56:30.733655] I [MSGID: 106493] [glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 5417f7f0-37c6-4776-bdd1-0a29f45fab89
[2018-02-07 17:56:30.741924] I [MSGID: 106493] [glusterd-rpc-ops.c:701:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 5417f7f0-37c6-4776-bdd1-0a29f45fab89
[2018-02-07 18:06:47.118644] I [MSGID: 106487] [glusterd-handler.c:1485:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req


node 1

root at pri:/var/log/glusterfs# gluster pool list
UUID					Hostname         	State
887c5074-ab28-4642-846f-fa6c87430987	sec.ostechnix.lan	Connected 
ffd9ff21-c18c-4095-8f05-acc5bb567ef8	163.172.151.120  	Connected 
5417f7f0-37c6-4776-bdd1-0a29f45fab89	localhost        	Connected 


/etc/hosts

root at pri:/var/log/glusterfs# cat /etc/hosts
127.0.0.1       localhost
127.0.0.1       pri.ostechnix.lan     pri
::1             localhost ip6-localhost ip6-loopback
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters
51.15.90.60      sec.ostechnix.lan     sec
163.172.151.120  third.ostechnix.lan   third

node 2

root at sec:/var/log/glusterfs# gluster pool list
UUID					Hostname         	State
5417f7f0-37c6-4776-bdd1-0a29f45fab89	pri.ostechnix.lan	Connected 
ffd9ff21-c18c-4095-8f05-acc5bb567ef8	163.172.151.120  	Connected 
887c5074-ab28-4642-846f-fa6c87430987	localhost        	Connected 
root at sec:/var/log/glusterfs# 


/etc/hosts

root at sec:/var/log/glusterfs# cat /etc/hosts
127.0.0.1       localhost
127.0.0.1       sec.ostechnix.lan     sec
::1             localhost ip6-localhost ip6-loopback
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters
51.15.77.14      pri.ostechnix.lan     pri
163.172.151.120  third.ostechnix.lan   third


node 3

root at third:/var/log/glusterfs# gluster pool list
UUID					Hostname         	State
887c5074-ab28-4642-846f-fa6c87430987	51.15.90.60      	Connected 
5417f7f0-37c6-4776-bdd1-0a29f45fab89	pri.ostechnix.lan	Connected 
ffd9ff21-c18c-4095-8f05-acc5bb567ef8	localhost        	Connected 


root at third:/var/log/glusterfs# cat /etc/hosts
127.0.0.1       localhost
127.0.0.1       third.ostechnix.lan     third
::1             localhost ip6-localhost ip6-loopback
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters
51.15.77.14      pri.ostechnix.lan     pri
51.15.90.60      sec.ostechnix.lan     sec

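One pattern stands out in the three hosts files above: each node maps its own FQDN to 127.0.0.1, so the same name resolves to loopback locally but to a public IP from the other nodes. That is consistent with the pool lists disagreeing about node 2 (sec.ostechnix.lan on nodes 1 and 2, bare 51.15.90.60 on node 3). A sketch of a uniform /etc/hosts, identical on all three nodes, under the assumption that each public IP is reachable from (and ideally bound on) its own node; glusterd would likely need a restart afterwards:

127.0.0.1        localhost
51.15.77.14      pri.ostechnix.lan     pri
51.15.90.60      sec.ostechnix.lan     sec
163.172.151.120  third.ostechnix.lan   third
# (keep the ::1 / ff02:: IPv6 lines as they are)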



> On 7 Feb 2018, at 20:46, Jose A. Rivera <jarrpa at redhat.com> wrote:
> 
> Okay, I'm trying to help you figure out why it's not working. :)
> Please deploy heketi/gluster with the "proper" configuration (which
> does not allow you to create PVCs) and show me the output for the
> things I requested.
> 
> On Wed, Feb 7, 2018 at 11:02 AM, Ercan Aydoğan <ercan.aydogan at gmail.com> wrote:
>> If Kubernetes GlusterFS PVCs require IP addresses, then I need an IP-based
>> gluster peer setup, but that is not working and I want to make it happen.
>> I'm wondering how others use gluster PVCs.
>> 
>> I cannot create a volume when the storage address is set to an IP.
>> 
>> Error: volume create: vol_207bbf81f28b959c51448b919be3bb59: failed: Host
>> 51.15.90.60 is not in 'Peer in Cluster' state
>> 
>> I want to know how this can be possible. I have been working on this for
>> the last 2-3 days with no progress.
>> 
>> 
>> On 7 Feb 2018, at 19:49, Jose A. Rivera <jarrpa at redhat.com> wrote:
>> 
>> What is the output of all those things when you use the "proper"
>> configuration of manage and storage addresses?
>> 
>> On Wed, Feb 7, 2018 at 10:46 AM, Ercan Aydoğan <ercan.aydogan at gmail.com>
>> wrote:
>> 
>> When I change
>> 
>> manage to the IP address and
>> storage to the FQDN/hostname, I can create a volume, but the Kubernetes
>> storage side does not accept an FQDN; it requires an IP address.
>> 
>> I can create it with this topology.json:
>> 
>> {
>> "clusters": [
>>   {
>>     "nodes": [
>>       {
>>         "node": {
>>           "hostnames": {
>>             "manage": [
>>               "51.15.77.14"
>>             ],
>>             "storage": [
>>               "pri.ostechnix.lan"
>>             ]
>>           },
>>           "zone": 1
>>         },
>>         "devices": [
>>           "/dev/nbd1",
>>           "/dev/nbd2",
>>   "/dev/nbd3"
>>         ]
>>       },
>>       {
>>         "node": {
>>           "hostnames": {
>>             "manage": [
>>               "51.15.90.60"
>>             ],
>>             "storage": [
>>               "sec.ostechnix.lan"
>>             ]
>>           },
>>           "zone": 1
>>         },
>>         "devices": [
>>           "/dev/nbd1",
>>           "/dev/nbd2",
>>           "/dev/nbd3"
>>         ]
>>       },
>>       {
>>         "node": {
>>           "hostnames": {
>>             "manage": [
>>               "163.172.151.120"
>>             ],
>>             "storage": [
>>               "third.ostechnix.lan"
>>             ]
>>           },
>>           "zone": 1
>>         },
>>         "devices": [
>>           "/dev/nbd1",
>>           "/dev/nbd2",
>>            "/dev/nbd3"
>>         ]
>>       }
>> 
>> 
>> 
>> 
>> 
>>     ]
>>   }
>> ]
>> }
>> 
>> 
>> But with this one I can't:
>> 
>> {
>> "clusters": [
>>   {
>>     "nodes": [
>>       {
>>         "node": {
>>           "hostnames": {
>>             "manage": [
>>               "pri.ostechnix.lan"
>>             ],
>>             "storage": [
>>               "51.15.77.14"
>>             ]
>>           },
>>           "zone": 1
>>         },
>>         "devices": [
>>           "/dev/nbd1",
>>           "/dev/nbd2",
>>   "/dev/nbd3"
>>         ]
>>       },
>>       {
>>         "node": {
>>           "hostnames": {
>>             "manage": [
>>               "sec.ostechnix.lan"
>>             ],
>>             "storage": [
>>               "51.15.90.60"
>>             ]
>>           },
>>           "zone": 1
>>         },
>>         "devices": [
>>           "/dev/nbd1",
>>           "/dev/nbd2",
>>           "/dev/nbd3"
>>         ]
>>       },
>>       {
>>         "node": {
>>           "hostnames": {
>>             "manage": [
>>               "third.ostechnix.lan"
>>             ],
>>             "storage": [
>>               "163.172.151.120"
>>             ]
>>           },
>>           "zone": 1
>>         },
>>         "devices": [
>>           "/dev/nbd1",
>>           "/dev/nbd2",
>>            "/dev/nbd3"
>>         ]
>>       }
>> 
>> 
>> 
>> 
>> 
>>     ]
>>   }
>> ]
>> }
>> 
>> 
>> root at kubemaster ~/heketi # ./heketi-cli   topology  load --json=topology_with_ip.json
>> Creating cluster ... ID: 522adced1b7033646f0196d538b1f093
>> Creating node 51.15.77.14 ... ID: 1c52608dd3f624ad32cb4d1d074613d7
>> Adding device /dev/nbd1 ... OK
>> Adding device /dev/nbd2 ... OK
>> Adding device /dev/nbd3 ... OK
>> Creating node 51.15.90.60 ... ID: 36eec4fa09cf572b2a0a11f65c43b706
>> Adding device /dev/nbd1 ... OK
>> Adding device /dev/nbd2 ... OK
>> Adding device /dev/nbd3 ... OK
>> Creating node 163.172.151.120 ... ID: da1bea3e71629b4f6f8ed1f1584f521c
>> Adding device /dev/nbd1 ... OK
>> Adding device /dev/nbd2 ... OK
>> 
>> 
>> root at pri:~# gluster peer status
>> Number of Peers: 2
>> 
>> Hostname: sec.ostechnix.lan
>> Uuid: 2bfc4a96-66f5-4ff9-8ee4-5e382a711c3a
>> State: Peer in Cluster (Connected)
>> 
>> Hostname: third.ostechnix.lan
>> Uuid: c3ae3a1e-d9d0-4675-bf0d-0f6cf7267b30
>> State: Peer in Cluster (Connected)
>> 
>> 
>> 
>> root at kubemaster ~/heketi # ./heketi-cli volume create --size=3 --replica=3
>> Name: vol_a6a21750e64c5317e1f949baaac25372
>> Size: 3
>> Volume Id: a6a21750e64c5317e1f949baaac25372
>> Cluster Id: 522adced1b7033646f0196d538b1f093
>> Mount: pri.ostechnix.lan:vol_a6a21750e64c5317e1f949baaac25372
>> Mount Options: backup-volfile-servers=sec.ostechnix.lan,third.ostechnix.lan
>> Durability Type: replicate
>> Distributed+Replica: 3
>> root at kubemaster ~/heketi #
>> 
>> root at kubemaster ~/heketi # ./heketi-cli   topology info
>> 
>> Cluster Id: 522adced1b7033646f0196d538b1f093
>> 
>>   Volumes:
>> 
>> Name: vol_a6a21750e64c5317e1f949baaac25372
>> Size: 3
>> Id: a6a21750e64c5317e1f949baaac25372
>> Cluster Id: 522adced1b7033646f0196d538b1f093
>> Mount: pri.ostechnix.lan:vol_a6a21750e64c5317e1f949baaac25372
>> Mount Options: backup-volfile-servers=sec.ostechnix.lan,third.ostechnix.lan
>> Durability Type: replicate
>> Replica: 3
>> Snapshot: Disabled
>> 
>> Bricks:
>> Id: 05936fa978e7d9fb534c04b4e993fefb
>> Path: /var/lib/heketi/mounts/vg_057848a621c23d381b086cf7898e58cc/brick_05936fa978e7d9fb534c04b4e993fefb/brick
>> Size (GiB): 3
>> Node: 1c52608dd3f624ad32cb4d1d074613d7
>> Device: 057848a621c23d381b086cf7898e58cc
>> 
>> Id: 624b9813a4d17bbe34565bb95d9fe2b3
>> Path: /var/lib/heketi/mounts/vg_d9ce655c3d31fa92f6486abc19e155d5/brick_624b9813a4d17bbe34565bb95d9fe2b3/brick
>> Size (GiB): 3
>> Node: 36eec4fa09cf572b2a0a11f65c43b706
>> Device: d9ce655c3d31fa92f6486abc19e155d5
>> 
>> Id: a779eb5de0131ab97d8d17e8ddad4a3e
>> Path: /var/lib/heketi/mounts/vg_2cb5b83b84bfd1c0e06ac99779f413d7/brick_a779eb5de0131ab97d8d17e8ddad4a3e/brick
>> Size (GiB): 3
>> Node: da1bea3e71629b4f6f8ed1f1584f521c
>> Device: 2cb5b83b84bfd1c0e06ac99779f413d7
>> 
>> 
>>   Nodes:
>> 
>> Node Id: 1c52608dd3f624ad32cb4d1d074613d7
>> State: online
>> Cluster Id: 522adced1b7033646f0196d538b1f093
>> Zone: 1
>> Management Hostname: 51.15.77.14
>> Storage Hostname: pri.ostechnix.lan
>> Devices:
>> Id:057848a621c23d381b086cf7898e58cc   Name:/dev/nbd3           State:online    Size (GiB):139     Used (GiB):3       Free (GiB):136
>> Bricks:
>> Id:05936fa978e7d9fb534c04b4e993fefb   Size (GiB):3       Path: /var/lib/heketi/mounts/vg_057848a621c23d381b086cf7898e58cc/brick_05936fa978e7d9fb534c04b4e993fefb/brick
>> Id:24f557371d0eaf2c72065f4220113988   Name:/dev/nbd1           State:online    Size (GiB):46      Used (GiB):0       Free (GiB):46
>> Bricks:
>> Id:670546b880260bb0240b9e0ac51bb82c   Name:/dev/nbd2           State:online    Size (GiB):46      Used (GiB):0       Free (GiB):46
>> Bricks:
>> 
>> Node Id: 36eec4fa09cf572b2a0a11f65c43b706
>> State: online
>> Cluster Id: 522adced1b7033646f0196d538b1f093
>> Zone: 1
>> Management Hostname: 51.15.90.60
>> Storage Hostname: sec.ostechnix.lan
>> Devices:
>> Id:74320e3bd92b7fdffa06499b17fb3c8f   Name:/dev/nbd1           State:online    Size (GiB):46      Used (GiB):0       Free (GiB):46
>> Bricks:
>> Id:923f2d04b03145600e5fc2035b5699c0   Name:/dev/nbd2           State:online    Size (GiB):46      Used (GiB):0       Free (GiB):46
>> Bricks:
>> Id:d9ce655c3d31fa92f6486abc19e155d5   Name:/dev/nbd3           State:online    Size (GiB):139     Used (GiB):3       Free (GiB):136
>> Bricks:
>> Id:624b9813a4d17bbe34565bb95d9fe2b3   Size (GiB):3       Path: /var/lib/heketi/mounts/vg_d9ce655c3d31fa92f6486abc19e155d5/brick_624b9813a4d17bbe34565bb95d9fe2b3/brick
>> 
>> Node Id: da1bea3e71629b4f6f8ed1f1584f521c
>> State: online
>> Cluster Id: 522adced1b7033646f0196d538b1f093
>> Zone: 1
>> Management Hostname: 163.172.151.120
>> Storage Hostname: third.ostechnix.lan
>> Devices:
>> Id:23621c56af1237380bac9bb482d13859   Name:/dev/nbd2           State:online    Size (GiB):46      Used (GiB):0       Free (GiB):46
>> Bricks:
>> Id:2cb5b83b84bfd1c0e06ac99779f413d7   Name:/dev/nbd3           State:online    Size (GiB):139     Used (GiB):3       Free (GiB):136
>> Bricks:
>> Id:a779eb5de0131ab97d8d17e8ddad4a3e   Size (GiB):3       Path: /var/lib/heketi/mounts/vg_2cb5b83b84bfd1c0e06ac99779f413d7/brick_a779eb5de0131ab97d8d17e8ddad4a3e/brick
>> Id:89648509136a29dea7712733f1b91733   Name:/dev/nbd1           State:online    Size (GiB):46      Used (GiB):0       Free (GiB):46
>> Bricks:
>> 
>> 
>> This is OK.
>> 
>> But this does not work with a Kubernetes storage class, because it needs an
>> IP address on the storage hostname side.
>> 
>> Warning  ProvisioningFailed  21s   persistentvolume-controller  Failed
>> to provision volume with StorageClass "fast": create volume error: failed to
>> create endpoint/service error creating endpoint: Endpoints
>> "glusterfs-dynamic-claim1" is invalid: [subsets[0].addresses[0].ip: Invalid
>> value: "pri.ostechnix.lan": must be a valid IP address, (e.g. 10.9.8.7),
>> subsets[0].addresses[1].ip: Invalid value: "third.ostechnix.lan": must be a
>> valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[2].ip: Invalid
>> value: "sec.ostechnix.lan": must be a valid IP address, (e.g. 10.9.8.7)]
>> 
>> 
>> 
>> 
>> 
>> On 7 Feb 2018, at 19:21, Jose A. Rivera <jarrpa at redhat.com> wrote:
>> 
>> What's the output of the topology load command?
>> 
>> Can you verify that glusterd is running and healthy on all the nodes?
>> I'm not sure about Ubuntu, but on Fedora we do "systemctl status
>> glusterd", so something like that.
>> 
>> Does running "gluster pool list" show all the nodes? This only needs
>> to be run on one of the nodes.
>> 
>> Finally, do you have a firewall up and does it have the requisite
>> ports open for GlusterFS?
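
For reference, the usual GlusterFS ports are TCP 24007 (glusterd management), 24008 (management RDMA), and one port per brick counting up from 49152. A minimal sketch with plain iptables (the brick range size is an assumption; widen it to cover one port per brick on the node):

iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd
iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT   # brick ports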
>> 
>> --Jose
>> 
>> On Wed, Feb 7, 2018 at 9:20 AM, Ercan Aydoğan <ercan.aydogan at gmail.com>
>> wrote:
>> 
>> Yes, I delete heketi.db before every try.
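
A sketch of what that reset looks like for a standalone heketi server, assuming heketi.json keeps its default "db" path (commonly /var/lib/heketi/heketi.db) under its "glusterfs" section:

# stop the heketi server however it was started, then:
rm -f /var/lib/heketi/heketi.db
# restart heketi and reload the topology with heketi-cli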
>> 
>> 
>> On 7 Feb 2018, at 18:18, Jose A. Rivera <jarrpa at redhat.com> wrote:
>> 
>> When you deleted the cluster, did you also delete the heketi database?
>> 
>> --Jose
>> 
>> On Wed, Feb 7, 2018 at 3:30 AM, Ercan Aydoğan <ercan.aydogan at gmail.com>
>> wrote:
>> 
>> The Gluster cluster is on Ubuntu 16.04, and I remove it with these commands:
>> 
>> apt-get purge glusterfs-server -y  --allow-change-held-packages
>> rm -rf /var/lib/glusterd
>> rm -rf /var/log/glusterfs/
>> wipefs -a --force /dev/nbd1
>> wipefs -a --force /dev/nbd2
>> wipefs -a --force /dev/nbd3
>> 
>> After a reboot, I install with:
>> 
>> apt-get install -y software-properties-common
>> add-apt-repository ppa:gluster/glusterfs-3.11
>> apt-get update
>> apt-get install -y glusterfs-server
>> 
>> After this I'm using:
>> 
>> ./heketi-cli   topology  load --json=topology.json
>> 
>> 
>> 
>> but
>> 
>> I can't create any volume with the gluster command line or heketi-cli;
>> maybe this is a hostname or /etc/hostname issue.
>> 
>> My current /etc/hosts is:
>> 
>> node 1
>> 
>> 
>> root at pri:/var/log/glusterfs# cat /etc/hosts
>> #127.0.0.1       localhost
>> 127.0.0.1        pri.ostechnix.lan     pri
>> ::1             localhost ip6-localhost ip6-loopback
>> ff02::1         ip6-allnodes
>> ff02::2         ip6-allrouters
>> 
>> 51.15.90.60      sec.ostechnix.lan     sec
>> 163.172.151.120  third.ostechnix.lan   third
>> root at pri:/var/log/glusterfs#
>> 
>> On every node I set 127.0.0.1 to match that node's hostname.
>> 
>> 
>> 
>> 
>> 
>> 
>> On 7 Feb 2018, at 11:51, Humble Chirammal <hchiramm at redhat.com> wrote:
>> 
>> True, storage should be an IP address. However, afaict it failed on "Peer in
>> Cluster" because the gluster cluster was formed with a different IP/hostname
>> combination and that is stored in its metadata. If you can delete the cluster
>> and recreate it with "storage" as an IP, it should work, I believe.
>> 
>> On Wed, Feb 7, 2018 at 2:16 PM, Ercan Aydoğan <ercan.aydogan at gmail.com>
>> wrote:
>> 
>> 
>> Hello,
>> 
>> I have 3 dedicated GlusterFS 3.11.3 nodes. I can create volumes with both
>> gluster's own command line utility and heketi-cli; that works fine.
>> 
>> If I use an FQDN as the storage hostname, I can create the cluster with:
>> 
>> ./heketi-cli   topology  load --json=topology.json
>> 
>> After StorageClass, Secret, and PVC creation I got this error.
>> 
>> kubectl describe pvc claim1 returns:
>> 
>> root at kubemaster ~ # kubectl describe pvc claim1
>> Name:          claim1
>> Namespace:     default
>> StorageClass:  fast
>> Status:        Pending
>> Volume:
>> Labels:        <none>
>> Annotations:
>> volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/glusterfs
>> Finalizers:    []
>> Capacity:
>> Access Modes:
>> Events:
>> Type     Reason              Age   From                         Message
>> ----     ------              ----  ----                         -------
>> Warning  ProvisioningFailed  21s   persistentvolume-controller  Failed
>> to provision volume with StorageClass "fast": create volume error: failed to
>> create endpoint/service error creating endpoint: Endpoints
>> "glusterfs-dynamic-claim1" is invalid: [subsets[0].addresses[0].ip: Invalid
>> value: "pri.ostechnix.lan": must be a valid IP address, (e.g. 10.9.8.7),
>> subsets[0].addresses[1].ip: Invalid value: "third.ostechnix.lan": must be a
>> valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[2].ip: Invalid
>> value: "sec.ostechnix.lan": must be a valid IP address, (e.g. 10.9.8.7)]
>> 
>> 
>> My topology.json content is:
>> 
>> {
>> "clusters": [
>> {
>>   "nodes": [
>>     {
>>       "node": {
>>         "hostnames": {
>>           "manage": [
>>             "51.15.77.14"
>>           ],
>>           "storage": [
>>             "pri.ostechnix.lan"
>>           ]
>>         },
>>         "zone": 1
>>       },
>>       "devices": [
>>         "/dev/nbd1",
>>         "/dev/nbd2",
>> "/dev/nbd3"
>>       ]
>>     },
>>     {
>>       "node": {
>>         "hostnames": {
>>           "manage": [
>>             "51.15.90.60"
>>           ],
>>           "storage": [
>>             "sec.ostechnix.lan"
>>           ]
>>         },
>>         "zone": 1
>>       },
>>       "devices": [
>>         "/dev/nbd1",
>>         "/dev/nbd2",
>>         "/dev/nbd3"
>>       ]
>>     },
>>     {
>>       "node": {
>>         "hostnames": {
>>           "manage": [
>>             "163.172.151.120"
>>           ],
>>           "storage": [
>>             "third.ostechnix.lan"
>>           ]
>>         },
>>         "zone": 1
>>       },
>>       "devices": [
>>         "/dev/nbd1",
>>         "/dev/nbd2",
>>          "/dev/nbd3"
>>       ]
>>     }
>> 
>> 
>>   ]
>> }
>> ]
>> }
>> 
>> 
>> Yes, it says storage must be an IP for endpoint creation. But if I change
>> 
>> manage: hostname
>> storage: IP address
>> 
>> {
>> "clusters": [
>> {
>>   "nodes": [
>>     {
>>       "node": {
>>         "hostnames": {
>>           "manage": [
>>             "pri.ostechnix.lan"
>>           ],
>>           "storage": [
>>             "51.15.77.14"
>>           ]
>>         },
>>         "zone": 1
>>       },
>>       "devices": [
>>         "/dev/nbd1",
>>         "/dev/nbd2",
>> "/dev/nbd3"
>>       ]
>>     },
>>     {
>>       "node": {
>>         "hostnames": {
>>           "manage": [
>>             "sec.ostechnix.lan"
>>           ],
>>           "storage": [
>>             "51.15.90.60"
>>           ]
>>         },
>>         "zone": 1
>>       },
>>       "devices": [
>>         "/dev/nbd1",
>>         "/dev/nbd2",
>>         "/dev/nbd3"
>>       ]
>>     },
>>     {
>>       "node": {
>>         "hostnames": {
>>           "manage": [
>>             "third.ostechnix.lan"
>>           ],
>>           "storage": [
>>             "163.172.151.120"
>>           ]
>>         },
>>         "zone": 1
>>       },
>>       "devices": [
>>         "/dev/nbd1",
>>         "/dev/nbd2",
>>          "/dev/nbd3"
>>       ]
>>     }
>> 
>> 
>>   ]
>> }
>> ]
>> }
>> 
>> I cannot create a volume with heketi-cli.
>> 
>> It says:
>> 
>> root at kubemaster ~/heketi # ./heketi-cli volume create --size=3 --replica=3
>> Error: volume create: vol_207bbf81f28b959c51448b919be3bb59: failed: Host
>> 51.15.90.60 is not in 'Peer in Cluster' state
>> 
>> I need advice on how to fix this issue.
>> 
>> 
>> 
>> 
>> 
>> 
>> --
>> Cheers,
>> Humble
>> 
>> Red Hat Storage Engineering
>> Mastering KVM Virtualization: http://amzn.to/2vFTXaW
>> Website: http://humblec.com
>> 
>> 
>> 
>> 
>> 
>> 
>> 
