<div dir="ltr"><div class="gmail_default" style=""><span style="font-family:verdana,sans-serif">I am just running this bash script which will start the </span><font face="trebuchet ms, sans-serif">rancher-glusterfs-client</font><font face="verdana, sans-serif"> container:</font></div><div class="gmail_default" style="font-family:verdana,sans-serif"><br></div><div class="gmail_default" style="font-family:verdana,sans-serif"><span class="gmail-im" style="font-family:arial,sans-serif;font-size:12.8px"><div class="gmail_default"><font face="trebuchet ms, sans-serif">[root@node-a ~]# cat rancher-glusterfs-client.sh</font></div><div class="gmail_default"><font face="trebuchet ms, sans-serif">#!/bin/bash</font></div><div class="gmail_default"><font face="trebuchet ms, sans-serif">sudo docker run --privileged \</font></div><div class="gmail_default"><font face="trebuchet ms, sans-serif"> --name=gluster-client \</font></div><div class="gmail_default"><font face="trebuchet ms, sans-serif"> -d \</font></div><div class="gmail_default"><font face="trebuchet ms, sans-serif"> -v /sys/fs/cgroup:/sys/fs/cgroup \</font></div></span><div class="gmail_default" style="font-family:arial,sans-serif;font-size:12.8px"><font face="trebuchet ms, sans-serif"> -v /var/log/glusterfs:/var/log/<wbr>glusterfs \</font></div><div class="gmail_default" style="font-family:arial,sans-serif;font-size:12.8px"><font face="trebuchet ms, sans-serif"> --env GLUSTER_PEER=10.128.0.12,10.<wbr>128.0.15,10.128.0.16 \</font></div><div class="gmail_default" style="font-family:arial,sans-serif;font-size:12.8px"><font face="trebuchet ms, sans-serif"> nixel/rancher-glusterfs-client</font></div><div class="gmail_default" style="font-family:arial,sans-serif;font-size:12.8px"><font face="trebuchet ms, sans-serif"><br></font></div><div class="gmail_default" style="font-family:arial,sans-serif;font-size:12.8px"><font face="trebuchet ms, sans-serif"><br></font></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Sep 4, 2017 at 6:26 PM, Jose A. Rivera <span dir="ltr"><<a href="mailto:jarrpa@redhat.com" target="_blank">jarrpa@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">What is the exact command you're running?<br>
<div class="HOEnZb"><div class="h5"><br>
On Mon, Sep 4, 2017 at 4:26 AM, Gaurav Chhabra <varuag.chhabra@gmail.com> wrote:
> Hi Jose,<br>
><br>
><br>
> From this link, it seems mount.glusterfs might actually be present in the
> container that launched and quickly terminated.
>
> https://unix.stackexchange.com/questions/312178/glusterfs-replicated-volume-mounting-issue
>
> If you check the question that the user posted, it reports the same error
> (Mount failed) that i sent you in the last email.
>
> After seeing the above, i checked /var/log/glusterfs on my host (RancherOS)
> but it was empty. I ran the container again, but with an explicit volume mount
> as shown below:
><br>
> [root@node-a ~]# cat rancher-glusterfs-client.sh<br>
> #!/bin/bash<br>
> sudo docker run --privileged \<br>
> --name=gluster-client \<br>
> -d \<br>
> -v /sys/fs/cgroup:/sys/fs/cgroup \<br>
> -v /var/log/glusterfs:/var/log/glusterfs \
> --env GLUSTER_PEER=10.128.0.12,10.128.0.15,10.128.0.16 \
> nixel/rancher-glusterfs-client
>
> This time, i could see a log file (/var/log/glusterfs/mnt-ranchervol.log)
> present. I have attached the content of the same. Also attached are logs<br>
> from Heketi client/server (both on one node) and Gluster cluster.<br>
><br>
><br>
> Regards,<br>
> Gaurav<br>
><br>
><br>
> On Mon, Sep 4, 2017 at 2:29 PM, Gaurav Chhabra <varuag.chhabra@gmail.com>
> wrote:<br>
>><br>
>> Hi Jose,<br>
>><br>
>><br>
>> I tried setting up things using the link you provided and i was able to
>> get all steps working for a 3-node Gluster cluster, all running on CentOS 7,
>> without any issue. However, as expected, when i tried configuring Kubernetes
>> by installing the nixel/rancher-glusterfs-client container, i got an error:
>><br>
>> [root@node-a ~]# cat rancher-glusterfs-client.sh<br>
>> #!/bin/bash<br>
>> sudo docker run --privileged \<br>
>> --name=gluster-client \<br>
>> -d \<br>
>> -v /sys/fs/cgroup:/sys/fs/cgroup \<br>
>> --env GLUSTER_PEER=10.128.0.12,10.128.0.15,10.128.0.16 \
>> nixel/rancher-glusterfs-client<br>
>><br>
>> [root@node-a ~]# ./rancher-glusterfs-client.sh<br>
>> ac069caccdce147d6f423fc566166345191dbc1b11f3416c66207a1fd11fda6b
>><br>
>> [root@node-a ~]# docker logs gluster-client<br>
>> => Checking if I can reach GlusterFS node 10.128.0.12 ...<br>
>> => GlusterFS node 10.128.0.12 is alive<br>
>> => Mounting GlusterFS volume ranchervol from GlusterFS node 10.128.0.12<br>
>> ...<br>
>> Mount failed. Please check the log file for more details.<br>
>><br>
>> If i try running the next step as described in your link, i get the<br>
>> following:<br>
>><br>
>> [root@node-a ~]# modprobe fuse<br>
>> modprobe: module fuse not found in modules.dep<br>
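A side note, since "module fuse not found in modules.dep" is not always fatal (a sketch, not verified on RancherOS): FUSE may be built into the kernel rather than shipped as a loadable module, in which case modprobe has nothing to load but mount.glusterfs can still work as long as /dev/fuse exists.

grep -i fuse /proc/filesystems          # "fuse" listed here means the kernel supports it
ls -l /dev/fuse                         # device node needed by the FUSE client
lsmod | grep fuse                       # only shows anything if FUSE was built as a module
zcat /proc/config.gz 2>/dev/null | grep -i 'config_fuse_fs'   # "=y" means built in, if config.gz is available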
>><br>
>> Since the container failed to start, i could only check on the host<br>
>> (RancherOS) and i could only find two mount-related commands: mount &<br>
>> mountpoint<br>
>><br>
>> Any pointers?<br>
>><br>
>><br>
>> Regards,<br>
>> Gaurav<br>
>><br>
>> On Sun, Sep 3, 2017 at 11:18 PM, Jose A. Rivera <jarrpa@redhat.com> wrote:
>>><br>
>>> Installing the glusterfs-client container should be fine. :) The main<br>
>>> thing that's needed is that all your Kubernetes nodes need to have the<br>
>>> "mount.glusterfs" command available so Kube can mount the GlusterFS<br>
>>> volumes and present them to the pods.<br>
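A quick way to confirm that on a node might look like this (a sketch; the server address and volume name are just the ones used elsewhere in this thread):

command -v mount.glusterfs || echo "mount.glusterfs is missing on this node"

# optional end-to-end check: mount a test directory and unmount it again
mkdir -p /tmp/glustertest
mount -t glusterfs 10.128.0.12:/ranchervol /tmp/glustertest && umount /tmp/glustertest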
>>><br>
>>> On Sun, Sep 3, 2017 at 12:14 PM, Gaurav Chhabra<br>
>>> <varuag.chhabra@gmail.com> wrote:
>>> > Thanks Jose. The link you've suggested looks good but again it expects<br>
>>> > me to<br>
>>> > install gluster-client on Kubernetes node and i fall into the same<br>
>>> > issue of<br>
>>> > installing a container for glusterfs. Only difference is that this time<br>
>>> > it's<br>
>>> > glusterfs-client and not glusterfs-server. :)<br>
>>> ><br>
>>> > I will try this out and let you know tomorrow.<br>
>>> ><br>
>>> ><br>
>>> > Regards,<br>
>>> > Gaurav<br>
>>> ><br>
>>> ><br>
>>> > On Sun, Sep 3, 2017 at 2:11 AM, Jose A. Rivera <jarrpa@redhat.com>
>>> > wrote:<br>
>>> >><br>
>>> >> Hey, no problem! I'm eager to learn more about different flavors of<br>
>>> >> Linux, I just apologize for my relative inexperience with them. :)<br>
>>> >><br>
>>> >> To that end, I will also admit I'm not very experienced with direct<br>
>>> >> Docker myself. I understand the basic workflow and know some of the<br>
>>> >> run options, but not having deep experience keeps me from having a<br>
>>> >> better understanding of the patterns and consequences.<br>
>>> >><br>
>>> >> Thus, I'd like to guide you in a direction I'd be more apt to help you in
>>> >> right now. I know that you can't have multiple GlusterFS servers<br>
>>> >> running on the same nodes, and I know that we have been successfully<br>
>>> >> running several configurations using our gluster/gluster-centos image.<br>
>>> >> If you follow the Kubernetes configuration on gluster-kubernetes, the<br>
>>> >> pod/container is run privileged and with host networking, and we<br>
>>> >> require that the node has all listed ports open, not just 2222. The<br>
>>> >> sshd running in the container is listening on 2222, not 22, but it is<br>
>>> >> also not really required if you're not doing geo-replication.<br>
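A simple reachability check along those lines (a sketch; 24007 is glusterd's management port, 24008 its companion port, and 49152 upwards are typical brick ports -- adjust the list to whatever the setup guide names):

for port in 2222 24007 24008 49152; do
  nc -z -v -w 2 10.128.0.12 "$port"
done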
>>> >><br>
>>> >> Alternatively, you can indeed run GlusterFS outside of Kubernetes but<br>
>>> >> still have Kubernetes apps access GlusterFS storage. The nodes can be<br>
>>> >> anything you want, they just need to be running GlusterFS and you need<br>
>>> >> a heketi service managing them. Here is an example of how to set this<br>
>>> >> up using CentOS:<br>
>>> >><br>
>>> >><br>
>>> >><br>
>>> >> https://github.com/gluster/gluster-kubernetes/tree/master/docs/examples/dynamic_provisioning_external_gluster
>>> >><br>
>>> >> Hope this is at least leading you in a useful direction. :)<br>
>>> >><br>
>>> >> --Jose<br>
>>> >><br>
>>> >> On Sat, Sep 2, 2017 at 3:16 PM, Gaurav Chhabra<br>
>>> >> <varuag.chhabra@gmail.com>
>>> >> wrote:<br>
>>> >> > Hi Jose,<br>
>>> >> ><br>
>>> >> > webcenter/rancher-glusterfs-server is actually a container provided
>>> >> > by<br>
>>> >> > Sebastien, its maintainer. It's a Docker container which has<br>
>>> >> > GlusterFS<br>
>>> >> > server running within it. On the host i.e., RancherOS, there is no<br>
>>> >> > separate<br>
>>> >> > GlusterFS server running because we cannot install anything that<br>
>>> >> > way.<br>
>>> >> > Running a container is the only way, so i started the
>>> >> > rancher-glusterfs-server
>>> >> > container with the following parameters:
>>> >> ><br>
>>> >> > [root@node-1 rancher]# cat gluster-server.sh<br>
>>> >> > #!/bin/bash<br>
>>> >> > sudo docker run --name=gluster-server -d \<br>
>>> >> > --env 'SERVICE_NAME=gluster' \<br>
>>> >> > --restart always \<br>
>>> >> > --publish 2222:22 \<br>
>>> >> > webcenter/rancher-glusterfs-server
>>> >> ><br>
>>> >> > Here's the link to the dockerfile:<br>
>>> >> ><br>
>>> >> ><br>
>>> >> > https://hub.docker.com/r/webcenter/rancher-glusterfs-server/~/dockerfile/
>>> >> >
>>> >> > It's similar to other GlusterFS containers provided by other
>>> >> > maintainers
>>> >> > for
>>> >> > different OS. For example, for CentOS, we have
>>> >> > https://hub.docker.com/r/gluster/gluster-centos/~/dockerfile/
>>> >> ><br>
>>> >> > From what i understand, Heketi does support container based<br>
>>> >> > GlusterFS<br>
>>> >> > server<br>
>>> >> > as mentioned in the prerequisite where it says:<br>
>>> >> ><br>
>>> >> > "Each node must have the following ports opened for GlusterFS<br>
>>> >> > communications:<br>
>>> >> > 2222 - GlusterFS pod's sshd"<br>
>>> >> ><br>
>>> >> > That's the reason i've exposed port 2222 for 22 as shown above.<br>
>>> >> > Please<br>
>>> >> > correct me if i misunderstood it.<br>
>>> >> ><br>
>>> >> > As soon as i run the above script (gluster-server.sh), it
>>> >> > automatically
>>> >> > creates the following directories on the host. These should ideally
>>> >> > not
>>> >> > have been
>>> >> > empty, as you mentioned.
>>> >> ><br>
>>> >> > /etc/glusterfs /var/lib/glusterd /var/log/glusterfs<br>
>>> >> ><br>
>>> >> > Just wanted to know in which circumstances we get this specific
>>> >> > error<br>
>>> >> > (Failed to get D-Bus connection: Operation not permitted) related to<br>
>>> >> > Readiness probe failing. Searching online took me to discussions<br>
>>> >> > around<br>
>>> >> > running container in privileged mode and some directory to be<br>
>>> >> > mounted.<br>
>>> >> > Based<br>
>>> >> > on that, i also modified my container startup script to the<br>
>>> >> > following:<br>
>>> >> ><br>
>>> >> > #!/bin/bash<br>
>>> >> > sudo docker run --privileged \<br>
>>> >> > --name=gluster-server \<br>
>>> >> > -d \<br>
>>> >> > -v /sys/fs/cgroup:/sys/fs/cgroup \<br>
>>> >> > -v /etc/glusterfs:/etc/glusterfs \<br>
>>> >> > -v /var/lib/glusterd:/var/lib/glusterd \
>>> >> > -v /var/log/glusterfs:/var/log/glusterfs \
>>> >> > --env 'SERVICE_NAME=gluster' \
>>> >> > --restart always \
>>> >> > --publish 2222:22 \
>>> >> > webcenter/rancher-glusterfs-server
>>> >> >
>>> >> > Still, the issue persists.
>>> >> ><br>
>>> >> > I also logged into the container and checked whether the systemctl
>>> >> > command
>>> >> > is
>>> >> > present. It was there, but manually running the command also doesn't
>>> >> > work:
>>> >> ><br>
>>> >> > [root@node-c ~]# docker exec -it gluster-server /bin/bash<br>
>>> >> > root@42150f203f80:/app# systemctl status glusterd.service<br>
>>> >> > WARNING: terminal is not fully functional<br>
>>> >> > Failed to connect to bus: No such file or directory<br>
>>> >> ><br>
>>> >> > Under section 'ADVANCED OPTIONS - Security/Host' in this link, it<br>
>>> >> > talks<br>
>>> >> > about SYS_ADMIN setting. Any idea how i can try this?<br>
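For what it's worth, capabilities are passed to docker with --cap-add, and the "Failed to connect to bus" / "Failed to get D-Bus connection" messages usually just mean systemd is not running as PID 1 inside that image, so systemctl has no bus to talk to. A sketch of both ideas (whether --cap-add actually changes anything depends on the image, and pgrep has to exist inside it):

# try granting only the SYS_ADMIN capability instead of full --privileged
sudo docker run --name=gluster-server -d \
  --cap-add SYS_ADMIN \
  -v /sys/fs/cgroup:/sys/fs/cgroup \
  --env 'SERVICE_NAME=gluster' \
  --publish 2222:22 \
  webcenter/rancher-glusterfs-server

# check glusterd at the process level, without going through systemd
docker exec gluster-server pgrep -x glusterd && echo "glusterd is running"

If that process-level check turns out to be reliable, the same command could in principle stand in for the systemctl call used by the daemonset's readiness probe.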
>>> >> ><br>
>>> >> > Also, there was this mentioned in the Heketi setup page:<br>
>>> >> > "If you are not able to deploy a hyper-converged GlusterFS cluster,<br>
>>> >> > you<br>
>>> >> > must<br>
>>> >> > have one running somewhere that the Kubernetes nodes can access"<br>
>>> >> ><br>
>>> >> >>>> Does it mean running the three node Gluster cluster outside<br>
>>> >> >>>> Kubernetes,<br>
>>> >> >>>> may be on some VM running on RHEL/CentOS etc? If yes, then how<br>
>>> >> >>>> will i<br>
>>> >> >>>> be<br>
>>> >> >>>> able to tell Gluster which volume from the Kubernetes cluster pod<br>
>>> >> >>>> to<br>
>>> >> >>>> sync?<br>
>>> >> >>>> Any references?<br>
>>> >> ><br>
>>> >> ><br>
>>> >> > I really appreciate your responses despite the fact that you've not
>>> >> > used
>>> >> > RancherOS but are still trying to help.
>>> >> ><br>
>>> >> ><br>
>>> >> > Thanks,<br>
>>> >> > Gaurav<br>
>>> >> ><br>
>>> >> ><br>
>>> >> > On Sat, Sep 2, 2017 at 7:35 PM, Jose A. Rivera <jarrpa@redhat.com>
>>> >> > wrote:<br>
>>> >> >><br>
>>> >> >> I'm afraid I have no experience with RancherOS, so I may be missing<br>
>>> >> >> some things about how it works. My primary experience is with<br>
>>> >> >> Fedora,<br>
>>> >> >> CentOS, and Ubuntu.<br>
>>> >> >><br>
>>> >> >> What is webcenter/rancher-glusterfs-server? If it's running another
>>> >> >> glusterd then you probably don't want to be running it and should<br>
>>> >> >> remove it from your systems.<br>
>>> >> >><br>
>>> >> >> The glusterfs pods mount hostpath volumes from the host they're<br>
>>> >> >> running on to persist their configuration. Thus anything they write<br>
>>> >> >> to<br>
>>> >> >> those directories should land on the host. If that's not happening<br>
>>> >> >> then that's an additional problem.<br>
>>> >> >><br>
>>> >> >> --Jose<br>
>>> >> >><br>
>>> >> >> On Fri, Sep 1, 2017 at 11:17 PM, Gaurav Chhabra<br>
>>> >> >> <varuag.chhabra@gmail.com> wrote:
>>> >> >> > Hi Jose,<br>
>>> >> >> ><br>
>>> >> >> ><br>
>>> >> >> > I tried your suggestion but there is one confusion regarding<br>
>>> >> >> > point<br>
>>> >> >> > #3.<br>
>>> >> >> > Since<br>
>>> >> >> > RancherOS has everything running as container, i am running<br>
>>> >> >> > webcenter/rancher-glusterfs-server container on all three nodes.
>>> >> >> > Now<br>
>>> >> >> > as<br>
>>> >> >> > far<br>
>>> >> >> > as removing the directories are concerned, i hope you meant<br>
>>> >> >> > removing<br>
>>> >> >> > them on<br>
>>> >> >> > the host and _not_ from within the container. After completing<br>
>>> >> >> > step 1<br>
>>> >> >> > and 2,<br>
>>> >> >> > i checked the contents of all the directories that you specified<br>
>>> >> >> > in<br>
>>> >> >> > point<br>
>>> >> >> > #3. All were empty as you can see in the attached other_logs.txt.<br>
>>> >> >> > So<br>
>>> >> >> > i<br>
>>> >> >> > got<br>
>>> >> >> > confused. I ran the deploy again but the issue persists. Two pods<br>
>>> >> >> > show<br>
>>> >> >> > Liveness error and the third one, Readiness error.<br>
>>> >> >> ><br>
>>> >> >> > I then tried removing those directories (Step #3) from within the<br>
>>> >> >> > container<br>
>>> >> >> > but getting following error:<br>
>>> >> >> ><br>
>>> >> >> > root@c0f8ab4d92a2:/app# rm -rf /var/lib/heketi /etc/glusterfs<br>
>>> >> >> > /var/lib/glusterd /var/log/glusterfs<br>
>>> >> >> > rm: cannot remove '/var/lib/glusterd': Device or resource busy<br>
>>> >> >> ><br>
>>> >> >> ><br>
>>> >> >> ><br>
>>> >> >> > On Fri, Sep 1, 2017 at 8:21 PM, Jose A. Rivera<br>
>>> >> >> > <jarrpa@redhat.com>
>>> >> >> > wrote:<br>
>>> >> >> >><br>
>>> >> >> >> 1. Add a line to the ssh-exec portion of heketi.json of the<br>
>>> >> >> >> sort:<br>
>>> >> >> >><br>
>>> >> >> >> "sudo": true,<br>
>>> >> >> >><br>
>>> >> >> >> 2. Run<br>
>>> >> >> >><br>
>>> >> >> >> gk-deploy -g --abort<br>
>>> >> >> >><br>
>>> >> >> >> 3. On the nodes that were/will be running GlusterFS pods, run:<br>
>>> >> >> >><br>
>>> >> >> >> rm -rf /var/lib/heketi /etc/glusterfs /var/lib/glusterd<br>
>>> >> >> >> /var/log/glusterfs<br>
>>> >> >> >><br>
>>> >> >> >> Then try the deploy again.<br>
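Put together, the retry would look roughly like this (a sketch; step 1 is the heketi.json edit described above, and the rm -rf runs on every node that hosted a GlusterFS pod):

# 1. after adding  "sudo": true  to the sshexec section of heketi.json

# 2. from the gluster-kubernetes deploy directory
./gk-deploy -g --abort

# 3. on each node that was running a GlusterFS pod
sudo rm -rf /var/lib/heketi /etc/glusterfs /var/lib/glusterd /var/log/glusterfs

# 4. then deploy again
./gk-deploy -g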
>>> >> >> >><br>
>>> >> >> >> On Fri, Sep 1, 2017 at 6:05 AM, Gaurav Chhabra<br>
>>> >> >> >> <varuag.chhabra@gmail.com>
>>> >> >> >> wrote:<br>
>>> >> >> >> > Hi Jose,<br>
>>> >> >> >> ><br>
>>> >> >> >> ><br>
>>> >> >> >> > Thanks for the reply. It seems the three gluster pods might<br>
>>> >> >> >> > have<br>
>>> >> >> >> > been<br>
>>> >> >> >> > a<br>
>>> >> >> >> > copy-paste from another cluster where i was trying to
>>> >> >> >> > set up
>>> >> >> >> > the<br>
>>> >> >> >> > same<br>
>>> >> >> >> > thing using CentOS. Sorry for that. By the way, i did check<br>
>>> >> >> >> > for<br>
>>> >> >> >> > the<br>
>>> >> >> >> > kernel<br>
>>> >> >> >> > modules and it seems it's already there. Also, i am attaching<br>
>>> >> >> >> > fresh<br>
>>> >> >> >> > set<br>
>>> >> >> >> > of<br>
>>> >> >> >> > files because i created a new cluster and thought of giving it<br>
>>> >> >> >> > a<br>
>>> >> >> >> > try<br>
>>> >> >> >> > again.<br>
>>> >> >> >> > Issue still persists. :(<br>
>>> >> >> >> ><br>
>>> >> >> >> > In heketi.json, there is a slight change w.r.t the user which<br>
>>> >> >> >> > connects<br>
>>> >> >> >> > to<br>
>>> >> >> >> > glusterfs node using SSH. I am not sure how Heketi was using<br>
>>> >> >> >> > root<br>
>>> >> >> >> > user<br>
>>> >> >> >> > to<br>
>>> >> >> >> > login because i wasn't able to use root and do manual SSH.<br>
>>> >> >> >> > With<br>
>>> >> >> >> > rancher<br>
>>> >> >> >> > user, i can login successfully so i think this should be fine.<br>
>>> >> >> >> ><br>
>>> >> >> >> > /etc/heketi/heketi.json:<br>
>>> >> >> >> ><br>
>>> >> >> >> > ------------------------------------------------------------------
>>> >> >> >> > "executor": "ssh",<br>
>>> >> >> >> ><br>
>>> >> >> >> > "_sshexec_comment": "SSH username and private key file<br>
>>> >> >> >> > information",<br>
>>> >> >> >> > "sshexec": {<br>
>>> >> >> >> > "keyfile": "/var/lib/heketi/.ssh/id_rsa",<br>
>>> >> >> >> > "user": "rancher",<br>
>>> >> >> >> > "port": "22",<br>
>>> >> >> >> > "fstab": "/etc/fstab"<br>
>>> >> >> >> > },<br>
>>> >> >> >> ><br>
>>> >> >> >> > ------------------------------------------------------------------
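Since heketi is now logging in as the unprivileged rancher user, the "sudo": true setting mentioned earlier only helps if that user has passwordless sudo on every gluster node. A quick check from the heketi host might look like this (a sketch; the node address is just an example from this thread):

ssh -i /var/lib/heketi/.ssh/id_rsa -p 22 rancher@10.128.0.12 'sudo -n true' \
  && echo "passwordless sudo works for rancher on this node" \
  || echo "rancher cannot sudo without a password here"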
>>> >> >> >> ><br>
>>> >> >> >> > Before running gk-deploy:<br>
>>> >> >> >> ><br>
>>> >> >> >> > ------------------------------------------------------------------
>>> >> >> >> > [root@workstation deploy]# kubectl get nodes,pods,daemonset,deployments,services
>>> >> >> >> > NAME                                     STATUS    AGE       VERSION
>>> >> >> >> > no/node-a.c.kubernetes-174104.internal   Ready     3h        v1.7.2-rancher1
>>> >> >> >> > no/node-b.c.kubernetes-174104.internal   Ready     3h        v1.7.2-rancher1
>>> >> >> >> > no/node-c.c.kubernetes-174104.internal   Ready     3h        v1.7.2-rancher1
>>> >> >> >> >
>>> >> >> >> > NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
>>> >> >> >> > svc/kubernetes   10.43.0.1    <none>        443/TCP   3h
>>> >> >> >> >
>>> >> >> >> > ------------------------------------------------------------------
>>> >> >> >> ><br>
>>> >> >> >> > After running gk-deploy:<br>
>>> >> >> >> ><br>
>>> >> >> >> > ------------------------------------------------------------------
>>> >> >> >> > [root@workstation messagegc]# kubectl get nodes,pods,daemonset,deployments,services
>>> >> >> >> > NAME                                     STATUS    AGE       VERSION
>>> >> >> >> > no/node-a.c.kubernetes-174104.internal   Ready     3h        v1.7.2-rancher1
>>> >> >> >> > no/node-b.c.kubernetes-174104.internal   Ready     3h        v1.7.2-rancher1
>>> >> >> >> > no/node-c.c.kubernetes-174104.internal   Ready     3h        v1.7.2-rancher1
>>> >> >> >> >
>>> >> >> >> > NAME                 READY     STATUS    RESTARTS   AGE
>>> >> >> >> > po/glusterfs-0j9l5   0/1       Running   0          2m
>>> >> >> >> > po/glusterfs-gqz4c   0/1       Running   0          2m
>>> >> >> >> > po/glusterfs-gxvcb   0/1       Running   0          2m
>>> >> >> >> >
>>> >> >> >> > NAME           DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE-SELECTOR           AGE
>>> >> >> >> > ds/glusterfs   3         3         0         3            0           storagenode=glusterfs   2m
>>> >> >> >> >
>>> >> >> >> > NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
>>> >> >> >> > svc/kubernetes   10.43.0.1    <none>        443/TCP   3h
>>> >> >> >> >
>>> >> >> >> > ------------------------------------------------------------------
>>> >> >> >> ><br>
>>> >> >> >> > Kernel module check on all three nodes:<br>
>>> >> >> >> ><br>
>>> >> >> >> > ------------------------------------------------------------------
>>> >> >> >> > [root@node-a ~]# find /lib*/modules/$(uname -r) -name *.ko | grep 'thin-pool\|snapshot\|mirror' | xargs ls -ltr
>>> >> >> >> > -rw-r--r-- 1 root root 92310 Jun 26 04:13 /lib64/modules/4.9.34-rancher/kernel/drivers/md/dm-thin-pool.ko
>>> >> >> >> > -rw-r--r-- 1 root root 56982 Jun 26 04:13 /lib64/modules/4.9.34-rancher/kernel/drivers/md/dm-snapshot.ko
>>> >> >> >> > -rw-r--r-- 1 root root 27070 Jun 26 04:13 /lib64/modules/4.9.34-rancher/kernel/drivers/md/dm-mirror.ko
>>> >> >> >> > -rw-r--r-- 1 root root 92310 Jun 26 04:13 /lib/modules/4.9.34-rancher/kernel/drivers/md/dm-thin-pool.ko
>>> >> >> >> > -rw-r--r-- 1 root root 56982 Jun 26 04:13 /lib/modules/4.9.34-rancher/kernel/drivers/md/dm-snapshot.ko
>>> >> >> >> > -rw-r--r-- 1 root root 27070 Jun 26 04:13 /lib/modules/4.9.34-rancher/kernel/drivers/md/dm-mirror.ko
>>> >> >> >> >
>>> >> >> >> > ------------------------------------------------------------------
>>> >> >> >> ><br>
>>> >> >> >> > Error snapshot attached.<br>
>>> >> >> >> ><br>
>>> >> >> >> > In my first mail, i noted that the Readiness probe failure check
>>> >> >> >> > has
>>> >> >> >> > this
>>> >> >> >> > code
>>> >> >> >> > in the kube-templates/glusterfs-daemonset.yaml file:
>>> >> >> >> >
>>> >> >> >> > ------------------------------------------------------------------
>>> >> >> >> > readinessProbe:<br>
>>> >> >> >> > timeoutSeconds: 3<br>
>>> >> >> >> > initialDelaySeconds: 40<br>
>>> >> >> >> > exec:<br>
>>> >> >> >> > command:<br>
>>> >> >> >> > - "/bin/bash"<br>
>>> >> >> >> > - "-c"<br>
>>> >> >> >> > - systemctl status glusterd.service<br>
>>> >> >> >> > periodSeconds: 25<br>
>>> >> >> >> > successThreshold: 1<br>
>>> >> >> >> > failureThreshold: 15<br>
>>> >> >> >> ><br>
>>> >> >> >> > ------------------------------------------------------------------
>>> >> >> >> ><br>
>>> >> >> >> > I tried logging into the glusterfs container on one of the nodes and
>>> >> >> >> > ran<br>
>>> >> >> >> > the<br>
>>> >> >> >> > above<br>
>>> >> >> >> > command:<br>
>>> >> >> >> ><br>
>>> >> >> >> > [root@node-a ~]# docker exec -it c0f8ab4d92a23b6df2 /bin/bash<br>
>>> >> >> >> > root@c0f8ab4d92a2:/app# systemctl status glusterd.service<br>
>>> >> >> >> > WARNING: terminal is not fully functional<br>
>>> >> >> >> > Failed to connect to bus: No such file or directory<br>
>>> >> >> >> ><br>
>>> >> >> >> ><br>
>>> >> >> >> > Any check that i can do manually on nodes to debug further?<br>
>>> >> >> >> > Any<br>
>>> >> >> >> > suggestions?<br>
>>> >> >> >> ><br>
>>> >> >> >> ><br>
>>> >> >> >> > On Thu, Aug 31, 2017 at 6:53 PM, Jose A. Rivera<br>
>>> >> >> >> > <jarrpa@redhat.com>
>>> >> >> >> > wrote:<br>
>>> >> >> >> >><br>
>>> >> >> >> >> Hey Gaurav,<br>
>>> >> >> >> >><br>
>>> >> >> >> >> The kernel modules must be loaded on all nodes that will run<br>
>>> >> >> >> >> heketi<br>
>>> >> >> >> >> pods. Additionally, you must have at least three nodes<br>
>>> >> >> >> >> specified<br>
>>> >> >> >> >> in<br>
>>> >> >> >> >> your topology file. I'm not sure how you're getting three<br>
>>> >> >> >> >> gluster<br>
>>> >> >> >> >> pods<br>
>>> >> >> >> >> when you only have two nodes defined... :)<br>
>>> >> >> >> >><br>
>>> >> >> >> >> --Jose<br>
>>> >> >> >> >><br>
>>> >> >> >> >> On Wed, Aug 30, 2017 at 5:27 AM, Gaurav Chhabra<br>
>>> >> >> >> >> <varuag.chhabra@gmail.com> wrote:
>>> >> >> >> >> > Hi,<br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > I have the following setup in place:<br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > 1 node : RancherOS having Rancher application for<br>
>>> >> >> >> >> > Kubernetes<br>
>>> >> >> >> >> > setup<br>
>>> >> >> >> >> > 2 nodes : RancherOS having Rancher agent<br>
>>> >> >> >> >> > 1 node : CentOS 7 workstation having kubectl installed<br>
>>> >> >> >> >> > and<br>
>>> >> >> >> >> > folder<br>
>>> >> >> >> >> > cloned/downloaded from<br>
>>> >> >> >> >> > https://github.com/gluster/gluster-kubernetes
>>> >> >> >> >> > using<br>
>>> >> >> >> >> > which i run Heketi setup (gk-deploy -g)<br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > I also have rancher-glusterfs-server container running with<br>
>>> >> >> >> >> > the<br>
>>> >> >> >> >> > following<br>
>>> >> >> >> >> > configuration:<br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > ------------------------------------------------------------------
>>> >> >> >> >> > [root@node-1 rancher]# cat gluster-server.sh<br>
>>> >> >> >> >> > #!/bin/bash<br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > sudo docker run --name=gluster-server -d \<br>
>>> >> >> >> >> > --env 'SERVICE_NAME=gluster' \<br>
>>> >> >> >> >> > --restart always \<br>
>>> >> >> >> >> > --env 'GLUSTER_DATA=/srv/docker/gitlab' \
>>> >> >> >> >> > --publish 2222:22 \
>>> >> >> >> >> > webcenter/rancher-glusterfs-server
>>> >> >> >> >> ><br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > ------------------------------------------------------------------
>>> >> >> >> >> ><br>
>>> >> >> >> >> > In /etc/heketi/heketi.json, following is the only modified<br>
>>> >> >> >> >> > portion:<br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > ------------------------------------------------------------------
>>> >> >> >> >> > "executor": "ssh",<br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > "_sshexec_comment": "SSH username and private key file<br>
>>> >> >> >> >> > information",<br>
>>> >> >> >> >> > "sshexec": {<br>
>>> >> >> >> >> > "keyfile": "/var/lib/heketi/.ssh/id_rsa",<br>
>>> >> >> >> >> > "user": "root",<br>
>>> >> >> >> >> > "port": "22",<br>
>>> >> >> >> >> > "fstab": "/etc/fstab"<br>
>>> >> >> >> >> > },<br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > ------------------------------------------------------------------
>>> >> >> >> >> ><br>
>>> >> >> >> >> > Status before running gk-deploy:<br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > [root@workstation deploy]# kubectl get nodes,pods,services,deployments
>>> >> >> >> >> > NAME                                     STATUS    AGE       VERSION
>>> >> >> >> >> > no/node-1.c.kubernetes-174104.internal   Ready     2d        v1.7.2-rancher1
>>> >> >> >> >> > no/node-2.c.kubernetes-174104.internal   Ready     2d        v1.7.2-rancher1
>>> >> >> >> >> > no/node-3.c.kubernetes-174104.internal   Ready     2d        v1.7.2-rancher1
>>> >> >> >> >> >
>>> >> >> >> >> > NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
>>> >> >> >> >> > svc/kubernetes   10.43.0.1    <none>        443/TCP   2d
>>> >> >> >> >> ><br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > Now when i run 'gk-deploy -g', in the Rancher console, i<br>
>>> >> >> >> >> > see<br>
>>> >> >> >> >> > the<br>
>>> >> >> >> >> > following<br>
>>> >> >> >> >> > error:<br>
>>> >> >> >> >> > Readiness probe failed: Failed to get D-Bus connection:<br>
>>> >> >> >> >> > Operation<br>
>>> >> >> >> >> > not<br>
>>> >> >> >> >> > permitted<br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > From the attached gk-deploy_log i see that it failed at:<br>
>>> >> >> >> >> > Waiting for GlusterFS pods to start ... pods not found.<br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > In the kube-templates/glusterfs-daemonset.yaml file, i see
>>> >> >> >> >> > this<br>
>>> >> >> >> >> > for<br>
>>> >> >> >> >> > Readiness probe section:<br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > ------------------------------------------------------------------
>>> >> >> >> >> > readinessProbe:<br>
>>> >> >> >> >> > timeoutSeconds: 3<br>
>>> >> >> >> >> > initialDelaySeconds: 40<br>
>>> >> >> >> >> > exec:<br>
>>> >> >> >> >> > command:<br>
>>> >> >> >> >> > - "/bin/bash"<br>
>>> >> >> >> >> > - "-c"<br>
>>> >> >> >> >> > - systemctl status glusterd.service<br>
>>> >> >> >> >> > periodSeconds: 25<br>
>>> >> >> >> >> > successThreshold: 1<br>
>>> >> >> >> >> > failureThreshold: 15<br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > ------------------------------------------------------------------
>>> >> >> >> >> ><br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > Status after running gk-deploy:<br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > [root@workstation deploy]# kubectl get nodes,pods,deployments,services
>>> >> >> >> >> > NAME                                     STATUS    AGE       VERSION
>>> >> >> >> >> > no/node-1.c.kubernetes-174104.internal   Ready     2d        v1.7.2-rancher1
>>> >> >> >> >> > no/node-2.c.kubernetes-174104.internal   Ready     2d        v1.7.2-rancher1
>>> >> >> >> >> > no/node-3.c.kubernetes-174104.internal   Ready     2d        v1.7.2-rancher1
>>> >> >> >> >> >
>>> >> >> >> >> > NAME                 READY     STATUS    RESTARTS   AGE
>>> >> >> >> >> > po/glusterfs-0s440   0/1       Running   0          1m
>>> >> >> >> >> > po/glusterfs-j7dgr   0/1       Running   0          1m
>>> >> >> >> >> > po/glusterfs-p6jl3   0/1       Running   0          1m
>>> >> >> >> >> >
>>> >> >> >> >> > NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
>>> >> >> >> >> > svc/kubernetes   10.43.0.1    <none>        443/TCP   2d
>>> >> >> >> >> ><br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > Also, from a prerequisite perspective, i saw this
>>> >> >> >> >> > mentioned:<br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > The following kernel modules must be loaded:<br>
>>> >> >> >> >> > * dm_snapshot<br>
>>> >> >> >> >> > * dm_mirror<br>
>>> >> >> >> >> > * dm_thin_pool<br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > Where exactly is this to be checked? On all Gluster server<br>
>>> >> >> >> >> > nodes?<br>
>>> >> >> >> >> > How<br>
>>> >> >> >> >> > can i<br>
>>> >> >> >> >> > check whether it's there?<br>
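A way to check and, if needed, load them on each node that will run a GlusterFS pod (a sketch; lsmod only lists modules that are currently loaded, while modprobe loads them if they exist on disk or are built in):

lsmod | grep -E 'dm_snapshot|dm_mirror|dm_thin_pool'
sudo modprobe dm_snapshot
sudo modprobe dm_mirror
sudo modprobe dm_thin_pool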
>>> >> >> >> >> ><br>
>>> >> >> >> >> > I have attached topology.json and gk-deploy log for<br>
>>> >> >> >> >> > reference.<br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > Does this issue have anything to do with the host OS
>>> >> >> >> >> > (RancherOS)<br>
>>> >> >> >> >> > that<br>
>>> >> >> >> >> > i<br>
>>> >> >> >> >> > am<br>
>>> >> >> >> >> > using for Gluster nodes? Any idea how i can fix this? Any<br>
>>> >> >> >> >> > help<br>
>>> >> >> >> >> > will<br>
>>> >> >> >> >> > really<br>
>>> >> >> >> >> > be appreciated.<br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > Thanks.<br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> ><br>
>>> >> >> >> >> > _______________________________________________
>>> >> >> >> >> > heketi-devel mailing list<br>
>>> >> >> >> >> > heketi-devel@gluster.org
>>> >> >> >> >> > http://lists.gluster.org/mailman/listinfo/heketi-devel
>>> >> >> >> >> ><br>
>>> >> >> >> ><br>
>>> >> >> >> ><br>
>>> >> >> ><br>
>>> >> >> ><br>
>>> >> ><br>
>>> >> ><br>
>>> ><br>
>>> ><br>
>><br>
>><br>
><br>