<div dir="ltr"><div class="gmail_default" style="font-family:verdana,sans-serif">Oh...ok. I misinterpreted while thinking of HA. ​So it's persistent and HA in the sense that even if one node in the Gluster cluster goes down, others are available to serve request because of replication. I hope i now got it correct.</div><div class="gmail_default" style="font-family:verdana,sans-serif"><br></div><div class="gmail_default" style="font-family:verdana,sans-serif">I'm glad i posted my query here and i'm really thankful for your help. I was struggling to get this thing work for the last 6-7 days.</div><div class="gmail_default" style="font-family:verdana,sans-serif"><br></div><div class="gmail_default" style="font-family:verdana,sans-serif"><br></div><div class="gmail_default" style="font-family:verdana,sans-serif">Thanks again Jose! :)</div><div class="gmail_default" style="font-family:verdana,sans-serif">Gaurav</div><div class="gmail_default" style="font-family:verdana,sans-serif"><br></div><div class="gmail_default" style="font-family:verdana,sans-serif"><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Sep 5, 2017 at 7:00 PM, Jose A. Rivera <span dir="ltr"><<a href="mailto:jarrpa@redhat.com" target="_blank">jarrpa@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">That is expected behavior. :) GlusterFS volumes are shared storage,<br>
meaning the same data is presented to all accessing users. By default<br>
(in our configuration) it replicates data across multiple bricks, but<br>
all bricks are represented as a single filesystem and all replicas of<br>
a file are presented as a single file. Thus, when you delete a file<br>
you delete it and all its replicas.<br>
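For illustration only (node names and brick paths below are placeholders, the volume name is taken from your setup), a replica-3 volume behaves like one filesystem no matter which node or client you touch it from:<br>
<br>
  gluster volume create ranchervol replica 3 node1:/bricks/ranchervol node2:/bricks/ranchervol node3:/bricks/ranchervol<br>
  gluster volume start ranchervol<br>
  mount -t glusterfs node1:/ranchervol /mnt<br>
  rm /mnt/index.html    # the file disappears from all three replicas, not just one brick<br>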
<div class="HOEnZb"><div class="h5"><br>
On Tue, Sep 5, 2017 at 6:15 AM, Gaurav Chhabra <<a href="mailto:varuag.chhabra@gmail.com">varuag.chhabra@gmail.com</a>> wrote:<br>
> Hi Jose,<br>
><br>
><br>
> There is some progress after using the following container:<br>
> <a href="https://hub.docker.com/r/gluster/glusterfs-client/" rel="noreferrer" target="_blank">https://hub.docker.com/r/<wbr>gluster/glusterfs-client/</a><br>
><br>
> I created deployment with 5 replicas for Nginx in Kubernetes using your doc.<br>
> Now as described in the last section of your doc, if i create file<br>
> 'index.html' inside of a container running in a pod on node-a, i can see the<br>
> same file becomes available on another pod running on node-b. However, if i<br>
> delete the file on one node (say node-a), it becomes unavailable on all the<br>
> three nodes. The file should have been accessible from other nodes even if i<br>
> delete it from one pod on a given node. Correct me if i'm getting it wrong.<br>
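> For reference, the test itself was roughly the following (pod names and the mount path inside the containers are placeholders):<br>
><br>
>   kubectl exec POD_ON_NODE_A -- sh -c 'echo hello > /usr/share/nginx/html/index.html'<br>
>   kubectl exec POD_ON_NODE_B -- cat /usr/share/nginx/html/index.html    # same content shows up<br>
>   kubectl exec POD_ON_NODE_A -- rm /usr/share/nginx/html/index.html     # file is gone from every pod<br>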
><br>
> I searched for index.html on the Gluster cluster and i could find it on all<br>
> three nodes with same content. Please check the attached log.<br>
><br>
> If i check /etc/fstab on any of the pods, i see the following:<br>
><br>
> root@nginx-deploy-4253768608-<wbr>w645w:/# cat /etc/fstab<br>
> # UNCONFIGURED FSTAB FOR BASE SYSTEM<br>
><br>
> Attached log files.<br>
><br>
> Any idea why the file is not accessible from another pod on node-a or node-b<br>
> if i delete it from a different pod present on another node-c?<br>
><br>
><br>
> Regards,<br>
> Gaurav<br>
><br>
><br>
> On Tue, Sep 5, 2017 at 3:41 AM, Jose A. Rivera <<a href="mailto:jarrpa@redhat.com">jarrpa@redhat.com</a>> wrote:<br>
>><br>
>> Yup, I follow that repo and should be contributing to it in the<br>
>> near-future. :) I've never played with the glusterfs-client image, but<br>
>> here's hoping!<br>
>><br>
>> On Mon, Sep 4, 2017 at 10:18 AM, Gaurav Chhabra<br>
>> <<a href="mailto:varuag.chhabra@gmail.com">varuag.chhabra@gmail.com</a>> wrote:<br>
>> > Actually the script is launching a container and i'm pretty sure that<br>
>> > the<br>
>> > command that's programmed as part of the container launch process is<br>
>> > actually trying to run that mount command. Now the issue is that this<br>
>> > again<br>
>> > falls under Docker area. :( Generally, Docker container providers (in<br>
>> > this<br>
>> > case, nixel) provide a Dockerfile which specifies the command that's<br>
>> > executed as part of the container launch process however, the problem in<br>
>> > case of this container is that it's not maintained. It's 2 years old. If<br>
>> > you<br>
>> > will check active projects such as gluster/gluster-containers, you will<br>
>> > see<br>
>> > a proper Dockerfile explaining what it does.<br>
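>> > (Even without a published Dockerfile, the launch command can be read straight off the image, e.g.:<br>
>> ><br>
>> >   docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' nixel/rancher-glusterfs-client<br>
>> ><br>
>> > which prints whatever ENTRYPOINT/CMD the container runs at startup.)<br>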
>> ><br>
>> > <a href="https://hub.docker.com/r/gluster/glusterfs-client/~/dockerfile/" rel="noreferrer" target="_blank">https://hub.docker.com/r/<wbr>gluster/glusterfs-client/~/<wbr>dockerfile/</a><br>
>> ><br>
>> > I am now planning to use the above container and see whether i'm lucky.<br>
>> > If<br>
>> > you will see the above link, it's running<br>
>> ><br>
>> > dnf install -y glusterfs-fuse<br>
>> ><br>
>> > Hopefully, this should take care of this section which asks to make sure<br>
>> > that glusterfs-client is installed<br>
>> ><br>
>> ><br>
>> > <a href="https://github.com/gluster/gluster-kubernetes/tree/master/docs/examples/dynamic_provisioning_external_gluster#validate-communication-and-gluster-prerequisites-on-the-kubernetes-nodes" rel="noreferrer" target="_blank">https://github.com/gluster/<wbr>gluster-kubernetes/tree/<wbr>master/docs/examples/dynamic_<wbr>provisioning_external_gluster#<wbr>validate-communication-and-<wbr>gluster-prerequisites-on-the-<wbr>kubernetes-nodes</a><br>
>> ><br>
>> ><br>
>> ><br>
>> ><br>
>> > On Mon, Sep 4, 2017 at 7:33 PM, Jose A. Rivera <<a href="mailto:jarrpa@redhat.com">jarrpa@redhat.com</a>><br>
>> > wrote:<br>
>> >><br>
>> >> Something is trying to mount a GlusterFS volume, and it's not that<br>
>> >> script. What's trying to mount?<br>
>> >><br>
>> >> On Mon, Sep 4, 2017 at 8:52 AM, Gaurav Chhabra<br>
>> >> <<a href="mailto:varuag.chhabra@gmail.com">varuag.chhabra@gmail.com</a>><br>
>> >> wrote:<br>
>> >> > I am just running this bash script which will start the<br>
>> >> > rancher-glusterfs-client container:<br>
>> >> ><br>
>> >> > [root@node-a ~]# cat rancher-glusterfs-client.sh<br>
>> >> > #!/bin/bash<br>
>> >> > sudo docker run --privileged \<br>
>> >> >      --name=gluster-client \<br>
>> >> >      -d \<br>
>> >> >      -v /sys/fs/cgroup:/sys/fs/cgroup \<br>
>> >> >      -v /var/log/glusterfs:/var/log/glusterfs \<br>
>> >> >      --env GLUSTER_PEER=10.128.0.12,10.128.0.15,10.128.0.16 \<br>
>> >> >      nixel/rancher-glusterfs-client<br>
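>> >> ><br>
>> >> > For what it's worth, whether the volume actually gets mounted inside it can be checked with something like:<br>
>> >> ><br>
>> >> >   docker exec gluster-client mount | grep glusterfs<br>
>> >> >   docker exec gluster-client df -h | grep ranchervol<br>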
>> >> ><br>
>> >> ><br>
>> >> ><br>
>> >> > On Mon, Sep 4, 2017 at 6:26 PM, Jose A. Rivera <<a href="mailto:jarrpa@redhat.com">jarrpa@redhat.com</a>><br>
>> >> > wrote:<br>
>> >> >><br>
>> >> >> What is the exact command you're running?<br>
>> >> >><br>
>> >> >> On Mon, Sep 4, 2017 at 4:26 AM, Gaurav Chhabra<br>
>> >> >> <<a href="mailto:varuag.chhabra@gmail.com">varuag.chhabra@gmail.com</a>><br>
>> >> >> wrote:<br>
>> >> >> > Hi Jose,<br>
>> >> >> ><br>
>> >> >> ><br>
>> >> >> > From this link, it seems mount.glusterfs might actually be present<br>
>> >> >> > in<br>
>> >> >> > the<br>
>> >> >> > container that launched and quickly terminated.<br>
>> >> >> ><br>
>> >> >> ><br>
>> >> >> ><br>
>> >> >> ><br>
>> >> >> > <a href="https://unix.stackexchange.com/questions/312178/glusterfs-replicated-volume-mounting-issue" rel="noreferrer" target="_blank">https://unix.stackexchange.<wbr>com/questions/312178/<wbr>glusterfs-replicated-volume-<wbr>mounting-issue</a><br>
>> >> >> ><br>
>> >> >> > If you check the question that the user posted, the same error<br>
>> >> >> > (Mount<br>
>> >> >> > failed) was reported that i sent you in the last email.<br>
>> >> >> ><br>
>> >> >> > After seeing the above, i checked /var/log/glusterfs on my host<br>
>> >> >> > (RancherOS)<br>
>> >> >> > but it was empty. I ran the container again but with explicit<br>
>> >> >> > volume<br>
>> >> >> > mount<br>
>> >> >> > as shown below:<br>
>> >> >> ><br>
>> >> >> > [root@node-a ~]# cat rancher-glusterfs-client.sh<br>
>> >> >> > #!/bin/bash<br>
>> >> >> > sudo docker run --privileged \<br>
>> >> >> >      --name=gluster-client \<br>
>> >> >> >      -d \<br>
>> >> >> >      -v /sys/fs/cgroup:/sys/fs/cgroup \<br>
>> >> >> >      -v /var/log/glusterfs:/var/log/glusterfs \<br>
>> >> >> >      --env GLUSTER_PEER=10.128.0.12,10.128.0.15,10.128.0.16 \<br>
>> >> >> >      nixel/rancher-glusterfs-client<br>
>> >> >> > This time, i could see a log file<br>
>> >> >> > (/var/log/glusterfs/mnt-<wbr>ranchervol.log)<br>
>> >> >> > present. I have attached the content of the same. Also attached<br>
>> >> >> > are<br>
>> >> >> > logs<br>
>> >> >> > from Heketi client/server (both on one node) and Gluster cluster.<br>
>> >> >> ><br>
>> >> >> ><br>
>> >> >> > Regards,<br>
>> >> >> > Gaurav<br>
>> >> >> ><br>
>> >> >> ><br>
>> >> >> > On Mon, Sep 4, 2017 at 2:29 PM, Gaurav Chhabra<br>
>> >> >> > <<a href="mailto:varuag.chhabra@gmail.com">varuag.chhabra@gmail.com</a>><br>
>> >> >> > wrote:<br>
>> >> >> >><br>
>> >> >> >> Hi Jose,<br>
>> >> >> >><br>
>> >> >> >><br>
>> >> >> >> I tried setting up things using the link you provided and i was<br>
>> >> >> >> able<br>
>> >> >> >> to<br>
>> >> >> >> get all steps working for 3 node Gluster cluster, all running on<br>
>> >> >> >> CentOS7,<br>
>> >> >> >> without any issue. However, as expected, when i tried configuring<br>
>> >> >> >> Kubernetes<br>
>> >> >> >> by installing nixel/rancher-glusterfs-client container, i got<br>
>> >> >> >> error:<br>
>> >> >> >><br>
>> >> >> >> [root@node-a ~]# cat rancher-glusterfs-client.sh<br>
>> >> >> >> #!/bin/bash<br>
>> >> >> >> sudo docker run --privileged \<br>
>> >> >> >>      --name=gluster-client \<br>
>> >> >> >>      -d \<br>
>> >> >> >>      -v /sys/fs/cgroup:/sys/fs/cgroup \<br>
>> >> >> >>      --env GLUSTER_PEER=10.128.0.12,10.128.0.15,10.128.0.16 \<br>
>> >> >> >>      nixel/rancher-glusterfs-client<br>
>> >> >> >><br>
>> >> >> >> [root@node-a ~]# ./rancher-glusterfs-client.sh<br>
>> >> >> >> ac069caccdce147d6f423fc5661663<wbr>45191dbc1b11f3416c66207a1fd11f<wbr>da6b<br>
>> >> >> >><br>
>> >> >> >> [root@node-a ~]# docker logs gluster-client<br>
>> >> >> >> => Checking if I can reach GlusterFS node 10.128.0.12 ...<br>
>> >> >> >> => GlusterFS node 10.128.0.12 is alive<br>
>> >> >> >> => Mounting GlusterFS volume ranchervol from GlusterFS node<br>
>> >> >> >> 10.128.0.12<br>
>> >> >> >> ...<br>
>> >> >> >> Mount failed. Please check the log file for more details.<br>
>> >> >> >><br>
>> >> >> >> If i try running the next step as described in your link, i get<br>
>> >> >> >> the<br>
>> >> >> >> following:<br>
>> >> >> >><br>
>> >> >> >> [root@node-a ~]# modprobe fuse<br>
>> >> >> >> modprobe: module fuse not found in modules.dep<br>
>> >> >> >><br>
>> >> >> >> Since the container failed to start, i could only check on the<br>
>> >> >> >> host<br>
>> >> >> >> (RancherOS) and i could only find two mount-related commands:<br>
>> >> >> >> mount<br>
>> >> >> >> &<br>
>> >> >> >> mountpoint<br>
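>> >> >> >><br>
>> >> >> >> The generic checks i can still run on the host for fuse support are things like:<br>
>> >> >> >><br>
>> >> >> >>   ls -l /dev/fuse<br>
>> >> >> >>   grep fuse /proc/filesystems<br>
>> >> >> >>   lsmod | grep fuse<br>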
>> >> >> >><br>
>> >> >> >> Any pointers?<br>
>> >> >> >><br>
>> >> >> >><br>
>> >> >> >> Regards,<br>
>> >> >> >> Gaurav<br>
>> >> >> >><br>
>> >> >> >> On Sun, Sep 3, 2017 at 11:18 PM, Jose A. Rivera<br>
>> >> >> >> <<a href="mailto:jarrpa@redhat.com">jarrpa@redhat.com</a>><br>
>> >> >> >> wrote:<br>
>> >> >> >>><br>
>> >> >> >>> Installing the glusterfs-client container should be fine. :) The<br>
>> >> >> >>> main<br>
>> >> >> >>> thing that's needed is that all your Kubernetes nodes need to<br>
>> >> >> >>> have<br>
>> >> >> >>> the<br>
>> >> >> >>> "mount.glusterfs" command available so Kube can mount the<br>
>> >> >> >>> GlusterFS<br>
>> >> >> >>> volumes and present them to the pods.<br>
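>> >> >> >>><br>
>> >> >> >>> A quick way to verify that on a node is a manual mount by hand, e.g. (the mount point is arbitrary and GLUSTER_NODE_IP/VOLNAME are placeholders):<br>
>> >> >> >>><br>
>> >> >> >>>   mkdir -p /mnt/gluster-test<br>
>> >> >> >>>   mount -t glusterfs GLUSTER_NODE_IP:/VOLNAME /mnt/gluster-test<br>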
>> >> >> >>><br>
>> >> >> >>> On Sun, Sep 3, 2017 at 12:14 PM, Gaurav Chhabra<br>
>> >> >> >>> <<a href="mailto:varuag.chhabra@gmail.com">varuag.chhabra@gmail.com</a>> wrote:<br>
>> >> >> >>> > Thanks Jose. The link you've suggested looks good but again it<br>
>> >> >> >>> > expects<br>
>> >> >> >>> > me to<br>
>> >> >> >>> > install gluster-client on Kubernetes node and i fall into the<br>
>> >> >> >>> > same<br>
>> >> >> >>> > issue of<br>
>> >> >> >>> > installing a container for glusterfs. Only difference is that<br>
>> >> >> >>> > this<br>
>> >> >> >>> > time<br>
>> >> >> >>> > it's<br>
>> >> >> >>> > glusterfs-client and not glusterfs-server. :)<br>
>> >> >> >>> ><br>
>> >> >> >>> > I will try this out and let you know tomorrow.<br>
>> >> >> >>> ><br>
>> >> >> >>> ><br>
>> >> >> >>> > Regards,<br>
>> >> >> >>> > Gaurav<br>
>> >> >> >>> ><br>
>> >> >> >>> ><br>
>> >> >> >>> > On Sun, Sep 3, 2017 at 2:11 AM, Jose A. Rivera<br>
>> >> >> >>> > <<a href="mailto:jarrpa@redhat.com">jarrpa@redhat.com</a>><br>
>> >> >> >>> > wrote:<br>
>> >> >> >>> >><br>
>> >> >> >>> >> Hey, no problem! I'm eager to learn more about different<br>
>> >> >> >>> >> flavors<br>
>> >> >> >>> >> of<br>
>> >> >> >>> >> Linux, I just apologize for my relative inexperience with<br>
>> >> >> >>> >> them.<br>
>> >> >> >>> >> :)<br>
>> >> >> >>> >><br>
>> >> >> >>> >> To that end, I will also admit I'm not very experienced with<br>
>> >> >> >>> >> direct<br>
>> >> >> >>> >> Docker myself. I understand the basic workflow and know some<br>
>> >> >> >>> >> of<br>
>> >> >> >>> >> the<br>
>> >> >> >>> >> run options, but not having deep experience keeps me from<br>
>> >> >> >>> >> having<br>
>> >> >> >>> >> a<br>
>> >> >> >>> >> better understanding of the patterns and consequences.<br>
>> >> >> >>> >><br>
>> >> >> >>> >> Thus, I'd like to guide you in a direction I'd be more apt to<br>
>> >> >> >>> >> help<br>
>> >> >> >>> >> you<br>
>> >> >> >>> >> in<br>
>> >> >> >>> >> right now. I know that you can't have multiple GlusterFS<br>
>> >> >> >>> >> servers<br>
>> >> >> >>> >> running on the same nodes, and I know that we have been<br>
>> >> >> >>> >> successfully<br>
>> >> >> >>> >> running several configurations using our<br>
>> >> >> >>> >> gluster/gluster-centos<br>
>> >> >> >>> >> image.<br>
>> >> >> >>> >> If you follow the Kubernetes configuration on<br>
>> >> >> >>> >> gluster-kubernetes,<br>
>> >> >> >>> >> the<br>
>> >> >> >>> >> pod/container is run privileged and with host networking, and<br>
>> >> >> >>> >> we<br>
>> >> >> >>> >> require that the node has all listed ports open, not just<br>
>> >> >> >>> >> 2222.<br>
>> >> >> >>> >> The<br>
>> >> >> >>> >> sshd running in the container is listening on 2222, not 22,<br>
>> >> >> >>> >> but<br>
>> >> >> >>> >> it<br>
>> >> >> >>> >> is<br>
>> >> >> >>> >> also not really required if you're not doing geo-replication.<br>
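>> >> >> >>> >><br>
>> >> >> >>> >> If you want to approximate what that pod does with plain Docker, it is roughly something like this (a sketch based on the mounts the daemonset uses, not an exact recipe):<br>
>> >> >> >>> >><br>
>> >> >> >>> >>   docker run -d --privileged=true --net=host \<br>
>> >> >> >>> >>       -v /etc/glusterfs:/etc/glusterfs \<br>
>> >> >> >>> >>       -v /var/lib/glusterd:/var/lib/glusterd \<br>
>> >> >> >>> >>       -v /var/log/glusterfs:/var/log/glusterfs \<br>
>> >> >> >>> >>       -v /sys/fs/cgroup:/sys/fs/cgroup:ro \<br>
>> >> >> >>> >>       -v /dev:/dev \<br>
>> >> >> >>> >>       gluster/gluster-centos<br>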
>> >> >> >>> >><br>
>> >> >> >>> >> Alternatively, you can indeed run GlusterFS outside of<br>
>> >> >> >>> >> Kubernetes<br>
>> >> >> >>> >> but<br>
>> >> >> >>> >> still have Kubernetes apps access GlusterFS storage. The<br>
>> >> >> >>> >> nodes<br>
>> >> >> >>> >> can<br>
>> >> >> >>> >> be<br>
>> >> >> >>> >> anything you want, they just need to be running GlusterFS and<br>
>> >> >> >>> >> you<br>
>> >> >> >>> >> need<br>
>> >> >> >>> >> a heketi service managing them. Here is an example of how to<br>
>> >> >> >>> >> set<br>
>> >> >> >>> >> this<br>
>> >> >> >>> >> up using CentOS:<br>
>> >> >> >>> >><br>
>> >> >> >>> >><br>
>> >> >> >>> >><br>
>> >> >> >>> >><br>
>> >> >> >>> >><br>
>> >> >> >>> >><br>
>> >> >> >>> >> <a href="https://github.com/gluster/gluster-kubernetes/tree/master/docs/examples/dynamic_provisioning_external_gluster" rel="noreferrer" target="_blank">https://github.com/gluster/<wbr>gluster-kubernetes/tree/<wbr>master/docs/examples/dynamic_<wbr>provisioning_external_gluster</a><br>
>> >> >> >>> >><br>
>> >> >> >>> >> Hope this is at least leading you in a useful direction. :)<br>
>> >> >> >>> >><br>
>> >> >> >>> >> --Jose<br>
>> >> >> >>> >><br>
>> >> >> >>> >> On Sat, Sep 2, 2017 at 3:16 PM, Gaurav Chhabra<br>
>> >> >> >>> >> <<a href="mailto:varuag.chhabra@gmail.com">varuag.chhabra@gmail.com</a>><br>
>> >> >> >>> >> wrote:<br>
>> >> >> >>> >> > Hi Jose,<br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> > webcenter/rancher-glusterfs-<wbr>server is actually a container<br>
>> >> >> >>> >> > provided<br>
>> >> >> >>> >> > by<br>
>> >> >> >>> >> > Sebastien, its maintainer. It's a Docker container which<br>
>> >> >> >>> >> > has<br>
>> >> >> >>> >> > GlusterFS<br>
>> >> >> >>> >> > server running within it. On the host i.e., RancherOS,<br>
>> >> >> >>> >> > there<br>
>> >> >> >>> >> > is<br>
>> >> >> >>> >> > no<br>
>> >> >> >>> >> > separate<br>
>> >> >> >>> >> > GlusterFS server running because we cannot install anything<br>
>> >> >> >>> >> > that<br>
>> >> >> >>> >> > way.<br>
>> >> >> >>> >> > Running using container is the only way so i started<br>
>> >> >> >>> >> > ancher-glusterfs-server<br>
>> >> >> >>> >> > container with the following parameters:<br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> > [root@node-1 rancher]# cat gluster-server.sh<br>
>> >> >> >>> >> > #!/bin/bash<br>
>> >> >> >>> >> > sudo docker run --name=gluster-server -d \<br>
>> >> >> >>> >> >      --env 'SERVICE_NAME=gluster' \<br>
>> >> >> >>> >> >      --restart always \<br>
>> >> >> >>> >> >      --publish 2222:22 \<br>
>> >> >> >>> >> >      webcenter/rancher-glusterfs-server<br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> > Here's the link to the dockerfile:<br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> > <a href="https://hub.docker.com/r/webcenter/rancher-glusterfs-server/~/dockerfile/" rel="noreferrer" target="_blank">https://hub.docker.com/r/<wbr>webcenter/rancher-glusterfs-<wbr>server/~/dockerfile/</a><br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> > It's similar to other GlusteFS containers provided by other<br>
>> >> >> >>> >> > maintainers<br>
>> >> >> >>> >> > for<br>
>> >> >> >>> >> > different OS. For example, for CentOS, we have<br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> > <a href="https://hub.docker.com/r/gluster/gluster-centos/~/dockerfile/" rel="noreferrer" target="_blank">https://hub.docker.com/r/<wbr>gluster/gluster-centos/~/<wbr>dockerfile/</a><br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> > From what i understand, Heketi does support container based<br>
>> >> >> >>> >> > GlusterFS<br>
>> >> >> >>> >> > server<br>
>> >> >> >>> >> > as mentioned in the prerequisite where it says:<br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> > "Each node must have the following ports opened for<br>
>> >> >> >>> >> > GlusterFS<br>
>> >> >> >>> >> > communications:<br>
>> >> >> >>> >> >  2222 - GlusterFS pod's sshd"<br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> > That's the reason i've exposed port 2222 for 22 as shown<br>
>> >> >> >>> >> > above.<br>
>> >> >> >>> >> > Please<br>
>> >> >> >>> >> > correct me if i misunderstood it.<br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> > As soon as i run the above script (gluster-server.sh), it<br>
>> >> >> >>> >> > automatically<br>
>> >> >> >>> >> > creates the following directories on host. This should have<br>
>> >> >> >>> >> > ideally<br>
>> >> >> >>> >> > not<br>
>> >> >> >>> >> > been<br>
>> >> >> >>> >> > empty as you mentioned.<br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> > /etc/glusterfs   /var/lib/glusterd   /var/log/glusterfs<br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> > Just wanted to know in which circumstances do we get this<br>
>> >> >> >>> >> > specific<br>
>> >> >> >>> >> > error<br>
>> >> >> >>> >> > (Failed to get D-Bus connection: Operation not permitted)<br>
>> >> >> >>> >> > related<br>
>> >> >> >>> >> > to<br>
>> >> >> >>> >> > Readiness probe failing. Searching online took me to<br>
>> >> >> >>> >> > discussions<br>
>> >> >> >>> >> > around<br>
>> >> >> >>> >> > running container in privileged mode and some directory to<br>
>> >> >> >>> >> > be<br>
>> >> >> >>> >> > mounted.<br>
>> >> >> >>> >> > Based<br>
>> >> >> >>> >> > on that, i also modified my container startup script to the<br>
>> >> >> >>> >> > following:<br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> > #!/bin/bash<br>
>> >> >> >>> >> > sudo docker run --privileged \<br>
>> >> >> >>> >> >      --name=gluster-server \<br>
>> >> >> >>> >> >      -d \<br>
>> >> >> >>> >> >      -v /sys/fs/cgroup:/sys/fs/cgroup \<br>
>> >> >> >>> >> >      -v /etc/glusterfs:/etc/glusterfs \<br>
>> >> >> >>> >> >      -v /var/lib/glusterd:/var/lib/glusterd \<br>
>> >> >> >>> >> >      -v /var/log/glusterfs:/var/log/glusterfs \<br>
>> >> >> >>> >> >      --env 'SERVICE_NAME=gluster' \<br>
>> >> >> >>> >> >      --restart always \<br>
>> >> >> >>> >> >      --publish 2222:22 \<br>
>> >> >> >>> >> >      webcenter/rancher-glusterfs-server<br>
>> >> >> >>> >> > Still, the issue persists.<br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> > I also logged into the container and checked whether<br>
>> >> >> >>> >> > systemctl<br>
>> >> >> >>> >> > command<br>
>> >> >> >>> >> > is<br>
>> >> >> >>> >> > present. It was there but manualy running the command also<br>
>> >> >> >>> >> > doesn't<br>
>> >> >> >>> >> > work:<br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> > [root@node-c ~]# docker exec -it gluster-server /bin/bash<br>
>> >> >> >>> >> > root@42150f203f80:/app# systemctl status glusterd.service<br>
>> >> >> >>> >> > WARNING: terminal is not fully functional<br>
>> >> >> >>> >> > Failed to connect to bus: No such file or directory<br>
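>> >> >> >>> >> ><br>
>> >> >> >>> >> > From what i understand, systemctl only works when systemd is actually running as PID 1 inside the container, which can be checked with something like:<br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> >   docker exec gluster-server cat /proc/1/comm<br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> > If that does not print "systemd", the readiness probe's systemctl call has nothing to talk to.<br>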
>> >> >> >>> >> ><br>
>> >> >> >>> >> > Under section 'ADVANCED OPTIONS - Security/Host' in this<br>
>> >> >> >>> >> > link,<br>
>> >> >> >>> >> > it<br>
>> >> >> >>> >> > talks<br>
>> >> >> >>> >> > about SYS_ADMIN setting. Any idea how i can try this?<br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> > Also, there was this mentioned in the Heketi setup page:<br>
>> >> >> >>> >> > "If you are not able to deploy a hyper-converged GlusterFS<br>
>> >> >> >>> >> > cluster,<br>
>> >> >> >>> >> > you<br>
>> >> >> >>> >> > must<br>
>> >> >> >>> >> > have one running somewhere that the Kubernetes nodes can<br>
>> >> >> >>> >> > access"<br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> >>>> Does it mean running the three node Gluster cluster<br>
>> >> >> >>> >> >>>> outside<br>
>> >> >> >>> >> >>>> Kubernetes,<br>
>> >> >> >>> >> >>>> may be on some VM running on RHEL/CentOS etc? If yes,<br>
>> >> >> >>> >> >>>> then<br>
>> >> >> >>> >> >>>> how<br>
>> >> >> >>> >> >>>> will i<br>
>> >> >> >>> >> >>>> be<br>
>> >> >> >>> >> >>>> able to tell Gluster which volume from the Kubernetes<br>
>> >> >> >>> >> >>>> cluster<br>
>> >> >> >>> >> >>>> pod<br>
>> >> >> >>> >> >>>> to<br>
>> >> >> >>> >> >>>> sync?<br>
>> >> >> >>> >> >>>> Any references?<br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> > I really appreciate your responses despite the fact that<br>
>> >> >> >>> >> > you've<br>
>> >> >> >>> >> > not<br>
>> >> >> >>> >> > used<br>
>> >> >> >>> >> > RancherOS but still trying to help.<br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> > Thanks,<br>
>> >> >> >>> >> > Gaurav<br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> > On Sat, Sep 2, 2017 at 7:35 PM, Jose A. Rivera<br>
>> >> >> >>> >> > <<a href="mailto:jarrpa@redhat.com">jarrpa@redhat.com</a>><br>
>> >> >> >>> >> > wrote:<br>
>> >> >> >>> >> >><br>
>> >> >> >>> >> >> I'm afraid I have no experience with RancherOS, so I may<br>
>> >> >> >>> >> >> be<br>
>> >> >> >>> >> >> missing<br>
>> >> >> >>> >> >> some things about how it works. My primary experience is<br>
>> >> >> >>> >> >> with<br>
>> >> >> >>> >> >> Fedora,<br>
>> >> >> >>> >> >> CentOS, and Ubuntu.<br>
>> >> >> >>> >> >><br>
>> >> >> >>> >> >> What is webcenter/rancher-glusterfs-<wbr>server? If it's<br>
>> >> >> >>> >> >> running<br>
>> >> >> >>> >> >> another<br>
>> >> >> >>> >> >> glusterd then you probably don't want to be running it and<br>
>> >> >> >>> >> >> should<br>
>> >> >> >>> >> >> remove it from your systems.<br>
>> >> >> >>> >> >><br>
>> >> >> >>> >> >> The glusterfs pods mount hostpath volumes from the host<br>
>> >> >> >>> >> >> they're<br>
>> >> >> >>> >> >> running on to persist their configuration. Thus anything<br>
>> >> >> >>> >> >> they<br>
>> >> >> >>> >> >> write<br>
>> >> >> >>> >> >> to<br>
>> >> >> >>> >> >> those directories should land on the host. If that's not<br>
>> >> >> >>> >> >> happening<br>
>> >> >> >>> >> >> then that's an additional problem.<br>
>> >> >> >>> >> >><br>
>> >> >> >>> >> >> --Jose<br>
>> >> >> >>> >> >><br>
>> >> >> >>> >> >> On Fri, Sep 1, 2017 at 11:17 PM, Gaurav Chhabra<br>
>> >> >> >>> >> >> <<a href="mailto:varuag.chhabra@gmail.com">varuag.chhabra@gmail.com</a>> wrote:<br>
>> >> >> >>> >> >> > Hi Jose,<br>
>> >> >> >>> >> >> ><br>
>> >> >> >>> >> >> ><br>
>> >> >> >>> >> >> > I tried your suggestion but there is one confusion<br>
>> >> >> >>> >> >> > regarding<br>
>> >> >> >>> >> >> > point<br>
>> >> >> >>> >> >> > #3.<br>
>> >> >> >>> >> >> > Since<br>
>> >> >> >>> >> >> > RancherOS has everything running as container, i am<br>
>> >> >> >>> >> >> > running<br>
>> >> >> >>> >> >> > webcenter/rancher-glusterfs-<wbr>server container on all<br>
>> >> >> >>> >> >> > three<br>
>> >> >> >>> >> >> > nodes.<br>
>> >> >> >>> >> >> > Now<br>
>> >> >> >>> >> >> > as<br>
>> >> >> >>> >> >> > far<br>
>> >> >> >>> >> >> > as removing the directories are concerned, i hope you<br>
>> >> >> >>> >> >> > meant<br>
>> >> >> >>> >> >> > removing<br>
>> >> >> >>> >> >> > them on<br>
>> >> >> >>> >> >> > the host and _not_ from within the container. After<br>
>> >> >> >>> >> >> > completing<br>
>> >> >> >>> >> >> > step 1<br>
>> >> >> >>> >> >> > and 2,<br>
>> >> >> >>> >> >> > i checked the contents of all the directories that you<br>
>> >> >> >>> >> >> > specified<br>
>> >> >> >>> >> >> > in<br>
>> >> >> >>> >> >> > point<br>
>> >> >> >>> >> >> > #3. All were empty as you can see in the attached<br>
>> >> >> >>> >> >> > other_logs.txt.<br>
>> >> >> >>> >> >> > So<br>
>> >> >> >>> >> >> > i<br>
>> >> >> >>> >> >> > got<br>
>> >> >> >>> >> >> > confused. I ran the deploy again but the issue persists.<br>
>> >> >> >>> >> >> > Two<br>
>> >> >> >>> >> >> > pods<br>
>> >> >> >>> >> >> > show<br>
>> >> >> >>> >> >> > Liveness error and the third one, Readiness error.<br>
>> >> >> >>> >> >> ><br>
>> >> >> >>> >> >> > I then tried removing those directories (Step #3) from<br>
>> >> >> >>> >> >> > within<br>
>> >> >> >>> >> >> > the<br>
>> >> >> >>> >> >> > container<br>
>> >> >> >>> >> >> > but getting following error:<br>
>> >> >> >>> >> >> ><br>
>> >> >> >>> >> >> > root@c0f8ab4d92a2:/app# rm -rf /var/lib/heketi /etc/glusterfs /var/lib/glusterd /var/log/glusterfs<br>
>> >> >> >>> >> >> > rm: cannot remove '/var/lib/glusterd': Device or resource busy<br>
>> >> >> >>> >> >> ><br>
>> >> >> >>> >> >> ><br>
>> >> >> >>> >> >> ><br>
>> >> >> >>> >> >> > On Fri, Sep 1, 2017 at 8:21 PM, Jose A. Rivera<br>
>> >> >> >>> >> >> > <<a href="mailto:jarrpa@redhat.com">jarrpa@redhat.com</a>><br>
>> >> >> >>> >> >> > wrote:<br>
>> >> >> >>> >> >> >><br>
>> >> >> >>> >> >> >> 1. Add a line to the ssh-exec portion of heketi.json of<br>
>> >> >> >>> >> >> >> the<br>
>> >> >> >>> >> >> >> sort:<br>
>> >> >> >>> >> >> >><br>
>> >> >> >>> >> >> >> "sudo": true,<br>
>> >> >> >>> >> >> >><br>
>> >> >> >>> >> >> >> 2. Run<br>
>> >> >> >>> >> >> >><br>
>> >> >> >>> >> >> >> gk-deploy -g --abort<br>
>> >> >> >>> >> >> >><br>
>> >> >> >>> >> >> >> 3. On the nodes that were/will be running GlusterFS<br>
>> >> >> >>> >> >> >> pods,<br>
>> >> >> >>> >> >> >> run:<br>
>> >> >> >>> >> >> >><br>
>> >> >> >>> >> >> >> rm -rf /var/lib/heketi /etc/glusterfs /var/lib/glusterd /var/log/glusterfs<br>
>> >> >> >>> >> >> >><br>
>> >> >> >>> >> >> >> Then try the deploy again.<br>
>> >> >> >>> >> >> >><br>
>> >> >> >>> >> >> >> On Fri, Sep 1, 2017 at 6:05 AM, Gaurav Chhabra<br>
>> >> >> >>> >> >> >> <<a href="mailto:varuag.chhabra@gmail.com">varuag.chhabra@gmail.com</a>><br>
>> >> >> >>> >> >> >> wrote:<br>
>> >> >> >>> >> >> >> > Hi Jose,<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > Thanks for the reply. It seems the three gluster pods<br>
>> >> >> >>> >> >> >> > might<br>
>> >> >> >>> >> >> >> > have<br>
>> >> >> >>> >> >> >> > been<br>
>> >> >> >>> >> >> >> > a<br>
>> >> >> >>> >> >> >> > copy-paste from another set of cluster where i was<br>
>> >> >> >>> >> >> >> > trying<br>
>> >> >> >>> >> >> >> > to<br>
>> >> >> >>> >> >> >> > setup<br>
>> >> >> >>> >> >> >> > the<br>
>> >> >> >>> >> >> >> > same<br>
>> >> >> >>> >> >> >> > thing using CentOS. Sorry for that. By the way, i did<br>
>> >> >> >>> >> >> >> > check<br>
>> >> >> >>> >> >> >> > for<br>
>> >> >> >>> >> >> >> > the<br>
>> >> >> >>> >> >> >> > kernel<br>
>> >> >> >>> >> >> >> > modules and it seems it's already there. Also, i am<br>
>> >> >> >>> >> >> >> > attaching<br>
>> >> >> >>> >> >> >> > fresh<br>
>> >> >> >>> >> >> >> > set<br>
>> >> >> >>> >> >> >> > of<br>
>> >> >> >>> >> >> >> > files because i created a new cluster and thought of<br>
>> >> >> >>> >> >> >> > giving<br>
>> >> >> >>> >> >> >> > it<br>
>> >> >> >>> >> >> >> > a<br>
>> >> >> >>> >> >> >> > try<br>
>> >> >> >>> >> >> >> > again.<br>
>> >> >> >>> >> >> >> > Issue still persists. :(<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > In heketi.json, there is a slight change w.r.t the<br>
>> >> >> >>> >> >> >> > user<br>
>> >> >> >>> >> >> >> > which<br>
>> >> >> >>> >> >> >> > connects<br>
>> >> >> >>> >> >> >> > to<br>
>> >> >> >>> >> >> >> > glusterfs node using SSH. I am not sure how Heketi<br>
>> >> >> >>> >> >> >> > was<br>
>> >> >> >>> >> >> >> > using<br>
>> >> >> >>> >> >> >> > root<br>
>> >> >> >>> >> >> >> > user<br>
>> >> >> >>> >> >> >> > to<br>
>> >> >> >>> >> >> >> > login because i wasn't able to use root and do manual<br>
>> >> >> >>> >> >> >> > SSH.<br>
>> >> >> >>> >> >> >> > With<br>
>> >> >> >>> >> >> >> > rancher<br>
>> >> >> >>> >> >> >> > user, i can login successfully so i think this should<br>
>> >> >> >>> >> >> >> > be<br>
>> >> >> >>> >> >> >> > fine.<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > /etc/heketi/heketi.json:<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > ------------------------------<wbr>------------------------------<wbr>------<br>
>> >> >> >>> >> >> >> >    "executor": "ssh",<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> >    "_sshexec_comment": "SSH username and private key file information",<br>
>> >> >> >>> >> >> >> >    "sshexec": {<br>
>> >> >> >>> >> >> >> >      "keyfile": "/var/lib/heketi/.ssh/id_rsa",<br>
>> >> >> >>> >> >> >> >      "user": "rancher",<br>
>> >> >> >>> >> >> >> >      "port": "22",<br>
>> >> >> >>> >> >> >> >      "fstab": "/etc/fstab"<br>
>> >> >> >>> >> >> >> >    },<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > ------------------------------<wbr>------------------------------<wbr>------<br>
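>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > As a sanity check that this executor config can reach the nodes, the same SSH can be tried by hand from the heketi node (GLUSTER_NODE_IP is a placeholder):<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> >   ssh -i /var/lib/heketi/.ssh/id_rsa -p 22 rancher@GLUSTER_NODE_IP sudo true && echo "ssh + passwordless sudo OK"<br>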
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > Before running gk-deploy:<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > ------------------------------<wbr>------------------------------<wbr>------<br>
>> >> >> >>> >> >> >> > [root@workstation deploy]# kubectl get<br>
>> >> >> >>> >> >> >> > nodes,pods,daemonset,<wbr>deployments,services<br>
>> >> >> >>> >> >> >> > NAME                                     STATUS    AGE       VERSION<br>
>> >> >> >>> >> >> >> > no/node-a.c.kubernetes-174104.internal   Ready     3h        v1.7.2-rancher1<br>
>> >> >> >>> >> >> >> > no/node-b.c.kubernetes-174104.internal   Ready     3h        v1.7.2-rancher1<br>
>> >> >> >>> >> >> >> > no/node-c.c.kubernetes-174104.internal   Ready     3h        v1.7.2-rancher1<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE<br>
>> >> >> >>> >> >> >> > svc/kubernetes   10.43.0.1    <none>        443/TCP   3h<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > ------------------------------<wbr>------------------------------<wbr>------<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > After running gk-deploy:<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > ------------------------------<wbr>------------------------------<wbr>------<br>
>> >> >> >>> >> >> >> > [root@workstation messagegc]# kubectl get<br>
>> >> >> >>> >> >> >> > nodes,pods,daemonset,<wbr>deployments,services<br>
>> >> >> >>> >> >> >> > NAME                                     STATUS    AGE       VERSION<br>
>> >> >> >>> >> >> >> > no/node-a.c.kubernetes-174104.internal   Ready     3h        v1.7.2-rancher1<br>
>> >> >> >>> >> >> >> > no/node-b.c.kubernetes-174104.internal   Ready     3h        v1.7.2-rancher1<br>
>> >> >> >>> >> >> >> > no/node-c.c.kubernetes-174104.internal   Ready     3h        v1.7.2-rancher1<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > NAME                 READY     STATUS    RESTARTS   AGE<br>
>> >> >> >>> >> >> >> > po/glusterfs-0j9l5   0/1       Running   0          2m<br>
>> >> >> >>> >> >> >> > po/glusterfs-gqz4c   0/1       Running   0          2m<br>
>> >> >> >>> >> >> >> > po/glusterfs-gxvcb   0/1       Running   0          2m<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > NAME           DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE-SELECTOR           AGE<br>
>> >> >> >>> >> >> >> > ds/glusterfs   3         3         0         3            0           storagenode=glusterfs   2m<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE<br>
>> >> >> >>> >> >> >> > svc/kubernetes   10.43.0.1    <none>        443/TCP   3h<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > ------------------------------<wbr>------------------------------<wbr>------<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > Kernel module check on all three nodes:<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > ------------------------------<wbr>------------------------------<wbr>------<br>
>> >> >> >>> >> >> >> > [root@node-a ~]# find /lib*/modules/$(uname -r) -name *.ko | grep 'thin-pool\|snapshot\|mirror' | xargs ls -ltr<br>
>> >> >> >>> >> >> >> > -rw-r--r--  1 root  root  92310 Jun 26 04:13 /lib64/modules/4.9.34-rancher/kernel/drivers/md/dm-thin-pool.ko<br>
>> >> >> >>> >> >> >> > -rw-r--r--  1 root  root  56982 Jun 26 04:13 /lib64/modules/4.9.34-rancher/kernel/drivers/md/dm-snapshot.ko<br>
>> >> >> >>> >> >> >> > -rw-r--r--  1 root  root  27070 Jun 26 04:13 /lib64/modules/4.9.34-rancher/kernel/drivers/md/dm-mirror.ko<br>
>> >> >> >>> >> >> >> > -rw-r--r--  1 root  root  92310 Jun 26 04:13 /lib/modules/4.9.34-rancher/kernel/drivers/md/dm-thin-pool.ko<br>
>> >> >> >>> >> >> >> > -rw-r--r--  1 root  root  56982 Jun 26 04:13 /lib/modules/4.9.34-rancher/kernel/drivers/md/dm-snapshot.ko<br>
>> >> >> >>> >> >> >> > -rw-r--r--  1 root  root  27070 Jun 26 04:13 /lib/modules/4.9.34-rancher/kernel/drivers/md/dm-mirror.ko<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > ------------------------------<wbr>------------------------------<wbr>------<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > Error snapshot attached.<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > In my first mail, i checked that Readiness Probe<br>
>> >> >> >>> >> >> >> > failure<br>
>> >> >> >>> >> >> >> > check<br>
>> >> >> >>> >> >> >> > has<br>
>> >> >> >>> >> >> >> > this<br>
>> >> >> >>> >> >> >> > code<br>
>> >> >> >>> >> >> >> > in kube-templates/glusterfs-<wbr>daemonset.yaml file:<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > ------------------------------<wbr>------------------------------<wbr>------<br>
>> >> >> >>> >> >> >> >        readinessProbe:<br>
>> >> >> >>> >> >> >> >          timeoutSeconds: 3<br>
>> >> >> >>> >> >> >> >          initialDelaySeconds: 40<br>
>> >> >> >>> >> >> >> >          exec:<br>
>> >> >> >>> >> >> >> >            command:<br>
>> >> >> >>> >> >> >> >            - "/bin/bash"<br>
>> >> >> >>> >> >> >> >            - "-c"<br>
>> >> >> >>> >> >> >> >            - systemctl status glusterd.service<br>
>> >> >> >>> >> >> >> >          periodSeconds: 25<br>
>> >> >> >>> >> >> >> >          successThreshold: 1<br>
>> >> >> >>> >> >> >> >          failureThreshold: 15<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > ------------------------------<wbr>------------------------------<wbr>------<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > I tried logging into glustefs container on one of the<br>
>> >> >> >>> >> >> >> > node<br>
>> >> >> >>> >> >> >> > and<br>
>> >> >> >>> >> >> >> > ran<br>
>> >> >> >>> >> >> >> > the<br>
>> >> >> >>> >> >> >> > above<br>
>> >> >> >>> >> >> >> > command:<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > [root@node-a ~]# docker exec -it c0f8ab4d92a23b6df2<br>
>> >> >> >>> >> >> >> > /bin/bash<br>
>> >> >> >>> >> >> >> > root@c0f8ab4d92a2:/app# systemctl status<br>
>> >> >> >>> >> >> >> > glusterd.service<br>
>> >> >> >>> >> >> >> > WARNING: terminal is not fully functional<br>
>> >> >> >>> >> >> >> > Failed to connect to bus: No such file or directory<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > Any check that i can do manually on nodes to debug<br>
>> >> >> >>> >> >> >> > further?<br>
>> >> >> >>> >> >> >> > Any<br>
>> >> >> >>> >> >> >> > suggestions?<br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> > On Thu, Aug 31, 2017 at 6:53 PM, Jose A. Rivera<br>
>> >> >> >>> >> >> >> > <<a href="mailto:jarrpa@redhat.com">jarrpa@redhat.com</a>><br>
>> >> >> >>> >> >> >> > wrote:<br>
>> >> >> >>> >> >> >> >><br>
>> >> >> >>> >> >> >> >> Hey Gaurav,<br>
>> >> >> >>> >> >> >> >><br>
>> >> >> >>> >> >> >> >> The kernel modules must be loaded on all nodes that<br>
>> >> >> >>> >> >> >> >> will<br>
>> >> >> >>> >> >> >> >> run<br>
>> >> >> >>> >> >> >> >> heketi<br>
>> >> >> >>> >> >> >> >> pods. Additionally, you must have at least three<br>
>> >> >> >>> >> >> >> >> nodes<br>
>> >> >> >>> >> >> >> >> specified<br>
>> >> >> >>> >> >> >> >> in<br>
>> >> >> >>> >> >> >> >> your topology file. I'm not sure how you're getting<br>
>> >> >> >>> >> >> >> >> three<br>
>> >> >> >>> >> >> >> >> gluster<br>
>> >> >> >>> >> >> >> >> pods<br>
>> >> >> >>> >> >> >> >> when you only have two nodes defined... :)<br>
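>> >> >> >>> >> >> >> >><br>
>> >> >> >>> >> >> >> >> To check/load the modules on each of those nodes, something like this should do it:<br>
>> >> >> >>> >> >> >> >><br>
>> >> >> >>> >> >> >> >>   lsmod | grep -E 'dm_snapshot|dm_mirror|dm_thin_pool'<br>
>> >> >> >>> >> >> >> >>   modprobe dm_snapshot<br>
>> >> >> >>> >> >> >> >>   modprobe dm_mirror<br>
>> >> >> >>> >> >> >> >>   modprobe dm_thin_pool<br>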
>> >> >> >>> >> >> >> >><br>
>> >> >> >>> >> >> >> >> --Jose<br>
>> >> >> >>> >> >> >> >><br>
>> >> >> >>> >> >> >> >> On Wed, Aug 30, 2017 at 5:27 AM, Gaurav Chhabra<br>
>> >> >> >>> >> >> >> >> <<a href="mailto:varuag.chhabra@gmail.com">varuag.chhabra@gmail.com</a>> wrote:<br>
>> >> >> >>> >> >> >> >> > Hi,<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > I have the following setup in place:<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > 1 node  : RancherOS having Rancher application for Kubernetes setup<br>
>> >> >> >>> >> >> >> >> > 2 nodes : RancherOS having Rancher agent<br>
>> >> >> >>> >> >> >> >> > 1 node  : CentOS 7 workstation having kubectl installed and folder cloned/downloaded from<br>
>> >> >> >>> >> >> >> >> > <a href="https://github.com/gluster/gluster-kubernetes" rel="noreferrer" target="_blank">https://github.com/gluster/gluster-kubernetes</a> using which i run Heketi setup (gk-deploy -g)<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > I also have rancher-glusterfs-server container<br>
>> >> >> >>> >> >> >> >> > running<br>
>> >> >> >>> >> >> >> >> > with<br>
>> >> >> >>> >> >> >> >> > the<br>
>> >> >> >>> >> >> >> >> > following<br>
>> >> >> >>> >> >> >> >> > configuration:<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > ------------------------------<wbr>------------------------------<wbr>------<br>
>> >> >> >>> >> >> >> >> > [root@node-1 rancher]# cat gluster-server.sh<br>
>> >> >> >>> >> >> >> >> > #!/bin/bash<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > sudo docker run --name=gluster-server -d \<br>
>> >> >> >>> >> >> >> >> >      --env 'SERVICE_NAME=gluster' \<br>
>> >> >> >>> >> >> >> >> >      --restart always \<br>
>> >> >> >>> >> >> >> >> >      --env 'GLUSTER_DATA=/srv/docker/gitlab' \<br>
>> >> >> >>> >> >> >> >> >      --publish 2222:22 \<br>
>> >> >> >>> >> >> >> >> >      webcenter/rancher-glusterfs-server<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > ------------------------------<wbr>------------------------------<wbr>------<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > In /etc/heketi/heketi.json, following is the only<br>
>> >> >> >>> >> >> >> >> > modified<br>
>> >> >> >>> >> >> >> >> > portion:<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > ------------------------------<wbr>------------------------------<wbr>------<br>
>> >> >> >>> >> >> >> >> >    "executor": "ssh",<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> >    "_sshexec_comment": "SSH username and private key file information",<br>
>> >> >> >>> >> >> >> >> >    "sshexec": {<br>
>> >> >> >>> >> >> >> >> >      "keyfile": "/var/lib/heketi/.ssh/id_rsa",<br>
>> >> >> >>> >> >> >> >> >      "user": "root",<br>
>> >> >> >>> >> >> >> >> >      "port": "22",<br>
>> >> >> >>> >> >> >> >> >      "fstab": "/etc/fstab"<br>
>> >> >> >>> >> >> >> >> >    },<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > ------------------------------<wbr>------------------------------<wbr>------<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > Status before running gk-deploy:<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > [root@workstation deploy]# kubectl get<br>
>> >> >> >>> >> >> >> >> > nodes,pods,services,<wbr>deployments<br>
>> >> >> >>> >> >> >> >> > NAME                                     STATUS    AGE       VERSION<br>
>> >> >> >>> >> >> >> >> > no/node-1.c.kubernetes-174104.internal   Ready     2d        v1.7.2-rancher1<br>
>> >> >> >>> >> >> >> >> > no/node-2.c.kubernetes-174104.internal   Ready     2d        v1.7.2-rancher1<br>
>> >> >> >>> >> >> >> >> > no/node-3.c.kubernetes-174104.internal   Ready     2d        v1.7.2-rancher1<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE<br>
>> >> >> >>> >> >> >> >> > svc/kubernetes   10.43.0.1    <none>        443/TCP   2d<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > Now when i run 'gk-deploy -g', in the Rancher<br>
>> >> >> >>> >> >> >> >> > console, i<br>
>> >> >> >>> >> >> >> >> > see<br>
>> >> >> >>> >> >> >> >> > the<br>
>> >> >> >>> >> >> >> >> > following<br>
>> >> >> >>> >> >> >> >> > error:<br>
>> >> >> >>> >> >> >> >> > Readiness probe failed: Failed to get D-Bus<br>
>> >> >> >>> >> >> >> >> > connection:<br>
>> >> >> >>> >> >> >> >> > Operation<br>
>> >> >> >>> >> >> >> >> > not<br>
>> >> >> >>> >> >> >> >> > permitted<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > From the attached gk-deploy_log i see that it<br>
>> >> >> >>> >> >> >> >> > failed<br>
>> >> >> >>> >> >> >> >> > at:<br>
>> >> >> >>> >> >> >> >> > Waiting for GlusterFS pods to start ... pods not<br>
>> >> >> >>> >> >> >> >> > found.<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > In the kube-templates/glusterfs-<wbr>daemonset.yaml<br>
>> >> >> >>> >> >> >> >> > file,<br>
>> >> >> >>> >> >> >> >> > i<br>
>> >> >> >>> >> >> >> >> > see<br>
>> >> >> >>> >> >> >> >> > this<br>
>> >> >> >>> >> >> >> >> > for<br>
>> >> >> >>> >> >> >> >> > Readiness probe section:<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > ------------------------------<wbr>------------------------------<wbr>------<br>
>> >> >> >>> >> >> >> >> >        readinessProbe:<br>
>> >> >> >>> >> >> >> >> >          timeoutSeconds: 3<br>
>> >> >> >>> >> >> >> >> >          initialDelaySeconds: 40<br>
>> >> >> >>> >> >> >> >> >          exec:<br>
>> >> >> >>> >> >> >> >> >            command:<br>
>> >> >> >>> >> >> >> >> >            - "/bin/bash"<br>
>> >> >> >>> >> >> >> >> >            - "-c"<br>
>> >> >> >>> >> >> >> >> >            - systemctl status glusterd.service<br>
>> >> >> >>> >> >> >> >> >          periodSeconds: 25<br>
>> >> >> >>> >> >> >> >> >          successThreshold: 1<br>
>> >> >> >>> >> >> >> >> >          failureThreshold: 15<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > ------------------------------<wbr>------------------------------<wbr>------<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > Status after running gk-deploy:<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > [root@workstation deploy]# kubectl get<br>
>> >> >> >>> >> >> >> >> > nodes,pods,deployments,<wbr>services<br>
>> >> >> >>> >> >> >> >> > NAME                                     STATUS    AGE       VERSION<br>
>> >> >> >>> >> >> >> >> > no/node-1.c.kubernetes-174104.internal   Ready     2d        v1.7.2-rancher1<br>
>> >> >> >>> >> >> >> >> > no/node-2.c.kubernetes-174104.internal   Ready     2d        v1.7.2-rancher1<br>
>> >> >> >>> >> >> >> >> > no/node-3.c.kubernetes-174104.internal   Ready     2d        v1.7.2-rancher1<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > NAME                 READY     STATUS    RESTARTS   AGE<br>
>> >> >> >>> >> >> >> >> > po/glusterfs-0s440   0/1       Running   0          1m<br>
>> >> >> >>> >> >> >> >> > po/glusterfs-j7dgr   0/1       Running   0          1m<br>
>> >> >> >>> >> >> >> >> > po/glusterfs-p6jl3   0/1       Running   0          1m<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE<br>
>> >> >> >>> >> >> >> >> > svc/kubernetes   10.43.0.1    <none>        443/TCP   2d<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > Also, from prerequisite perspective, i was also<br>
>> >> >> >>> >> >> >> >> > seeing<br>
>> >> >> >>> >> >> >> >> > this<br>
>> >> >> >>> >> >> >> >> > mentioned:<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > The following kernel modules must be loaded:<br>
>> >> >> >>> >> >> >> >> >  * dm_snapshot<br>
>> >> >> >>> >> >> >> >> >  * dm_mirror<br>
>> >> >> >>> >> >> >> >> >  * dm_thin_pool<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > Where exactly is this to be checked? On all<br>
>> >> >> >>> >> >> >> >> > Gluster<br>
>> >> >> >>> >> >> >> >> > server<br>
>> >> >> >>> >> >> >> >> > nodes?<br>
>> >> >> >>> >> >> >> >> > How<br>
>> >> >> >>> >> >> >> >> > can i<br>
>> >> >> >>> >> >> >> >> > check whether it's there?<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > I have attached topology.json and gk-deploy log<br>
>> >> >> >>> >> >> >> >> > for<br>
>> >> >> >>> >> >> >> >> > reference.<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > Does this issue has anything to do with the host<br>
>> >> >> >>> >> >> >> >> > OS<br>
>> >> >> >>> >> >> >> >> > (RancherOS)<br>
>> >> >> >>> >> >> >> >> > that<br>
>> >> >> >>> >> >> >> >> > i<br>
>> >> >> >>> >> >> >> >> > am<br>
>> >> >> >>> >> >> >> >> > using for Gluster nodes? Any idea how i can fix<br>
>> >> >> >>> >> >> >> >> > this?<br>
>> >> >> >>> >> >> >> >> > Any<br>
>> >> >> >>> >> >> >> >> > help<br>
>> >> >> >>> >> >> >> >> > will<br>
>> >> >> >>> >> >> >> >> > really<br>
>> >> >> >>> >> >> >> >> > be appreciated.<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > Thanks.<br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > ______________________________<wbr>_________________<br>
>> >> >> >>> >> >> >> >> > heketi-devel mailing list<br>
>> >> >> >>> >> >> >> >> > <a href="mailto:heketi-devel@gluster.org">heketi-devel@gluster.org</a><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> >> > <a href="http://lists.gluster.org/mailman/listinfo/heketi-devel" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/heketi-devel</a><br>
>> >> >> >>> >> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> >> ><br>
>> >> >> >>> >> >> ><br>
>> >> >> >>> >> >> ><br>
>> >> >> >>> >> ><br>
>> >> >> >>> >> ><br>
>> >> >> >>> ><br>
>> >> >> >>> ><br>
>> >> >> >><br>
>> >> >> >><br>
>> >> >> ><br>
>> >> ><br>
>> >> ><br>
>> ><br>
>> ><br>
><br>
><br>
</div></div></blockquote></div><br></div>