[heketi-devel] Trouble with SSH executor

Jose A. Rivera jarrpa at redhat.com
Tue Aug 29 18:17:24 UTC 2017


Ping.

Also, I forgot to mention that I'm running glusterd natively on the
same nodes that run kubelet, so it's similar to our default test setup,
just with non-containerized GlusterFS. I also realized that the node
that succeeds is the one the heketi pod is running on. :)
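For reference, the SSH calls that time out can be tried by hand from
inside the heketi pod; something like the following, where the pod
name, key path, and node IP are just placeholders for whatever the
deployment actually uses:

  # Find the heketi pod (name/labels vary per deployment)
  kubectl get pods | grep heketi

  # From inside the pod, try the same command heketi would run over SSH.
  # Use the keyfile and user from the sshexec section of your heketi.json.
  kubectl exec -it <heketi-pod> -- \
    ssh -i /etc/heketi/private_key -o ConnectTimeout=10 \
    root@<node-ip> 'gluster volume info'

On my setup this succeeds only for the node the pod is scheduled on.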

On Sat, Aug 26, 2017 at 5:39 PM, Jose A. Rivera <jarrpa at redhat.com> wrote:
> Howdy, List!
>
> I'm having some trouble with (I think?) getting the heketi SSH
> executor to work from a container in Kubernetes.
>
> For a reference of my environment, I'm running and testing gk-deploy from here:
>
> https://github.com/jarrpa/gluster-kubernetes/tree/jarrpa-dev
>
> I'm using the provided vagrant environment (with some minor storage
> modifications) and running the following test files:
>
> tests/complex/test-setup.sh <-- sets up the vagrant environment
> tests/complex/test-gk-deploy-ssh.sh <-- runs test-inside-gk-deploy with SSH exec
> tests/complex/test-inside-gk-deploy.sh <-- actually runs gk-deploy
> with SSH params
>
> When I run gk-deploy, the script checks that glusterd is running on
> the target nodes (by verifying that "gluster volume info" succeeds),
> and those checks pass. The script then proceeds as normal until it
> tries to load the topology. At that point two of my three nodes
> experience SSH timeouts while the third one succeeds. Naturally, the
> script fails out after that. There seems to be no pattern as to which
> nodes fail or succeed between iterations of the vagrant environment,
> but so far it has been consistent that only one will succeed.
>
> I've attached the heketi log from the container trying to run the
> topology load. I've turned on as much debugging as I could find, but
> still didn't see any clues.
>
> Any ideas? :)
>
> Thanks,
> --Jose
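
For reference, the topology load that times out is driven through
heketi-cli inside the heketi pod, so it should be roughly equivalent to
running something like this by hand (the pod name and topology file
path are placeholders):

  # Load the topology manually from inside the heketi pod.
  kubectl exec -it <heketi-pod> -- \
    heketi-cli topology load --json=/etc/heketi/topology.json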

