<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">Ashiq, <br>
<br>
In that case you need a nodeSelector in your Deployment, and a
hostname defined inside the pod spec.<br>
As follows:<br>
<pre>---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: glusterfs-user-cluster-sas-108
spec:
  template:
    metadata:
      name: glusterfs-user-cluster-sas-108
      labels:
        name: glusterfs-sas
        app: glusterfs-user-cluster-sas-108
    spec:
      nodeSelector:
        kubernetes.io/hostname: server37
      hostname: glusterfs-sas-node-server37
      subdomain: sas</pre>
<br>
and the Service for this Deployment:<br>
<br>
<pre>apiVersion: v1
kind: Service
metadata:
  name: sas
spec:
  selector:
    name: glusterfs-sas
  clusterIP: None
  ports:
  - name: fake  # Actually, no port is needed.
    port: 1</pre>
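Because the Service is headless (clusterIP: None) and the pod's hostname and subdomain fields match the Service name, kube-dns should publish an A record for the pod itself. As a sketch, assuming the "default" namespace and the default "cluster.local" cluster domain (neither is stated in the spec above), the pod becomes resolvable at:

```
# Hypothetical FQDN derived from the manifests above:
#   <hostname>.<subdomain>.<namespace>.svc.<cluster-domain>
glusterfs-sas-node-server37.sas.default.svc.cluster.local
```

This stable name is what lets the glusterd configuration keep referring to the same peer across pod restarts.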
<br>
<br>
Moreover, you should have the kube-dns pod IP in resolv.conf on
the host; otherwise the kubelet won't be able to mount the volume
inside the pod using the generated endpoint, because the kubelet
resolves names through the host's resolv.conf and so cannot resolve
the A records served by kube-dns. <br>
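As a minimal sketch of the host-side resolv.conf this implies (the addresses below are illustrative assumptions, not values from this thread; use your cluster's actual kube-dns IP):

```
# /etc/resolv.conf on the host -- example values only
nameserver 10.254.0.10   # kube-dns IP (cluster-specific)
search svc.cluster.local cluster.local
nameserver 8.8.8.8       # upstream resolver for everything else
```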
<br>
By the way, this won't work with a DaemonSet. <br>
<br>
Thank you,<br>
Pavel K. <br>
<br>
<br>
<br>
On 3/3/17 3:46 PM, Mohamed Ashiq Liyazudeen wrote:<br>
</div>
<blockquote
cite="mid:5a91da87-57c5-4bf1-bbf8-45d046552152@email.android.com"
type="cite">
<div dir="auto">I agree with the first point. Just that if the
container goes down, it should come back up on the same node,
since the glusterd config saved on the host expects gluster on
that node to have the same hostname.
<div dir="auto"><br>
</div>
<div dir="auto">Then this will work. I don't have any
questions. </div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Mar 3, 2017 6:31 PM, Pavel
Kutishchev <a class="moz-txt-link-rfc2396E" href="mailto:pavel.kutishchev@gmail.com"><pavel.kutishchev@gmail.com></a> wrote:<br
type="attribution">
<blockquote class="quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>
<div>Hey Ashiq, <br>
1. The hostname of the container can be permanent and resolved
through a Service. <br>
<a moz-do-not-send="true"
href="https://kubernetes.io/docs/admin/dns/">https://kubernetes.io/docs/admin/dns/</a>
- see the <b>"A Records and hostname based on Pod's hostname and
subdomain fields"</b> section <br>
2. The part of the spec below should answer the question
about persistent folders:<br>
<pre>        volumeMounts:
        - name: dev
          mountPath: /dev
        - name: system-etc
          mountPath: /etc/glusterd
        - name: system-config
          mountPath: /var/lib/glusterd
        - name: heketi-ssh
          mountPath: /root/.ssh
        - name: heketi-var
          mountPath: /var/lib/heketi
        - name: workaround-1
          mountPath: /run
        - name: system-sys
          mountPath: /sys/fs/cgroup
        - name: selinux
          mountPath: /etc/selinux
        - name: selinux-lib
          mountPath: /usr/lib/selinux
      volumes:
      - name: dev
        hostPath:
          path: /dev
      - name: system-etc
        hostPath:
          path: /etc/glusterd-user-sas
      - name: system-config
        hostPath:
          path: /var/lib/glusterd-user-sas
      - name: heketi-ssh
        hostPath:
          path: /etc/glusterd-user/ssh
      - name: heketi-var
        hostPath:
          path: /var/lib/heketi-user-sas
      - name: workaround-1
        hostPath:
          path: /var/sds/sas
      - name: system-sys
        hostPath:
          path: /sys/fs/cgroup
      - name: selinux
        hostPath:
          path: /etc/selinux
      - name: selinux-lib
        hostPath:
          path: /usr/lib/selinux</pre>
<br>
3. udev works properly; there are no issues handling udev
events via the lvm socket from different clusters. <br>
4. Yes, it works correctly, as you won't use the same
devices at the same time :)<br>
5. With host networking it won't work; you need to use the
Docker/Kubernetes network.<br>
<br>
And then you can manage two Gluster clusters using
Heketi. <br>
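For illustration, a heketi topology file describing two such clusters sharing one physical node might look like the sketch below; every hostname, IP, and device here is a made-up assumption, not taken from this thread:

```json
{
  "clusters": [
    {"nodes": [{
      "node": {
        "hostnames": {
          "manage":  ["glusterfs-sas-node-server37.sas"],
          "storage": ["192.168.0.37"]},
        "zone": 1},
      "devices": ["/dev/sdb"]}]},
    {"nodes": [{
      "node": {
        "hostnames": {
          "manage":  ["glusterfs-hdd-node-server37.hdd"],
          "storage": ["192.168.0.37"]},
        "zone": 1},
      "devices": ["/dev/sdc"]}]}
  ]
}
```

Heketi would then treat the two entries as independent clusters, because each has its own management hostname and its own set of devices.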
<br>
Please let me know if you have any questions. <br>
<br>
Thank you,<br>
Pavel K. <br>
<br>
<br>
On 3/3/17 2:43 PM, Mohamed Ashiq Liyazudeen wrote:<br>
</div>
<blockquote>
<pre>Hi Pavel,

If you have configured kube to run two instances of the gluster container on the same node and taken care of:

1) The hostname of the gluster container should not change on restarts or any docker issue. Also, on re-spawn or restart, Kube should not start the container on a different node.
2) The gluster and heketi configuration should be persisted:
 * /var/lib/heketi
 * /var/lib/glusterd
 * /var/log/glusterfs
 * /etc/glusterfs
3) Honestly, I have not done much testing with /dev of the host bind-mounted into two containers and doing lvcreate; I am not sure how udev will handle these cases. Let us say this works properly.
4) Both privileged containers sharing /dev has no issues.

Then yeah, you can do the same now with heketi and peer probe with the hostname of the container. Heketi will consider them as two different clusters and will use only the devices mentioned in the topology file. Let me know if you need any help. I am not saying this will work; we have not done this. Just for the sake of creativity, I would like to see what happens :). Good luck.
--
Ashiq
----- Original Message -----
From: "Pavel Kutishchev" <a moz-do-not-send="true" href="mailto:pavel.kutishchev@gmail.com"><pavel.kutishchev@gmail.com></a>
To: <a moz-do-not-send="true" href="mailto:heketi-devel@gluster.org">heketi-devel@gluster.org</a>
Sent: Friday, March 3, 2017 5:42:36 PM
Subject: Re: [heketi-devel] heketi-devel Digest, Vol 6, Issue 3
Hi Ashiq,
Actually i'm doing the same as described in this issue, at one physical
node we have two glusterFS clusters for hdd and ssd, using kubernetes.
There is no problems to have two clusters on the same node.
If so i can help and clarify how we're doing this in separate thread.
Thank you,
Pavel K.
On 3/3/17 2:00 PM, <a moz-do-not-send="true" href="mailto:heketi-devel-request@gluster.org">heketi-devel-request@gluster.org</a> wrote:
</pre>
<blockquote>
<pre>
Today's Topics:
1. Re: [heketi] Pre-existing GlusterFS cluster
(Mohamed Ashiq Liyazudeen)
----------------------------------------------------------------------
Message: 1
Date: Fri, 3 Mar 2017 05:07:40 -0500 (EST)
From: Mohamed Ashiq Liyazudeen <a moz-do-not-send="true" href="mailto:mliyazud@redhat.com"><mliyazud@redhat.com></a>
To: Raghavendra Talur <a moz-do-not-send="true" href="mailto:rtalur@redhat.com"><rtalur@redhat.com></a>
Cc: <a moz-do-not-send="true" href="mailto:heketi-devel@gluster.org">heketi-devel@gluster.org</a>
Subject: Re: [heketi-devel] [heketi] Pre-existing GlusterFS cluster
Message-ID:
        <a moz-do-not-send="true" href="mailto:878669976.31135169.1488535660381.JavaMail.zimbra@redhat.com"><878669976.31135169.1488535660381.JavaMail.zimbra@redhat.com></a>
Content-Type: text/plain; charset=utf-8
Hi,
We can do something like this. Not for now but as a support. Let me know if this helps?
<a moz-do-not-send="true" href="https://github.com/heketi/heketi/issues/700">https://github.com/heketi/heketi/issues/700</a>
--
Ashiq
----- Original Message -----
From: "Raghavendra Talur" <a moz-do-not-send="true" href="mailto:rtalur@redhat.com"><rtalur@redhat.com></a>
To: "Jose A. Rivera" <a moz-do-not-send="true" href="mailto:jarrpa@redhat.com"><jarrpa@redhat.com></a>
Cc: <a moz-do-not-send="true" href="mailto:heketi-devel@gluster.org">heketi-devel@gluster.org</a>
Sent: Thursday, March 2, 2017 2:19:11 PM
Subject: Re: [heketi-devel] [heketi] Pre-existing GlusterFS cluster
On Tue, Feb 28, 2017 at 10:09 PM, Jose A. Rivera <a moz-do-not-send="true" href="mailto:jarrpa@redhat.com"><jarrpa@redhat.com></a> wrote:
</pre>
<blockquote>
<pre>Poking at this again. This time putting forth the idea of having two
heketi-controlled clusters overlapping in nodes (but not in devices,
of course). This facilitates, for example, a node that has both HDDs
and SSDs, and being able to group the HDDs in one cluster and the SSDs in
another, so that users could have some ability to select the underlying
storage type if it matters to them.
</pre>
</blockquote>
<pre>The same node being part of two clusters is not possible with Gluster
architecture. We might be able to achieve that only if we figure out
how to run Gluster containers without using host networking.
I do want to see a solution for the problem mentioned in the subject
and like the previous algorithm given by Jose.
</pre>
<blockquote>
<pre>--Jose
On Wed, Feb 8, 2017 at 11:51 AM, Jose A. Rivera <a moz-do-not-send="true" href="mailto:jarrpa@redhat.com"><jarrpa@redhat.com></a> wrote:
</pre>
<blockquote>
<pre>Ping :)
On Thu, Jan 26, 2017 at 8:59 AM, Jose A. Rivera <a moz-do-not-send="true" href="mailto:jarrpa@redhat.com"><jarrpa@redhat.com></a> wrote:
</pre>
<blockquote>
<pre>Sure thing!
The ask is for an OpenShift use case where I want to create a
GlusterFS volume to store the local Docker registry before any
containers are running. I was thinking of doing this by running
Gluster natively on the OpenShift nodes, outside of containers,
creating a cluster of them, then selecting a directory on each node to
serve as bricks for the volume. The idea here is that I would still
want to deploy heketi in a container later on, and just use these same
nodes in the topology file. heketi would still need to be given
dedicated storage devices on each node.
As far as the algorithm, I figure it should be something like:
For adding a node of a pre-existing cluster, watch for the return
code/value from the exec of gluster peer probe and if it says peer
already in list we return success. If for some reason the pre-existing
cluster only overlaps on a subset of nodes with the heketi cluster,
gluster can handle this.
</pre>
</blockquote>
</blockquote>
</blockquote>
<pre>+1
</pre>
<blockquote>
<blockquote>
<blockquote>
<pre>In the inverse, when you remove a node from heketi, watch for a
message that peer cannot be detached because it has bricks and remove
the node from heketi anyway. heketi already does its own checks to see
if a volume is on a particular node, so we can't get to the point
where a heketi-managed brick is still extant on a heketi-managed node
unless something goes really wrong (and then we have to resort to the
backend command line anyway, I'd imagine?).
Feedback, concerns, or flames welcome. :)
--Jose
On Wed, Jan 25, 2017 at 10:55 PM, Luis Pabon <a moz-do-not-send="true" href="mailto:lpabon@gmail.com"><lpabon@gmail.com></a> wrote:
</pre>
<blockquote>
<pre>I think that people keep asking because they want to continue what they
know.
I think it would work better if you provide a set of requirements and
preconditions and the algorithm which satisfies what you would like to do.
Provide it here, and let's discuss it. No code needed.
- Luis
On Wed, Jan 25, 2017 at 7:29 PM, Jose A. Rivera <a moz-do-not-send="true" href="mailto:jarrpa@redhat.com"><jarrpa@redhat.com></a> wrote:
</pre>
<blockquote>
<pre>On Wed, Jan 25, 2017 at 2:16 PM, Luis Pabon <a moz-do-not-send="true" href="mailto:lpabon@gmail.com"><lpabon@gmail.com></a> wrote:
</pre>
<blockquote>
<pre>Hi José,
This has been asked for many many times. Heketi was designed to "rule
them all". Heketi was never designed for systems that have been setup
already because the permutations of possibilities of configurations
could be
extensive to figure out how to manage. It is like creating a Ceph Rados
system by yourself, then asking the pool manager to figure out what you
did.
If instead Ceph is viewed as a collection of the access+pool+storage and
not
as individual parts, then it all works well and is predictable. In the
same
way, it should not be viewed as Heketi managing GlusterFS, but
Heketi/GlusterFS instead. Once this view is accepted (which is what
users
want, but old school gluster users have a hard time with), then what
Heketi
currently does makes perfect sense.
So, back to the question, no, Heketi does not and will never manage such
a
model. Any software that manages such a configuration would be hard to
productize and guarantee. Can you make a hack that does it? Maybe, but
reliability and simplicity are what Heketi is after.
Hope this answers your question.
</pre>
</blockquote>
<pre>I know this has been asked more than once and I believe this keeps
being asked for because the above is still an unsatisfactory answer.
:) Already we are seeing new users asking for maintenance features
that would be perfectly possible with Gluster but which are currently
out of reach when going with heketi. I think focusing too hard on
"simplicity" will quickly become limiting to heketi's desirability. It
would seem to make more sense to go with a mindset of a tailored
experience, with the ability to go in deeper if desired.
There doesn't seem to be anything technically complicated about the
idea that heketi could tolerate a peer probe coming back already
satisfied, or that a node is removed without removing it from the peer
list. I don't see how this would prove to be dangerous as long as we
maintain the understanding that you are not to go in on the backend to
mess with anything heketi is actively managing. This seems like
something we could easily test, make reliable, and productize.
--Jose
</pre>
<blockquote>
<pre>- Luis
On Tue, Jan 24, 2017 at 12:24 PM, Jose A. Rivera <a moz-do-not-send="true" href="mailto:jarrpa@redhat.com"><jarrpa@redhat.com></a>
wrote:
</pre>
<blockquote>
<pre>Hey Luis, et al.,
I talked to Ashiq about $SUBJECT, and he raised some concerns.
Apparently heketi can not load/import nodes that are already part of a
Gluster cluster? E.g. if I have an existing cluster with all the nodes
already peer probed, heketi will try to redo the probe and then fail
when it comes back already in peer list? This seems odd to me, but if
so sounds like a relatively easy thing to change. Thoughts?
--Jose
</pre>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<pre>_______________________________________________
heketi-devel mailing list
<a moz-do-not-send="true" href="mailto:heketi-devel@gluster.org">heketi-devel@gluster.org</a>
<a moz-do-not-send="true" href="http://lists.gluster.org/mailman/listinfo/heketi-devel">http://lists.gluster.org/mailman/listinfo/heketi-devel</a>
</pre>
</blockquote>
</blockquote>
<pre>
</pre>
</blockquote>
<br>
<p><br>
</p>
<pre>--
Best regards
Pavel Kutishchev
DevOPS Engineer at
Self employed.</pre>
</div>
</blockquote>
</div>
<br>
</div>
</blockquote>
<br>
<p><br>
</p>
<pre class="moz-signature" cols="72">--
Best regards
Pavel Kutishchev
DevOPS Engineer at
Self employed.</pre>
</body>
</html>