[Gluster-devel] [Gluster-users] Gluster Docker images are available at docker hub
Humble Devassy Chirammal
humble.devassy at gmail.com
Wed Jul 1 07:13:59 UTC 2015
> Yeah, this followed by a glusterd restart should help.
>
> But frankly, I was hoping that 'rm'-ing the file isn't a neat way to fix
> this issue.
>>Why is rm not a neat way? Is it because the container deployment tool
>>needs to know about gluster internals? But isn't a Dockerfile dealing
>>with details of the service(s) being deployed in a container?
I do think fixing the Dockerfile is *not* the correct way. That said, the
use case is not just containers. This issue can pop up in oVirt or other
virtualization environments as well: the VM template may have glusterd
pre-configured in it, and a pool created out of this template can show the
same behaviour.
I believe fixing it in the gluster code base would be the right thing to do.
Thanks Atin for the heads up!
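The marker-file idea proposed further down in this thread (a touch-file that
makes glusterd regenerate its UUID on the next start) could be sketched
roughly as follows. This is illustrative only; the function name is made up
and glusterd does not implement this today:

```shell
#!/bin/sh
# Sketch of the proposed refresh_uuid marker-file check (hypothetical;
# not actual glusterd behaviour). $1 is glusterd's working directory,
# normally /var/lib/glusterd.
refresh_uuid_check() {
    dir=$1
    if [ -f "$dir/refresh_uuid" ]; then
        # Discard the baked-in UUID so a fresh one is generated on the
        # next start, then consume the marker so this only happens once.
        rm -f "$dir/glusterd.info"
        rm -f "$dir/refresh_uuid"
    fi
}
```

With something like this in place, a Dockerfile or VM template build script
would only need to `touch /var/lib/glusterd/refresh_uuid` after installing
the gluster packages, instead of having to know which state files to delete.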
> AFAICT we have 2 scenarios:
>
> 1) Non-container scenario, where the current behaviour of glusterd
> persisting the info in the .info file makes sense
>
> 2) Container scenario, where the same image gets used as the base, hence
> all containers get the same UUID
> For this we can have an option that instructs glusterd to refresh the
> UUID during the next start.
>
--Humble
On Wed, Jul 1, 2015 at 11:32 AM, Krishnan Parthasarathi <kparthas at redhat.com> wrote:
>
> > Yeah, this followed by a glusterd restart should help.
> >
> > But frankly, I was hoping that 'rm'-ing the file isn't a neat way to fix
> > this issue.
>
> Why is rm not a neat way? Is it because the container deployment tool
> needs to know about gluster internals? But isn't a Dockerfile dealing
> with details of the service(s) being deployed in a container?
>
> > AFAICT we have 2 scenarios:
> >
> > 1) Non-container scenario, where the current behaviour of glusterd
> > persisting the info in the .info file makes sense
> >
> > 2) Container scenario, where the same image gets used as the base,
> > hence all containers get the same UUID
> > For this we can have an option that instructs glusterd to refresh the
> > UUID during the next start.
> >
> > Maybe something like the presence of a file /var/lib/glusterd/refresh_uuid
> > makes glusterd refresh the UUID in .info and then delete this file. That
> > way, the Dockerfile can touch this file post the gluster rpm install
> > step and things should work as expected?
>
> If container deployment needs are different, it should address issues like
> the above. If we start adapting glusterd's configuration handling for every
> new deployment technology, it would quickly become unmaintainable.
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>