[Gluster-users] GlusterFS and Kafka

Raghavendra Talur rtalur at redhat.com
Mon May 29 11:18:04 UTC 2017


On 29-May-2017 3:49 PM, "Christopher Schmidt" <fakod666 at gmail.com> wrote:

Hi Raghavendra Talur,

this does not work for me. Most certainly because I forgot something.
So I just put the file in the folder, make it executable, and create a volume?
That's all?

When I do this, there is no /var/lib/glusterd/hooks/1/create/post/log
file and the performance translator is still on.

Any idea?


Most certainly SELinux. I think you will have to set the context on the file
to be the same as the others in the hooks dir.

You can test temporarily by running
setenforce 0
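
For a permanent fix, a sketch of setting the SELinux context (the script path and name are taken from the steps further down this thread; adjust to your install, and remember to run `setenforce 1` again after testing):

```shell
# Check what context the hook scripts shipped with the package have
ls -Z /var/lib/glusterd/hooks/1/create/post/

# Restore the default context on the new script; alternatively, copy the
# context from an existing hook script with:
#   chcon --reference=<existing hook script> S29disable-perf.sh
restorecon -v /var/lib/glusterd/hooks/1/create/post/S29disable-perf.sh
```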


Raghavendra Talur <rtalur at redhat.com> wrote on Fri., May 26, 2017 at
07:34:

> On Thu, May 25, 2017 at 8:39 PM, Joe Julian <joe at julianfamily.org> wrote:
> > Maybe hooks?
>
> Yes, we were thinking of the same :)
>
> Christopher,
> Gluster has a hook-scripts facility: admins can write scripts and set them
> to be run on certain events in Gluster. We have an event for volume
> creation.
> Here are the steps for using hook scripts.
>
> 1. deploy the gluster pods and create a cluster as you have already done.
> 2. on the Kubernetes nodes that are running gluster pods (make sure
> they are running now, because we want to write into the bind mount),
> create a new file in the location /var/lib/glusterd/hooks/1/create/post/
> 3. the name of the file could be S29disable-perf.sh, the important part
> being that the name should start with a capital S followed by a number.
> 4. I tried out a sample script with the content below:
>
> ```
> #!/bin/bash
>
> PROGNAME="S29disable-perf"
> OPTSPEC="volname:,gd-workdir:"
> VOL=
> GLUSTERD_WORKDIR=
>
> function parse_args () {
>         ARGS=$(getopt -o '' -l "$OPTSPEC" --name "$PROGNAME" -- "$@")
>         eval set -- "$ARGS"
>
>         while true; do
>             case $1 in
>                 --volname)
>                     shift
>                     VOL=$1
>                     ;;
>                 --gd-workdir)
>                     shift
>                     GLUSTERD_WORKDIR=$1
>                     ;;
>                 *)
>                     shift
>                     break
>                     ;;
>             esac
>             shift
>         done
> }
>
> function disable_perf_xlators () {
>         volname=$1
>         gluster volume set "$volname" performance.write-behind off
>         echo "executed and return is $?" >> /var/lib/glusterd/hooks/1/create/post/log
> }
>
> echo "starting" >> /var/lib/glusterd/hooks/1/create/post/log
> parse_args "$@"
> disable_perf_xlators "$VOL"
> ```
> 5. set execute permissions on the file
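
For step 5, a short sketch of setting the permissions and checking that the hook fired (the log path comes from the sample script above):

```shell
chmod 755 /var/lib/glusterd/hooks/1/create/post/S29disable-perf.sh

# After the next "gluster volume create ...", the script's log lines
# should show up here:
cat /var/lib/glusterd/hooks/1/create/post/log
```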
>
> I tried this out and it worked for me. Let us know if that helps!
>
> Thanks,
> Raghavendra Talur
>
>
>
> >
> >
> > On May 25, 2017 6:48:04 AM PDT, Christopher Schmidt <fakod666 at gmail.com>
> > wrote:
> >>
> >> Hi Humble,
> >>
> >> thanks for that, it is really appreciated.
> >>
> >> In the meantime, using K8s 1.5, what can I do to disable the
> >> performance translators that don't work with Kafka? Maybe something
> >> while generating the GlusterFS container for Kubernetes?
> >>
> >> Best Christopher
> >>
> >>> Humble Chirammal <hchiramm at redhat.com> wrote on Thu., May 25, 2017,
> >>> 09:36:
> >>>
> >>> On Thu, May 25, 2017 at 12:57 PM, Raghavendra Talur
> >>> <rtalur at redhat.com> wrote:
> >>>>
> >>>> On Thu, May 25, 2017 at 11:21 AM, Christopher Schmidt
> >>>> <fakod666 at gmail.com> wrote:
> >>>> > So this change of the Gluster Volume Plugin will make it into K8s
> >>>> > 1.7 or 1.8. Unfortunately too late for me.
> >>>> >
> >>>> > Does anyone know how to disable performance translators by default?
> >>>>
> >>>> Humble,
> >>>>
> >>>> Do you know of any way Christopher can proceed here?
> >>>
> >>>
> >>> I am trying to get it into the 1.7 branch, will provide an update
> >>> here as soon as it's available.
> >>>>
> >>>>
> >>>> >
> >>>> >
> >>>> > Raghavendra Talur <rtalur at redhat.com> wrote on Wed., May 24, 2017,
> >>>> > 19:30:
> >>>> >>
> >>>> >> On Wed, May 24, 2017 at 4:08 PM, Christopher Schmidt
> >>>> >> <fakod666 at gmail.com>
> >>>> >> wrote:
> >>>> >> >
> >>>> >> >
> >>>> >> > Vijay Bellur <vbellur at redhat.com> wrote on Wed., May 24, 2017
> >>>> >> > at 05:53:
> >>>> >> >>
> >>>> >> >> On Tue, May 23, 2017 at 1:39 AM, Christopher Schmidt
> >>>> >> >> <fakod666 at gmail.com>
> >>>> >> >> wrote:
> >>>> >> >>>
> >>>> >> >>> OK, seems that this works now.
> >>>> >> >>>
> >>>> >> >>> A couple of questions:
> >>>> >> >>> - What do you think, are all these options necessary for Kafka?
> >>>> >> >>
> >>>> >> >>
> >>>> >> >> I am not entirely certain which subset of options will make it
> >>>> >> >> work, as I do not understand the nature of the failure with
> >>>> >> >> Kafka and the default gluster configuration. It certainly needs
> >>>> >> >> further analysis to identify the list of options necessary.
> >>>> >> >> Would it be possible for you to enable one option after the
> >>>> >> >> other and determine the configuration that works?
> >>>> >> >>
> >>>> >> >>
> >>>> >> >>>
> >>>> >> >>> - You wrote that there have to be some kind of application
> >>>> >> >>> profiles. So finding out which set of options works is
> >>>> >> >>> currently a matter of testing (and hope)? Or is there any
> >>>> >> >>> experience with MongoDB / PostgreSQL / ZooKeeper etc.?
> >>>> >> >>
> >>>> >> >>
> >>>> >> >> Application profiles are work in progress. We have a few that
> >>>> >> >> are focused on use cases like VM storage, block storage etc. at
> >>>> >> >> the moment.
> >>>> >> >>
> >>>> >> >>>
> >>>> >> >>> - I am using Heketi and dynamic storage provisioning together
> >>>> >> >>> with Kubernetes. Can I set these volume options somehow by
> >>>> >> >>> default, or via the volume plugin?
> >>>> >> >>
> >>>> >> >>
> >>>> >> >>
> >>>> >> >> Adding Raghavendra and Michael to help address this query.
> >>>> >> >
> >>>> >> >
> >>>> >> > For me it would be sufficient to disable some (or all)
> >>>> >> > translators, for all volumes that'll be created, somewhere here:
> >>>> >> > https://github.com/gluster/gluster-containers/tree/master/CentOS
> >>>> >> > This is the container used by the GlusterFS DaemonSet for
> >>>> >> > Kubernetes.
> >>>> >>
> >>>> >> Work is in progress to give such an option at the volume plugin
> >>>> >> level. We currently have a patch [1] in review for Heketi that
> >>>> >> allows users to set Gluster options using heketi-cli instead of
> >>>> >> going into a Gluster pod. Once this is in, we can add options in
> >>>> >> the storage-class of Kubernetes that pass down Gluster options for
> >>>> >> every volume created in that storage-class.
> >>>> >>
> >>>> >> [1] https://github.com/heketi/heketi/pull/751
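
Once that patch is merged, the flow would look roughly like this (a sketch; the `--gluster-volume-options` flag name is an assumption based on the pending patch, not a released interface):

```shell
# Hypothetical: create a Heketi-managed volume with Gluster options
# applied at creation time, without entering the Gluster pod.
heketi-cli volume create --size=10 \
    --gluster-volume-options="performance.write-behind off"
```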
> >>>> >>
> >>>> >> Thanks,
> >>>> >> Raghavendra Talur
> >>>> >>
> >>>> >> >
> >>>> >> >>
> >>>> >> >>
> >>>> >> >> -Vijay
> >>>> >> >>
> >>>> >> >>
> >>>> >> >>
> >>>> >> >>>
> >>>> >> >>>
> >>>> >> >>> Thanks for your help... really appreciated. Christopher
> >>>> >> >>>
> >>>> >> >>> Vijay Bellur <vbellur at redhat.com> wrote on Mon., May 22, 2017
> >>>> >> >>> at 16:41:
> >>>> >> >>>>
> >>>> >> >>>> Looks like a problem with caching. Can you please try
> >>>> >> >>>> disabling all performance translators? The following
> >>>> >> >>>> configuration commands would disable the performance
> >>>> >> >>>> translators in the gluster client stack:
> >>>> >> >>>>
> >>>> >> >>>> gluster volume set <volname> performance.quick-read off
> >>>> >> >>>> gluster volume set <volname> performance.io-cache off
> >>>> >> >>>> gluster volume set <volname> performance.write-behind off
> >>>> >> >>>> gluster volume set <volname> performance.stat-prefetch off
> >>>> >> >>>> gluster volume set <volname> performance.read-ahead off
> >>>> >> >>>> gluster volume set <volname> performance.readdir-ahead off
> >>>> >> >>>> gluster volume set <volname> performance.open-behind off
> >>>> >> >>>> gluster volume set <volname> performance.client-io-threads off
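
To avoid typing these one by one, the same set of commands can be generated with a small loop (a sketch; "myvol" is a placeholder volume name, and the commands are echoed so they can be reviewed before piping to sh):

```shell
#!/bin/bash
# Emit the "gluster volume set" commands that disable each client-side
# performance translator for the given volume.
perf_opts=(quick-read io-cache write-behind stat-prefetch
           read-ahead readdir-ahead open-behind client-io-threads)

disable_perf_cmds() {
    local volname=$1 opt
    for opt in "${perf_opts[@]}"; do
        echo "gluster volume set $volname performance.$opt off"
    done
}

disable_perf_cmds myvol   # pipe to sh to actually apply
```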
> >>>> >> >>>>
> >>>> >> >>>> Thanks,
> >>>> >> >>>> Vijay
> >>>> >> >>>>
> >>>> >> >>>>
> >>>> >> >>>>
> >>>> >> >>>> On Mon, May 22, 2017 at 9:46 AM, Christopher Schmidt
> >>>> >> >>>> <fakod666 at gmail.com> wrote:
> >>>> >> >>>>>
> >>>> >> >>>>> Hi all,
> >>>> >> >>>>>
> >>>> >> >>>>> has anyone ever successfully deployed a Kafka (Cluster) on
> >>>> >> >>>>> GlusterFS
> >>>> >> >>>>> volumes?
> >>>> >> >>>>>
> >>>> >> >>>>> In my case it's a Kafka Kubernetes StatefulSet and a Heketi
> >>>> >> >>>>> GlusterFS.
> >>>> >> >>>>> Needless to say, I am getting a lot of filesystem-related
> >>>> >> >>>>> exceptions like this one:
> >>>> >> >>>>>
> >>>> >> >>>>> Failed to read `log header` from file channel
> >>>> >> >>>>> `sun.nio.ch.FileChannelImpl at 67afa54a`. Expected to read 12
> >>>> >> >>>>> bytes,
> >>>> >> >>>>> but
> >>>> >> >>>>> reached end of file after reading 0 bytes. Started read from
> >>>> >> >>>>> position
> >>>> >> >>>>> 123065680.
> >>>> >> >>>>>
> >>>> >> >>>>> I reduced the number of exceptions with the
> >>>> >> >>>>> log.flush.interval.messages=1 option, but not all of them...
> >>>> >> >>>>>
> >>>> >> >>>>> best Christopher
> >>>> >> >>>>>
> >>>> >> >>>>>
> >>>> >> >>>>> _______________________________________________
> >>>> >> >>>>> Gluster-users mailing list
> >>>> >> >>>>> Gluster-users at gluster.org
> >>>> >> >>>>> http://lists.gluster.org/mailman/listinfo/gluster-users
> >>>> >> >>>>
> >>>> >> >>>>
> >>>> >> >
> >>>
> >>>
> >>>
> >>>
> >>> --
> >>> Cheers,
> >>> Humble
> >>>
> >>> Sr.Software Engineer - Red Hat Storage Engineering
> >>> website: http://humblec.com
> >
> >
> > --
> > Sent from my Android device with K-9 Mail. Please excuse my brevity.
> >
>