[Gluster-users] GlusterFS and Kafka
Christopher Schmidt
fakod666 at gmail.com
Tue May 23 06:56:40 UTC 2017
Well, a "turning off caching" warning is ok.
But without doing anything, Kafka definitely doesn't work. Which is somehow
strange because it is a normal (IMHO) JVM process written in Scala. I am
wondering if there are some issues with other tools too.
Vijay Bellur <vbellur at redhat.com> wrote on Tue, May 23, 2017 at 05:48:
> On Mon, May 22, 2017 at 11:49 AM, Joe Julian <joe at julianfamily.org> wrote:
>
>> This may be asking too much, but can you explain why or how it's even
>> possible to bypass the cache like this, Vijay?
>>
>
> This is a good question and the answer to it is something I should have
> elaborated a bit more in my previous response. As far as the why is
> concerned, gluster's client stack is configured by default to provide
> reasonable performance rather than very strong consistency, which can
> affect applications that need the most accurate metadata for their
> functioning. Turning off the performance translators provides stronger
> consistency, and we have seen applications that rely on a high degree of
> consistency work better with that configuration. It is against this
> backdrop that I suggested turning off the performance translators in the
> client stack for Kafka.
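>
> (If it helps while experimenting: the same options can be reverted to
> their defaults afterwards with "volume reset". A minimal sketch, using
> the option names from the list quoted below:
>
> gluster volume reset <volname> performance.quick-read
> gluster volume reset <volname> performance.write-behind
>
> and likewise for the remaining performance options.)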
>
> As for how it is possible, gluster's translator model lets us enable or
> disable optional functionality in the stack. There is no single
> configuration that can accommodate workloads with varying profiles, and
> the modular architecture is a plus for gluster - the storage stack can be
> tuned to suit different application profiles. We are exploring the
> possibility of providing custom profiles (collections of options) for
> popular applications to make this easier for all of us. Note that
> disabling performance translators in gluster is similar to turning off
> caching with the NFS client. In parallel, we are also looking at altering
> the behavior of the performance translators to provide as much consistency
> as possible by default.
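>
> As an illustration of what such a profile could look like: volume options
> can already be applied as a group with "gluster volume set <volname> group
> <groupname>", where the group is a file under /var/lib/glusterd/groups/ on
> the server nodes containing option=value pairs. A hypothetical "kafka"
> group file (just a sketch, not something that ships today) could contain:
>
> performance.quick-read=off
> performance.io-cache=off
> performance.write-behind=off
> performance.stat-prefetch=off
> performance.read-ahead=off
> performance.open-behind=off
>
> and would then be enabled with "gluster volume set <volname> group kafka".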
>
> Thanks,
> Vijay
>
>>
>>
>> On May 22, 2017 7:41:40 AM PDT, Vijay Bellur <vbellur at redhat.com> wrote:
>>>
>>> Looks like a problem with caching. Can you please try disabling all
>>> performance translators? The following configuration commands would disable
>>> the performance translators in the gluster client stack:
>>>
>>> gluster volume set <volname> performance.quick-read off
>>> gluster volume set <volname> performance.io-cache off
>>> gluster volume set <volname> performance.write-behind off
>>> gluster volume set <volname> performance.stat-prefetch off
>>> gluster volume set <volname> performance.read-ahead off
>>> gluster volume set <volname> performance.readdir-ahead off
>>> gluster volume set <volname> performance.open-behind off
>>> gluster volume set <volname> performance.client-io-threads off
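>>>
>>> Once set, you can verify that the options took effect with, for example
>>> (assuming a release that supports "volume get"):
>>>
>>> gluster volume get <volname> performance.quick-read
>>> gluster volume info <volname>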
>>>
>>> Thanks,
>>> Vijay
>>>
>>>
>>>
>>> On Mon, May 22, 2017 at 9:46 AM, Christopher Schmidt <fakod666 at gmail.com> wrote:
>>>
>>>> Hi all,
>>>>
>>>> has anyone ever successfully deployed a Kafka (Cluster) on GlusterFS
>>>> volumes?
>>>>
>>>> In my case it's a Kafka Kubernetes StatefulSet on a Heketi-managed
>>>> GlusterFS. Needless to say, I am getting a lot of filesystem-related
>>>> exceptions like this one:
>>>>
>>>> Failed to read `log header` from file channel
>>>> `sun.nio.ch.FileChannelImpl at 67afa54a`. Expected to read 12 bytes, but
>>>> reached end of file after reading 0 bytes. Started read from position
>>>> 123065680.
>>>>
>>>> I reduced the number of exceptions with the
>>>> log.flush.interval.messages=1 option, but not all of them...
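>>>>
>>>> For reference, the flush settings live in the broker's server.properties.
>>>> This is just the snippet I am experimenting with (values are examples,
>>>> not a recommendation):
>>>>
>>>> # force a flush to disk after every message instead of relying on the
>>>> # OS page cache
>>>> log.flush.interval.messages=1
>>>> # optionally also flush on a time interval (milliseconds)
>>>> log.flush.interval.ms=1000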
>>>>
>>>> best Christopher
>>>>
>>>>
>>>> _______________________________________________
>>>> Gluster-users mailing list
>>>> Gluster-users at gluster.org
>>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>>
>>>
>>>
>> --
>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>>
>