[Gluster-users] Glusterfs as database store
Alex K
rightkicktech at gmail.com
Tue Oct 13 18:42:15 UTC 2020
On Mon, Oct 12, 2020, 21:50 Olaf Buitelaar <olaf.buitelaar at gmail.com> wrote:
> Hi Alex,
>
> I've been running databases both directly and indirectly through qemu
> image VMs (managed by oVirt), and since the recent gluster versions (6+,
> haven't tested 7-8) I'm generally happy with the stability. I'm running
> mostly write-intensive workloads.
> For mariadb, any gluster volume seems to work fine; I've run both
> sharded and non-sharded volumes (using non-sharded for backup slaves to
> keep the files whole).
> For postgresql it's required to enable the volume option
> performance.strict-o-direct: on, but both sharded and non-sharded
> volumes work in that case too.
> Nonetheless, I would advise running any database with strict-o-direct on.
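> For example, a minimal sketch of setting and verifying that option (the
> volume name "dbvol" is illustrative):
>
>   gluster volume set dbvol performance.strict-o-direct on
>   gluster volume get dbvol performance.strict-o-direct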
>
Thanks Olaf for your feedback. Appreciated.
>
> Best Olaf
>
>
> On Mon, 12 Oct 2020 at 20:10, Alex K <rightkicktech at gmail.com> wrote:
>
>>
>>
>> On Mon, Oct 12, 2020, 19:24 Strahil Nikolov <hunter86_bg at yahoo.com>
>> wrote:
>>
>>> Hi Alex,
>>>
>>> I can share that oVirt is using Gluster as an HCI solution and many
>>> people are hosting DBs in their Virtual Machines. Yet, oVirt bypasses any
>>> file system caches and uses Direct I/O in order to ensure consistency.
>>>
>>> As you will be using pacemaker, DRBD is a viable solution that can be
>>> controlled easily.
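>>> For instance, a hypothetical promotable DRBD resource under pacemaker
>>> (the resource name "db_drbd" and DRBD resource "db0" are illustrative;
>>> assumes the ocf:linbit:drbd agent from drbd-utils and pcs 0.10+ syntax):
>>>
>>>   pcs resource create db_drbd ocf:linbit:drbd drbd_resource=db0 \
>>>     promotable promoted-max=1 promoted-node-max=1 \
>>>     clone-max=2 clone-node-max=1 notify=true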
>>>
>> Thank you Strahil. I have been using oVirt with glusterfs successfully
>> for the last 5 years and I'm very happy with it. Though the VM gluster
>> volume has sharding enabled by default, and I suspect things are
>> different if you run a DB directly on top of glusterfs. I assume there
>> are optimizations one could apply to gluster volumes (use direct I/O?,
>> small-file workload optimizations, etc.) and was hoping that there were
>> success stories of DBs on top of glusterfs. I might go with DRBD, as the
>> latest version is much more scalable and simplified.
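>> As a starting point, a sketch of volume options one might test for a DB
>> workload (the option names exist in recent gluster releases, but the
>> values are only illustrative, not verified recommendations; "dbvol" is a
>> hypothetical volume name):
>>
>>   gluster volume get dbvol features.shard
>>   gluster volume set dbvol network.remote-dio off
>>   gluster volume set dbvol performance.read-ahead off
>>   gluster volume set dbvol performance.io-cache off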
>>
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On Monday, 12 October 2020, 12:12:18 GMT+3, Alex K <rightkicktech at gmail.com> wrote:
>>>
>>> On Mon, Oct 12, 2020 at 9:47 AM Diego Zuccato <diego.zuccato at unibo.it>
>>> wrote:
>>> > On 10/10/20 16:53, Alex K wrote:
>>> >
>>> >> Reading from the docs I see that this is not recommended?
>>> > IIUC the risk of having partially-unsynced data is too high.
>>> > DB replication is not easy to configure because it's hard to do well,
>>> > even active/passive.
>>> > But I can tell you that a 3-node mariadb (galera) cluster is not hard
>>> > to setup. Just follow one of the tutorials. It's nearly as easy as
>>> > setting up a replica-3 gluster volume :)
>>> > And it "guarantees" consistency in the DB data.
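>>> > For reference, a minimal galera sketch for /etc/my.cnf.d/galera.cnf
>>> > (node names are hypothetical and the wsrep_provider path varies by
>>> > distro and galera version):
>>> >
>>> >   [galera]
>>> >   wsrep_on                 = ON
>>> >   wsrep_provider           = /usr/lib64/galera-4/libgalera_smm.so
>>> >   wsrep_cluster_name       = db_cluster
>>> >   wsrep_cluster_address    = gcomm://node1,node2,node3
>>> >   binlog_format            = ROW
>>> >   default_storage_engine   = InnoDB
>>> >   innodb_autoinc_lock_mode = 2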
>>> I see. Since I will not have only mariadb, I would have to set up the
>>> same replication for postgresql and later influxdb, which adds to the
>>> complexity.
>>> For cluster management I will be using pacemaker/corosync.
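>>> Bootstrapping such a cluster would be roughly (pcs 0.10+ syntax; the
>>> cluster and node names are hypothetical):
>>>
>>>   pcs host auth node1 node2 node3
>>>   pcs cluster setup dbcluster node1 node2 node3
>>>   pcs cluster start --all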
>>>
>>> Thanks for your feedback
>>>
>>> >
>>> > --
>>> > Diego Zuccato
>>> > DIFA - Dip. di Fisica e Astronomia
>>> > Servizi Informatici
>>> > Alma Mater Studiorum - Università di Bologna
>>> > V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
>>> > tel.: +39 051 20 95786
>>> >
>