[Gluster-users] Gluster and LVM

Alex K rightkicktech at gmail.com
Tue Apr 9 05:26:18 UTC 2019


On Mon, Apr 8, 2019, 21:47 Strahil <hunter86_bg at yahoo.com> wrote:

> Correct me if I'm wrong but thin LVM is needed for creation of snapshots.
>
Yes, you need thin provisioned logical volumes for gluster snapshots.
Actually, gluster snapshots are lvm snapshots under the hood.
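
For anyone setting this up, a minimal sketch of a thin-provisioned brick
(device, names and sizes below are just placeholders, adjust to your layout):

    pvcreate /dev/sdb
    vgcreate vg_bricks /dev/sdb
    lvcreate -L 900G --thinpool tp_brick1 vg_bricks         # thin pool
    lvcreate -V 800G --thin -n brick1 vg_bricks/tp_brick1   # thin LV for the brick
    mkfs.xfs -i size=512 /dev/vg_bricks/brick1
    mount /dev/vg_bricks/brick1 /bricks/brick1

If the bricks sit on regular (thick) LVs instead, "gluster snapshot create"
will refuse to run.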

> I am a new gluster user, but I don't see any LVM issues so far.
>
Neither do I.

> Best Regards,
> Strahil Nikolov
> On Apr 8, 2019 21:15, Alex K <rightkicktech at gmail.com> wrote:
>
> I have used gluster on top of lvm for several years without any issues.
>
> On Mon, Apr 8, 2019, 10:43 Felix Kölzow <felix.koelzow at gmx.de> wrote:
>
> Thank you very much for your response.
>
> I fully agree that using LVM has great advantages. Maybe there is a
> misunderstanding, but I really got the recommendation to not use (normal)
> LVM in combination with gluster to increase the volume. *Maybe someone in
> the community has some good or bad experience using LVM and gluster in
> combination.* So please let me know :)
>
>
> One of the arguments for things like Gluster and Ceph is that you can have
> many storage nodes that operate in parallel, so the ideal is a very large
> number of small drive arrays rather than a small number of very large drive
> arrays.
>
> I also agree with that. In our case, we actually plan to get Red Hat
> Gluster Storage support, and an increase in storage nodes would mean an
> increase in support costs while the same amount of storage volume is
> available.
>
> So we are looking for a reasonable compromise.
>
> Felix
> On 03.04.19 17:12, Alvin Starr wrote:
>
> As a general rule I always suggest using LVM.
> I have had LVM save my career a few times.
> I believe that if you wish to use Gluster snapshots then the underlying
> system needs to be a thinly provisioned LVM volume.
>
> Adding storage space to an LVM volume is easy, and all modern file-systems
> support online growing, so it is easy to grow a file-system.
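>
> As a rough sketch (brick names are placeholders, and assuming the brick
> filesystem is XFS), growing online looks something like:
>
>   lvextend -L +500G vg_bricks/brick1   # or -l +100%FREE to use all free space
>   xfs_growfs /bricks/brick1            # XFS grows while mounted
>
> (for ext4 bricks the equivalent would be resize2fs instead of xfs_growfs)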
>
> If you have directory trees that are very deep and wide then you may want
> to put a bit of thought into how you configure your Gluster installation.
> We have a volume with about 50M files and something like an xfs dump or
> rsync of the underlying filesystem will take close to a day but copying the
> data over Gluster takes weeks.
> This is a problem with all clustered file systems because there is extra
> locking and co-ordination required for file operations.
>
> Also you need to realize that the performance of something like the
> PowerVault is limited by the speed of the connection to your server.
> A single SAS link is limited to 6Gb/s (for example), and so is your disk
> array, but most internal RAID controllers will support the number of
> ports * 6Gb/s.
> This means that a computer with 12 drives in the front will access disk
> faster than a system with a 12-drive disk array attached by a few SAS
> links.
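>
> A rough back-of-the-envelope illustration (nominal numbers only):
>
>   one 6 Gb/s SAS lane              ~  600 MB/s usable
>   4-lane wide port to an array     ~ 2.4 GB/s for the whole shelf
>   12 internal drives on their own lanes, ~150-200 MB/s each
>                                    ~ 1.8-2.4 GB/s aggregate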
>
> One of the arguments for things like Gluster and Ceph is that you can have
> many storage nodes that operate in parallel, so the ideal is a very large
> number of small drive arrays rather than a small number of very large drive
> arrays.
>
>
> On 4/3/19 10:20 AM, kbh-admin wrote:
>
> Hello Gluster-Community,
>
>
> We are considering building several Gluster servers and have a question
> regarding LVM and GlusterFS.
>
>
> Scenario 1: Snapshots
>
> Of course, taking snapshots is a good capability and we want to use lvm
> for that.
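>
> For illustration, the kind of commands involved (the volume name gv0 is
> made up):
>
>   gluster snapshot create nightly gv0 no-timestamp
>   gluster snapshot list gv0
>   gluster snapshot restore nightly   # the volume must be stopped first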
>
>
> Scenario 2: Increase Gluster volume
>
> We want to increase the Gluster volume by adding HDDs and/or by adding
> Dell PowerVaults later. We got the recommendation to set up a new Gluster
> volume for the PowerVaults and not to use LVM in that case (lvresize ....).
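>
> To make the two alternatives concrete (server, brick and volume names
> below are made up):
>
>   # a) the LVM route: grow an existing brick in place
>   lvextend -L +2T vg_bricks/brick1
>   xfs_growfs /bricks/brick1
>
>   # b) the recommended route: a separate volume on the PowerVault bricks
>   gluster volume create gv_pv replica 2 srv1:/bricks/pv1/brick srv2:/bricks/pv1/brick
>   gluster volume start gv_pv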
>
>
> What would you suggest, and how do you manage both LVM and GlusterFS
> together?
>
>
> Thanks in advance.
>
>
> Felix
>