<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>Thank you very much for your response.</p>
<p>I fully agree that using LVM has great advantages. Maybe there is
a misunderstanding, but I really got the recommendation not to use
(plain) LVM in combination with Gluster to grow the volume. <b>Maybe
someone in the community has good or bad experience using LVM and
Gluster in combination.</b> If so, please let me know :)</p>
<p><br>
</p>
<p>
<blockquote type="cite">One of the arguments for things like
Gluster and Ceph is that you can have many storage nodes that
operate in parallel, so the ideal is a very large number of small
drive arrays rather than a small number of very large drive arrays.
</blockquote>
I also agree with that. In our case, we actually plan to get Red Hat
Gluster Storage support, and an increase in storage nodes would mean
an increase in support costs while the same amount of storage volume
is available.<br>
</p>
<p>So we are looking for a reasonable compromise.</p>
<p>Felix<br>
</p>
<div class="moz-cite-prefix">On 03.04.19 17:12, Alvin Starr wrote:<br>
</div>
<blockquote type="cite"
cite="mid:9f4c439f-7893-4169-8d91-ebc41d22277d@netvel.net">As a
general rule I always suggest using LVM.
<br>
I have had LVM save my career a few times.
<br>
I believe that if you wish to use Gluster snapshots, the
underlying storage needs to be a thinly provisioned LVM volume.
<br>
<br>
Adding storage space to an LVM volume is easy, and all modern
file systems support online growing, so growing a file system is
straightforward.
<br>
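If it helps, here is a minimal sketch of both steps; the volume
group, pool, LV names, and sizes are placeholders, and exact options
depend on your distribution:

```shell
# Create a thin pool inside an existing volume group "gluster_vg"
# and a thinly provisioned LV for the brick on top of it.
lvcreate --size 1T --thinpool brick_pool gluster_vg
lvcreate --virtualsize 1T --thin gluster_vg/brick_pool --name brick1
mkfs.xfs /dev/gluster_vg/brick1

# Later, after adding a disk to the VG, grow the pool and the brick
# online; --resizefs also grows the filesystem on the LV via fsadm
# (which uses xfs_growfs for XFS).
vgextend gluster_vg /dev/sdX
lvextend --size +1T gluster_vg/brick_pool
lvextend --resizefs --size +1T gluster_vg/brick1
```
<br>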
<br>
If you have directory trees that are very deep and wide then you
may want to put a bit of thought into how you configure your
Gluster installation.
<br>
We have a volume with about 50M files; something like an xfsdump or
rsync of the underlying filesystem takes close to a day, but copying
the data over Gluster takes weeks.
<br>
This is a problem with all clustered file systems because there is
extra locking and co-ordination required for file operations.
<br>
<br>
Also, keep in mind that the performance of something like the
PowerVault is limited by the speed of its connection to your
server.
<br>
A single SAS link is limited to 6Gb/s (for example), and so is your
disk array, but most internal RAID controllers will support the
number of ports * 6Gb/s.
<br>
This means that a computer with 12 drives in the front will access
disk faster than a system with a 12-drive disk array attached by a
few SAS links.
<br>
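As a rough back-of-the-envelope comparison, using the example figure
of 6Gb/s per link with a hypothetical 12-port internal controller
versus two external SAS links:

```shell
LINK_GBPS=6          # example per-link bandwidth
INTERNAL_PORTS=12    # hypothetical internal RAID controller ports
EXTERNAL_LINKS=2     # hypothetical SAS links to the external array

echo "internal aggregate: $((INTERNAL_PORTS * LINK_GBPS)) Gb/s"  # 72 Gb/s
echo "external aggregate: $((EXTERNAL_LINKS * LINK_GBPS)) Gb/s"  # 12 Gb/s
```
<br>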
<br>
One of the arguments for things like Gluster and Ceph is that you
can have many storage nodes that operate in parallel, so the ideal
is a very large number of small drive arrays rather than a small
number of very large drive arrays.
<br>
<br>
<br>
On 4/3/19 10:20 AM, kbh-admin wrote:
<br>
<blockquote type="cite">Hello Gluster-Community,
<br>
<br>
<br>
we are considering building several Gluster servers and have a
question regarding LVM and GlusterFS.
<br>
<br>
<br>
Scenario 1: Snapshots
<br>
<br>
Of course, taking snapshots is a useful capability, and we want to
use LVM for that.
<br>
<br>
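A volume snapshot, assuming the bricks sit on thinly provisioned LVM
(which Gluster's snapshot feature requires), would look roughly like
this; the snapshot and volume names are placeholders:

```shell
# Take and list a snapshot of a Gluster volume named "myvol".
gluster snapshot create snap1 myvol no-timestamp
gluster snapshot list myvol
```
<br>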
<br>
Scenario 2: Increase the Gluster volume
<br>
<br>
We want to increase the Gluster volume later by adding HDDs and/or
Dell PowerVaults. We got the recommendation to set up a new Gluster
volume for the PowerVaults and not to use LVM in that case (lvresize
....).
<br>
<br>
<br>
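The two approaches being compared would look roughly like this; the
volume group, volume name, servers, and brick paths are placeholders:

```shell
# Option A: grow the existing brick's LV and filesystem in place.
lvextend --resizefs --size +10T gluster_vg/brick1

# Option B: leave the brick alone and add the new storage as
# additional bricks, then rebalance the volume.
gluster volume add-brick myvol server1:/bricks/pv1/brick server2:/bricks/pv1/brick
gluster volume rebalance myvol start
```
<br>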
What would you suggest, and how do you manage LVM and
GlusterFS together?
<br>
<br>
<br>
Thanks in advance.
<br>
<br>
<br>
Felix
<br>
<br>
_______________________________________________
<br>
Gluster-users mailing list
<br>
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@Gluster.org">Gluster-users@Gluster.org</a>
<br>
<a class="moz-txt-link-freetext" href="https://lists.Gluster.org/mailman/listinfo/Gluster-users">https://lists.Gluster.org/mailman/listinfo/Gluster-users</a>
<br>
</blockquote>
<br>
</blockquote>
</body>
</html>