[Gluster-users] gluster volume + lvm : recommendation or neccessity ?

ML lists at websiteburo.com
Wed Oct 11 13:37:44 UTC 2017


After some extra reading about LVM snapshots & Gluster, I think I can
conclude that they may be a bad idea on big storage bricks.

I understood that the maximum size of the LVM thin-pool metadata volume,
which tracks the block mappings behind the snapshots, is about 16GB.

So if I have a brick of around 10TB (for example), with daily snapshots
and roughly 100GB of files changing per day, the LVM snapshots look useless.
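
To make that concrete, here is a rough back-of-the-envelope estimate in
Python. It assumes the default 64 KiB thin-pool chunk size and roughly
64 bytes of metadata per mapped chunk; those are rule-of-thumb values,
so the real requirement should be checked with the thin_metadata_size
tool from device-mapper-persistent-data.

# Rough estimate of thin-pool metadata usage for a 10TB brick with
# daily snapshots. Assumed rule-of-thumb values: 64 KiB chunks and
# ~64 bytes of metadata per mapped chunk; verify with thin_metadata_size.

KIB, GIB, TIB = 1024, 1024**3, 1024**4

POOL_SIZE   = 10 * TIB        # thin pool backing the brick
DAILY_CHURN = 100 * GIB       # data changed per day (example above)
CHUNK_SIZE  = 64 * KIB        # assumed default thin-pool chunk size
PER_CHUNK   = 64              # assumed metadata bytes per mapped chunk
META_MAX    = 16 * GIB        # ceiling on the thin-pool metadata LV

base_meta    = (POOL_SIZE // CHUNK_SIZE) * PER_CHUNK
per_snapshot = (DAILY_CHURN // CHUNK_SIZE) * PER_CHUNK

print(f"metadata for the pool itself : {base_meta / GIB:.1f} GiB")
print(f"extra per daily snapshot     : {per_snapshot / GIB:.2f} GiB")
print(f"headroom under the 16GB cap  : {(META_MAX - base_meta) / GIB:.1f} GiB")

With those assumed numbers, a fully mapped 10TB pool alone already eats
most of the 16GB ceiling, so the headroom left for snapshot mappings
looks limited.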

LVM snapshots don't seem to be a good idea with very big LVM
partitions.

Did I miss something? It is hard to find clear documentation on the subject.

++

Quentin


On 11/10/2017 at 09:07, Ric Wheeler wrote:
> On 10/11/2017 09:50 AM, ML wrote:
>> Hi everyone,
>>
>> I've read in the Gluster & Red Hat documentation that it seems 
>> recommended to use XFS on top of LVM before creating & using gluster volumes.
>>
>> Sources :
>> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Formatting_and_Mounting_Bricks.html 
>>
>> http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/ 
>>
>>
>> My point is : do we really need LVM ?
>> For example, on a dedicated server with disks & partitions that will 
>> not change in size, it doesn't seem necessary to use LVM.
>>
>> I can't clearly tell which partitioning strategy would be the 
>> best for "static size" hard drives:
>>
>> 1 LVM+XFS partition = multiple gluster volumes
>> or 1 LVM+XFS partition = 1 gluster volume per LVM+XFS partition
>> or 1 XFS partition = multiple gluster volumes
>> or 1 XFS partition = 1 gluster volume per XFS partition
>>
>> What do you use on your servers ?
>>
>> Thanks for your help! :)
>>
>> Quentin
>
> Hi Quentin,
>
> Gluster relies on LVM for snapshots - you won't get those unless you 
> deploy on LVM.
>
> Regards,
> Ric
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
