[Gluster-users] DHT vs LVM for multiple bricks on a server
Joe Julian
joe at julianfamily.org
Thu Jan 10 23:27:42 UTC 2013
I don't have time to write up a long answer right now (work's killing me
today), but if you search for lvm in the IRC logs, we had a bit of a
discussion about that a few days (or was it a week... they're all
blending together) ago.
On 01/10/2013 03:06 PM, Gaurav P wrote:
> *bump*
>
>
> On Mon, Jan 7, 2013 at 8:13 PM, Gaurav P
> <gaurav.lists+gluster at gmail.com> wrote:
>
> Hi,
>
> I've been reading up on GlusterFS and I'm looking for best
> practices around using multiple disks as bricks in servers that
> will be part of a replicated volume.
>
> Say I start with a single disk each in two servers (/dev/sda1
> mounted at /a):
>
> gluster volume create test-volume replica 2 transport tcp server1:/a server2:/a
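>
> For completeness (assuming the names above), I'd then start the
> volume and mount it from a client with something like:
>
> gluster volume start test-volume
> # /mnt/gluster is just an example client mount point
> mount -t glusterfs server1:/test-volume /mnt/gluster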
>
>
> Then I add a second disk in each server (/dev/sdb1 mounted at /b):
>
> gluster volume add-brick test-volume replica 2 server1:/b server2:/b
>
>
> With this (after rebalancing), am I correct in understanding that
> I will have a distributed-replicated volume, with GlusterFS
> providing the equivalent of RAID 1+0 for the data on my volume?
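>
> If I understand the docs correctly, the rebalance itself would be
> kicked off and monitored with something like:
>
> gluster volume rebalance test-volume start
> gluster volume rebalance test-volume status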
>
> Now, as I understand it, I will be restricted to adding disks
> (bricks) of the same size whenever I need to extend the volume.
> What are the pros/cons of instead using LVM to provide a single LV
> on each server and extending the LV and filesystem each time I add
> additional storage? Another benefit of LVM would be the ability to
> take snapshots. The one downside I foresee is that a concatenated
> (linear) LV will not use the second PV (disk) until the first PV is
> full, though I could perhaps stripe?
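>
> To make the LVM idea concrete: growing a hypothetical LV "brick1"
> in a VG "gluster" after adding /dev/sdb1 might look like this (the
> names are just examples):
>
> pvcreate /dev/sdb1
> vgextend gluster /dev/sdb1
> lvextend -l +100%FREE /dev/gluster/brick1
> # /bricks/brick1 is an example mount point; with ext4 this would
> # be resize2fs /dev/gluster/brick1 instead
> xfs_growfs /bricks/brick1
>
> (A striped LV could be created up front with lvcreate -i 2, but
> striping complicates adding PVs later.)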
>
> More questions to follow, but I'm trying to think through this
> before I get started with my first deployment.
>
> TIA
> Gaurav