[Gluster-users] DHT vs LVM for multiple bricks on a server

Gaurav P gaurav.lists+gluster at gmail.com
Fri Jan 11 21:53:18 UTC 2013


On Fri, Jan 11, 2013 at 12:18 PM, Jeff Darcy <jdarcy at redhat.com> wrote:

> My usual answer is: it depends.  I've seen cases where using each disk as
> a separate brick performed better, and I've seen cases where combining them
> via LVM performed better.  There doesn't even seem to be a simple pattern
> to which will be faster for which workloads, though I'd say brick-per-disk
> probably wins slightly more often than not.  It will also have better
> failure characteristics than RAID0 - a point also brought up recently by
> the HDFS folks.
>
> http://hortonworks.com/blog/why-not-raid-0-its-about-time-and-snowflakes/
>
> They're characteristically wrong about having to wait for the slowest disk
> (that might be true for their specific workload, but not for most others;
> it would be the case for RAID1), but they make some other good points.
>
>
Excellent link, and I have the same concerns about the larger failure domain
and the impact of losing a single disk in a RAID0/LVM stripe.


> You also bring up the issue of bricks needing to be the same size.  This
> is kind of true right now.  It won't fail completely, but it also won't
> distribute files properly and that can lead to premature ENOSPC.  However,
> I expect that to be addressed fairly soon so you might want to consider
> that as you make your longer-term plans.
>
>
Even with the upcoming support for variable-sized bricks, is there merit in
carving my current 3TB disks (best bang for the buck) into 1TB partitions or
PVs (to be concatenated into LVs) and using those partitions/LVs as my
bricks, so that I'm covered when larger disks become available?
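
For concreteness, this is roughly the layout I'm picturing; the device, VG,
and volume names are just placeholders, and it's only a sketch of the idea,
not a tested recipe:

    # carve one "3TB" disk into three ~1TB logical volumes, one per brick
    pvcreate /dev/sdb
    vgcreate vg_bricks /dev/sdb
    lvcreate -l 33%VG -n brick1 vg_bricks
    lvcreate -l 33%VG -n brick2 vg_bricks
    lvcreate -l 33%VG -n brick3 vg_bricks

    # format and mount each LV as its own brick (ext4 for now, see below)
    for i in 1 2 3; do
        mkfs -t ext4 /dev/vg_bricks/brick$i
        mkdir -p /export/brick$i
        mount /dev/vg_bricks/brick$i /export/brick$i
    done

    # then hand the brick directories to gluster, e.g.
    gluster volume create testvol server1:/export/brick1 \
        server1:/export/brick2 server1:/export/brick3

The idea being that when 4TB (or larger) disks arrive, I can carve them into
the same ~1TB LVs and add them as equal-sized bricks.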

Next question: I know that RHS recommends XFS, and there haven't been any
updates to https://bugzilla.redhat.com/show_bug.cgi?id=838784 since
October, but I'd really like to stay with ext4 and perhaps some day convert
to btrfs. I will be on 2.6.32-279, which I see is affected, but have there
been any recent workarounds?
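
For reference, if I do end up switching to XFS, my understanding of the
usual recommended brick format is roughly this (the 512-byte inode size is
to leave room for GlusterFS xattrs, and the device path is again just a
placeholder; please correct me if I have that wrong):

    mkfs.xfs -i size=512 /dev/vg_bricks/brick1
    mount -o noatime,inode64 /dev/vg_bricks/brick1 /export/brick1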