[Gluster-users] Replicated and Non Replicated Bricks on Same Partition

Anand Avati anand.avati at gmail.com
Tue Apr 30 03:28:10 UTC 2013


On Mon, Apr 29, 2013 at 9:19 AM, Heath Skarlupka <
heath.skarlupka at ssec.wisc.edu> wrote:

> Gluster-Users,
>
> We currently have a 30 node Gluster Distributed-Replicate 15 x 2
> filesystem.  Each node has a ~20TB xfs filesystem mounted to /data and the
> bricks live on /data/brick.  We have been very happy with this setup, but
> are now collecting more data that doesn't need to be replicated because it
> can be easily regenerated.  Most of this data currently lives on our replicated
> volume, where it is wasting space.  My plan was to create a second directory
> under the /data partition called /data/non_replicated_brick on each of the
> 30 nodes and start up a second Gluster filesystem.  This would allow me to
> dynamically size the replicated and non_replicated space based on our
> current needs.
>
> I'm a bit worried about going forward with this because I haven't seen
> many users talk about putting two gluster bricks on the same underlying
> filesystem.  I've gotten past the technical hurdle and know that it is
> technically possible, but I'm worried about corner cases and issues that
> might crop up when we add more bricks and need to rebalance both gluster
> volumes at once.  Does anybody have any insight into what the caveats of
> doing this are, or are there any users putting multiple bricks on a single
> filesystem in the 50-100 node range?  Thank you all for your insights
> and help!


This is a very common use case and should work fine. We are exploring better
integration with dm-thinp in the future, so that each brick gets its own XFS
filesystem on a thin-provisioned logical volume. But for now you can simply
create a second volume with its bricks on the same XFS filesystems.
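For example, a minimal sketch of creating the second, distributed-only volume
on the existing mounts (the volume name "scratch" and the node hostnames are
placeholders, and only three of the thirty bricks are shown):

    # Create a plain distributed volume (no "replica 2"), with each brick in a
    # sibling directory next to the existing replicated brick on /data.
    gluster volume create scratch \
        node01:/data/non_replicated_brick \
        node02:/data/non_replicated_brick \
        node03:/data/non_replicated_brick

    gluster volume start scratch
    gluster volume info scratch

    # After adding bricks later, each volume is rebalanced on its own:
    gluster volume rebalance scratch start
    gluster volume rebalance <replicated-volume> start

The rebalance operations are per volume and run independently; they mainly
compete for disk and network bandwidth, so you may prefer to run them one at
a time.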

Avati