[Gluster-users] Replicated and Non Replicated Bricks on Same Partition

Robert Hajime Lanning lanning at lanning.cc
Tue Apr 30 03:44:09 UTC 2013


On 04/29/13 20:28, Anand Avati wrote:
>
> On Mon, Apr 29, 2013 at 9:19 AM, Heath Skarlupka 
> <heath.skarlupka at ssec.wisc.edu <mailto:heath.skarlupka at ssec.wisc.edu>> 
> wrote:
>
>     Gluster-Users,
>
>     We currently have a 30 node Gluster Distributed-Replicate 15 x 2
>     filesystem.  Each node has a ~20TB xfs filesystem mounted to /data
>     and the bricks live on /data/brick.  We have been very happy with
>     this setup, but are now collecting more data that doesn't need to
>     be replicated because it can be easily regenerated.  Most of the
>     data lives on our replicated volume and is starting to waste
>     space.  My plan was to create a second directory under the /data
>     partition called /data/non_replicated_brick on each of the 30
>     nodes and start up a second Gluster filesystem.  This would allow
>     me to dynamically size the replicated and non_replicated space
>     based on our current needs.
>
>     I'm a bit worried about going forward with this because I haven't
>     seen many users talk about putting two gluster bricks on the same
>     underlying filesystem.  I've gotten past the technical hurdle
>     and know that it is technically possible, but I'm worried about
>     corner cases and issues that might crop up when we add more bricks
>     and need to rebalance both gluster volumes at once.  Does anybody
>     have any insight into the caveats of doing this, or are there
>     any users putting multiple bricks on a single filesystem in the
>     50-100 node size range?  Thank you all for your insights and help!
>
>
> This is a very common use case and should work fine. Going forward, we 
> are exploring better integration with dm-thinp so that each brick gets 
> its own XFS filesystem on a thin-provisioned logical volume. But for 
> now you can create a second volume on the same XFS filesystems.
>
> Avati
>
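For concreteness, a rough sketch of what that second, distribute-only
volume could look like on the layout Heath describes.  The volume name
"nonrep" and the node01..node30 hostnames are made up; the brick path
follows his proposed /data/non_replicated_brick:

    # reuse the same /data filesystems that already hold /data/brick;
    # no "replica" argument, so this volume is pure distribute
    gluster volume create nonrep \
        $(for i in $(seq -w 1 30); do \
              echo "node$i:/data/non_replicated_brick"; done)
    gluster volume start nonrep

And a hedged sketch of the dm-thinp layout Avati mentions, where each
brick gets its own XFS filesystem on a thin-provisioned LV but both
draw from one shared pool (volume group name and sizes are hypothetical):

    # per node: one thin pool, two thin volumes over-committed against it
    lvcreate --size 18T --thinpool datapool vg_data
    lvcreate --virtualsize 18T --thin vg_data/datapool --name brick_rep
    lvcreate --virtualsize 18T --thin vg_data/datapool --name brick_nonrep
    mkfs.xfs /dev/vg_data/brick_rep
    mkfs.xfs /dev/vg_data/brick_nonrep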

There is an issue when the two volumes share filesystems and bricks fill 
unevenly.  Writes to the non-replicated volume will consume free space on 
some nodes faster than on others, so from the replicated volume's point 
of view its bricks, including the two sides of a replica pair, fill 
unevenly.
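One knob that may help here (hedged, as I have not tried it with two
volumes sharing a filesystem): DHT's cluster.min-free-disk option tells
the distribute layer to avoid placing new files on bricks that drop
below a free-space threshold:

    # keep new files off bricks with less than 10% free space;
    # "nonrep" is the hypothetical non-replicated volume from above
    gluster volume set nonrep cluster.min-free-disk 10%

Setting it on both volumes should reduce, though not eliminate, how
often a write lands on a nearly full brick.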

I am not sure how an asymmetric ENOSPC is handled, but if the fuller 
brick of a replica pair happens to be down during a write that would 
have returned ENOSPC, the client won't see the error, and replication 
will fail later, when self-heal kicks in and tries to copy that data 
onto the full brick.
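In practice you can at least spot the divergence before self-heal trips
over it.  A minimal check, assuming the brick layout above ("repvol" is
a placeholder for the replicated volume's real name):

    # compare free space on the shared filesystem across nodes
    df -h /data
    # list entries the self-heal daemon still needs to reconcile
    gluster volume heal repvol info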

-- 
Mr. Flibble
King of the Potato People



