[Gluster-devel] Brick path used for gluster shared storage volume
Niels de Vos
ndevos at redhat.com
Sun Jul 5 08:54:03 UTC 2015
On Sun, Jul 05, 2015 at 10:16:06AM +0530, Avra Sengupta wrote:
> Hi,
>
> Today, by enabling the volume set option cluster.enable-shared-storage, we
> create a shared storage volume called gluster_shared_storage for the user,
> and mount it on all the nodes in the cluster. Currently this volume is used
> for features like nfs-ganesha, snapshot scheduler and geo-replication to
> save some internal data required by these features. The brick path we use to
> create this shared storage is /var/run/gluster/ss_brick.
>
> The problem with using this brick path is /var/run/gluster is a tmpfs and
> all the brick/shared storage data will be wiped off when the node restarts.
> Hence I propose using /var/lib/glusterd/ss_brick as the brick path for the
> shared storage volume, as this brick and the shared storage volume are
> internally created by us (albeit on the user's request), and contain only
> internal state data and no user data.
/var/run/ is not a tmpfs on EL6 and before, but it is cleaned out on
boot. /var/run/ or /run/ on recent Fedora and EL7 is really only valid
for the current boot.
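
To illustrate the difference, a small sketch like the one below
(Linux-only, the paths are purely illustrative) can tell whether a
directory is backed by tmpfs; it of course does not catch the EL6 case
where /var/run/ is a normal directory that merely gets cleaned out at
boot.

/* Minimal sketch: report whether a candidate brick directory lives on
 * tmpfs (contents lost on reboot) or a persistent filesystem. */
#include <stdio.h>
#include <sys/vfs.h>
#include <linux/magic.h>        /* TMPFS_MAGIC */

int
main (void)
{
        const char *paths[] = { "/run/gluster", "/var/lib/glusterd" };
        struct statfs sfs;
        size_t i;

        for (i = 0; i < sizeof (paths) / sizeof (paths[0]); i++) {
                if (statfs (paths[i], &sfs) != 0) {
                        perror (paths[i]);
                        continue;
                }
                printf ("%s: %s\n", paths[i],
                        sfs.f_type == TMPFS_MAGIC ?
                        "tmpfs (contents lost on reboot)" :
                        "persistent filesystem");
        }
        return 0;
}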
> We are also aware that admins sometimes take a backup of /var/lib/glusterd to
> save the current state of gluster. Again this shouldn't be an issue as the
> data contained in these bricks is only internal state data and is very
> minimal.
>
> Please let me know if there are any issues or concerns with using
> /var/lib/glusterd/ss_brick as the brick path for the shared storage, and
> also suggest an alternate brick path.
Yes, /var/lib/glusterd/ss_brick/ is much more suitable. Please check
what the common path is for NetBSD and others; I think they use /var/db/
for these kinds of things. The #defines and autoconf/make variables that
are already in use should just apply.
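
Something along these lines is what I have in mind; GLUSTERD_WORKDIR
below is only a placeholder for whichever #define the build actually
provides through autoconf/make, not the real identifier:

/* Sketch: derive the shared-storage brick path from the directory the
 * build system provides, instead of hard-coding /var/run/gluster. */
#include <stdio.h>
#include <limits.h>

#ifndef GLUSTERD_WORKDIR
#define GLUSTERD_WORKDIR "/var/lib/glusterd"   /* e.g. /var/db/... elsewhere */
#endif

int
main (void)
{
        char brick_path[PATH_MAX];

        snprintf (brick_path, sizeof (brick_path), "%s/ss_brick",
                  GLUSTERD_WORKDIR);
        printf ("shared storage brick path: %s\n", brick_path);
        return 0;
}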
When the shared volume gets mounted, (/var)/run/gluster/state/ is a
suitable mountpoint. There is no need to have the glusterfs-fuse
mountpoint under /var/lib/; (/var)/run/ is more appropriate.
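
A rough illustration of mounting it there (the volume name is the one
mentioned above; the host and exact mountpoint are just assumptions, not
what glusterd actually does):

/* Sketch: create a per-boot mountpoint under /run and mount the shared
 * volume on it with the glusterfs-fuse client. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

int
main (void)
{
        const char *mountpoint = "/run/gluster/state";
        char cmd[512];

        /* runtime directories, recreated on every boot */
        mkdir ("/run/gluster", 0755);
        mkdir (mountpoint, 0755);

        snprintf (cmd, sizeof (cmd),
                  "mount -t glusterfs localhost:/gluster_shared_storage %s",
                  mountpoint);
        return system (cmd) == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}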
Thanks,
Niels