[Gluster-users] Questions about the limitations on using Gluster Volume Tiering.
Jeff Byers
jbyers.sfly at gmail.com
Mon May 1 19:31:20 UTC 2017
Hello,
We've been thinking about giving GlusterFS tiering a try, but
noticed the following limitations documented in the
Red Hat Gluster Storage 3.2 Administration Guide:
Limitations of arbitrated replicated volumes:
Tiering is not compatible with arbitrated replicated volumes.
17.3. Tiering Limitations
In this release, only Fuse and NFSv3 access is supported.
Server Message Block (SMB) and NFSv4 access to tiered
volume is not supported.
I don't quite understand the SMB restriction. Is the restriction
that you cannot use the GlusterFS 'gfapi' vfs interface with
Samba, but you can use Samba layered over a FUSE mount?
Is the problem here that with the 'gfapi' vfs interface, the
'tier-xlator' is not involved, or does not work properly?
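For concreteness, the two access paths I'm comparing would look
roughly like this (a sketch only; the share names, the volume
name 'tiervol', and the mount point are made up):

  # Samba talking to the volume directly through the 'gfapi'
  # vfs module (vfs_glusterfs); no FUSE mount involved:
  [tiered-gfapi]
      path = /
      vfs objects = glusterfs
      glusterfs:volume = tiervol
      kernel share modes = no

  # Samba simply re-exporting a local FUSE mount, e.g. after:
  #   mount -t glusterfs server1:/tiervol /mnt/tiervol
  [tiered-fuse]
      path = /mnt/tiervol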
BTW, my colleague did a quick test using SMB with 'libgfapi'
configured, and it seemed to work fine, though of course that
doesn't mean it was working correctly.
The same question applies to NFSv3 vs. NFSv4. My understanding
is that NFSv3 is served internally by GlusterFS (gNFS), while
NFSv4 has to come from an external server such as NFS-Ganesha.
That reasoning would have led me to expect NFSv3 to be the one
with a tiering problem, yet it is NFSv4 that is unsupported;
the opposite of what I expected.
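Just so we're talking about the same thing, here is roughly how
I'd set up each path (the hostname and volume name are
placeholders):

  # NFSv3 from the Gluster-internal gNFS server:
  gluster volume set tiervol nfs.disable off

  # NFSv4 from the external NFS-Ganesha server, which goes
  # through 'gfapi' (FSAL_GLUSTER), e.g. in ganesha.conf:
  EXPORT {
      Export_Id = 1;
      Path = "/tiervol";
      Pseudo = "/tiervol";
      FSAL {
          Name = GLUSTER;
          Hostname = "server1";
          Volume = "tiervol";
      }
  }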
I guess I don't understand what's behind these limitations.
A related question: tiering operates on volume files, not
brick files, so tiering should be compatible with sharding?
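In other words, I would have guessed that a combination like the
following is legal (the brick paths and shard size are made up;
please correct me if attaching a tier to a sharded volume is
refused):

  # Enable sharding on the volume:
  gluster volume set tiervol features.shard on
  gluster volume set tiervol features.shard-block-size 64MB

  # Then attach a replicated hot tier:
  gluster volume tier tiervol attach replica 2 \
      server1:/bricks/hot1 server2:/bricks/hot1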
In a scale-out configuration, I assume that the heat
map/counters are shared globally, so that no matter where the
clients read from or write to, the accesses are counted properly
in the heat counts and the clients get the correct file.
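The knobs I've found that appear to drive those counters are the
following (my reading of the docs; the threshold values are just
examples):

  # Record per-file read/write heat counters:
  gluster volume set tiervol features.record-counters on

  # Promote/demote files based on I/O counts per cycle:
  gluster volume set tiervol cluster.read-freq-threshold 5
  gluster volume set tiervol cluster.write-freq-threshold 2

  # How often (in seconds) the migration cycles run:
  gluster volume set tiervol cluster.tier-promote-frequency 120
  gluster volume set tiervol cluster.tier-demote-frequency 3600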
There must be some place that stores this meta-data. Is this
meta-data shared between all of the GlusterFS nodes, or does it
go on a GlusterFS meta-data volume? I didn't see any way to
specify the storage location. I suppose it could go in a brick's
.glusterfs/ directory, but isn't that per-brick, not per-volume?
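If it is per-brick, I'd expect it to surface as the CTR
(change-time-recorder) xlator's sqlite database somewhere under
each brick, something like the following, though the exact path
is pure guesswork on my part:

  # /bricks/brick1 is a made-up brick root; looking for a
  # per-brick heat database:
  find /bricks/brick1/.glusterfs -maxdepth 1 -name '*.db'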
Thanks.
~ Jeff Byers ~