[Gluster-users] First time user questions: transition from NFSv4

Marc Eisenbarth mr.eisenbarth at gmail.com
Sun Feb 22 22:42:49 UTC 2015


I'm looking to give GlusterFS a go and have a few questions that I haven't
found definitive answers to. I was wondering if I could poll those of you
who have already gone down this path :) Sorry for the barrage of questions;
these are the remaining ones I need answered to green-light the project.

1. I'm currently using cachefilesd on my NFSv4 clients with a decent
performance gain. Will cachefilesd work with an NFS-compatible Gluster
mount? Does the native FUSE client offer similar local caching capability?
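From what I've pieced together so far, client-side caching with the FUSE
client seems to be tuned through volume options and mount timeouts rather
than a separate daemon like cachefilesd. Is something along these lines the
right direction? (Volume name, server and values below are just
placeholders/guesses on my part, so please correct me if the options are
off.)

    # Enable/tune the io-cache translator on the volume (values are guesses):
    gluster volume set myvol performance.io-cache on
    gluster volume set myvol performance.cache-size 256MB

    # FUSE mount with longer metadata cache timeouts on the client:
    mount -t glusterfs -o attribute-timeout=30,entry-timeout=30 \
        server1:/myvol /mnt/myvol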

2. I'm most interested in running purely distributed (non-replicated)
bricks, since I'll still provide resiliency via hardware RAID6. My plan is
to get a single brick up and running and then add a second one to the
distributed volume later. The likely scenario is that the first brick will
be at 80% capacity when I add the second. What happens in this case: are
the existing files rebalanced across the two bricks when the second one is
added?
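I'm assuming the expansion itself would look roughly like this (volume name
and brick paths are placeholders), but I'd like to confirm that an explicit
rebalance is what actually moves existing files onto the new brick:

    # Add a second brick to the existing distributed volume:
    gluster volume add-brick myvol server2:/export/brick1

    # My understanding: new files get placed by the DHT hash automatically,
    # existing files only move once a rebalance is run:
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status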

3. I have an existing filesystem, with permissions, etc., already on the
machine I'm about to set GlusterFS up on. Can I simply use this directory
structure when creating the volume, thus bringing up my first instance with
all files intact and ready to go? Will this still allow use of things like
libgfapi? Is there an example somewhere showing the commands to bring an
existing filesystem online as a GlusterFS volume?
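What I have in mind is roughly the following (volume name and paths are
placeholders); I'm not sure whether pointing 'volume create' at a directory
that already contains data is actually supported, or whether Gluster expects
an empty brick with the files copied in through the mount afterwards:

    # Create a single-brick volume on top of the existing directory tree
    # ('force' may be needed for a non-empty brick or one on the root
    # partition):
    gluster volume create myvol server1:/export/existing-data force
    gluster volume start myvol

    # Then mount it from a client via FUSE (or NFS):
    mount -t glusterfs server1:/myvol /mnt/myvol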

4. In the above scenario, will all existing permissions/ownership/etc. be
retained?

5. In a cluster consisting solely of distributed bricks, is it okay to also
mount the volume locally on each of the servers hosting these bricks? I
have some lightweight jobs that will be adding files to the filesystem and
would like to run them directly on the brick servers, if possible.
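In other words, on each brick server I'd do something like the following
and point the jobs at the Gluster mount, rather than writing straight into
the brick directory (which I gather is a no-no):

    # On the server hosting the brick, mount the volume back locally via
    # the FUSE client; the jobs write through /mnt/myvol, never through
    # the brick path itself:
    mount -t glusterfs localhost:/myvol /mnt/myvol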

Thanks!!