[Gluster-users] CIFS options - anyone done A-B comparisons on various setups?

Whit Blauvelt whit.gluster at transpect.com
Tue Dec 4 17:12:33 UTC 2012


Hi,

I'm about to set up Gluster 3.3.1 in a cloud environment. The clients will
be using nfs and cifs as well as the native Gluster client. The Gluster docs
suggest setting up cifs as a share of a local Gluster client mount. My
existing, non-cloud deployment (w/ Gluster 3.1.5) instead has Gluster
mounted on a separate system via nfs, with that nfs mount then shared via
cifs - which I don't see discussed in the Gluster docs but which has worked
fine for us.
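
For reference, that existing arrangement looks roughly like this - the
hostnames, volume and share names are placeholders, not our real ones:

  # on the re-export box: mount the volume via Gluster's built-in nfs
  # (Gluster's nfs server speaks NFSv3 only, hence vers=3)
  mount -t nfs -o vers=3,proto=tcp,nolock gluster-vip:/myvol /mnt/myvol

  # then share that mount out via Samba, in smb.conf:
  [myvol]
      path = /mnt/myvol
      read only = no
      browseable = yes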

There are several scenarios I'd expect to work reasonably well, and I'm
wondering whether any stand out in anyone's experience as particularly
better or worse. The constants are two replicated Gluster storage VMs and
use of Gluster's built-in nfs, most likely with ucarp handling nfs failover.
Beyond that, the cifs options include:

1. Local Gluster client mounts on both storage systems, with cifs locally
providing exports for remote systems, sharing the nfs-ucarp failover

2. A third VM, using a Gluster client to mount the Gluster storage, with
cifs providing re-export to remote systems (see the sketch below)

3. A third VM, using an nfs client to mount the Gluster storage, with cifs
providing re-export to remote systems

As they stand, those last two introduce a single point of failure, but the
third VM could be paired with a fourth VM in a ucarp-failover relationship
(assuming ucarp failover works for cifs).
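
A concrete sketch of scenario (2), again with placeholder names and a
made-up virtual IP and password, would have the third VM doing something
like:

  # fuse-mount the volume with the native Gluster client
  mount -t glusterfs storage1:/myvol /mnt/myvol

  # export that mount over cifs, in smb.conf:
  [myvol]
      path = /mnt/myvol
      read only = no

  # and, if a fourth VM is added, float a virtual IP between the two
  # cifs gateways with ucarp:
  ucarp -i eth0 -s 10.0.0.21 -v 2 -p secret -a 10.0.0.200 \
        -u /etc/ucarp/vip-up.sh -d /etc/ucarp/vip-down.sh

Whether cifs sessions actually survive a flip of that address is exactly
the part I'm unsure of.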

The profusion of VMs isn't costly under our cloud provider's billing
structure; billing is based mostly on CPU cycles rather than on cores or the
number of VMs provisioned. From that POV an nfs client re-exported to cifs
may hold an advantage over a Glusterfs client - the kernel nfs client should
be more efficient than running Gluster's client in fuse. Also, probably
unfairly and based on experience with even older versions of Gluster, I
trust Gluster-as-server more than I do the Gluster client. So in scenario
(2) above the Gluster client would be running on a separate instance from
the storage: if it runs away and overloads that VM, the storage VMs
shouldn't be affected and can keep serving nfs even if cifs chokes.
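
On the re-export VM the difference between scenarios (2) and (3) comes down
to a single mount line - placeholder names again:

  # scenario (2): native fuse client
  mount -t glusterfs storage1:/myvol /mnt/myvol

  # scenario (3): kernel nfs client against Gluster's built-in nfs server
  mount -t nfs -o vers=3,proto=tcp,nolock storage1:/myvol /mnt/myvol

Everything above the mount point, including the Samba share, stays the same
either way.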

Scenarios (1) and (3) could be set up so that, in normal operation, an nfs
channel to one member of the storage pair serves the nfs clients, while a
second nfs channel to the other member is the basis for the cifs export
(roughly as sketched below).
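
That split would look something like this, with made-up addresses, where
10.0.0.100 is the ucarp-managed VIP the nfs clients mount:

  # nfs clients mount through the floating VIP
  mount -t nfs -o vers=3,proto=tcp,nolock 10.0.0.100:/myvol /mnt/myvol

  # the cifs gateway mounts the other storage VM directly and re-exports
  # that mount via Samba
  mount -t nfs -o vers=3,proto=tcp,nolock storage2:/myvol /mnt/myvol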

Maybe any of these would work reasonably well; maybe not. I don't have the
leisure to set up extensive A-B comparison testing right now - management
says I have to deploy this "yesterday." Any advice is most welcome.

Thanks,
Whit


