[Gluster-users] CIFS options - anyone done A-B comparisons on various setups?

Gunnar gluster at akitogo.com
Tue Dec 4 22:06:45 UTC 2012


Hi Whit,

could you post your smb.conf? I'm currently a bit lost trying to find a
performance-optimized setup for millions of small files accessed via a
Samba share (backed by a local Gluster fuse mount). I would be glad to
try out your approach and compare the results, since NFS access from
Windows gives me much better throughput than the Samba share does.
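For what it's worth, this is the direction I have been experimenting in
so far - a rough sketch, with the share name, path and all values as
placeholders rather than measured optima:

    [gvol]
        path = /mnt/gvol
        read only = no
        # speeds up name lookups in large directories, but changes
        # Windows-style case-insensitive matching semantics
        case sensitive = yes
        # the Gluster fuse mount doesn't support kernel oplocks
        kernel oplocks = no
        level2 oplocks = yes
        # async I/O thresholds in bytes - pure guesses to tune
        aio read size = 16384
        aio write size = 16384
        use sendfile = yes
        socket options = TCP_NODELAY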

Thanks,

Gunnar

On 04.12.2012 18:12, Whit Blauvelt wrote:
> Hi,
>
> I'm about to set up Gluster 3.3.1 in a cloud environment. The clients will
> use NFS and CIFS as well as the native Gluster client. The Gluster docs
> suggest setting up CIFS as a share of a local Gluster client mount. My setup
> in another, cloudless environment (with Gluster 3.1.5) has been Gluster
> mounted on a separate system via NFS, and then that NFS mount shared via
> CIFS - which I don't see discussed in the Gluster docs but which has worked
> fine for us.
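If I read that right, the arrangement amounts to something like the
following (hostnames, volume name and mount point are placeholders;
Gluster's built-in NFS speaks v3 over TCP, hence the mount options):

    # on the re-export host: mount the Gluster volume via its NFS server
    mount -t nfs -o vers=3,proto=tcp storage-vip:/gvol /mnt/gvol

    # then share that mount point out via Samba, e.g. in smb.conf:
    [gvol]
        path = /mnt/gvol
        read only = no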
>
> There are several scenarios that I'd expect would all pretty much work.
> I'm wondering if any stand out in anyone's experience as particularly better
> or worse. The constants are two replicated Gluster storage VMs and the use
> of Gluster's built-in NFS, most likely with ucarp handling NFS failover.
> Beyond that, the CIFS options include:
>
> 1. Local Gluster client mounts on both storage systems, with CIFS exports
> served locally to remote systems, sharing the NFS ucarp failover
>
> 2. A third VM, using a Gluster client to mount the Gluster storage, with
> CIFS providing re-export to remote systems
>
> 3. A third VM, using an NFS client to mount the Gluster storage, with CIFS
> providing re-export to remote systems
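For scenario (2), the only difference on the third VM is the mount type;
something like this, with the same placeholder names as above:

    # the server named here is only used to fetch the volume config;
    # the fuse client then talks to both replicas directly, so the
    # mount itself fails over without ucarp
    mount -t glusterfs storage1:/gvol /mnt/gvol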
>
> Those last two, as-is, introduce a single point of failure, but the third
> VM could have a fourth VM in a ucarp failover relationship with it (assuming
> ucarp failover works for CIFS).
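In case it helps, a minimal ucarp invocation for such a pair looks
roughly like this (addresses, vhid and password are made up; a mirrored
instance with the same vhid and password runs on the peer):

    ucarp -i eth0 -s 10.0.0.11 -v 42 -p sekret -a 10.0.0.100 \
        --upscript=/etc/ucarp/vip-up.sh --downscript=/etc/ucarp/vip-down.sh

One caveat: SMB sessions are stateful, so even when the address moves
over, CIFS clients will have to reconnect; failover won't be transparent
the way it usually is for NFSv3.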
>
> The profusion of VMs isn't costly in our cloud provider's billing
> structure; billing is based mostly on CPU cycles rather than on cores or the
> number of VMs provisioned. From that point of view, an NFS client re-exported
> via CIFS may hold an advantage over a GlusterFS client - being built into the
> kernel is more efficient than running in FUSE. Also, perhaps unfairly, based
> on experience with much older versions of Gluster, I trust Gluster-as-server
> more than I do the Gluster client. So in scenario (2) above, the Gluster
> client runs on a separate instance from the storage: if it runs away and
> overloads that VM, the storage VMs shouldn't be affected and can keep
> serving NFS even if CIFS chokes.
>
> Scenarios (1) and (3) could be set up so that, in normal operation, an NFS
> channel to one of the storage pair serves NFS clients, while a second NFS
> channel to the other of the pair is the basis for the CIFS export.
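Concretely, that could be pinned with two ucarp VIPs, one normally held
by each storage node (addresses again made up):

    # /etc/fstab on the Samba host: vip2 backs the CIFS export,
    # while ordinary NFS clients use vip1 (10.0.0.100) directly
    10.0.0.101:/gvol  /mnt/gvol  nfs  vers=3,proto=tcp  0 0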
>
> Maybe any of these would work reasonably well. Maybe not. I don't have the
> leisure to set up extensive A-B comparison testing right now - management
> says I have to deploy this "yesterday." Any advice is most welcome.
>
> Thanks,
> Whit
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
