[Gluster-devel] [Gluster-users] OS X porting merged

Andrew Hatfield andrew.hatfield at cynosureservices.com
Tue Apr 29 23:59:37 UTC 2014


Thanks Dan, very helpful indeed


On Mon, Apr 28, 2014 at 7:44 PM, Dan Mons <dmons at cuttingedge.com.au> wrote:

> On 28 April 2014 16:11, Andrew Hatfield
> <andrew.hatfield at cynosureservices.com> wrote:
> > Hey Dan,
> >
> > Did you ever test SMB with the streams xattr vfs object?
>
> Yes. :)
>
> > Did it work?  Was performance ok?
>
> It functions, but forces us to compromise.  It was the only way to get
> GlusterFS working via SMB on MacOSX.
>
> For anyone who hasn't discovered this: MacOSX's Finder looks for a
> "._filename" resource fork file for every "filename" data file it
> finds.  GlusterFS (and any clustered file system, to be fair) is quite
> poor at negative lookups (i.e.: when it can't find a file on a
> particular brick, it has to run around asking every brick in the
> cluster if they have the file), and so a folder with ~1000 files in it
> (quite common for a VFX studio, across dozens of shots inside dozens
> of projects) can take several minutes to populate when trying to view
> it in OSX's Finder.
>
> streams xattr means that the resource fork information can be held in
> the xattr component of the underlying file system (for us that's XFS
> on our CentOS6 bricks), and completely removes the need for the
> negative lookup, as that resource fork data lives with the file.  (I
> for one find the whole resource fork concept totally outdated and
> silly, but Apple seem keen to keep it around).
>
> Downsides:
>
> 1) SMB is slower than either NFS or FUSE+GlusterFS.  vfs_glusterfs
> helps, but still doesn't compete in real world usage.
>
> 2) MacOSX does not allow system-wide mounting of SMB shares, ala NFS3.
>  Our studio relies heavily on this as machines need network file
> systems mounted up so that multiple users can hit them at once.  We
> run a large number of machines in what's called a "render farm" which
> are batch-processing multiple jobs each in parallel. Our standard spec
> render node currently is a dual Xeon (8 cores per proc, 2 procs, per
> node, with HT = 32 threads / logical CPUs) and 128GB RAM.  These
> sometimes do one very large job, and sometimes many smaller jobs in
> parallel.  Jobs run as the UID of the person who submitted them, so
> NFS3 style network mounting is mandatory for this to work.  There is
> no such thing as "one user on one machine" in our world.
>
> MacOSX can't do this via SMB.  In 10.8 and 10.9, whichever user
> initiates the SMB connection "owns" the mount (regardless of the
> credentials they use at mount time), and any other UID is locked out
> of that share.
>
> Upsides:
>
> 1) At least it works at all, unlike NFS which Apple broke in Finder
> back in 10.5 and have refused to even acknowledge let alone fix.
> Honestly, how does an OS based on UNIX not do NFS?  10.9 now
> represents 5 complete production versions of their OS in a row with
> broken NFS.  Ludicrous.
>
> 2) AFP has the same UID-clobbering mount issues as SMB, but SMB is
> faster than AFP (thanks to streams xattr) and we can cluster SMB more
> easily.
> Long story short, SMB is a stop-gap.  Ideally either Apple fix NFS
> (I'm not holding my breath, as it's clear Apple's priority is selling
> iPhones and media, and not actually offering a usable operating
> system), or the Gluster community add the ever moving target of MacOSX
> support to their FUSE+GlusterFS client, which is now moving along
> nicely as per this thread.
>
> So my eternal thanks to everyone contributing to this OSX porting
> effort (and of course everyone who's ever contributed a line of code
> to GlusterFS in general).  You are all wonderful human beings.  :)
>
> -Dan
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://supercolony.gluster.org/pipermail/gluster-devel/attachments/20140430/e8e2631d/attachment-0003.html>


More information about the Gluster-devel mailing list