[Gluster-devel] xfs, fstab, glusterfs

At Work admin at matphot.com
Thu Jan 15 11:09:13 UTC 2009


Dear All,

I've made much progress on the mounting side of things - glusterfs
is now clustering my Debian sub-servers perfectly in my "head"
Leopard Xserve - but I was still unable to write to the mounted
filesystem without errors/refusals, and I only recently understood
why: my Xserve is unable to read/write to an XFS filesystem that is
mounted into the local userspace through MacFUSE.

I know that without MacFUSE on a Leopard server, glusterfs has
"server-only" capabilities. I have also read that it is possible to
re-export a glusterfs cluster through NFS, which Leopard is able to
understand - but I read that a (non-kernel) FUSE mount is necessary
to do this, yet if the filesystem is first mounted into the local
userspace, Leopard won't be able to read it! Catch-22...
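
If the kernel NFS server were used for the re-export, my
understanding is that the export line would also need an explicit
fsid, since FUSE filesystems have no stable device number - roughly
like this (the mount point below is only an example):

    # hypothetical /etc/exports entry re-exporting a FUSE-mounted cluster
    /mnt/cluster  192.168.1.0/24(rw,fsid=10,no_subtree_check)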

For the time being, the only solution I see is to redo my network
architecture and make one of the Debian sub-servers the NFS exporter
(instead of having the Debian servers clustered and mounted in the
Leopard head server) - but is it possible for a single Debian server
to be both a client and a server? That way one Debian server could
cluster and NFS-export two of the other Debian servers, while at the
same time being part of another cluster mounted on yet another
Debian server. Does this sound feasible?
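
As far as I can tell, nothing prevents one box from playing both
roles - running glusterfsd to export its own bricks while also
FUSE-mounting a cluster of the other servers - roughly like this
(the volfile paths and mount point are just my guesses):

    # export this server's bricks to the rest of the network
    glusterfsd -f /etc/glusterfs/server.vol

    # ...and, on the same machine, mount a cluster of other servers
    glusterfs -f /etc/glusterfs/client.vol /mnt/cluster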

Thanks, best,

Josef.

On Jan 7, 2009, at 10:32 , admin at matphot.com wrote:

> Dear Amar,
>
> Hello - did you manage to look into the directory-related problems  
> since?
>
> Thank you,
>
> Josef.
>
> On Jan 3, 2009, at 09:53 , Amar Tumballi (bulde) wrote:
>
>> hi 'At Work',
>>  I got a similar report from another user of glusterfs on the
>> macfuse mailing list too. I will look into these Mac
>> 'directory'-related issues on Monday, and will get back to you
>> after I investigate.
>>
>> Regards,
>> Amar
>>
>> 2009/1/3 At Work <admin at matphot.com>
>> What's more, I see that the proper permissions and UID are being
>> forwarded to the remote filesystem. As the user of the service
>> creating the files exists only on the "head" server, is it possible
>> that the remote server is refusing to mkdir and chown the
>> directories? That would be odd, as it would seem logical that the
>> mount-point server is the one that decides who gets to read or
>> write.
>>
>> What of the "glusterfs-fuse" error I get every two seconds? Is this  
>> in your domain, or should I be asking this of the FUSE developers?
>>
>> Thanks, best.
>>
>>> That's it exactly. As it stands, I have the glusterfs server (or
>>> rather its server.vol file) on the sub-servers setting up (and
>>> exporting?) the bricks, and OS X uses only the client.vol file to
>>> import and assemble the remote bricks into a cluster. Also, yes,
>>> the problems are as you say: I can read/write files, but I cannot
>>> create/upload/rename directories.
>>>
>>> Here is a copy of the server.vol files from two servers:
>>>
>>> matserve01:
>>>
>>> volume posix01a
>>>  type storage/posix
>>>  option directory /raid01a/clients
>>> end-volume
>>>
>>> volume raid01a
>>>   type features/locks
>>>   subvolumes posix01a
>>> end-volume
>>>
>>> volume posix01b
>>>  type storage/posix
>>>  option directory /raid01b/clients
>>> end-volume
>>>
>>> volume raid01b
>>>   type features/locks
>>>   subvolumes posix01b
>>> end-volume
>>>
>>>
>>> ### Add network serving capability to above exports.
>>> volume server
>>>  type protocol/server
>>>  option transport-type tcp
>>>  subvolumes raid01a raid01b
>>>  option auth.addr.raid01a.allow 192.168.1.*  # allow access to "raid01a"
>>>  option auth.addr.raid01b.allow 192.168.1.*  # allow access to "raid01b"
>>> end-volume
>>>
>>>
>>> matserve02:
>>>
>>> volume posix02a
>>>  type storage/posix
>>>  option directory /raid02a/clients
>>> end-volume
>>>
>>> volume raid02a
>>>   type features/locks
>>>   subvolumes posix02a
>>> end-volume
>>>
>>> volume posix02b
>>>  type storage/posix
>>>  option directory /raid02b/clients
>>> end-volume
>>>
>>> volume raid02b
>>>   type features/locks
>>>   subvolumes posix02b
>>> end-volume
>>>
>>> ### Add network serving capability to above exports.
>>> volume server
>>>  type protocol/server
>>>  option transport-type tcp
>>>  subvolumes raid02a raid02b
>>>  option auth.addr.raid02a.allow 192.168.1.*  # allow access to "raid02a"
>>>  option auth.addr.raid02b.allow 192.168.1.*  # allow access to "raid02b"
>>> end-volume
>>>
>>> ...and the client.vol file from the OS X server.
>>>
>>> ### Add client feature and attach to remote subvolume of server1
>>>
>>> # import RAID a's on matserve01 & matserve02
>>>
>>> volume rRaid01a
>>>  type protocol/client
>>>  option transport-type tcp/client
>>>  option remote-host 192.168.1.6       # IP address of the remote brick
>>>  option remote-subvolume raid01a      # name of the remote volume
>>> end-volume
>>>
>>> volume rRaid02a
>>>  type protocol/client
>>>  option transport-type tcp/client
>>>  option remote-host 192.168.1.7       # IP address of the remote brick
>>>  option remote-subvolume raid02a      # name of the remote volume
>>> end-volume
>>>
>>> ## add c, d, e, etc sections as bays expand for each server
>>> ###################
>>>
>>> ### Add client feature and attach to remote subvolume of server2
>>>
>>> # combine raid a's
>>> volume cluster0102a
>>>  type cluster/afr
>>>  subvolumes rRaid01a rRaid02a
>>> end-volume
>>>
>>> ## add c, d, e, etc sections as bays expand for each server
>>> ###################
>>>
>>>
>>> ...you may notice that, for the time being, I am assembling only
>>> one cluster (a), for testing purposes.
>>>
>>> Does all this seem correct to you?
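>>>
>>> For reference, this is roughly how I mount that client volfile on
>>> the Xserve (the exact volfile path is simply where I keep mine):
>>>
>>>    glusterfs -f /usr/local/etc/glusterfs/client.vol /Volumes/raid0102a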
>>>
>>>
>>> On Jan 2, 2009, at 14:17 , Krishna Srinivas wrote:
>>>
>>>> Schomburg,
>>>>
>>>> You have 4 servers and one client. Each server has to export 2
>>>> directories, /raid01a and /raid01b (FUSE does not play any role on
>>>> the servers). On the client machine, glusterfs mounts using the
>>>> client vol file, combining all the exported directories. That
>>>> would be the typical setup in your case. How is your setup? Can
>>>> you mail the client vol file? According to your mail, creation of
>>>> directories fails, but creation/read/write of files is fine.
>>>> Right?
>>>>
>>>> Krishna
>>>>
>>>>> On Fri, Jan 2, 2009 at 5:01 PM, Jake Maul <jakemaul at gmail.com> wrote:
>>>>> On Fri, Jan 2, 2009 at 3:55 AM, At Work <admin at matphot.com> wrote:
>>>>>>> Thank you for your rapid reply. Just one question: by "leave
>>>>>>> your fstab mount alone" do you mean leave it to mount the XFS
>>>>>>> disk on startup?
>>>>>
>>>>> Yes. Mount your XFS partition via fstab as you normally would.
>>>>>
>>>>> As for the rest.... dunno what to tell ya. Maybe one of the
>>>>> glusterfs devs can chime in with some ideas.
>>>>>
>>>>> Good luck,
>>>>> Jake
>>>>>
>>>>>> This problem is odd to say the least - when I do a 'mount'
>>>>>> after activating the glusterfs client and cluster on Leopard, I
>>>>>> get the following:
>>>>>>
>>>>>>       glusterfs on /Volumes/raid0102a (fusefs, local, synchronous)
>>>>>>
>>>>>> ...and on the Debian host server I get:
>>>>>>
>>>>>>       fusectl on /sys/fs/fuse/connections type fusectl (rw)
>>>>>>       /dev/sdb1 on /raid01a type xfs (rw)   # raid block a
>>>>>>       /dev/sdc1 on /raid01b type xfs (rw)   # raid block b
>>>>>>
>>>>>> (The fusectl entry seems to be a fuse connection - should
>>>>>> fuse-accessible mounts go here?)
>>>>>>
>>>>>> ...and in the glusterfs log I get:
>>>>>>
>>>>>> 2009-01-02 11:06:42 E [fuse-bridge.c:279:fuse_loc_fill] fuse-bridge: failed to search parent for 576 ((null))
>>>>>> 2009-01-02 11:06:42 E [fuse-bridge.c:703:do_chmod] glusterfs-fuse: 2: CHMOD 576 ((null)) (fuse_loc_fill() failed)
>>>>>> 2009-01-02 11:06:42 E [fuse-bridge.c:279:fuse_loc_fill] fuse-bridge: failed to search parent for 576 ((null))
>>>>>> 2009-01-02 11:06:42 E [fuse-bridge.c:581:fuse_getattr] glusterfs-fuse: 1: GETATTR 576 (fuse_loc_fill() failed)
>>>>>> 2009-01-02 11:08:16 E [fuse-bridge.c:279:fuse_loc_fill] fuse-bridge: failed to search parent for 578 ((null))
>>>>>> 2009-01-02 11:08:16 E [fuse-bridge.c:2193:fuse_getxattr] glusterfs-fuse: 2: GETXATTR (null)/578 (com.apple.FinderInfo) (fuse_loc_fill() failed)
>>>>>> 2009-01-02 11:08:16 E [fuse-bridge.c:279:fuse_loc_fill] fuse-bridge: failed to search parent for 578 ((null))
>>>>>> 2009-01-02 11:08:16 E [fuse-bridge.c:2193:fuse_getxattr] glusterfs-fuse: 2: GETXATTR (null)/578 (com.apple.FinderInfo) (fuse_loc_fill() failed)
>>>>>> 2009-01-02 11:08:17 E [fuse-bridge.c:279:fuse_loc_fill] fuse-bridge: failed to search parent for 578 ((null))
>>>>>> 2009-01-02 11:08:17 E [fuse-bridge.c:2193:fuse_getxattr] glusterfs-fuse: 0: GETXATTR (null)/578 (com.apple.FinderInfo) (fuse_loc_fill() failed)
>>>>>> 2009-01-02 11:09:58 E [fuse-bridge.c:279:fuse_loc_fill] fuse-bridge: failed to search parent for 578 ((null))
>>>>>> 2009-01-02 11:09:58 E [fuse-bridge.c:581:fuse_getattr] glusterfs-fuse: 1: GETATTR 578 (fuse_loc_fill() failed)
>>>>>>
>>>>>> ...and the last two lines are repeated every few minutes.
>>>>>>
>>>>>> Am I correct in understanding that I have no need for FUSE on
>>>>>> the Debian servers? There seems to be a bridge failure of some
>>>>>> sort going on here.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Jan 2, 2009, at 08:34 , Jake Maul wrote:
>>>>>>
>>>>>>> On the brick server (the content server... the one with the
>>>>>>> XFS-formatted volume), FUSE is actually not used or even needed
>>>>>>> as far as I can tell. Leave your fstab mount alone, and treat
>>>>>>> GlusterFS as a pure replacement for NFS's /etc/exports.
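>>>>>>>
>>>>>>> In other words, something like this on each Debian brick server
>>>>>>> (the device names are from your mail; the mount points and the
>>>>>>> volfile path are just examples):
>>>>>>>
>>>>>>>    # /etc/fstab - plain XFS mounts, no FUSE involved
>>>>>>>    /dev/sdb1  /raid01a  xfs  defaults  0  2
>>>>>>>    /dev/sdc1  /raid01b  xfs  defaults  0  2
>>>>>>>
>>>>>>>    # then serve them with glusterfsd instead of /etc/exports
>>>>>>>    glusterfsd -f /etc/glusterfs/server.vol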
>>>>>>>
>>>>>>> FUSE only comes into play on the client side, where it's no
>>>>>>> longer relevant what the underlying filesystem is. If I'm
>>>>>>> reading you right, your XServe is the client in this scenario.
>>>>>>> Perhaps Mac OS X's FUSE implementation is strange somehow; I'm
>>>>>>> not familiar with it. Otherwise, it sounds to me like you're
>>>>>>> doing it right. Sounds like either a permissions problem or a
>>>>>>> bug somewhere (first guesses would be Mac OS X's FUSE, or the
>>>>>>> GlusterFS client on OS X).
>>>>>>>
>>>>>>> On Thu, Jan 1, 2009 at 11:55 PM, admin at matphot.com
>>>>>>> <admin at matphot.com> wrote:
>>>>>>>>
>>>>>>>> Dear All,
>>>>>>>>
>>>>>>>> I'm afraid I'm a bit new to this. I hope I'm not missing the
>>>>>>>> obvious, but in all the documentation I can't seem to find a
>>>>>>>> clear answer to my problem.
>>>>>>>>
>>>>>>>> I have a head server (a Leopard Xserve) that will be used as a
>>>>>>>> mount point for four sub-servers (Debian Etch) that each have
>>>>>>>> two SATA RAID 5 blocks running an XFS filesystem.
>>>>>>>>
>>>>>>>> Before I switched to glusterfs, I would do an NFS export
>>>>>>>> (/etc/exports) of the XFS filesystem mounted in /etc/fstab. I
>>>>>>>> have since cancelled (commented out) the NFS export, but I am
>>>>>>>> not quite sure what to do about the fstab: Should I mount the
>>>>>>>> drives using this file, then export the filesystem using
>>>>>>>> glusterfs? Or should it be glusterfs doing the mounting? What
>>>>>>>> role does FUSE have in the mount operation?
>>>>>>>>
>>>>>>>> The RAID drives are at /dev/sdb and /dev/sdc, and their
>>>>>>>> filesystems are accessible at /dev/sdb1 and /dev/sdc1 - should
>>>>>>>> I be mounting these with glusterfs (instead of mounting them
>>>>>>>> to a folder in the server root, as I am doing presently)?
>>>>>>>>
>>>>>>>> With my present configuration, all works correctly if I mount
>>>>>>>> the raid drives individually, yet when I mirror two drives
>>>>>>>> across two servers using AFR, things get wonky - I can upload
>>>>>>>> files to a folder (and see that they have indeed been
>>>>>>>> replicated to both drives), yet I am unable to create a new
>>>>>>>> folder (it becomes an inaccessible icon).
>>>>>>>>
>>>>>>>> Thank you for any advice.
>>>>>>>>
>>>>>>>> Best,
>>>>>>>>
>>>>>>>> J.M. Schomburg.
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>
>>
>>
>> -- 
>> Amar Tumballi
>> Gluster/GlusterFS Hacker
>> [bulde on #gluster/irc.gnu.org]
>> http://www.zresearch.com - Commoditizing Super Storage!
>
