[Gluster-users] sticky bit?
Matthew Wilkins
daibutsu at gmail.com
Wed Apr 8 03:07:02 UTC 2009
Oh, I also saw this in my log on mu-rhdev2:
2009-04-08 15:00:20 W [dht-common.c:231:dht_revalidate_cbk] nufa: linkfile found in revalidate for /foo2
2009-04-08 15:00:20 W [fuse-bridge.c:301:need_fresh_lookup] fuse-bridge: revalidate of /foo2 failed (Stale NFS file handle)
2009-04-08 15:04:15 W [dht-common.c:215:dht_revalidate_cbk] nufa: mismatching filetypes 0100000 v/s 040000 for /foo1
2009-04-08 15:04:15 W [dht-common.c:215:dht_revalidate_cbk] nufa: mismatching filetypes 0100000 v/s 040000 for /foo1
2009-04-08 15:04:15 W [fuse-bridge.c:301:need_fresh_lookup] fuse-bridge: revalidate of /foo1 failed (Invalid argument)
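
If I am decoding those mode values right, 0100000 is the regular-file
type (S_IFREG) and 040000 is the directory type (S_IFDIR), so the two
subvolumes seem to disagree about what /foo1 is. My guess at a quick
way to compare the backend entries directly (run on each server
against the brick, not the mount):

[root at mu-rhdev1 glusterfs]# stat -c '%f %F %n' /export/brick0/foo1
[root at mu-rhdev2 mnt]# stat -c '%f %F %n' /export/brick0/foo1

(%f prints the raw mode bits in hex and %F the human-readable file
type, so a file vs directory mismatch should stand out.)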
And my version of fuse is:
fuse-2.7.4-1.el5.rf
thx
matt
On Wed, Apr 8, 2009 at 3:02 PM, Matthew Wilkins <daibutsu at gmail.com> wrote:
> Hi there,
>
> I am doing some testing of 2.0.0rc7 on two RHEL machines. I have a
> nufa setup; my config is below.
> Files that I create have the sticky bit set on them. Why is that?
> In detail:
>
> On mu-rhdev1 I create the file /mnt/foo1 (where gluster is mounted
> on /mnt) and then do an ls:
>
> [root at mu-rhdev1 glusterfs]# vi /mnt/foo1
> [root at mu-rhdev1 glusterfs]# ls -l /mnt/ /export/brick0/
> /export/brick0/:
> total 4
> -rw-r--r-T 1 root root 3 Apr 8 14:55 foo1
>
> /mnt/:
> total 4
> -rw-r--r-T 1 root root 3 Apr 8 14:55 foo1
>
> And on mu-rhdev2 I see:
>
> [root at mu-rhdev2 mnt]# ls -l /mnt/ /export/brick0/
> /export/brick0/:
> total 4
> ---------T 1 root root 0 Apr 8 14:55 foo1
>
> /mnt/:
> total 4
> -rw-r--r-T 1 root root 3 Apr 8 14:55 foo1
>
> So what are those sticky bits doing there? And why is foo1 showing
> up in mu-rhdev2:/export/brick0 at all? Is this some kind of
> namespace entry?
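>
> (If that zero-length mode-1000 entry on mu-rhdev2's brick is some
> kind of pointer/link file, I would guess it carries an extended
> attribute on the backend. Purely a guess on my part, but something
> like this, run against the brick directory rather than the mount,
> should dump whatever trusted.* attributes glusterfs has set on it:
>
> [root at mu-rhdev1 glusterfs]# getfattr -d -m . -e hex /export/brick0/foo1
> [root at mu-rhdev2 mnt]# getfattr -d -m . -e hex /export/brick0/foo1
>
> Comparing the two bricks should show whether the zero-size copy is
> just a placeholder pointing at the real file.)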
>
> Now I create a file on mu-rhdev2:
>
> [root at mu-rhdev2 mnt]# vi /mnt/foo2
> [root at mu-rhdev2 mnt]# ls -l /mnt/ /export/brick0/
> /export/brick0/:
> total 8
> ---------T 1 root root 0 Apr 8 14:55 foo1
> -rw-r--r-- 1 root root 5 Apr 8 14:55 foo2
>
> /mnt/:
> total 8
> -rw-r--r-T 1 root root 3 Apr 8 14:55 foo1
> -rw-r--r-- 1 root root 5 Apr 8 14:55 foo2
>
> No sticky bit on foo2! And on mu-rhdev1 it looks like:
>
> [root at mu-rhdev1 glusterfs]# ls -l /mnt/ /export/brick0/
> /export/brick0/:
> total 4
> -rw-r--r-T 1 root root 3 Apr 8 14:55 foo1
>
> /mnt/:
> total 8
> -rw-r--r-T 1 root root 3 Apr 8 14:55 foo1
> -rw-r--r-- 1 root root 5 Apr 8 14:55 foo2
>
> foo2 doesn't show up in mu-rhdev1:/export/brick0 at all, not even as
> a zero-size entry. Perhaps it will over time, or if I stat it from
> mu-rhdev1?
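>
> Something like this from mu-rhdev1 is what I have in mind, just to
> force a fresh lookup through the mount and then re-check the backend
> (again only a guess at what would trigger it):
>
> [root at mu-rhdev1 glusterfs]# stat /mnt/foo2
> [root at mu-rhdev1 glusterfs]# ls -l /export/brick0/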
>
> Thanks for any help in clarifying what is happening here. Here is
> my config:
>
> volume posix
> type storage/posix
> option directory /export/brick0
> end-volume
>
> volume locks
> type features/locks
> subvolumes posix
> end-volume
>
> volume brick
> type performance/io-threads
> subvolumes locks
> end-volume
>
> volume server
> type protocol/server
> option transport-type tcp
> option auth.addr.brick.allow *
> subvolumes brick
> end-volume
>
> volume mu-rhdev1
> type protocol/client
> option transport-type tcp
> option remote-host mu-rhdev1
> option remote-subvolume brick
> end-volume
>
> volume mu-rhdev2
> type protocol/client
> option transport-type tcp
> option remote-host mu-rhdev2
> option remote-subvolume brick
> end-volume
>
> volume nufa
> type cluster/nufa
> option local-volume-name `hostname`
> subvolumes mu-rhdev1 mu-rhdev2
> end-volume
>
> volume writebehind
> type performance/write-behind
> option cache-size 1MB
> subvolumes nufa
> end-volume
>
> # before or after writebehind? (alternative ordering sketched after the config)
> volume ra
> type performance/read-ahead
> subvolumes writebehind
> end-volume
>
> volume cache
> type performance/io-cache
> option cache-size 512MB
> subvolumes ra
> end-volume
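>
> To make that "# before or after writebehind?" question concrete, the
> other ordering I was wondering about would simply swap those two
> layers, i.e. read-ahead loaded below write-behind. This is only a
> sketch of the alternative, not something I have tested:
>
> volume ra
> type performance/read-ahead
> subvolumes nufa
> end-volume
>
> volume writebehind
> type performance/write-behind
> option cache-size 1MB
> subvolumes ra
> end-volume
>
> volume cache
> type performance/io-cache
> option cache-size 512MB
> subvolumes writebehind
> end-volume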
>
>
>
>
> matt
>