[Gluster-users] sticky bit?

Raghavendra G raghavendra at zresearch.com
Tue Apr 14 09:13:56 UTC 2009


Hi Matthew,

When the node to which the file hashes does not have enough free space, the
file is created on another node that does have enough free space, and a
zero-byte file with its sticky bit set is created on the hashed node. This
zero-byte, sticky-bit file is a special file that dht identifies as a
"linkfile". The linkfile carries enough information in its extended
attributes to identify the node on which the file is actually stored.
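
As a rough illustration, you can dump the linkfile's extended attributes
directly on the backend export to see which subvolume holds the real data.
This assumes the attribute is named trusted.glusterfs.dht.linkto in this
release; the exact name may differ:

  # run as root on the hashed node, against the brick, not the mount point
  # (attribute name below is an assumption for 2.0.0rc7)
  getfattr -d -m trusted.glusterfs.dht -e text /export/brick0/foo1
  # expected output is something like:
  # trusted.glusterfs.dht.linkto="<name of the subvolume holding the data>"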



On Thu, Apr 9, 2009 at 2:58 AM, Matthew Wilkins <daibutsu at gmail.com> wrote:

> hi there,
>
> i am doing some testing of 2.0.0rc7 on two RHEL machines.  i have a
> nufa setup, my config is below.
> files that i create have the sticky bit set on them, why is that?  in
> detail:
>
> on mu-rhdev1 i create the file /mnt/foo1 (where gluster is mounted on
> /mnt), i do an ls
>
> [root at mu-rhdev1 glusterfs]# vi /mnt/foo1
> [root at mu-rhdev1 glusterfs]# ls -l /mnt/ /export/brick0/
> /export/brick0/:
> total 4
> -rw-r--r-T 1 root root 3 Apr  8 14:55 foo1
>

The sticky bit here identifies mu-rhdev1:/export/brick0/foo1 as a "linkfile"
pointing to the file mu-rhdev2:/export/brick0/foo1.


>
> /mnt/:
> total 4
> -rw-r--r-T 1 root root 3 Apr  8 14:55 foo1
>
> and on mu-rhdev2 i see:
>
> [root at mu-rhdev2 mnt]# ls -l /mnt/ /export/brick0/
> /export/brick0/:
> total 4
> ---------T 1 root root 0 Apr  8 14:55 foo1


>
> /mnt/:
> total 4
> -rw-r--r-T 1 root root 3 Apr  8 14:55 foo1
>
> so what are those sticky bits doing there?  also why is foo1 showing
> up in mu-rhdev2:/export/brick0?  is this a namespace?


 See the explanation above.


>
>
> now i create a file on mu-rhdev2:
>
> [root at mu-rhdev2 mnt]# vi /mnt/foo2
> [root at mu-rhdev2 mnt]# ls -l /mnt/ /export/brick0/
> /export/brick0/:
> total 8
> ---------T 1 root root 0 Apr  8 14:55 foo1
> -rw-r--r-- 1 root root 5 Apr  8 14:55 foo2
>
> /mnt/:
> total 8
> -rw-r--r-T 1 root root 3 Apr  8 14:55 foo1
> -rw-r--r-- 1 root root 5 Apr  8 14:55 foo2
>
> no sticky bits on foo2!  and on mu-rhdev1 it looks like:
>
> [root at mu-rhdev1 glusterfs]# ls -l /mnt/ /export/brick0/
> /export/brick0/:
> total 4
> -rw-r--r-T 1 root root 3 Apr  8 14:55 foo1
>
> /mnt/:
> total 8
> -rw-r--r-T 1 root root 3 Apr  8 14:55 foo1
> -rw-r--r-- 1 root root 5 Apr  8 14:55 foo2
>
> foo2 doesn't show up as zero size in /export/brick0, perhaps it will
> over time or if i stat it from mu-rhdev1?


Here foo2 hashes to mu-rhdev2, and since mu-rhdev2 has enough disk space,
foo2 is stored there directly (no linkfile is created).
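
As a sketch, one way to confirm that a backend file is the real copy rather
than a linkfile is to look at its size and mode and check that it carries no
dht linkto attribute (again assuming the trusted.glusterfs.dht.* attribute
naming used above):

  stat -c '%A %s %n' /export/brick0/foo2
  getfattr -d -m trusted.glusterfs.dht -e text /export/brick0/foo2
  # a real data file has a non-zero size, normal permissions, and no
  # dht.linkto attribute in the getfattr output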


>
>
> thanks for any help in clarifying what is happening here.  here is my
> config:
>
> volume posix
>  type storage/posix
>  option directory /export/brick0
> end-volume
>
> volume locks
>  type features/locks
>  subvolumes posix
> end-volume
>
> volume brick
>  type performance/io-threads
>  subvolumes locks
> end-volume
>
> volume server
>  type protocol/server
>  option transport-type tcp
>  option auth.addr.brick.allow *
>  subvolumes brick
> end-volume
>
> volume mu-rhdev1
>  type protocol/client
>  option transport-type tcp
>  option remote-host mu-rhdev1
>  option remote-subvolume brick
> end-volume
>
> volume mu-rhdev2
>  type protocol/client
>  option transport-type tcp
>  option remote-host mu-rhdev2
>  option remote-subvolume brick
> end-volume
>
> volume nufa
>  type cluster/nufa
>  option local-volume-name `hostname`
>  subvolumes mu-rhdev1 mu-rhdev2
> end-volume
>
> volume writebehind
>  type performance/write-behind
>  option cache-size 1MB
>  subvolumes nufa
> end-volume
>
> # before or after writebehind?
> volume ra
>  type performance/read-ahead
>  subvolumes writebehind
> end-volume
>
> volume cache
>  type performance/io-cache
>  option cache-size 512MB
>  subvolumes ra
> end-volume
>
>
>
> matt
>



-- 
Raghavendra G

