[Gluster-users] File (setuid) permission changes during volume heal - possible bug?
Chalcogen
chalcogen_eg_oxygen at yahoo.com
Mon Jan 27 20:37:14 UTC 2014
Hi,
I am working on a twin-replicated setup (server1 and server2) with
glusterfs 3.4.0. I perform the following steps:
1. Create a distributed volume 'testvol' with the XFS brick
server1:/brick/testvol on server1, and mount it using the glusterfs
native client at /testvol.
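(For reference, the setup was roughly along these lines; brick path, volume name
and mount point as above, everything else left at defaults:)
server1:~$ gluster volume create testvol server1:/brick/testvol
server1:~$ gluster volume start testvol
server1:~$ mount -t glusterfs server1:/testvol /testvol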
2. I copy the following file to /testvol:
server1:~$ ls -l /bin/su
-rwsr-xr-x 1 root root 84742 Jan 17 2014 /bin/su
server1:~$ cp -a /bin/su /testvol
3. If I list the file I just copied within /testvol, its attributes are
intact (the setuid bit is preserved).
4. Now, I add the XFS brick server2:/brick/testvol.
server2:~$ gluster volume add-brick testvol replica 2 server2:/brick/testvol
At this point, heal kicks in and the file is replicated on server 2.
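For completeness, heal progress can be checked with, e.g.:
server1:~$ gluster volume heal testvol info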
5. If I now list su in /testvol on either server, this is what I see:
server1:~$ ls -l /testvol/su
-rwsr-xr-x 1 root root 84742 Jan 17 2014 /testvol/su
server2:~$ ls -l /testvol/su
-rwxr-xr-x 1 root root 84742 Jan 17 2014 /testvol/su
That is, the setuid bit ('s') on the healed copy is changed to a plain execute
bit ('x'); in other words, not all file attributes are preserved when the heal
completes. Would you consider this a bug? Is the behavior different in a newer
release?
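For comparison, one could also check the mode bits directly on the backend
bricks (stat's %a prints the octal mode, %A the symbolic one):
server1:~$ stat -c '%a %A %n' /brick/testvol/su
server2:~$ stat -c '%a %A %n' /brick/testvol/su
If server1 reports 4755 while server2 reports 755, that would confirm the
setuid bit is actually dropped on the healed brick rather than just being
displayed differently by the client.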
Thanks a lot.
Anirban