[Gluster-users] group write permissions not being respected

Pranith Kumar Karampuri pkarampu at redhat.com
Thu Sep 1 07:31:41 UTC 2016


hi Pat,
       I think the other thing we should look at is a tcpdump of what
uid/gid parameters are sent over the network when this command is
executed.
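A minimal capture on the server side would be something like the following
(the port, interface and file name are only examples; adjust them for your
setup):

# capture the NFS/RPC traffic on the server while the failing write is attempted
tcpdump -i any -s 0 -w /tmp/nfs-creds.pcap port 2049
# open the capture in wireshark (or tshark) and look at the AUTH_UNIX
# credentials on the RPC calls: UID, GID and the auxiliary GID list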

On Thu, Sep 1, 2016 at 7:14 AM, Pat Haley <phaley at mit.edu> wrote:

> ----------------------------------------------------------------------------------------
>
> hi Pat,
>       Are you seeing this issue only after the migration, or even before?
> Maybe we should look at the gid numbers on the disk and the ones that are
> coming from the client for the given user, to see whether they match.
>
> ----------------------------------------------------------------------------------------
> This issue was not seen before the migration.  We have copied the
> /etc/passwd and /etc/group files from the front-end machine (the client) to
> the data server, so they all match.
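> A quick way to confirm that (using a hypothetical user "jdoe" who belongs to
> the nsf_alpha group) is to compare the output of the following commands on
> the client and on the data server:
>
> id jdoe                  # uid, gid and supplementary group list should be identical on both machines
> getent group nsf_alpha   # gid 598 and the member list should also match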
> ----------------------------------------------------------------------------------------
>
> Could you give the stat output of the directory in question from both the
> brick and the nfs client?
>
> ----------------------------------------------------------------------------------------
> From the server for gluster:
> [root at mseas-data2 ~]# stat /gdata/projects/nsf_alpha
>   File: `/gdata/projects/nsf_alpha'
>   Size: 4096          Blocks: 8          IO Block: 131072 directory
> Device: 13h/19d    Inode: 13094773206281819436  Links: 13
> Access: (2775/drwxrwsr-x)  Uid: (    0/    root)   Gid: (  598/nsf_alpha)
> Access: 2016-08-31 19:08:59.735990904 -0400
> Modify: 2016-08-31 16:37:09.048997167 -0400
> Change: 2016-08-31 16:37:41.315997148 -0400
>
> From the server for first underlying brick
> [root at mseas-data2 ~]# stat /mnt/brick1/projects/nsf_alpha/
>   File: `/mnt/brick1/projects/nsf_alpha/'
>   Size: 4096          Blocks: 8          IO Block: 4096   directory
> Device: 800h/2048d    Inode: 185630      Links: 13
> Access: (2775/drwxrwsr-x)  Uid: (    0/    root)   Gid: (  598/nsf_alpha)
> Access: 2016-08-31 19:08:59.669990907 -0400
> Modify: 2016-08-31 16:37:09.048997167 -0400
> Change: 2016-08-31 16:37:41.315997148 -0400
>
> From the server for second underlying brick
> [root at mseas-data2 ~]# stat /mnt/brick2/projects/nsf_alpha/
>   File: `/mnt/brick2/projects/nsf_alpha/'
>   Size: 4096          Blocks: 8          IO Block: 4096   directory
> Device: 810h/2064d    Inode: 24085468    Links: 13
> Access: (2775/drwxrwsr-x)  Uid: (    0/    root)   Gid: (  598/nsf_alpha)
> Access: 2016-08-31 19:08:59.735990904 -0400
> Modify: 2016-08-03 14:01:52.000000000 -0400
> Change: 2016-08-31 16:37:41.315997148 -0400
>
> From the client
> [root at mseas FixOwn]# stat /gdata/projects/nsf_alpha
>   File: `/gdata/projects/nsf_alpha'
>   Size: 4096          Blocks: 8          IO Block: 1048576 directory
> Device: 23h/35d    Inode: 13094773206281819436  Links: 13
> Access: (2775/drwxrwsr-x)  Uid: (    0/    root)   Gid: (  598/nsf_alpha)
> Access: 2016-08-31 19:08:59.735990904 -0400
> Modify: 2016-08-31 16:37:09.048997167 -0400
> Change: 2016-08-31 16:37:41.315997148 -0400
>
> ----------------------------------------------------------------------------------------
>
> Could you also let us know which version of gluster you are using?
>
> ----------------------------------------------------------------------------------------
>
>
> [root at mseas-data2 ~]# gluster --version
> glusterfs 3.7.11 built on Apr 27 2016 14:09:22
>
> [root at mseas-data2 ~]# gluster volume info
>
> Volume Name: data-volume
> Type: Distribute
> Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: mseas-data2:/mnt/brick1
> Brick2: mseas-data2:/mnt/brick2
> Options Reconfigured:
> performance.readdir-ahead: on
> nfs.disable: on
> nfs.export-volumes: off
>
> [root at mseas-data2 ~]# gluster volume status
> Status of volume: data-volume
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick mseas-data2:/mnt/brick1               49154     0          Y       5005
> Brick mseas-data2:/mnt/brick2               49155     0          Y       5010
>
> Task Status of Volume data-volume
> ------------------------------------------------------------------------------
> Task                 : Rebalance
> ID                   : 892d9e3a-b38c-4971-b96a-8e4a496685ba
> Status               : completed
>
>
> [root at mseas-data2 ~]# gluster peer status
> Number of Peers: 0
>
>
> ----------------------------------------------------------------------------------------
>
> On Thu, Sep 1, 2016 at 2:46 AM, Pat Haley <phaley at mit.edu> wrote:
>
>>
>> Hi,
>>
>> Another piece of data.  There are 2 distinct volumes on the file server
>>
>>    1. a straight nfs partition
>>    2. a gluster volume (served over nfs)
>>
>> The straight nfs partition does respect the group write permissions,
>> while the gluster volume does not.  Any suggestions on how to debug this,
>> or what additional information would be helpful, would be greatly appreciated.
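>> For reference, a minimal reproduction of the failure from the client (with
>> a hypothetical group member "jdoe") is simply:
>>
>> sudo -u jdoe touch /gdata/projects/nsf_alpha/testfile   # fails on the gluster volume even though the group has write permission on the directory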
>>
>> Thanks
>>
>> On 08/30/2016 06:01 PM, Pat Haley wrote:
>>
>>
>> Hi
>>
>> We have just migrated our data to a new file server (more space; the old
>> server was showing its age). We have a volume for collaborative use, based
>> on group membership.  On our new server, the group write permissions are
>> not being respected (e.g. the owner of a directory can still write to that
>> directory, but any other member of the associated group cannot, even though
>> the directory clearly has group write permissions set).  This is occurring
>> regardless of how many groups the user is a member of (i.e. users that are
>> members of fewer than 16 groups are still affected).
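>> Since NFS AUTH_SYS credentials carry at most 16 supplementary groups, a
>> quick check of the group count for an affected user (hypothetical username
>> shown) is:
>>
>> id -nG jdoe | wc -w   # number of groups the user belongs to
>> id -nG jdoe           # confirm the collaborative group appears in the list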
>>
>> The relevant fstab line from the server looks like:
>> localhost:/data-volume /gdata    glusterfs       defaults 0 0
>>
>> and for a client:
>> mseas-data2:/gdata       /gdata      nfs     defaults        0 0
>>
>> Any help would be greatly appreciated.
>>
>> Thanks
>>
>>
>> --
>>
>> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
>> Pat Haley                          Email:  phaley at mit.edu
>> Center for Ocean Engineering       Phone:  (617) 253-6824
>> Dept. of Mechanical Engineering    Fax:    (617) 253-8125
>> MIT, Room 5-213                    http://web.mit.edu/phaley/www/
>> 77 Massachusetts Avenue
>> Cambridge, MA  02139-4301
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>
> --
> Pranith
>
> --
>
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> Pat Haley                          Email:  phaley at mit.edu
> Center for Ocean Engineering       Phone:  (617) 253-6824
> Dept. of Mechanical Engineering    Fax:    (617) 253-8125
> MIT, Room 5-213                    http://web.mit.edu/phaley/www/
> 77 Massachusetts Avenue
> Cambridge, MA  02139-4301
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 
Pranith

