[Bugs] [Bug 1797099] New: After upgrade from gluster 7.0 to 7.2 posix-acl.c:262:posix_acl_log_permit_denied

bugzilla at redhat.com bugzilla at redhat.com
Fri Jan 31 21:48:16 UTC 2020


https://bugzilla.redhat.com/show_bug.cgi?id=1797099

            Bug ID: 1797099
           Summary: After upgrade from gluster 7.0 to 7.2
                    posix-acl.c:262:posix_acl_log_permit_denied
           Product: GlusterFS
           Version: 7
          Hardware: x86_64
                OS: Linux
            Status: NEW
         Component: posix-acl
          Severity: medium
          Assignee: bugs at gluster.org
          Reporter: hunter86_bg at yahoo.com
                CC: bugs at gluster.org
  Target Milestone: ---
    Classification: Community



Created attachment 1656814
  --> https://bugzilla.redhat.com/attachment.cgi?id=1656814&action=edit
Trace Logs from gluster2 (choose local on)

Description of problem:
After upgrading from oVirt 4.3.8 to 4.3.9 RC1 (Gluster 7.0 to 7.2), the ACL
check is denying access to some shards.


[2020-01-31 21:14:28.967838] I [MSGID: 139001]
[posix-acl.c:262:posix_acl_log_permit_denied] 0-data_fast-access-control:
client:
CTX_ID:3b25391c-1eb3-424d-a1e8-1a2c08ffb556-GRAPH_ID:0-PID:22075-HOST:ovirt2.localdomain-PC_NAME:data_fast-client-1-RECON_NO:-1,
gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
req(uid:107,gid:107,perm:1,ngrps:4),
ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
[Permission denied]
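
For reference, the gfid from the denied request can be looked up directly on a
brick to check the on-disk owner, mode and ACLs. A rough sketch (the brick path
is taken from the volume info below; the .glusterfs layout is standard
GlusterFS and not something stated in this report):

  BRICK=/gluster_bricks/data_fast/data_fast
  GFID=be318638-e8a0-4c6d-977d-7a937aa84806
  # backend path is <brick>/.glusterfs/<first 2 hex chars>/<next 2>/<gfid>
  stat -L "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
  getfacl "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"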


Version-Release number of selected component (if applicable):
glusterfs-7.2-1.el7.x86_64
glusterfs-coreutils-0.2.0-1.el7.x86_64
glusterfs-devel-7.2-1.el7.x86_64
python2-gluster-7.2-1.el7.x86_64
glusterfs-libs-7.2-1.el7.x86_64
glusterfs-fuse-7.2-1.el7.x86_64

How reproducible:
Always. The cluster cannot be used at all.

Steps to Reproduce:
1. Upgrade the Engine & reboot
2. Upgrade one of the hosts (per-host commands sketched below)
3. Upgrade another host
4. Upgrade the last host
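
Steps 2-4 are the per-host upgrades; roughly, each of them amounts to something
like the following (a hedged sketch of a standard rolling GlusterFS package
upgrade on a yum-based host; the actual oVirt flow goes through the engine's
host maintenance/upgrade UI and its exact commands are not part of this
report):

  systemctl stop glusterd
  killall glusterfs glusterfsd        # stop remaining gluster client/brick processes
  yum update 'glusterfs*'             # pull the 7.2 packages
  systemctl start glusterd
  gluster volume heal data_fast info  # wait for pending heals to drain before the next host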

Actual results:
The replica volume is not accessible: ACL is denying access, even though no
ACL option is set on the mount (the only mount option used is
'backup-volfile-servers=gluster2:ovirt3').
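
To double-check that the client was indeed not mounted with POSIX ACL support,
the running FUSE client and the kernel mount table can be inspected (a generic
sketch, not commands from the original report; an '-o acl' mount option would
normally show up as an --acl argument on the glusterfs client process):

  ps ax | grep '[g]lusterfs.*data_fast'   # look for an --acl flag among the client's arguments
  grep data_fast /proc/mounts             # kernel view of the fuse.glusterfs mount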

[root@ovirt2 bricks]# gluster volume info data_fast

Volume Name: data_fast
Type: Replicate
Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast/data_fast
Brick2: gluster2:/gluster_bricks/data_fast/data_fast
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
Options Reconfigured:
diagnostics.client-log-level: TRACE
diagnostics.brick-log-level: TRACE
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: on
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: on
client.event-threads: 4
server.event-threads: 4
storage.owner-uid: 36
storage.owner-gid: 36
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable
cluster.enable-shared-storage: enable


Expected results:
The ACL check should not prevent qemu from accessing the volume.

Additional info:
1. The cluster was completely powered off and then on -> no result
2. All affected volumes were stopped and then started again -> no result
3. Ran a dummy ACL change to reset the ACL cache (see also the sketch after this list):
   find /rhev/data-center/mnt/glusterSD/ -exec setfacl -m u:root:rwx {} \; -> no result
4. Ran a recursive chown: chown -R 36:36 /rhev/data-center/mnt/glusterSD/ -> no result
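
For completeness, a generic way to inspect and strip extended ACLs on the mount
point used above (a hedged sketch, not commands from the original report):

  getfacl -R -s /rhev/data-center/mnt/glusterSD/   # -s lists only files that carry extended ACL entries
  setfacl -R -b /rhev/data-center/mnt/glusterSD/   # -b removes all extended ACL entries, keeping the base mode bits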

Note: The same issue occurred when upgrading from 6.5 to 6.6, which is what
prompted the upgrade to 7.0 in the first place.
