[Bugs] [Bug 1797099] After upgrade from gluster 7.0 to 7.2 posix-acl.c:262:posix_acl_log_permit_denied

bugzilla at redhat.com bugzilla at redhat.com
Fri Feb 21 05:43:50 UTC 2020


https://bugzilla.redhat.com/show_bug.cgi?id=1797099

Strahil Nikolov <hunter86_bg at yahoo.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
              Flags|                            |needinfo?(ravishankar at redhat.com)



--- Comment #3 from Strahil Nikolov <hunter86_bg at yahoo.com> ---
Hey Ravi,

Currently I cannot afford to lose the lab.
I will update the ticket once I have the ability to upgrade to v7.3 (at least
one month from now).

Would you recommend enabling the trace logs during the upgrade?
Any other suggestions for the upgrade process?
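
If it helps, I assume raising the log level would be done roughly like this
(just a sketch, I have not tried it here yet; <VOLNAME> is a placeholder):

gluster volume set <VOLNAME> diagnostics.client-log-level TRACE
gluster volume set <VOLNAME> diagnostics.brick-log-level TRACE

and then back to INFO on both options once the upgrade is over.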

My setup started as an oVirt lab (4.2.7) 14 months ago with gluster v3. Due to a
bug in gluster 5.5/5.6 I upgraded to 6.x.
Later, after issues in v6.5, I managed to resolve the ACL issue by
upgrading to 7.0.

My data_fast* volumes were affected and the shards of each file were not
accessible.
The strange thing is that the engine and data volumes were not affected by
the 6.5 issue that forced me to v7.0, and those volumes were also not affected
by this one.

The only difference is that data_fast consists of 2 NVMe bricks instead of
regular SSDs (engine) and spinning disks (data).

[root at ovirt1 ~]# gluster volume info all

Volume Name: data
Type: Replicate
Volume ID: ff1b73d2-de13-4b5f-af55-bedda66e8180
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data/data
Brick2: gluster2:/gluster_bricks/data/data
Brick3: ovirt3:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
cluster.choose-local: off
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: on
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
server.event-threads: 4
client.event-threads: 4
cluster.enable-shared-storage: enable

Volume Name: data_fast
Type: Replicate
Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast/data_fast
Brick2: gluster2:/gluster_bricks/data_fast/data_fast
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
Options Reconfigured:
storage.fips-mode-rchecksum: on
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: on
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: on
client.event-threads: 4
server.event-threads: 4
storage.owner-uid: 36
storage.owner-gid: 36
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable
cluster.enable-shared-storage: enable

Volume Name: data_fast2
Type: Replicate
Volume ID: 58a41eab-29a1-4b4d-904f-837eb3d7597e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast2/data_fast2
Brick2: gluster2:/gluster_bricks/data_fast2/data_fast2
Brick3: ovirt3:/gluster_bricks/data_fast2/data_fast2 (arbiter)
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: on
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: on
client.event-threads: 4
server.event-threads: 4
storage.owner-uid: 36
storage.owner-gid: 36
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable
cluster.enable-shared-storage: enable

Volume Name: data_fast3
Type: Replicate
Volume ID: 2bef6141-fc50-41fe-8db4-edcddf925f2a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast3/data_fast3
Brick2: gluster2:/gluster_bricks/data_fast3/data_fast3
Brick3: ovirt3:/gluster_bricks/data_fast3/data_fast3 (arbiter)
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: on
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: on
client.event-threads: 4
server.event-threads: 4
storage.owner-uid: 36
storage.owner-gid: 36
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable
cluster.enable-shared-storage: enable

Volume Name: data_fast4
Type: Replicate
Volume ID: 6b98de22-1f3c-4e40-a73d-90d425df986f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast4/data_fast4
Brick2: gluster2:/gluster_bricks/data_fast4/data_fast4
Brick3: ovirt3:/gluster_bricks/data_fast4/data_fast4 (arbiter)
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: on
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: on
client.event-threads: 4
server.event-threads: 4
storage.owner-uid: 36
storage.owner-gid: 36
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable
cluster.enable-shared-storage: enable

Volume Name: engine
Type: Replicate
Volume ID: 30ca1cc2-f2f7-4749-9e2e-cee9d7099ded
Status: Started
Snapshot Count: 2
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/run/gluster/snaps/e3e22cbf22c349df95f8421591fead04/brick1/engine
Brick2: gluster2:/run/gluster/snaps/e3e22cbf22c349df95f8421591fead04/brick2/engine
Brick3: ovirt3:/run/gluster/snaps/e3e22cbf22c349df95f8421591fead04/brick3/engine (arbiter)
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
client.event-threads: 4
server.event-threads: 4
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: on
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
cluster.choose-local: on
features.quota: off
features.inode-quota: off
features.quota-deem-statfs: off
features.barrier: disable
cluster.enable-shared-storage: enable

Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: a95052ae-d641-4834-bbc5-6f87898c369b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster2:/var/lib/glusterd/ss_brick
Brick2: ovirt3:/var/lib/glusterd/ss_brick
Brick3: gluster1:/var/lib/glusterd/ss_brick
Options Reconfigured:
cluster.granular-entry-heal: enable
client.event-threads: 4
server.event-threads: 4
network.remote-dio: on
transport.address-family: inet
nfs.disable: on
features.shard: on
user.cifs: off
cluster.choose-local: off
cluster.enable-shared-storage: enable
[root at ovirt1 ~]#
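
To compare an unaffected volume against an affected one, I could run something
like this (sketch only, using the volume names above):

diff <(gluster volume info data) <(gluster volume info data_fast)
diff <(gluster volume get data all) <(gluster volume get data_fast all)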

gluster1 -> the gluster IP on ovirt1
gluster2 -> the gluster IP on ovirt2
ovirt3 -> the arbiter


My mount points in oVirt have only 'backup-volfile-servers=gluster2:ovirt3' set,
and no ACL option is set anywhere.
The pool is also its own client.
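
For reference, a mount of one of those volumes looks roughly like this (the
mount path is illustrative, written from memory):

mount -t glusterfs -o backup-volfile-servers=gluster2:ovirt3 \
    gluster1:/data_fast /rhev/data-center/mnt/glusterSD/gluster1:_data__fast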

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

