[Bugs] [Bug 1679744] Minio gateway nas does not work with 2 + 1 dispersed volumes

bugzilla at redhat.com
Thu Feb 21 18:52:25 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1679744



--- Comment #1 from Otavio Cipriani <otavio.n.cipriani at gmail.com> ---
Here is the output of `gluster --version` (latest packages from CentOS SIG,
version 4.1):

glusterfs 4.1.7
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.


Here is the output of `gluster volume info` **after** applying the settings
from the _virt_ group (the defaults do not work either):

Volume Name: myvolume
Type: Disperse
Volume ID: 82a71fc3-2ffa-42a3-8fe1-b439b7c3211c
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: server-h01.cnj.jus.br:/var/local/lib/glusterfs/brick01/brick
Brick2: server-h02.cnj.jus.br:/var/local/lib/glusterfs/brick01/brick
Brick3: server-h03.cnj.jus.br:/var/local/lib/glusterfs/brick01/brick
Options Reconfigured:
cluster.choose-local: off
user.cifs: off
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
features.shard: on
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.quorum-type: auto
cluster.eager-lock: enable
transport.address-family: inet
nfs.disable: on

The `cluster.shd*` settings were not applied, since they cannot be set on
non-replicated volumes.
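
For reference, the options above can be applied either via the group profile
in one go or option by option; a rough sketch:

# apply the whole "virt" profile (shipped under /var/lib/glusterd/groups/)
gluster volume set myvolume group virt
# or set individual options from it, e.g.:
gluster volume set myvolume network.remote-dio enable
gluster volume set myvolume features.shard on
# (the cluster.shd* entries in the profile do not apply to disperse volumes,
#  as noted above)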

I stopped/started and unmounted/remounted the volume, but the problem persists.
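
A restart/remount cycle of this kind looks roughly like this (the mount point
/mnt/myvolume is just a placeholder):

gluster volume stop myvolume
gluster volume start myvolume
umount /mnt/myvolume
mount -t glusterfs server-h01.cnj.jus.br:/myvolume /mnt/myvolume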

The problem does **not** occur with a 3-way replicated volume:

Volume Name: myvolume
Type: Replicate
Volume ID: eb0c9e63-ddb2-47ef-a6a8-26ddfd31d627
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: server-h01.cnj.jus.br:/var/local/lib/glusterfs/brick01/brick
Brick2: server-h02.cnj.jus.br:/var/local/lib/glusterfs/brick01/brick
Brick3: server-h03.cnj.jus.br:/var/local/lib/glusterfs/brick01/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

The problem also does **not** occur when using a 4 + 2 dispersed volume:

Volume Name: myvolume
Type: Disperse
Volume ID: 66bc6521-6e09-4f7a-a04b-79f66e424024
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: server-p01.cnj.jus.br:/var/local/lib/glusterfs/brick01/brick
Brick2: server-p02.cnj.jus.br:/var/local/lib/glusterfs/brick01/brick
Brick3: server-p03.cnj.jus.br:/var/local/lib/glusterfs/brick01/brick
Brick4: server-p04.cnj.jus.br:/var/local/lib/glusterfs/brick01/brick
Brick5: server-p05.cnj.jus.br:/var/local/lib/glusterfs/brick01/brick
Brick6: server-p06.cnj.jus.br:/var/local/lib/glusterfs/brick01/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
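
To reproduce the failing case from scratch, something along these lines should
do (brick paths as above; the mount point and the MinIO credentials are
placeholders):

# create and start a 2 + 1 dispersed volume
# (append "force" if the bricks live on the root filesystem)
gluster volume create myvolume disperse 3 redundancy 1 \
  server-h01.cnj.jus.br:/var/local/lib/glusterfs/brick01/brick \
  server-h02.cnj.jus.br:/var/local/lib/glusterfs/brick01/brick \
  server-h03.cnj.jus.br:/var/local/lib/glusterfs/brick01/brick
gluster volume start myvolume

# mount it over FUSE and point the MinIO NAS gateway at it
mount -t glusterfs server-h01.cnj.jus.br:/myvolume /mnt/myvolume
export MINIO_ACCESS_KEY=<access-key>
export MINIO_SECRET_KEY=<secret-key>
minio gateway nas /mnt/myvolume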



More information about the Bugs mailing list