[Bugs] [Bug 1224128] New: Disperse volume: Input/output error on nfs mount after the volume start force
bugzilla at redhat.com
Fri May 22 08:57:01 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1224128
Bug ID: 1224128
Summary: Disperse volume: Input/output error on nfs mount after
the volume start force
Product: Red Hat Gluster Storage
Version: 3.1
Component: glusterfs
Sub Component: disperse
Keywords: Triaged
Assignee: rhs-bugs at redhat.com
Reporter: byarlaga at redhat.com
QA Contact: byarlaga at redhat.com
CC: aspandey at redhat.com, bugs at gluster.org,
byarlaga at redhat.com, dlambrig at redhat.com,
gluster-bugs at redhat.com, pkarampu at redhat.com,
xhernandez at datalab.es
Depends On: 1202218
Blocks: 1186580 (qe_tracker_everglades)
Group: redhat
+++ This bug was initially created as a clone of Bug #1202218 +++
Description of problem:
=======================
In a 1x(8+4) disperse volume, input/output errors appear on the NFS mount after
2 bricks are brought down and the gluster volume is started by force. With
redundancy 4, the volume should tolerate up to 4 brick failures, so errors after
only 2 brick failures indicate a bug. The mount point goes into a stale state
and takes a long time to recover.
Version-Release number of selected component (if applicable):
==============================================================
[root@vertigo ~]# gluster --version
glusterfs 3.7dev built on Mar 12 2015 01:40:59
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General
Public License.
How reproducible:
=================
100%
Steps to Reproduce:
1. Create a 1x(8+4) disperse volume.
2. NFS-mount the volume on the client and start creating files/directories with
mkdir and dd.
3. Bring down 2 of the bricks.
4. Wait for some time and let IO run.
5. Start the gluster volume with the force option from any of the servers. (A
command-level sketch of these steps follows.)
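For reference, a minimal command-level sketch of the steps above, assuming two
servers named server1 and server2 with six bricks each (hostnames, brick paths,
mount points and PIDs below are illustrative, not taken from this setup):

# 1. Create and start a 1x(8+4) disperse volume; 'force' is needed here
#    because multiple bricks share the same server
gluster volume create testvol disperse 12 redundancy 4 \
    server1:/rhs/brick{1..6}/b server2:/rhs/brick{1..6}/b force
gluster volume start testvol

# 2. NFS-mount on the client and generate IO with mkdir and dd
mount -t nfs -o vers=3 server1:/testvol /mnt/testvol
mkdir /mnt/testvol/dir1
dd if=/dev/zero of=/mnt/testvol/dir1/file1 bs=1M count=1024 &

# 3. Bring down 2 of the bricks by killing their brick processes
#    (PIDs are shown in 'gluster v status')
kill <brick-pid-1> <brick-pid-2>

# 4. Let IO run for some time, then
# 5. restart the volume with force from any of the servers
gluster volume start testvol force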
Actual results:
================
Input/output errors on the NFS mount.
Expected results:
=================
No errors should be seen and IO should resume normally.
Gluster volume options:
=======================
[root@vertigo ~]# gluster v get testvol all
Option Value
------ -----
cluster.lookup-unhashed on
cluster.min-free-disk 10%
cluster.min-free-inodes 5%
cluster.rebalance-stats off
cluster.subvols-per-directory (null)
cluster.readdir-optimize off
cluster.rsync-hash-regex (null)
cluster.extra-hash-regex (null)
cluster.dht-xattr-name trusted.glusterfs.dht
cluster.randomize-hash-range-by-gfid off
cluster.local-volume-name (null)
cluster.weighted-rebalance on
cluster.switch-pattern (null)
cluster.entry-change-log on
cluster.read-subvolume (null)
cluster.read-subvolume-index -1
cluster.read-hash-mode 1
cluster.background-self-heal-count 16
cluster.metadata-self-heal on
cluster.data-self-heal on
cluster.entry-self-heal on
cluster.self-heal-daemon on
cluster.heal-timeout 600
cluster.self-heal-window-size 1
cluster.data-change-log on
cluster.metadata-change-log on
cluster.data-self-heal-algorithm (null)
cluster.eager-lock on
cluster.quorum-type none
cluster.quorum-count (null)
cluster.choose-local true
cluster.self-heal-readdir-size 1KB
cluster.post-op-delay-secs 1
cluster.ensure-durability on
cluster.stripe-block-size 128KB
cluster.stripe-coalesce true
diagnostics.latency-measurement off
diagnostics.dump-fd-stats off
diagnostics.count-fop-hits off
diagnostics.brick-log-level INFO
diagnostics.client-log-level INFO
diagnostics.brick-sys-log-level CRITICAL
diagnostics.client-sys-log-level CRITICAL
diagnostics.brick-logger (null)
diagnostics.client-logger (null)
diagnostics.brick-log-format (null)
diagnostics.client-log-format (null)
diagnostics.brick-log-buf-size 5
diagnostics.client-log-buf-size 5
diagnostics.brick-log-flush-timeout 120
diagnostics.client-log-flush-timeout 120
performance.cache-max-file-size 0
performance.cache-min-file-size 0
performance.cache-refresh-timeout 1
performance.cache-priority
performance.cache-size 32MB
performance.io-thread-count 16
performance.high-prio-threads 16
performance.normal-prio-threads 16
performance.low-prio-threads 16
performance.least-prio-threads 1
performance.enable-least-priority on
performance.least-rate-limit 0
performance.cache-size 128MB
performance.flush-behind on
performance.nfs.flush-behind on
performance.write-behind-window-size 1MB
performance.nfs.write-behind-window-size 1MB
performance.strict-o-direct off
performance.nfs.strict-o-direct off
performance.strict-write-ordering off
performance.nfs.strict-write-ordering off
performance.lazy-open yes
performance.read-after-open no
performance.read-ahead-page-count 4
performance.md-cache-timeout 1
features.encryption off
encryption.master-key (null)
encryption.data-key-size 256
encryption.block-size 4096
network.frame-timeout 1800
network.ping-timeout 42
network.tcp-window-size (null)
features.lock-heal off
features.grace-timeout 10
network.remote-dio disable
client.event-threads 2
network.tcp-window-size (null)
network.inode-lru-limit 16384
auth.allow *
auth.reject (null)
transport.keepalive (null)
server.allow-insecure (null)
server.root-squash off
server.anonuid 65534
server.anongid 65534
server.statedump-path /var/run/gluster
server.outstanding-rpc-limit 64
features.lock-heal off
features.grace-timeout (null)
server.ssl (null)
auth.ssl-allow *
server.manage-gids off
client.send-gids on
server.gid-timeout 2
server.own-thread (null)
server.event-threads 2
performance.write-behind on
performance.read-ahead on
performance.readdir-ahead off
performance.io-cache on
performance.quick-read on
performance.open-behind on
performance.stat-prefetch on
performance.client-io-threads off
performance.nfs.write-behind on
performance.nfs.read-ahead off
performance.nfs.io-cache off
performance.nfs.quick-read off
performance.nfs.stat-prefetch off
performance.nfs.io-threads off
performance.force-readdirp true
features.file-snapshot off
features.uss on
features.snapshot-directory .snaps
features.show-snapshot-directory off
network.compression off
network.compression.window-size -15
network.compression.mem-level 8
network.compression.min-size 0
network.compression.compression-level -1
network.compression.debug false
features.limit-usage (null)
features.quota-timeout 0
features.default-soft-limit 80%
features.soft-timeout 60
features.hard-timeout 5
features.alert-time 86400
features.quota-deem-statfs off
geo-replication.indexing off
geo-replication.indexing off
geo-replication.ignore-pid-check off
geo-replication.ignore-pid-check off
features.quota on
debug.trace off
debug.log-history no
debug.log-file no
debug.exclude-ops (null)
debug.include-ops (null)
debug.error-gen off
debug.error-failure (null)
debug.error-number (null)
debug.random-failure off
debug.error-fops (null)
nfs.enable-ino32 no
nfs.mem-factor 15
nfs.export-dirs on
nfs.export-volumes on
nfs.addr-namelookup off
nfs.dynamic-volumes off
nfs.register-with-portmap on
nfs.outstanding-rpc-limit 16
nfs.port 2049
nfs.rpc-auth-unix on
nfs.rpc-auth-null on
nfs.rpc-auth-allow all
nfs.rpc-auth-reject none
nfs.ports-insecure off
nfs.trusted-sync off
nfs.trusted-write off
nfs.volume-access read-write
nfs.export-dir
nfs.disable false
nfs.nlm on
nfs.acl on
nfs.mount-udp off
nfs.mount-rmtab /var/lib/glusterd/nfs/rmtab
nfs.rpc-statd /sbin/rpc.statd
nfs.server-aux-gids off
nfs.drc off
nfs.drc-size 0x20000
nfs.read-size (1 * 1048576ULL)
nfs.write-size (1 * 1048576ULL)
nfs.readdir-size (1 * 1048576ULL)
features.read-only off
features.worm off
storage.linux-aio off
storage.batch-fsync-mode reverse-fsync
storage.batch-fsync-delay-usec 0
storage.owner-uid -1
storage.owner-gid -1
storage.node-uuid-pathinfo off
storage.health-check-interval 30
storage.build-pgfid off
storage.bd-aio off
cluster.server-quorum-type off
cluster.server-quorum-ratio 0
changelog.changelog off
changelog.changelog-dir (null)
changelog.encoding ascii
changelog.rollover-time 15
changelog.fsync-interval 5
changelog.changelog-barrier-timeout 120
features.barrier disable
features.barrier-timeout 120
locks.trace (null)
cluster.disperse-self-heal-daemon enable
cluster.quorum-reads no
client.bind-insecure (null)
[root@vertigo ~]#
Gluster volume status:
=======================
[root@vertigo ~]# gluster v status
Status of volume: testvol
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick vertigo:/rhs/brick1/b1 49152 0 Y 18310
Brick ninja:/rhs/brick1/b1 49152 0 Y 14315
Brick vertigo:/rhs/brick2/b2 49153 0 Y 18323
Brick ninja:/rhs/brick2/b2 49153 0 Y 14328
Brick vertigo:/rhs/brick3/b3 49154 0 Y 19653
Brick ninja:/rhs/brick3/b3 49154 0 Y 14341
Brick vertigo:/rhs/brick4/b4 49155 0 Y 19666
Brick ninja:/rhs/brick4/b4 49155 0 Y 14354
Brick vertigo:/rhs/brick1/b1-1 49156 0 Y 19679
Brick ninja:/rhs/brick1/b1-1 49156 0 Y 14367
Brick vertigo:/rhs/brick2/b2-1 49157 0 Y 19692
Brick ninja:/rhs/brick2/b2-1 49157 0 Y 14380
Snapshot Daemon on localhost 49158 0 Y 21305
NFS Server on localhost 2049 0 Y 18337
Quota Daemon on localhost N/A N/A Y 18358
Snapshot Daemon on ninja 49158 0 Y 15763
NFS Server on ninja 2049 0 Y 16224
Quota Daemon on ninja N/A N/A Y 16245
Task Status of Volume testvol
------------------------------------------------------------------------------
There are no active volume tasks
[root@vertigo ~]#
Gluster volume info:
====================
[root@vertigo ~]# gluster v info
Volume Name: testvol
Type: Disperse
Volume ID: 7393260c-51d1-4dca-8fc8-e1f5ad6fee14
Status: Started
Number of Bricks: 1 x (8 + 4) = 12
Transport-type: tcp
Bricks:
Brick1: vertigo:/rhs/brick1/b1
Brick2: ninja:/rhs/brick1/b1
Brick3: vertigo:/rhs/brick2/b2
Brick4: ninja:/rhs/brick2/b2
Brick5: vertigo:/rhs/brick3/b3
Brick6: ninja:/rhs/brick3/b3
Brick7: vertigo:/rhs/brick4/b4
Brick8: ninja:/rhs/brick4/b4
Brick9: vertigo:/rhs/brick1/b1-1
Brick10: ninja:/rhs/brick1/b1-1
Brick11: vertigo:/rhs/brick2/b2-1
Brick12: ninja:/rhs/brick2/b2-1
Options Reconfigured:
features.uss: on
client.event-threads: 2
server.event-threads: 2
features.quota: on
[root@vertigo ~]#
Additional info:
================
Attaching the sosreports of the server nodes.
--- Additional comment from Bhaskarakiran on 2015-03-16 03:13:39 EDT ---
--- Additional comment from Bhaskarakiran on 2015-03-16 03:34:33 EDT ---
--- Additional comment from Pranith Kumar K on 2015-05-05 12:25:20 EDT ---
http://review.gluster.org/9407
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1186580
[Bug 1186580] QE tracker bug for Everglades
https://bugzilla.redhat.com/show_bug.cgi?id=1202218
[Bug 1202218] Disperse volume: Input/output error on nfs mount after the
volume start force